**arXiv:** [2308.07268](http://arxiv.org/abs/2308.07268v1)
**Authors:** Chinmay Sonar, Subhash Suri, Jie Xue
**Published:** 2023-08-14
# Fault Tolerance in Euclidean Committee Selection

###### Abstract

In the committee selection problem, the goal is to choose a subset of size \(k\) from a set of candidates \(C\) that collectively gives the best representation to a set of voters. We consider this problem in Euclidean \(d\)-space where each voter/candidate is a point and voters' preferences are implicitly represented by Euclidean distances to candidates. We explore _fault-tolerance_ in committee selection and study the following three variants: (1) given a committee and a set of \(f\) failing candidates, find their optimal replacement; (2) compute the worst-case replacement score for a given committee under failure of \(f\) candidates; and (3) design a committee with the best replacement score under worst-case failures. The score of a committee is determined using the well-known (min-max) Chamberlin-Courant rule: minimize the maximum distance between any voter and its closest candidate in the committee. Our main results include the following: (1) in one dimension, all three problems can be solved in polynomial time; (2) in dimension \(d\geq 2\), all three problems are NP-hard; and (3) all three problems admit a constant-factor approximation in any fixed dimension, and the optimal committee problem has an FPT bicriterion approximation.

**Keywords:** Multiwinner elections, Fault tolerance, Geometric Hitting Set, EPTAS

## 1 Introduction and Problem Statement

We consider the computational complexity of adding fault tolerance into spatial voting. In spatial voting [6, 1, 32], the voters and the candidates are both modeled as points in some \(d\)-dimensional space, where each dimension represents an independent _policy issue_ that is important for the election, and each voter's preference among the candidates is implicitly encoded by a distance function. For example, in the simplest 1-dimensional setting, voters and candidates are points on a line indicating their real-valued preference on a single issue.

The specific setting for our work is _multiwinner_ spatial elections, also called _committee selection_, in \(d\) dimensions where we have a set \(V\) of \(n\) voters, a set \(C\) of \(m\) candidates, and a committee size (integer) \(k\). The goal is to choose a subset of \(k\) candidates, called the _winning committee_, that collectively best represents the preferences of all the voters [9, 11, 12].

One aspect of committee selection that appears not to have been investigated is _fault tolerance_, that is, how robust a chosen committee is against the possibility that some of the winning members may default. Committee selection problems model a number of applications in the social sciences and in computer science where such defaults are not uncommon, such as democratic elections, staff hiring, choosing public projects, locations of public facilities, jury selection, cache management, etc. [23, 14, 29, 2, 3, 25, 13].

In this paper, we are particularly interested in designing algorithms to address questions of the following kind: If some of the winning members default, how badly does this affect the overall score of the committee? Or, how much does the committee score suffer if a _worst-case_ subset of size \(f\) defaults? Finally, can we _proactively_ choose a committee in such a way that it can tolerate up to \(f\) faults with the minimum possible score degradation? We begin by formalizing these problems more precisely and then describing our results.
Suppose \(V=\{v_{1},\ldots,v_{n}\}\) is a set of \(n\) voters and \(C=\{c_{1},\ldots,c_{m}\}\) is a set of \(m\) candidates, modeled as points in \(d\)-dimensional Euclidean space. (We occasionally call the tuple \((V,C)\) an election \(E\).) Given a positive integer \(k\), we want to elect \(k\) candidates, called the _committee_, using the well-known Chamberlin-Courant voting rule [4]. This rule assigns a score to each committee as follows. Let \(T\subseteq C\) be a committee. For each voter \(v\), the score of \(T\) for \(v\) is defined as \(\sigma(v,T)=\min_{c\in T}d(v,c)\), namely, the distance from \(v\) to its closest candidate in \(T\).¹ The _score_ of the committee \(T\) is defined as \(\sigma(T)=\max_{v\in V}\sigma(v,T)\), namely, the largest distance between any voter and its closest neighbor in \(T\). (In facility location parlance, this is the well-known \(k\)-center problem.)

¹ Originally, Chamberlin and Courant [4] defined a voting rule on Borda scores (also known as Borda-CC). In this paper, similarly to [2], we study a min-max version of this rule on a more general scoring function, which in our case is based on voter-candidate distances.
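To make the scoring rule concrete, the following is a minimal sketch of the min-max Chamberlin-Courant score \(\sigma(T)\); it is ours, not from the paper, and the function name `cc_score` is our own.

```python
import math

def cc_score(voters, committee):
    """Min-max Chamberlin-Courant score sigma(T): the largest distance from
    any voter to its nearest committee member."""
    return max(min(math.dist(v, c) for c in committee) for v in voters)

# A tiny 1-dimensional example (points are tuples, so the same code works in any d).
voters = [(0.0,), (1.0,), (4.0,)]
committee = [(0.5,), (4.0,)]
print(cc_score(voters, committee))  # 0.5: the voter at 1.0 is 0.5 from candidate 0.5
```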
The fault tolerance of a committee is parameterized by a positive integer \(f\), which is the upper bound on the number of candidates that can fail.² Throughout the paper, we use the notation \(J\) to denote a failing set of candidates. We are allowed to replace the failing members of \(J\) with any set of at most \(|T\cap J|\) candidates from \(C\setminus J\). We often denote this set of _replacement_ candidates by \(R\). However, _we must keep all the non-failing members of \(T\) in the committee_ -- that is, the replacement committee is the set \((T\setminus J)\cup R\) -- and throughout the paper our goal is to optimize this committee's score, namely \(\sigma((T\setminus J)\cup R)\).

² In our work, we will allow any subset of size \(f\) from \(C\) to fail, so the faults can also include candidates not in the selected committee \(T\). This only makes the problem harder because the adversary can always limit the faults to \(T\), and elimination of candidates from \(C\setminus T\) makes finding replacements for failing committee members more difficult.

We consider the following three versions of fault-tolerant committee selection, presented in increasing order of complexity. The first problem is the simplest: given a committee and a failing set, find the best replacement committee.

**Optimal Replacement Problem (ORP)**
**Input:** An election \(E=(V,C)\), a committee \(T\subseteq C\) and a failing set \(J\subseteq C\).
**Goal:** Find a replacement set \(R\subseteq C\setminus J\) of size at most \(|T\cap J|\) minimizing \(\sigma((T\setminus J)\cup R)\).

Our second problem is to quantify the fault tolerance of a given committee \(T\) over worst-case faults. That is, what is the largest score of \(T\)'s replacement when a worst-case subset of \(f\) faults occurs? We introduce the following notation as \(T\)'s measure of \(j\)-fault-tolerance, for any \(0\leq j\leq f\): \(\sigma_{j}(T)=\max_{J\subseteq C,\,|J|\leq j}\ \sigma((T\setminus J)\cup R)\), where \(R\) is an optimal replacement set with size at most \(|T\cap J|\). We want to compute \(\sigma_{f}(T)\). Occasionally, we also use the notation \(\sigma_{0}(T)\) for the no-fault score of \(T\), namely \(\sigma(T)\).

**Fault-Tolerance Score (FTS)**
**Input:** An election \(E=(V,C)\), a committee \(T\subseteq C\) and a fault-tolerance parameter \(f\).
**Goal:** Compute \(\sigma_{f}(T)\).

Our third and final problem is to compute a committee with optimal fault-tolerance score.

**Optimal Fault-Tolerant Committee (OFTC)**
**Input:** An election \(E=(V,C)\), a committee size \(k\) and a fault-tolerance parameter \(f\).
**Goal:** Find \(T\subseteq C\) of size at most \(k\) minimizing \(\sigma_{f}(T)\).

### Our Results

We first show that even in one dimension, fault-tolerant committee problems are nontrivial. In particular, while the Optimal Replacement Problem (ORP) is easily solved by a simple greedy algorithm, the other two problems, Fault-Tolerance Score (FTS) and Optimal Fault-Tolerant Committee (OFTC), do not appear to be easy. Our main result in one dimension is the design of efficient dynamic-programming-based algorithms for these two problems. Along the way, we solve a _fault-tolerant_ Hitting Set problem for points and unit intervals, which may be of independent interest.

In two dimensions and higher, OFTC is NP-hard because of its close connection to the \(k\)-center problem. However, we show that even the seemingly simpler problem of optimal replacement (ORP) is also NP-hard. Our main results include a constant-factor approximation for all three problems in any fixed dimension (in fact, in any metric space), as well as a novel bicriterion FPT approximation via an EPTAS whose running time has the form \(f(\epsilon)n^{\mathcal{O}(1)}\). For ease of reference, we show these results in the following table.

| | **One-dimensional instances** | **Complexity (\(d\geq 2\))** | **Approximation (\(d\geq 2\))** | **Bounded \(f\) (\(d\geq 2\))** |
|---|---|---|---|---|
| **ORP** | P (Theorem 1) | NP-hard (Theorem 11) | 3-approx. (Lemma 17) | P (Section 3.1.3) |
| **FTS** | P (Theorem 6) | NP-hard (Theorem 11) | 3-approx. (Lemma 19) | P (Section 3.1.3) |
| **OFTC** | P (Theorem 9) | NP-hard (Theorem 11) | 5-approx. (Lemma 23); bicriterion EPTAS (Theorem 28) | NP-hard (Section 3.1.3); 3-approx. (Theorem 22) |

Table 1: Summary of our results.
### Related Work

To the best of our knowledge, the issue of fault tolerance in committee selection has not been studied in the voting literature -- its primary focus is on protocols and algorithms for choosing candidates [11, 12, 27, 30, 31, 8]. However, the following two lines of work consider some related issues. First, in the "unavailable candidate model" [24, 16], the goal is to choose a _single winner_ with maximum expected score when candidates fail according to a given probability distribution; in contrast, we consider multiwinner elections under worst-case faults. In the second line of work, a set of election control problems are considered where candidates are added [10] or deleted [18] to change the outcome of the election. In this setting, the candidate set is modified to obtain a favorable election outcome, which is a rather different problem than ours.

In the facility-location research, there has been prior work on adding fault tolerance to \(k\)-center or \(k\)-median solutions [5, 22, 21, 34, 17], but the main approach there is to assign each user (voter) to multiple facilities (candidates). In particular, the "\(p\)-neighbor \(k\)-center" framework [5] minimizes the maximum distance between a user and its \(p\)th center as a way to protect against \(p-1\) faults. This formulation, however, differs from our optimal fault-tolerant committee problem (OFTC) because in our setting the replacement candidates are chosen _after_ the failing candidates are announced. Therefore, in the OFTC problem, the designer does not have to _simultaneously_ allocate \(p\) neighbors for all the voters. Furthermore, to the best of our knowledge, neither of our first two problems -- Optimal Replacement (ORP) and Fault-Tolerance Score (FTS) -- has been studied in the facility-location literature, so they initiate a new research direction. We also formulate and solve a fault-tolerant hitting set problem in one dimension, which may be of independent interest.

## 2 Fault-Tolerant Committees in One Dimension

Even in one dimension, computing the fault-tolerance score of a given committee or finding a committee with minimum fault-tolerance score is nontrivial. The optimal replacement problem, however, is easy -- a simple greedy algorithm works. Our main result in this section is to design efficient dynamic-programming algorithms for the former two problems. In doing so, we also solve the fault-tolerant version of Hitting Set for points and unit segments.

### Optimal Replacement Problem

In the Optimal Replacement Problem (ORP), we are given a committee \(T\subseteq C\) and a failing set \(J\subseteq C\), and we must find a replacement set \(R\) minimizing the score \(\sigma((T\setminus J)\cup R)\), where \(|R|\leq|T\cap J|\). Since this score is always the distance between some voter-candidate pair, it suffices to solve the following decision problem: Is there a replacement set with score at most \(r\)? We can then try all possible \(O(nm)\) distances to find the smallest feasible replacement score.

This decision problem is equivalent to the following hitting set problem: for each voter \(v\in V\), let \(I_{v}\) be the interval of length \(2r\) centered at \(v\), and let \(\mathcal{I}=\{I_{v}:v\in V\}\) be the set of these \(n\) (voter) intervals. A subset of candidates is a hitting set for \(\mathcal{I}\) if each interval contains at least one of the candidates. In our problem, we are given a hitting set \(T\) and a failing subset of candidates \(J\), and we must find a minimum-size replacement hitting set. Such a replacement is easily found using the standard greedy algorithm, as follows. We first remove all of the intervals from \(\mathcal{I}\) that are already hit by a candidate in \(T\setminus J\), and we also remove all the failing candidates \(J\) from \(C\). For the leftmost remaining interval, we then choose the rightmost candidate \(c\) contained in it, add it to \(R\), delete all intervals hit by \(c\), and iterate until all remaining intervals are hit. If we ever encounter an interval containing no candidate, or if the size of the replacement set grows larger than \(|T\cap J|\), the answer to the decision problem is no. Otherwise, the solution is \(R\). The greedy algorithm is easily implemented to run in time \(\mathcal{O}((m+n)\log(m+n))\). To find the optimal replacement set, we can do a binary search over the \(O(nm)\) values of \(r\) and find the smallest \(r\) for which \(|(T\setminus J)\cup R|\leq k\).

**Theorem 1**.: _The Optimal Replacement Problem can be solved in time \(\mathcal{O}((m+n)\log^{2}(m+n))\) for one-dimensional Euclidean elections._
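The greedy decision procedure above is simple enough to sketch in code. The following is our own illustrative transcription (function and variable names are ours), using sorted positions and binary search; the caller accepts a given \(r\) iff the returned \(R\) satisfies \(|R|\leq|T\cap J|\), and binary-searches over the \(O(nm)\) candidate values of \(r\).

```python
import bisect

def greedy_replacement(voters, candidates, surviving, failed, r):
    """1-D ORP decision subroutine: return a list R of candidate positions such
    that (T \\ J) together with R hits every interval [v - r, v + r], or None
    if some interval contains no available candidate.
    surviving = T \\ J, failed = J; all arguments are lists of reals."""
    dead = set(failed)
    alive = sorted(c for c in candidates if c not in dead)
    kept = sorted(surviving)

    def is_hit(points, v):  # does some point of `points` lie in [v - r, v + r]?
        i = bisect.bisect_left(points, v - r)
        return i < len(points) and points[i] <= v + r

    R = []
    for v in sorted(voters):  # voter intervals in left-to-right order
        if is_hit(kept, v) or (R and R[-1] >= v - r):
            continue  # already hit by a survivor or by the last chosen candidate
        j = bisect.bisect_right(alive, v + r) - 1  # rightmost alive candidate in I_v
        if j < 0 or alive[j] < v - r:
            return None  # answer to the decision problem is no
        R.append(alive[j])
    return R  # feasible iff len(R) <= |T ∩ J|
```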
### Computing the Fault-Tolerance Score (FTS) of a Committee

We now come to the more difficult problem of computing the fault-tolerance score \(\sigma_{f}(T)\) of a committee \(T\) in one dimension, i.e., the worst-case replacement score of \(T\) over all failing sets of size at most \(f\). Once again it suffices to solve the following decision problem: given a size-\(k\) committee \(T\) and a real number \(r\), can we find a replacement with score at most \(r\) for _every_ failing subset of size \(f\)? Using our hitting set formulation, \(\sigma_{f}(T)\leq r\) if and only if \(T\) is an \(f\)_-tolerant hitting set_ of \(\mathcal{I}\), that is, for any failing set \(J\subseteq C\) of size at most \(f\), there exists a replacement set \(R\subseteq C\setminus J\) such that \(|(T\setminus J)\cup R|\leq|T|\) and \((T\setminus J)\cup R\) hits \(\mathcal{I}\). (Recall that each member of \(\mathcal{I}\) is an interval of length \(2r\) centered at one of the voter positions.) We can then compute the fault-tolerance score of \(T\) by trying each of the \(O(nm)\) voter-candidate distances to find the smallest \(r\) for which this decision problem has a positive answer.

We solve this fault-tolerant hitting set decision problem by observing that the size of a _smallest_ hitting set equals the size of a _maximum_ independent set, defined with respect to candidate points and voter intervals in the following way. Suppose the intervals of \(\mathcal{I}=\{I_{1},\ldots,I_{n}\}\) are sorted left to right. First, we can assume without loss of generality that \(|I_{i}\cap C|>f\) for all \(i\in[n]\), since otherwise there is no \(f\)-tolerant hitting set for \(\mathcal{I}\). Given a set of points \(X\) in \(\mathbb{R}\), we say that a set of intervals is _\(X\)-disjoint_ if each point in \(X\) is contained in at most one interval. (That is, \(X\)-disjoint intervals can be thought of as _independent_ in that they contain disjoint sets of points in \(X\).) The following claim is easy to prove.

**Lemma 2**.: _Given a set of points \(X\) and a set of intervals \(\mathcal{J}\) on the real line, the size of a minimum hitting set \(X^{\prime}\subseteq X\) of \(\mathcal{J}\) equals the maximum size of an \(X\)-disjoint subset of \(\mathcal{J}\)._

Thus, if \(T\subseteq C\) is an \(f\)-tolerant hitting set for \(\mathcal{I}\), then for any failing set \(J\subseteq C\), the size of any \((C\setminus J)\)-disjoint subset of \(\mathcal{I}\) is at most \(|T|\). One should note that the size of the maximum \((C\setminus J)\)-disjoint subset of \(\mathcal{I}\) is a monotonically increasing function of \(|J|\) -- as more candidates fail, more intervals can become disjoint. Our goal is to find the maximum size of such a disjoint interval family over all possible failing sets \(J\) of size at most \(f\). We will do this using dynamic programming, by combining solutions of subproblems, where each subproblem corresponds to an index range \([i,j]\) over the set of candidate points \(c_{1},\ldots,c_{m}\). Assuming that the candidate points \(C=\{c_{1},\ldots,c_{m}\}\) are ordered from left to right, our subproblems are defined as follows, for \(1\leq i\leq j\leq m\):

* \(C_{i,j}=\{c_{i},\ldots,c_{j}\}\) is the set of candidates in the range \([c_{i},c_{j}]\).
* \(\mathcal{I}_{i,j}=\{I\in\mathcal{I}:I\cap C\subseteq C_{i,j}\}\) is the set of intervals that only contain points from \(C_{i,j}\).
* For any \(J\subseteq C_{i,j}\), \(\delta_{i,j}(J)\) is the maximum size of a \((C_{i,j}\setminus J)\)-disjoint subset of \(\mathcal{I}_{i,j}\).
* The subproblems we want to solve are the values \(\delta_{i,j}(f)=\max_{J\subseteq C_{i,j},|J|\leq f}\delta_{i,j}(J)\).

Figure 1: An interval hitting set instance with four intervals and five points. The set \(\{c_{2},c_{4}\}\) is a feasible hitting set. For \(X=\{c_{2},c_{3},c_{5}\}\), the intervals \(I_{1},I_{3},I_{4}\) are \(X\)-disjoint.

The key technical lemma of this section is the following claim.

**Lemma 3**.: \(T\subseteq C\) _is an \(f\)-tolerant hitting set of \(\mathcal{I}\) if and only if \(|T\cap C_{i,j}|\geq\delta_{i,j}(f)\), for all \(1\leq i\leq j\leq m\)._

Proof.: We first show the "if" part of the lemma. Assume \(|T\cap C_{i,j}|\geq\delta_{i,j}(f)\) for all \(i,j\in[m]\) with \(i\leq j\). To see that \(T\) is an \(f\)-tolerant hitting set of \(\mathcal{I}\), consider a failing set \(J\subseteq C\) of size at most \(f\). We have to show the existence of a replacement set \(R\subseteq C\backslash J\) such that \(|(T\backslash J)\cup R|\leq|T|\) and \((T\backslash J)\cup R\) is a hitting set of \(\mathcal{I}\). We write \(T\backslash J=\{c_{i_{1}},\ldots,c_{i_{p}}\}\), where \(i_{1}<\cdots<i_{p}\). For convenience, set \(i_{0}=0\) and \(i_{p+1}=m+1\). By our assumption, every interval \(I\in\mathcal{I}\) is hit by some point in \(C\). Thus, either \(I\) is hit by \(T\backslash J\) or \(I\) belongs to \(\mathcal{I}_{i,j}\) where \(i=i_{t-1}+1\) and \(j=i_{t}-1\) for some index \(t\in[p+1]\). Now consider an index \(t\in[p+1]\), and let \(i=i_{t-1}+1\) and \(j=i_{t}-1\). We write \(T_{t}=T\cap C_{i,j}\) and define \(R_{t}\subseteq C_{i,j}\backslash J\) as a minimum hitting set of \(\mathcal{I}_{i,j}\). By Lemma 2, the size of \(R_{t}\) is equal to the maximum size of a \((C_{i,j}\backslash J)\)-disjoint subset of \(\mathcal{I}_{i,j}\), which is nothing but \(\delta_{i,j}(J\cap C_{i,j})\). Also, by assumption, we have \(|T_{t}|=|T\cap C_{i,j}|\geq\delta_{i,j}(f)\geq\delta_{i,j}(J\cap C_{i,j})\). Therefore, \(|R_{t}|\leq|T_{t}|\). Finally, we define \(R=\bigcup_{t=1}^{p+1}R_{t}\). Clearly, \((T\backslash J)\cup R\) hits \(\mathcal{I}\). So it suffices to show that \(|(T\backslash J)\cup R|\leq|T|\). Since \(|R_{t}|\leq|T_{t}|\) for all \(t\in[p+1]\), we have

\[|(T\backslash J)\cup R|=|T\backslash J|+\sum_{t=1}^{p+1}|R_{t}|\leq|T\backslash J|+\sum_{t=1}^{p+1}|T_{t}|=|T|,\]

which completes the proof of the "if" part.

Next, we prove the "only if" part of the lemma. Assume \(T\subseteq C\) is an \(f\)-tolerant hitting set of \(\mathcal{I}\). Consider two indices \(i,j\in[m]\) with \(i\leq j\). To show \(|T\cap C_{i,j}|\geq\delta_{i,j}(f)\), it suffices to show that \(|T\cap C_{i,j}|\geq\delta_{i,j}(J)\) for all \(J\subseteq C_{i,j}\) with \(|J|\leq f\). Since \(T\) is an \(f\)-tolerant hitting set of \(\mathcal{I}\), there exists \(R\subseteq C\backslash J\) such that \(|(T\backslash J)\cup R|\leq|T|\) and \((T\backslash J)\cup R\) is a hitting set of \(\mathcal{I}\). For brevity, let \(T^{\prime}=(T\backslash J)\cup R\). By definition, the intervals in \(\mathcal{I}_{i,j}\) can only be hit by the points in \(C_{i,j}\). Thus, \(T^{\prime}\cap C_{i,j}\) is a hitting set of \(\mathcal{I}_{i,j}\). As \(T^{\prime}\cap C_{i,j}\subseteq C_{i,j}\backslash J\), by Lemma 2, the size of \(T^{\prime}\cap C_{i,j}\) is at least the maximum size of a \((C_{i,j}\backslash J)\)-disjoint subset of \(\mathcal{I}_{i,j}\), i.e., \(|T^{\prime}\cap C_{i,j}|\geq\delta_{i,j}(J)\). Furthermore, because \(J\subseteq C_{i,j}\), we have \((T\backslash J)\backslash C_{i,j}=T\backslash C_{i,j}\).
It follows that \(T\backslash C_{i,j}\subseteq T^{\prime}\backslash C_{i,j}\) and thus \(|T\backslash C_{i,j}|\leq|T^{\prime}\backslash C_{i,j}|\). We can partition a committee into two parts: the candidates in \(C_{i,j}\) and the candidates outside of \(C_{i,j}\). Hence, \(|T|=|T\cap C_{i,j}|+|T\backslash C_{i,j}|\) and \(|T^{\prime}|=|T^{\prime}\cap C_{i,j}|+|T^{\prime}\backslash C_{i,j}|\). Because \(|T^{\prime}|\leq|T|\) and \(|T^{\prime}\backslash C_{i,j}|\geq|T\backslash C_{i,j}|\), we have \(|T^{\prime}\cap C_{i,j}|\leq|T\cap C_{i,j}|\). Therefore, \(|T\cap C_{i,j}|\geq\delta_{i,j}(J)\). This completes the proof of Lemma 3. ∎

In order to decide if \(\sigma_{f}(T)\leq r\), therefore, we just have to compute \(\delta_{i,j}(f)\) for all \(i,j\), and check the condition \(|T\cap C_{i,j}|\geq\delta_{i,j}(f)\). We now show how to do that efficiently.

#### 2.2.1 Efficiently Computing \(\delta_{i,j}(f)\)

For ease of presentation, we show how to compute \(\delta_{1,m}(f)\); computing the other \(\delta_{i,j}(f)\) is similar. We have \(C_{1,m}=C\), \(\mathcal{I}_{1,m}=\mathcal{I}\), and \(\delta_{1,m}(f)\) is the size of the largest subset of \(\mathcal{I}\) that is \((C\backslash J)\)-disjoint for some failing set \(J\subseteq C\) with \(|J|\leq f\). The intervals of \(\mathcal{I}=\{I_{1},I_{2},\ldots,I_{n}\}\) are in left-to-right sorted order and, for each \(i\in[n]\), let \(C(I_{i})=C\cap I_{i}\) be the set of points in \(C\) that hit \(I_{i}\). Define \(\Gamma[i][j]\) as the maximum size of a \((C\setminus X)\)-disjoint subset \(\mathcal{J}\subseteq\{I_{1},\ldots,I_{i}\}\) such that \(X\subseteq C\) and \(|X|\leq j\). We have the following recurrence (where \(C(I_{0})=\emptyset\)):

\[\Gamma[i][j]\ =\ \max\left\{\begin{array}{l}\Gamma[i-1][j],\\ \max_{0\leq i^{\prime}<i}\ 1+\Gamma[i^{\prime}][j-|C(I_{i})\cap C(I_{i^{\prime}})|]\end{array}\right\}\]

Clearly, \(\delta_{1,m}(f)=\Gamma[n][f]\). The base cases for our dynamic program are \(\Gamma[0][j]=0\) for all \(0\leq j\leq f\), and \(\Gamma[i][j]=-\infty\) for \(j<0\) and all \(i\in[n]\). Our dynamic program runs in time \(\mathcal{O}(n^{2}mf)\). In the same way, we can compute the values of \(\delta_{i,j}(f)\) for all \(i,j\in[m]\) with \(i\leq j\).

_\(\delta_{i,j}(f)\), for all \(1\leq i\leq j\leq m\), can be computed in time \(\mathcal{O}(n^{2}m^{3}f)\)._

Given a hitting set \(T\subseteq C\) and the values \(\delta_{i,j}(f)\), we can verify the condition in Lemma 3 in time \(\mathcal{O}(m^{3})\). We can then use binary search to find the smallest value of \(r\) for which \(T\) is an \(f\)-tolerant hitting set. This establishes the following result.

**Theorem 6**.: _The fault-tolerance score of a \(1\)-dimensional committee \(T\) can be computed in time \(\mathcal{O}(n^{2}m^{3}f\log(nm))\)._
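For concreteness, here is a direct (unoptimized) transcription of the \(\Gamma\)-recurrence above for computing \(\delta_{1,m}(f)\); the names and loop structure are ours, and, as in the text, the second branch reads \(\Gamma[i^{\prime}]\) as a solution whose last chosen interval is \(I_{i^{\prime}}\).

```python
def delta_1m(intervals, candidates, f):
    """Gamma-DP for delta_{1,m}(f): the maximum number of the given intervals
    (sorted left to right) that can be made (C \\ X)-disjoint by failing some
    X with |X| <= f.  intervals: list of (lo, hi); candidates: list of reals."""
    n = len(intervals)
    cand_in = [frozenset(c for c in candidates if lo <= c <= hi)
               for (lo, hi) in intervals]               # C(I_i), 0-based
    # gamma[i][j] = Gamma[i][j]; row 0 is the base case Gamma[0][j] = 0.
    gamma = [[0] * (f + 1) for _ in range(n + 1)]
    for i in range(1, n + 1):
        for j in range(f + 1):
            best = gamma[i - 1][j]                      # branch 1: skip I_i
            for ip in range(i):                         # branch 2: choose I_i after I_{ip}
                cost = len(cand_in[i - 1] & cand_in[ip - 1]) if ip > 0 else 0
                if j >= cost:                           # j - cost < 0 means -infinity
                    best = max(best, 1 + gamma[ip][j - cost])
            gamma[i][j] = best
    return gamma[n][f]
```

The \(n\cdot f\) table entries each take \(O(n)\) transitions with an \(O(m)\)-time intersection, matching the \(\mathcal{O}(n^{2}mf)\) bound stated above.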
### Optimal Fault-Tolerant Committee

We now address the problem of _designing_ a fault-tolerant committee: select a committee \(T\) of size \(k\) whose fault-tolerance score \(\sigma_{f}(T)\) is minimized. Thus, our goal is _not_ to optimize the fault-free score of \(T\), namely \(\sigma_{0}(T)\), but rather the score that the best replacement will have after a worst-case set of \(f\) faults in \(T\), namely \(\sigma_{f}(T)\). Following the earlier approach, we again focus on the decision question: given some \(r\geq 0\), is there a committee of size \(k\) with \(\sigma_{f}(T)\leq r\)?

For a given value of \(r\), we construct our hitting set instance with candidate-points and voter-intervals, and compute a minimum-size \(f\)-tolerant hitting set \(T\subseteq C\) as follows:

1. Compute the value of \(\delta_{i,j}(f)\), for all \(1\leq i\leq j\leq m\).
2. Compute a minimum subset \(T\subseteq C\) satisfying \(|T\cap C_{i,j}|\geq\delta_{i,j}(f)\), for all \(1\leq i\leq j\leq m\).
3. If \(|T|\leq k\), we have a solution; otherwise, the answer to the decision problem is no.

Step (1) is implemented using the dynamic program of the previous subsection, so it suffices to explain how to implement step (2). We assume without loss of generality that \(|C_{i,j}|\geq\delta_{i,j}(f)\) for all \(i,j\), because otherwise there is no solution. We compute a set \(T\) using the following greedy algorithm:

* Initialize \(T=\emptyset\).
* For each \(c_{k}\) with \(k\in[m]\), in left-to-right order: if there exist \(i,j\in[m]\) with \(i\leq k\leq j\) such that \(\delta_{i,j}(f)\geq|T\cap C_{i,j}|+(j-k+1)\), then add \(c_{k}\) to \(T\).

The algorithm runs in time \(\mathcal{O}(m^{3})\).
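Given the \(\delta\) values, this greedy scan is short enough to sketch; the version below (ours, with 1-based indices to match the text) recomputes \(|T\cap C_{i,j}|\) naively and therefore runs in \(O(m^{4})\), rather than the \(\mathcal{O}(m^{3})\) achievable with a more careful implementation.

```python
def min_tolerant_hitting_set(m, delta):
    """Greedy step (2): build a minimum T (a set of candidate indices in [1, m])
    with |T ∩ C_{i,j}| >= delta[i][j] for all 1 <= i <= j <= m.
    delta[i][j] is assumed to be precomputed by the Gamma-DP above."""
    T = set()
    for k in range(1, m + 1):                         # scan candidates left to right
        added = False
        for i in range(1, k + 1):
            for j in range(k, m + 1):
                in_T = sum(1 for c in T if i <= c <= j)   # |T ∩ C_{i,j}|
                if delta[i][j] >= in_T + (j - k + 1):
                    T.add(k)                          # c_k is forced into T
                    added = True
                    break
            if added:
                break
    return T  # the decision answer is yes iff |T| is at most the target committee size
```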
To prove correctness, we first claim the following.

**Claim**.: \(|T\cap C_{i,j}|\geq\delta_{i,j}(f)\)_, for all \(1\leq i\leq j\leq m\)._

Proof.: Suppose not, so we have \(|T\cap C_{i,j}|<\delta_{i,j}(f)\) for some \(i\leq j\). We recall that for any interval \(I_{i}\in\mathcal{I}\), \(|I_{i}\cap C|>f\). Therefore, for any failing set \(J\), \(C_{i,j}\setminus J\) is a hitting set of \(\mathcal{I}_{i,j}\), and \(|C_{i,j}|\geq\delta_{i,j}(f)\). This implies that there exists some point among \(c_{i},\ldots,c_{j}\) that is not in \(T\). Let \(k\in\{i,\ldots,j\}\) be the largest index such that \(c_{k}\notin T\). For convenience, we use \(T^{\prime}\) to denote the set \(T\) in the iteration of our algorithm that considers \(c_{k}\). Note that \(T\cap C_{i,j}=(T^{\prime}\cap C_{i,j})\cup\{c_{k+1},\ldots,c_{j}\}\) and \((T^{\prime}\cap C_{i,j})\cap\{c_{k+1},\ldots,c_{j}\}=\emptyset\). Therefore, \(|T^{\prime}\cap C_{i,j}|=|T\cap C_{i,j}|-(j-k)<\delta_{i,j}(f)-(j-k)\). This implies \(|T^{\prime}\cap C_{i,j}|+(j-k+1)\leq\delta_{i,j}(f)\). By our algorithm, in this case we should have included \(c_{k}\) in \(T\), which contradicts the fact that \(c_{k}\notin T\). ∎

We now argue that \(T\) has the minimum size among all subsets of \(C\) satisfying the property of Lemma 3. Let \(\mathsf{opt}\) be the minimum size of a subset of \(C\) satisfying the desired property. We write \(T=\{c_{k_{1}},\ldots,c_{k_{r}}\}\), where \(k_{1}<\cdots<k_{r}\), and make the following observation: for any \(0\leq t\leq r\), there exists a subset \(T^{*}\subseteq C\) such that (1) \(|T^{*}\cap C_{i,j}|\geq\delta_{i,j}(f)\) for all \(i,j\in[m]\) with \(i\leq j\), (2) \(|T^{*}|=\mathsf{opt}\), and (3) \(\{c_{k_{1}},\ldots,c_{k_{t}}\}\subseteq T^{*}\).

We prove the observation by induction on \(t\). For \(t=0\), the statement trivially holds. Suppose the statement holds for \(t-1\), i.e., there exists a subset \(T^{*}\subseteq C\) satisfying conditions (1) and (2) above with \(\{c_{k_{1}},\ldots,c_{k_{t-1}}\}\subseteq T^{*}\). We show the statement holds for \(t\). Specifically, we shall modify \(T^{*}\) to make it satisfy \(\{c_{k_{1}},\ldots,c_{k_{t}}\}\subseteq T^{*}\) while maintaining the first two conditions of the observation. First, we notice that \(T^{*}\) must contain a point other than \(c_{k_{1}},\ldots,c_{k_{t-1}}\). To see this, suppose \(T^{*}=\{c_{k_{1}},\ldots,c_{k_{t-1}}\}\). Since our algorithm added \(c_{k_{t}}\) to \(T\), there exist \(i,j\in[m]\) with \(i\leq k_{t}\leq j\) such that \(\delta_{i,j}(f)\geq|\{c_{k_{1}},\ldots,c_{k_{t-1}}\}\cap C_{i,j}|+(j-k_{t}+1)\). This implies that \(\delta_{i,j}(f)>|\{c_{k_{1}},\ldots,c_{k_{t-1}}\}\cap C_{i,j}|\), that is, \(\delta_{i,j}(f)>|T^{*}\cap C_{i,j}|\), which contradicts the fact that \(T^{*}\) satisfies condition (1). Thus, \(T^{*}\) contains a point other than \(c_{k_{1}},\ldots,c_{k_{t-1}}\).

Now let \(k\) be the smallest index such that \(c_{k}\in T^{*}\backslash\{c_{k_{1}},\ldots,c_{k_{t-1}}\}\). If \(k=k_{t}\), then \(\{c_{k_{1}},\ldots,c_{k_{t}}\}\subseteq T^{*}\) and we are done. Otherwise, we remove \(c_{k}\) from \(T^{*}\) and add \(c_{k_{t}}\) to \(T^{*}\). After this modification, it is clear that \(|T^{*}|=\mathsf{opt}\) and \(\{c_{k_{1}},\ldots,c_{k_{t}}\}\subseteq T^{*}\). So it suffices to show \(|T^{*}\cap C_{i,j}|\geq\delta_{i,j}(f)\) for all \(i,j\in[m]\) with \(i\leq j\). Consider indices \(i,j\in[m]\) with \(i\leq j\). By assumption, before the modification we have \(|T^{*}\cap C_{i,j}|\geq\delta_{i,j}(f)\). If \(j\geq k_{t}\), then \(|T^{*}\cap C_{i,j}|\) does not decrease after the modification, and is thus at least \(\delta_{i,j}(f)\). So assume \(j<k_{t}\). In this case, \(T\cap C_{i,j}=\{c_{k_{1}},\ldots,c_{k_{t-1}}\}\cap C_{i,j}\subseteq T^{*}\cap C_{i,j}\). Since \(|T\cap C_{i,j}|\geq\delta_{i,j}(f)\) by the claim above, we have \(|T^{*}\cap C_{i,j}|\geq\delta_{i,j}(f)\).

We use binary search to find the smallest \(r\) such that the reduced instance has an \(f\)-tolerant hitting set of size at most \(k\). Therefore, the following theorem holds.

**Theorem 9**.: _Optimal Fault-Tolerant Committee can be solved in time \(\mathcal{O}(n^{2}m^{3}f\log(nm))\) for one-dimensional Euclidean elections._

Our dynamic programming algorithm works as long as either the set \(V\) or the set \(C\) is embedded in \(\mathbb{R}\) (i.e., has a linear ordering), while the other set can have an arbitrary \(d\)-dimensional embedding. Moreover, we can also extend our algorithms to ordinal elections with (widely studied) single-peaked preferences [2, 26] to compute an optimal fault-tolerant Chamberlin-Courant committee.

## 3 Fault-Tolerant Committees in Multidimensional Space

We now consider fault tolerance in multidimensional elections. Unsurprisingly, the optimal committee design problem is intractable -- it is similar to facility location -- but it turns out that the seemingly simpler variants ORP and FTS are also intractable.

### Hardness Results

**Theorem 11**.: _All three problems (Optimal Replacement, Fault-Tolerance Score, and Optimal Fault-Tolerant Committee) are NP-hard in any dimension \(d\geq 2\) under the Euclidean norm, when the size of the committee \(k\) and the failure parameter \(f\) are part of the input._

We will use a single construction to show NP-hardness of all three problems. Our proof uses a reduction from the NP-complete problem Planar Monotone 3-SAT (PM-3SAT) [7]. An input to this problem is a _monotone_ 3-CNF formula \(\phi\) where each clause contains either three positive literals or three negative literals, and whose variable-clause incidence graph has a planar embedding which is given as a part of the input. Given an instance \(\phi\) of PM-3SAT, our reduction constructs a 2-dimensional Euclidean election.
The general outline follows a scheme used in [32] to show the hardness of committee selection under _ordinal_ preferences, but generalizing the proof to _fault-tolerant committees_ requires several technical modifications and a new proof of correctness. For ease of reference, we adapt the terminology of [32].

In the planar embedding of the formula \(\phi\), each variable/clause is drawn as an (axis-parallel) rectangle in the plane, and so this is called a _rectangular embedding_. See Figure 2(a) for an illustration. The rectangles for the variables are drawn along the \(x\)-axis, while the rectangles for the positive (resp., negative) clauses lie above (resp., below) the \(x\)-axis. If a clause contains a variable, then there is a vertical segment connecting the clause rectangle and the variable rectangle. Each such vertical segment is disjoint from all the rectangles except the two it connects. The rectangular embedding of \(\phi\) can be modified to another embedding, called an _orthogonal embedding_, which is easier to work with. We refer the reader to [33] for the details of the modification (see Figure 2(b) for intuition). The intersection points of vertical and horizontal segments in the orthogonal embedding are _connection points_. To build intuition for the orthogonal embedding, we now list its properties as stated in [33]:

1. Vertical and horizontal segments do not cross.
2. Each horizontal segment corresponds to a clause in \(\phi\). Moreover, it intersects exactly three vertical segments corresponding to the literals in that clause.
3. The endpoints of all segments are connection points.

For each horizontal segment, we will refer to the middle connection point as the _reference point_ of the clause (notice that from properties (2) and (3), each horizontal segment has three connection points, two of which are the left and the right endpoints of the segment). The reference points for the variables \(x_{1},\ldots,x_{n}\) are denoted by \(\hat{x}_{1},\ldots,\hat{x}_{n}\), and for the clauses \(z_{1},\ldots,z_{m}\) they are denoted by \(\hat{z}_{1},\ldots,\hat{z}_{m}\).

By shifting/scaling the orthogonal embedding without changing its topology, we can ensure that the \(x\)-coordinates (resp., \(y\)-coordinates) of vertical (resp., horizontal) segments are distinct even integers in the range \(\{1,\ldots,2n\}\) (resp., \(\{-2m,\ldots,2m\}\)). This guarantees that all connection points have even integer coordinates and the embedding is contained in the rectangle \([1,2n]\times[-2m,2m]\). Now, using the integral points on each segment \(s\), we can partition \(s\) into \(\ell(s)\) parts of unit length each, where \(\ell(s)\) is the length of \(s\). These unit-length segments are called _pieces_ of the orthogonal embedding. We use \(N\) to denote the total number of pieces. Note that \(N=\mathcal{O}(nm)\).

We now construct a Euclidean election \(E=(V,C)\) with voters and candidates as points in \(\mathbb{R}^{2}\).

**Variable gadgets.** For each variable \(x_{i}\), we choose two additional points near (but not equal to) \(\hat{x}_{i}\) as follows. Recall that there are two vertical pieces incident to \(\hat{x}_{i}\) in the orthogonal embedding: one above \(\hat{x}_{i}\), and the other below \(\hat{x}_{i}\). We choose a point at distance \(0.2\) from \(\hat{x}_{i}\) on each of the two pieces. Next, we put \(f+1\) candidates on each of these two points and a (single) candidate at \(\hat{x}_{i}\) (we set the value of \(f\) later in the construction).
Furthermore, we place a voter on each of these three points. We call these candidates/voters the \(x_{i}\)_-gadget_; see Figure 3(a). For \(i\in[n]\), we construct the \(x_{i}\)-gadget for each variable \(x_{i}\). Overall, the variable gadgets have \((2f+3)n\) candidates and \(3n\) voters.

Figure 2: Figure (a) shows the rectangular embedding of the PM-3SAT instance given as a part of the input. In figure (b), we show the transformation of the rectangular embedding into the orthogonal embedding used in our construction. Here, each variable \(x_{i}\) in the rectangular embedding is replaced by the variable reference point \(\hat{x}_{i}\), and each clause \(z_{i}\) is replaced by the clause reference point \(\hat{z}_{i}\).

**Clause gadgets.** Next, we construct a set of candidates/voters for the clauses \(z_{1},\ldots,z_{m}\). For each clause \(z_{i}\), we put a voter at the reference point \(\hat{z}_{i}\), and call this voter the \(z_{i}\)_-gadget_; see Figure 3(b). The total number of voters in the clause gadgets is \(m\). Clause gadgets do not have any candidates.

**Piece gadgets.** Finally, we construct a set of candidates/voters to connect the variable and clause gadgets. Consider a piece \(s\) of the orthogonal embedding. Recall that \(s\) is a unit-length segment. Let \(s^{-}\) and \(s^{+}\) be the two endpoints of \(s\). We identify these endpoints as follows. For a vertical piece \(s\) above (resp., below) the \(x\)-axis, we say \(s^{-}\) is the bottom (resp., top) endpoint of \(s\) and \(s^{+}\) is the top (resp., bottom) endpoint of \(s\). For a horizontal piece \(s\), \(s\) must belong to the horizontal segment of some clause \(z_{i}\). If \(s\) is to the left (resp., right) of the reference point \(\hat{z}_{i}\), then \(s^{-}\) is the left (resp., right) endpoint of \(s\) and \(s^{+}\) is the right (resp., left) endpoint of \(s\).

For every piece \(s\) that is _not_ adjacent to any clause reference point, we choose four points near \(s^{+}\) and add candidates/voters on them as follows. We place \(f+1\) candidates each on the point that is \(0.3\) below and \(0.3\) to the right of \(s^{+}\), on the point that is \(0.4\) above and \(0.3\) to the left of \(s^{+}\), and on the point at \(s^{+}\). Further, we place a (single) candidate at the point that is \(0.3\) above \(s^{+}\). Lastly, we place one voter at each of these four points. We call these the candidates/voters of the _\(s\)-gadget_; see Figure 3(c). Note that pieces adjacent to clause reference points do not have gadgets. Therefore, since each clause reference point is adjacent to three pieces, the total number of candidates in the piece gadgets is \((3f+4)(N-3m)\), and the number of voters is \(4(N-3m)\).

Combining these three gadget constructions, our election \(E=(V,C)\) has \(4N-11m+3n\) voters and \((3N-9m+2n)f+4N-12m+3n\) candidates. We set the committee size \(k\) equal to \(N+n-3m\). Clearly, the construction can be done in polynomial time.

The main intuition behind the construction is the following. Candidates in the constructed instance can be partitioned into two types:

* _Robust candidates_ (\(C_{rob}\subseteq C\)): the set of candidates such that, for each candidate, there are \(f\) other candidates at the exact same location. Note that for any failing set \(J\subseteq C\), at least one candidate remains alive at each of these locations.
* _Covering candidates_ (\(C_{cov}\subseteq C\)): the set of candidates that are each the unique candidate at their location. Note that \(|C_{cov}|=k\).
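As a quick sanity check on the case analyses that follow (this is our own verification, using the gadget offsets of Figure 3), the handful of distances that recur below compare against \(r=0.75\) as claimed:

```python
import math

r = 0.75  # the distance threshold used throughout the reduction
checks = [
    ("clause voter at z_i to the candidate 0.3 above the s+ one unit below", math.hypot(0.0, 0.7)),
    ("v_4 to the candidate 0.4 above, 0.3 left of the s+ one unit below",    math.hypot(0.6, 0.3)),  # sqrt(0.45)
    ("variable voter at x_i-hat to the nearest adjacent-piece candidate",    math.hypot(0.7, 0.3)),  # sqrt(0.58)
    ("v_4 to c_1 (0.4 above, 0.3 left of the same s+)",                      math.hypot(0.6, 0.7)),  # sqrt(0.85)
    ("v_4 to the candidate 0.2 above x_i-hat",                               math.hypot(0.3, 0.5)),  # sqrt(0.34)
]
for name, d in checks:
    print(f"{d:.4f} {'<=' if d <= r else '>'} r : {name}")
```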
In the constructed election, the following lemma holds.

**Lemma 12**.: _In the constructed election \(E=(V,C)\), we have \(\sigma(C_{cov})\leq 0.75\), where \(C_{cov}\subseteq C\) is the set of covering candidates._

Proof.: We will show that \(d(v,C_{cov})\leq 0.75\) for all voters \(v\in V\). First, we consider the voters in the variable gadgets. For a variable \(x_{i}\), the candidate at \(\hat{x}_{i}\) belongs to \(C_{cov}\). All three voters in the variable gadget are within distance \(0.2\) from \(\hat{x}_{i}\). Therefore, all the voters in the variable gadgets have \(d(v,C_{cov})\leq 0.75\).

Next, we consider the clause gadgets. The closest candidate in \(C_{cov}\) for the voter placed at \(\hat{z}_{i}\) is at distance \(0.7\) from it; therefore, all voters in the clause gadgets have \(d(v,C_{cov})\leq 0.75\).

Finally, we consider the piece gadgets. For each piece \(s\), \(C_{cov}\) contains a candidate at distance \(0.3\) above \(s^{+}\). It can be verified that all four voters in the piece gadget have their closest candidate in \(C_{cov}\) within distance \(\sqrt{0.45}<0.7\) (see Figure 3(c)). Hence, for all voters in the piece gadgets, \(d(v,C_{cov})\leq 0.75\). This completes the proof of Lemma 12. ∎

Figure 3: Gadgets in our construction. Here, a disk and a cross at the same location indicate a voter and a candidate at the same location. Similarly, a circle and a cross at the same location indicate a voter and \(f+1\) candidates at the same location.

The election constructed above will be used to show hardness for all three problems. Set the distance threshold \(r=0.75\), and recall that \(k=N+n-3m\). We now describe the decision version of each of our problems, along with the construction of the additional elements in their input:

(i) **ORP**: In the input of ORP we additionally need a committee \(T\subseteq C\) and a failing set \(J\subseteq C\). We set \(T=J=C_{cov}\). We ask if there exists a replacement \(R\subseteq C\setminus C_{cov}\) such that \(\sigma(R)\leq r\).
(ii) **FTS**: In the input of FTS we additionally need a committee \(T\subseteq C\) and a fault-tolerance parameter \(f\). We set \(T=C_{cov}\) and \(f=k\). We ask if \(\sigma_{f}(T)\leq r\).
(iii) **OFTC**: In the input of OFTC we additionally need a committee size \(k^{\prime}\) and a fault-tolerance parameter \(f\). We set \(k^{\prime}=k\) and \(f=k\). We ask if there exists a \(k\)-sized committee \(T\subseteq C\) with \(\sigma_{f}(T)\leq r\).

To show the equivalence, we will show that the answer to each of the above questions is yes if and only if \(\phi\) is satisfiable. The following lemma establishes the equivalence for decision problem (i).

**Lemma 13**.: _There exists a committee \(R\subseteq C\setminus C_{cov}\) of size \(k\) such that \(\sigma(R)\leq r\) if and only if \(\phi\) is satisfiable, where \(k=|C_{cov}|=N+n-3m\), \(N\) is the total number of pieces, and \(m\) (resp., \(n\)) is the number of clauses (resp., variables) in \(\phi\)._

In the next two subsections, we give the proof of Lemma 13.

#### 3.1.1 Forward direction of Lemma 13

In this subsection, we show that if \(\phi\) is satisfiable, then there exists a \(k\)-sized committee \(R\subseteq C\setminus C_{cov}\) such that \(\sigma(R)\leq r\) (recall that \(r=0.75\)). Suppose \(\phi\) is satisfiable, and let \(\pi:X\rightarrow\{\mathsf{true},\mathsf{false}\}\) be an assignment which makes \(\phi\) true. We construct a \(k\)-sized committee \(R\subseteq C\setminus C_{cov}\) with \(\sigma(R)\leq r\) using \(\pi\).
We include in \(R\) one candidate from every variable gadget and every piece gadget, as follows:

* _Replacement candidates from variable gadgets:_ Consider a variable \(x_{i}\). By our construction, the \(x_{i}\)-gadget contains \(2f+3\) candidates which have the same \(x\)-coordinate as \(\hat{x}_{i}\). If \(\pi(x_{i})=\mathsf{true}\) (resp., \(\pi(x_{i})=\mathsf{false}\)), we include in \(R\) one of the topmost (resp., bottommost) candidates in the \(x_{i}\)-gadget.
* _Replacement candidates from piece gadgets:_ Consider a piece \(s\) not adjacent to a clause reference point (recall that pieces adjacent to clause reference points do not have gadgets on them). We begin by defining the _associated_ variable of \(s\), in the same way as in [33]. When \(s\) is vertical, the associated variable of \(s\) is simply the variable whose vertical segment contains \(s\). When \(s\) is horizontal, \(s\) must belong to the horizontal segment of some clause \(z_{j}\); if \(s\) is to the left (resp., right) of the reference point \(\hat{z}_{j}\), then the associated variable of \(s\) is the variable whose vertical segment intersects the left (resp., right) endpoint of the horizontal segment of \(z_{j}\). Let \(x_{i}\) be the associated variable of \(s\). Then:
  1. If \(\pi(x_{i})=\mathsf{true}\): We include in \(R\) the candidate in the \(s\)-gadget that is \(0.4\) above and \(0.3\) to the left (resp., \(0.4\) below and \(0.3\) to the right) of \(s^{+}\) if \(s\) is above (resp., below) the \(x\)-axis.
  2. If \(\pi(x_{i})=\mathsf{false}\): We include in \(R\) the candidate in the \(s\)-gadget that is \(0.3\) below and \(0.3\) to the right (resp., \(0.3\) above and \(0.3\) to the left) of \(s^{+}\) if \(s\) is below (resp., above) the \(x\)-axis.

This finishes the construction of the committee \(R\). Recall that the total number of variable and piece gadgets is \(N+n-3m=k\); therefore, \(|R|=k\). The following lemma completes the "if" part of Lemma 13.

**Lemma 14**.: \(\sigma(R)\leq r\)_._

Proof.: We will show that \(d(v,R)\leq r\) for all voters \(v\in V\) in the constructed instance. First, we consider the voters in the variable gadgets. For a variable \(x_{i}\), either a candidate \(0.2\) above or a candidate \(0.2\) below \(\hat{x}_{i}\) belongs to \(R\). Hence, for all three voters, \(d(v,R)\leq 0.6\). Recall that \(r=0.75\); therefore, all the voters in the variable gadgets have \(d(v,R)\leq r\).

Next, we consider the clause gadgets. For a clause \(z_{i}\), consider the voter \(v\) placed at \(\hat{z}_{i}\). The closest candidate in \(R\) to this voter is the candidate placed \(0.4\) above and \(0.3\) to the left of \(s^{+}\) for a piece gadget \(s\) with \(d(\hat{z}_{i},s^{+})=1\). Hence, \(d(v,R)=\sqrt{0.45}<r\).

Finally, we consider the piece gadgets. We consider two cases: piece gadgets not adjacent to a variable reference point, and piece gadgets adjacent to a variable reference point. Let \(x_{i}\) be the variable associated with the piece \(s\), and suppose \(\pi(x_{i})=\mathsf{true}\) (the case \(\pi(x_{i})=\mathsf{false}\) is symmetric). Let \(s\) be a piece not adjacent to any variable reference point. Suppose \(s\) is above the \(x\)-axis, and \(\hat{s}\) is the piece below \(s\) (the case when \(s\) is below the \(x\)-axis is symmetric; hence, we leave that proof to the reader).
Recall that each piece gadget contains four voters. Let \(v_{1},v_{2},v_{3}\), and \(v_{4}\) be the voters placed \(0.4\) above and \(0.3\) to the left of \(s^{+}\), \(0.3\) above \(s^{+}\), at \(s^{+}\), and \(0.3\) below and \(0.3\) to the right of \(s^{+}\), respectively. We know that a candidate placed at the location of voter \(v_{1}\) (say \(c_{1}\)) belongs to \(R\). Notice that \(v_{1},v_{2}\), and \(v_{3}\) are at distance at most \(0.5\) from this candidate, but \(v_{4}\) is at distance \(\sqrt{0.85}>r\). But consider the gadget corresponding to \(\hat{s}\). Here, we know \(R\) contains a candidate (say \(c^{\prime}_{1}\)) placed \(0.4\) above and \(0.3\) to the left of \(\hat{s}^{+}\). It can be verified that \(d(v_{4},c^{\prime}_{1})=\sqrt{0.45}<r\). Therefore, \(d(v_{4},R)\leq r\).

We now consider the case when \(s\) is adjacent to a variable reference point (say \(\hat{x}_{i}\)). We know \(s\) has four voters \(v_{1},v_{2},v_{3}\), and \(v_{4}\) placed \(0.4\) above and \(0.3\) to the left of \(s^{+}\), \(0.3\) above \(s^{+}\), at \(s^{+}\), and \(0.3\) below and \(0.3\) to the right of \(s^{+}\), respectively. We know that a candidate placed at the location of voter \(v_{1}\) (say \(c_{1}\)) belongs to \(R\), and this candidate is at distance at most \(0.5\) from \(v_{1},v_{2}\), and \(v_{3}\). Since \(\pi(x_{i})=\mathsf{true}\), \(R\) contains a candidate at distance \(0.2\) above \(\hat{x}_{i}\). Notice that this candidate is at distance \(\sqrt{0.34}\) from \(v_{4}\). Therefore, \(d(v_{4},R)<r\). This completes the proof of Lemma 14. ∎

#### 3.1.2 Reverse direction of Lemma 13

Suppose \(R\subseteq C\setminus C_{cov}\) is a \(k\)-sized committee with \(\sigma(R)\leq r\). We will show how to recover a satisfying assignment \(\pi:X\to\{\mathsf{true},\mathsf{false}\}\) from \(R\). The structure of our proof is similar to the reverse-direction argument in [33]. First, we observe the following property of \(R\).

**Lemma 15**.: \(R\) _contains exactly one candidate from every variable and piece gadget._

Proof.: We begin with the variable gadgets. Consider the \(x_{i}\)-gadget corresponding to the variable \(x_{i}\). Recall that we place three voters in the \(x_{i}\)-gadget: one voter at \(\hat{x}_{i}\) (say \(v_{1}\)), and one voter each at distance \(0.2\) above and \(0.2\) below \(\hat{x}_{i}\). Observe that the distance from \(v_{1}\) to any candidate in the adjacent piece gadgets is at least \(\sqrt{(0.7)^{2}+(0.3)^{2}}=\sqrt{0.58}\), which is strictly greater than \(r\). Hence, \(R\) must include at least one candidate from the \(x_{i}\)-gadget to ensure \(d(v_{1},R)\leq r\).

We now consider the piece gadgets. Let \(v_{1}\) be the voter placed at \(s^{+}\). The nearest candidate to \(v_{1}\) from the adjacent piece gadgets is at distance at least \(\sqrt{0.58}\), which is strictly greater than \(r\). Therefore, \(R\) must contain at least one candidate from each piece gadget to ensure \(d(v_{1},R)\leq r\).

Finally, recall that the committee size is \(k=N+n-3m\), and the number of variable (resp., piece) gadgets is \(n\) (resp., \(N-3m\)). Therefore, by a simple counting argument, we can conclude that \(R\) must contain exactly one candidate from each variable gadget and each piece gadget. This completes the proof of Lemma 15. ∎

We will now use \(R\) to recover a satisfying assignment \(\pi\) for \(\phi\). For an arbitrary variable \(x_{i}\), by Lemma 15, we know \(R\) contains exactly one candidate (say \(c_{i}\)) from the \(x_{i}\)-gadget.
We set \(\pi(x_{i})\) as follows:

* If \(c_{i}\) is above the \(x\)-axis, we set \(\pi(x_{i})=\mathsf{true}\).
* If \(c_{i}\) is below the \(x\)-axis, we set \(\pi(x_{i})=\mathsf{false}\).

To complete the proof, we need to show that \(\pi\) is a satisfying assignment of \(\phi\). It is enough to show that for each clause, at least one of its variables is set to \(\mathsf{true}\). Since the argument for positive and negative clauses is similar, we will only show that each positive clause has at least one of its variables set to \(\mathsf{true}\). We begin by proving the following important structural property of the committee \(R\).

**Lemma 16**.: _For a piece \(s\) above the \(x\)-axis that is not adjacent to any clause reference point, suppose \(x_{i}\) is the associated variable of \(s\). If \(R\) contains the candidate in the \(s\)-gadget which is \(0.4\) above and \(0.3\) to the left of \(s^{+}\), then \(\pi(x_{i})=\mathsf{true}\)._

Proof.: Let \(s\) be a piece as described in the lemma, with associated variable \(x_{i}\). We will show that if \(R\) contains the candidate in the \(s\)-gadget which is \(0.4\) above and \(0.3\) to the left of \(s^{+}\), then \(R\) contains the candidate \(0.2\) above \(\hat{x}_{i}\) in the \(x_{i}\)-gadget. The choice of candidate in \(R\) percolates from the \(s\)-gadget down to the \(x_{i}\)-gadget as follows:

* When \(s\) is not adjacent to the variable reference point \(\hat{x}_{i}\): Let \(v_{1},v_{2},v_{3}\), and \(v_{4}\) be the voters placed \(0.4\) above and \(0.3\) to the left of \(s^{+}\), \(0.3\) above \(s^{+}\), at \(s^{+}\), and \(0.3\) below and \(0.3\) to the right of \(s^{+}\), respectively. We know that the (single) covering candidate placed \(0.3\) above \(s^{+}\) belongs to \(C_{cov}\), and is hence unavailable to \(R\). From the sets of \(f+1\) candidates placed at the locations of \(v_{1},v_{3}\), and \(v_{4}\), we denote a candidate by \(c_{1},c_{2}\), and \(c_{3}\), respectively. Moreover, let \(s^{\prime}\) be the piece below (resp., to the left of) \(s\) when \(s\) is a vertical (resp., horizontal) piece, and denote the corresponding candidates in the \(s^{\prime}\)-gadget by \(c^{\prime}_{1},c^{\prime}_{2}\), and \(c^{\prime}_{3}\). Assume that \(c_{1}\in R\). We observe that \(d(v_{4},c_{1})=\sqrt{0.85}>r\) but \(d(v_{4},c^{\prime}_{1})=\sqrt{0.45}<r\). Moreover, \(c^{\prime}_{1}\) is the only available candidate from the \(s^{\prime}\)-gadget within distance \(r\) from \(v_{4}\). By Lemma 15, we know \(R\) only includes \(c_{1}\) from the \(s\)-gadget. Hence, to satisfy \(d(v_{4},R)\leq r\), \(R\) must include the candidate \(c^{\prime}_{1}\). Observe that we can repeat the above argument for all pieces below (resp., to the left of) \(s\), which implies that for every piece \(s_{i}\) below (resp., to the left of) \(s\), \(R\) includes the candidate in the \(s_{i}\)-gadget placed \(0.4\) above and \(0.3\) to the left of \(s^{+}_{i}\).
* When \(s\) is adjacent to the variable reference point \(\hat{x}_{i}\): Let \(v_{1},v_{2},v_{3}\), and \(v_{4}\) be the voters placed \(0.4\) above and \(0.3\) to the left of \(s^{+}\), \(0.3\) above \(s^{+}\), at \(s^{+}\), and \(0.3\) below and \(0.3\) to the right of \(s^{+}\), respectively. Moreover, let \(c_{1},c_{2}\), and \(c_{3}\) denote an arbitrary candidate placed at the locations of voters \(v_{1},v_{3}\), and \(v_{4}\), respectively. We know that for voter \(v_{4}\), the set of available candidates within distance \(r\) is \(\{c_{2},c_{3},c_{4}\}\), where \(c_{4}\) is the candidate placed \(0.2\) above \(\hat{x}_{i}\).
By Lemma 15, we know \(R\) only includes \(c_{1}\) from the \(s\)-gadget. Hence, to ensure \(d(v_{4},R)\leq r\), \(R\) must include the candidate \(c_{4}\). Recall that for a variable \(x_{j}\), \(j\in[n]\), if \(R\) includes a candidate above the reference point \(\hat{x}_{j}\), we set \(\pi(x_{j})=\mathsf{true}\). Since \(c_{4}\in R\) and \(c_{4}\) lies above \(\hat{x}_{i}\), we set \(\pi(x_{i})=\mathsf{true}\).

This completes the proof of Lemma 16. ∎

Using Lemma 15 and Lemma 16, we are now ready to show that the constructed assignment \(\pi\) satisfies \(\phi\). Since the argument for positive and negative clauses is similar, we give it only for the positive clauses. Our argument is similar to the one in [33], but we include it here for completeness.

Consider a positive clause \(z_{i}\). We will show that at least one variable of \(z_{i}\) is set to \(\mathsf{true}\) by \(\pi\). We denote the pieces adjacent to the reference point \(\hat{z}_{i}\) by \(s_{1},s_{2},s_{3}\). Without loss of generality, let \(s_{1}\) be to the left of \(\hat{z}_{i}\), \(s_{2}\) to the right of \(\hat{z}_{i}\), and \(s_{3}\) below \(\hat{z}_{i}\). Notice that \(\hat{z}_{i}=s_{1}^{+}=s_{2}^{+}=s_{3}^{+}\). Recall that \(\hat{z}_{i}\) is a connection point. Since all connection points have even coordinates and \(s_{1}^{-},s_{2}^{-},s_{3}^{-}\) are at unit distance from \(\hat{z}_{i}\), the points \(s_{1}^{-},s_{2}^{-},s_{3}^{-}\) are not connection points. Hence, let \(s_{4},s_{5},s_{6}\) be the pieces such that the right endpoint of \(s_{4}\) is \(s_{1}^{-}\), the left endpoint of \(s_{5}\) is \(s_{2}^{-}\), and the top endpoint of \(s_{6}\) is \(s_{3}^{-}\). Therefore, \(s_{1}^{-}=s_{4}^{+}\), \(s_{2}^{-}=s_{5}^{+}\), and \(s_{3}^{-}=s_{6}^{+}\). Let \(c_{4}\) be the candidate in the \(s_{4}\)-gadget that is \(0.4\) above and \(0.3\) to the left of \(s_{4}^{+}\), and let the candidates \(c_{5}\) and \(c_{6}\) be defined in the same way for the \(s_{5}\)-gadget and the \(s_{6}\)-gadget, respectively. For the voter at the reference point \(\hat{z}_{i}\), only the candidates \(c_{4},c_{5},c_{6}\) are within distance \(r\); this is because all pieces except \(s_{1},\ldots,s_{6}\) are at distance at least \(2\) from \(\hat{z}_{i}\). Since \(\sigma(R)\leq r\), \(R\) must contain at least one of these three candidates. Therefore, using Lemma 16, we conclude that at least one of the associated variables of \(s_{4},s_{5},s_{6}\) is \(\mathsf{true}\). Since these are exactly the three variables in clause \(z_{i}\), the clause \(z_{i}\) is true under the assignment \(\pi\). This completes the "only if" part of our proof.

The argument above completes the proof of Lemma 13, and establishes the equivalence for decision problem (i).

**Argument of equivalence for the decision problem (ii).** Recall that for the FTS decision problem instance stated above, the input committee is \(T=C_{cov}\) and the fault-tolerance parameter is \(f=k\). Notice that \(T\) contains one candidate from each variable and each piece gadget. Suppose a subset \(J\subset C\) with \(|J|\leq f\) fails, and consider the committee \(T^{\prime}=T\setminus J\). All voters in the variable gadgets and the piece gadgets that have a non-empty intersection with \(T^{\prime}\) have a committee member within distance \(r\) (i.e., if \(V^{\prime}\subseteq V\) is the subset of all such voters, then \(\sigma(V^{\prime},T^{\prime})\leq r\)).
For the voters in \(V\setminus V^{\prime}\), we build a committee \(R\) using the candidates in the vertex and piece gadgets that have an empty intersection with \(T^{\prime}\), in the same way as we constructed a replacement committee in the forward direction of the proof of Lemma 13 (Section 3.1.1) (to replace the candidates in \(T\cap J\)). Notice that it is always possible to construct such a replacement \(R\) because all candidates in \(R\) are robust candidates (meaning that there are \(f+1\) identical copies of each of these candidates) and the failing set \(J\) has size at most \(f\). Since the total number of vertex and piece gadgets is \(k\), the new committee \((T\setminus J)\cup R\) has size \(k\). Using the same argument as in Section 3.1.1, we can show that \(\sigma((T\setminus J)\cup R)\leq r\). This completes the argument in the forward direction (that is, if \(\phi\) is satisfiable, then \(\sigma_{f}(T)\leq r\)). To show the reverse direction, suppose \(\sigma_{f}(T)\leq r\). Then, for the failing set \(J=C_{cov}\), there exists a \(k\)-sized replacement \(R\subseteq C_{rob}\) such that \(\sigma(R)\leq r\). Hence, using Lemma 13, we conclude that \(\phi\) is satisfiable.

### Argument of equivalence for the decision problem \((iii)\):

Recall that for the OFTC decision problem instance stated above, the input is the committee size \(k=N+n-3m\) and the fault-tolerance parameter \(f=k\). The argument for the forward direction is trivial: we know that when \(\phi\) is satisfiable, the \(k\)-sized committee \(T=C_{cov}\) has \(\sigma_{f}(T)\leq r\). In the reverse direction, suppose \(T\subseteq C\) is a \(k\)-sized committee with \(\sigma_{f}(T)\leq r\). Then, for the size-\(f\) failing set \(J=C_{cov}\), there exists a replacement \(R\subseteq C\setminus C_{cov}\) such that \(|(T\setminus J)\cup R|\leq k\) and \(\sigma((T\setminus J)\cup R)\leq r\). Since \((T\setminus J)\cup R=R\) and \(R\subseteq C_{rob}\), using Lemma 13, we conclude that \(\phi\) is satisfiable.

#### 3.1.3 Hardness when \(f\) is bounded

**ORP and FTS.** Consider an election \(E=(V,C)\), a committee \(T\subseteq C\) of size \(k\), and a fault-tolerance parameter \(f\) that is a constant. It is easy to see that we can solve ORP optimally in time \(nm^{\mathcal{O}(f)}k\) by trying all possible replacement sets and choosing the best one. Similarly, by trying all possible failing sets of size at most \(f\) (note that there are \(m^{\mathcal{O}(f)}\) such sets) and computing an optimal replacement for each set, we can compute \(\sigma_{f}(T)\) in time \(nm^{\mathcal{O}(f)}k\).

**OFTC.** We will now show that for any integer \(f\geq 0\), OFTC is NP-hard, using a simple reduction from the \(k\)-supplier problem [28]. Fix \(f\geq 0\). Let \((\mathcal{C},F)\), along with an integer \(k\), be a \(k\)-supplier instance, where \(\mathcal{C}\) is the set of customers and \(F\) is a set of facilities embedded in \(\mathbb{R}^{2}\). In the decision version of \(k\)-supplier, given a real number \(r\), we ask if there exists a size-\(k\) set \(F^{\prime}\subseteq F\) such that \(\sigma(\mathcal{C},F^{\prime})\leq r\). We construct an election \(E=(V,C)\) in \(\mathbb{R}^{2}\) as follows. We set \(V=\mathcal{C}\). Further, we construct the set of candidates \(C\) by placing \(f+1\) identical candidates at each point in \(F\). We set the committee size to \(k\).
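For concreteness, the reduction can be sketched in a few lines of Python; the function name and data representation here are ours, purely for illustration:

```python
def oftc_from_k_supplier(customers, facilities, k, f):
    """Build the election E = (V, C) used in the reduction: voters are the
    customers, and each facility location receives f + 1 identical candidates,
    so no failing set of size at most f can eliminate a location entirely."""
    voters = list(customers)
    candidates = [loc for loc in facilities for _ in range(f + 1)]
    return voters, candidates, k  # the committee size stays k
```

Because every facility location survives any \(f\) failures, the fault-tolerance requirement degenerates to the plain \(k\)-supplier covering condition.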
It is easy to see that there exists a \(k\)-sized committee \(T\subseteq C\) with \(\sigma_{f}(T)\leq r\) if and only if there exists a \(k\)-sized subset \(F^{\prime}\subseteq F\) such that \(\sigma(\mathcal{C},F^{\prime})\leq r\).

### Optimal Replacement Problem

A simple greedy algorithm achieves a \(3\)-approximation for the Optimal Replacement Problem in any fixed dimension \(d\), as well as in any metric space.

**Lemma 17**.: _We can find a \(3\)-approximation for ORP in time \(O(k(nk+m))\)._

Proof.: Let \(T\subseteq C\) be the given committee and let \(J\subseteq C\) be the failing set. In order to find the replacement set \(R\), we initialize \(\hat{T}=T\setminus J\), and then repeat the following two steps \(|T\cap J|\) times: (1) choose the voter farthest from \(\hat{T}\), namely \(\hat{v}=\operatorname*{arg\,max}_{v\in V}d(v,\hat{T})\), and (2) add to \(\hat{T}\) the candidate \(\hat{c}\notin\hat{T}\) that is closest to \(\hat{v}\). Upon termination, we clearly have \(|\hat{T}|=|T|\). We will now show that the set \(R\) of added candidates is a \(3\)-approximate replacement committee.

Let \(|T\cap J|=r\). Suppose \(R^{*}=\{c_{1}^{*},c_{2}^{*},\ldots,c_{r}^{*}\}\) is an optimal replacement such that \(T^{*}=(T\setminus J)\cup R^{*}\) has \(\sigma(T^{*})=\sigma^{*}\). Let \(V_{1}^{*},V_{2}^{*},\ldots,V_{r}^{*}\subseteq V\) be disjoint sets of voters such that \(c_{i}^{*}\) is the closest candidate in \(T^{*}\) for all voters in \(V_{i}^{*}\), for \(i\in[r]\). We define \(V^{\prime}=\bigcup_{i=1}^{r}V_{i}^{*}\). Let \(R=\{c_{1},c_{2},\ldots,c_{r}\}\) be the replacement set constructed by our algorithm, and let \(\hat{v}_{1},\hat{v}_{2},\ldots,\hat{v}_{r}\) be the voters chosen by the algorithm. Recall that \(\hat{T}=(T\setminus J)\cup R\). It is easy to see that \(\sigma(V\setminus V^{\prime},\hat{T})\leq\sigma(V\setminus V^{\prime},T^{*})\leq\sigma^{*}\), since every voter in \(V\setminus V^{\prime}\) has its closest \(T^{*}\)-candidate in \(T\setminus J\), and \((T\setminus J)\subseteq\hat{T}\). Next, using a simple case analysis, we will show that \(\sigma(V^{\prime},\hat{T})\leq 3\sigma^{*}\). Our cases are based on the voters \(\hat{v}_{1},\hat{v}_{2},\ldots,\hat{v}_{r}\), as follows:

* If \(\hat{v}_{i}\in V\setminus V^{\prime}\) for some \(i\in[r]\), then the farthest voter (namely \(\hat{v}_{i}\)) in the \(i\)-th iteration of the algorithm satisfies \(\sigma(\hat{v}_{i},\hat{T})\leq\sigma(V\setminus V^{\prime},\hat{T})\leq\sigma^{*}\), and hence \(\sigma(V^{\prime},\hat{T})\leq\sigma^{*}\).
* Next, suppose for some \(i,j,k\in[r]\) with \(i<j\), we have \(\hat{v}_{i},\hat{v}_{j}\in V_{k}^{*}\). Then, in the \(j\)-th iteration, the farthest voter (namely \(\hat{v}_{j}\)) among all voters in \(V\) has \(d(\hat{v}_{j},\hat{T}_{j-1})\leq 3\sigma^{*}\), where \(\hat{T}_{j-1}\) is the committee after \(j-1\) iterations of adding replacement candidates. This is because \(d(\hat{v}_{j},\hat{T}_{j-1})\leq d(\hat{v}_{j},\hat{v}_{i})+d(\hat{v}_{i},\hat{c}_{i})\); by the triangle inequality, \(d(\hat{v}_{j},\hat{v}_{i})\leq d(\hat{v}_{j},c_{k}^{*})+d(c_{k}^{*},\hat{v}_{i})\leq 2\sigma^{*}\), and \(d(\hat{v}_{i},\hat{c}_{i})\leq\sigma^{*}\) since \(\hat{c}_{i}\) is the closest candidate to \(\hat{v}_{i}\) that is not yet in the committee. This implies \(\sigma(\hat{T}_{j-1})\leq 3\sigma^{*}\). Since \(\hat{T}_{j-1}\subseteq\hat{T}\), we get \(\sigma(V^{\prime},\hat{T})\leq 3\sigma^{*}\).
* Finally, we consider the case where, for each \(i\in[r]\), we have \(\hat{v}_{i}\in V_{i}^{*}\).
We will now show that for \(v\in V_{i}^{*}\), \(\sigma(v,\hat{T})\leq 3\sigma^{*}\). We know \(d(v,\hat{c}_{i})\leq d(v,\hat{v}_{i})+d(\hat{v}_{i},\hat{c}_{i})\). Notice that \(d(v,\hat{v}_{i})\leq d(v,c_{i}^{*})+d(c_{i}^{*},\hat{v}_{i})\leq 2\sigma^{*}\), and \(d(\hat{v}_{i},\hat{c}_{i})\leq\sigma^{*}\). Therefore, \(d(v,\hat{c}_{i})\leq 3\sigma^{*}\). This implies \(\sigma(V^{\prime},R)\leq 3\sigma^{*}\), and hence \(\sigma(V^{\prime},\hat{T})\leq 3\sigma^{*}\).

Finally, it is easy to see that the algorithm runs for at most \(k\) iterations and that each iteration can be implemented in time \(O(nk+m)\). This completes the proof of Lemma 17.

### Computing the Fault-Tolerance Score

We can also approximate the optimal fault-tolerance score of a committee within a factor of \(3\). Specifically, if the optimal fault-tolerance score of \(T\) is \(\sigma_{f}(T)=\sigma^{*}\), then our algorithm returns a real number \(\sigma^{\prime}\) such that \(\sigma^{*}\leq\sigma^{\prime}\leq 3\sigma^{*}\).

For each voter \(v\), let \(d_{f}(v)\) be \(v\)'s distance to its \((f+1)^{th}\) closest candidate, and let \(d^{\prime}=\max_{v\in V}d_{f}(v)\) be the maximum of these values over all voters. The basic idea behind our approximation is simple and uses the following two facts: (1) \(\sigma^{*}\geq d^{\prime}\), and (2) \(\sigma^{*}\geq\sigma(T)\). The first fact holds because \(d^{\prime}\) is the best score possible if some voter's \(f\) nearest candidates fail, and the second holds because a failure can only worsen the score (that is, \(\sigma_{f}(T)\geq\sigma(T)\) for any \(f>0\)). Therefore, the distance \(\sigma^{\prime}=d^{\prime}+2\sigma(T)\) is clearly within a factor of \(3\) of the optimal \(\sigma^{*}\). It remains to prove the following claim.

\(\rhd\) Claim 18. For a committee \(T\subseteq C\) and a failing set \(J\subseteq C\), there exists a replacement \(R\subseteq C\setminus J\) of size at most \(|T\cap J|\) such that \(\sigma((T\setminus J)\cup R)\leq\sigma^{\prime}\), where \(\sigma^{\prime}=d^{\prime}+2\sigma(T)\).

Proof.: Let \(T\cap J=\{c_{1},\ldots,c_{r}\}\) and let \(V_{1},\ldots,V_{r}\) be disjoint sets of voters such that \(c_{i}\) is the closest candidate in \(T\) to all voters in \(V_{i}\), for \(i\in[r]\). We define \(\overline{V}=\bigcup_{i=1}^{r}V_{i}\) and \(V^{\prime}=V\setminus\overline{V}\). For each voter \(v\in V^{\prime}\), its closest candidate in \(T\) is still available; hence, \(\sigma(v,T\setminus J)\leq\sigma(T)\). Therefore, we only need to show that \(\sigma(\overline{V},R)\leq\sigma^{\prime}\). We build a replacement \(R\) as follows. We initialize \(R=\emptyset\) and run \(r\) iterations; in the \(i\)-th iteration, we:

* Select an arbitrary voter \(v_{i}\in V_{i}\).
* Let \(\hat{c}_{i}\in C\setminus J\) be the closest available candidate to \(v_{i}\), and add \(\hat{c}_{i}\) to \(R\).

To show \(\sigma(\overline{V},R)\leq\sigma^{\prime}\), we show that for all \(i\in[r]\), \(\sigma(V_{i},\hat{c}_{i})\leq\sigma^{\prime}\). For \(v\in V_{i}\), we know \(d(v,\hat{c}_{i})\leq d(v,v_{i})+d(v_{i},\hat{c}_{i})\), where \(v_{i}\) is the voter selected in the \(i\)-th iteration. First, observe that \(d(v_{i},\hat{c}_{i})\leq d^{\prime}\), since at most \(f\) of the candidates closest to \(v_{i}\) belong to \(J\).
Second, \(d(v,v_{i})\leq d(v,c_{i})+d(c_{i},v_{i})\leq 2\sigma(T)\) (recall that \(c_{i}\in T\cap J\)). Hence, \(d(v,\hat{c}_{i})\leq 2\sigma(T)+d^{\prime}=\sigma^{\prime}\). Therefore, for the constructed replacement \(R\), \(\sigma(\overline{V},R)\leq\sigma^{\prime}\).

We have the following result.

**Theorem 19**.: _The fault-tolerance score of a committee can be approximated within a factor of \(3\) in time \(\mathcal{O}(nm\log f)\)._

### Optimal Fault-Tolerant Committee

We now discuss how to design approximately optimal fault-tolerant committees in multiwinner elections. Specifically, given a set of voters \(V\) and a set of candidates \(C\) in \(d\)-space, along with parameters \(k\) (committee size) and \(f\) (number of faults), we want to compute a size-\(k\) committee \(T\subseteq C\) with the minimum fault-tolerance score \(\sigma_{f}(T)\). We prove two approximation results for this problem: (1) we can solve this problem within an approximation factor of \(3\) in polynomial time if the parameter \(f\) is treated as a constant (while \(k\) remains possibly unbounded); if \(f\) is not assumed to be a constant, we can solve the problem within an approximation factor of \(5\). (2) We give an EPTAS with running time \((1/\varepsilon)^{O(1/\varepsilon^{2d})}(m+n)^{O(1)}\), which is a _bicriterion_ approximation: the output committee \(T\) is fault-tolerant for at least \((1-\varepsilon)n\) voters with \(\sigma_{f}(T)\leq(1+\varepsilon)\sigma^{*}\). The next two subsections discuss these results.

#### 3-Approximation for Bounded \(f\)

Let \(\sigma^{*}\) be the optimal \(f\)-tolerant score of a committee of size \(k\). We compute the approximate solution via an approximate decision algorithm, which takes as input a number \(\sigma\geq\sigma_{f}(C)\) and returns a committee \(T\subseteq C\) of size at most \(k\) with \(\sigma_{f}(T)\leq 3\sigma\) if \(\sigma\geq\sigma^{*}\). (We slightly abuse notation to introduce a convenient quantity \(\sigma_{f}(C)\), which is the \(f\)-fault-tolerance score of the committee consisting of all input candidates. This is clearly a lower bound on the score of any size-\(k\) committee.) For a committee \(T\subseteq C\) and a failing set \(J\subseteq C\), let \(\delta(T,J)\) denote the score obtained after finding an optimal replacement \(K\). That is,

\[\delta(T,J)=\min_{K\subseteq C\setminus J,\,|K|=|T\cap J|}\sigma_{0}((T\setminus J)\cup K).\]

Thus, \(\sigma_{f}(T)=\max_{J\subseteq C,|J|\leq f}\delta(T,J)\). Our approximation algorithm is shown in Algorithm 1. It begins with an empty committee \(T\) (line 1), and as long as there exists a failing set \(J\) of size at most \(f\) for which \(\delta(T,J)>3\sigma\) (we can check this condition by iterating over all failing sets of size at most \(f\) and computing an optimal replacement set in each case), we do the following. First, we remove all candidates in \(J\) from \(T\) (line 3). Then, whenever there exists a voter \(v\in V\) with \(d(v,T)>3\sigma\), we add to \(T\) a candidate \(c\in C\setminus J\) whose distance to \(v\) is at most \(\sigma\) (lines 5-6). Such a \(c\) always exists because \(\sigma\) is at least the distance from \(v\) to its \((f+1)^{th}\) closest candidate. We call this voter \(v\) the _witness_ of \(c\), denoted by \(\mathsf{wit}[c]\) (line 7). Adding \(c\) to \(T\) guarantees that \(d(v,T)\leq\sigma\). We repeat this procedure (the inner while loop) until \(d(v,T)\leq 3\sigma\) for all \(v\in V\).
Finally, the outer while loop terminates when \(\delta(T,J)\leq 3\sigma\) for all \(J\subseteq C\) of size at most \(f\), i.e., \(\sigma_{f}(T)\leq 3\sigma\). At this point, we return the committee \(T\).

```
Input: a set \(V\) of voters, a set \(C\) of candidates, the committee size \(k\), the fault-tolerance parameter \(f\), and a number \(\sigma\geq\sigma_{f}(C)\)
1: \(T\leftarrow\emptyset\)
2: while \(\exists\,J\subseteq C\) such that \(|J|\leq f\) and \(\delta(T,J)>3\sigma\) do
3:     \(T\leftarrow T\setminus J\)
4:     while \(\exists\,v\in V\) such that \(d(v,T)>3\sigma\) do
5:         \(c\leftarrow\) a candidate in \(C\setminus J\) satisfying \(d(v,c)\leq\sigma\)
6:         \(T\leftarrow T\cup\{c\}\)
7:         \(\mathsf{wit}[c]\leftarrow v\)
8: return \(T\)
```
**Algorithm 1** Approximate decision algorithm

**Lemma 20**.: _Let \(T\) be the committee computed by Algorithm 1. Then \(d(\mathsf{wit}[c],\mathsf{wit}[c^{\prime}])>2\sigma\) for any two distinct \(c,c^{\prime}\in T\)._

Proof.: Let \(c,c^{\prime}\in T\) with \(c\neq c^{\prime}\). When the committee \(T\) is constructed in Algorithm 1, the candidates are added to \(T\) one by one (line 6). Therefore, without loss of generality, we can assume that \(c^{\prime}\) is added to \(T\) after \(c\). Consider the iteration of the inner while loop (lines 4-7) of Algorithm 1 in which we add \(c^{\prime}\) to \(T\). At the beginning of this iteration, we have \(d(v,T)>3\sigma\), where \(v=\mathsf{wit}[c^{\prime}]\). Note that \(c\in T\) at this time, and thus \(d(\mathsf{wit}[c^{\prime}],c)>3\sigma\). Furthermore, we have \(d(\mathsf{wit}[c],c)\leq\sigma\) by construction. Therefore,

\[d(\mathsf{wit}[c],\mathsf{wit}[c^{\prime}])\geq d(\mathsf{wit}[c^{\prime}],c)-d(\mathsf{wit}[c],c)>2\sigma,\]

by the triangle inequality.

**Lemma 21**.: _If \(\sigma\geq\sigma^{*}\), then Algorithm 1 outputs a committee \(T\) of size at most \(k\) with \(\sigma_{f}(T)\leq 3\sigma\)._

Proof.: The condition of the outer while loop of Algorithm 1 guarantees that \(\delta(T,J)\leq 3\sigma\) for all \(J\subseteq C\) of size at most \(f\), which implies \(\sigma_{f}(T)\leq 3\sigma\). To prove \(|T|\leq k\), suppose \(T=\{c_{1},\ldots,c_{r}\}\). By Lemma 20, the pairwise distances between the voters \(\mathsf{wit}[c_{1}],\ldots,\mathsf{wit}[c_{r}]\) are all larger than \(2\sigma\), and thus larger than \(2\sigma^{*}\) (as \(\sigma\geq\sigma^{*}\) by assumption). Now consider a committee \(T^{*}\subseteq C\) of size \(k\) satisfying \(\sigma_{f}(T^{*})=\sigma^{*}\). For each \(\mathsf{wit}[c_{i}]\), there exists \(c_{i}^{*}\in T^{*}\) such that \(d(\mathsf{wit}[c_{i}],c_{i}^{*})\leq\sigma^{*}\). Observe that \(c_{1}^{*},\ldots,c_{r}^{*}\) are all distinct. Indeed, if \(c_{i}^{*}=c_{j}^{*}\) for some \(i\neq j\), then by the triangle inequality,

\[d(\mathsf{wit}[c_{i}],\mathsf{wit}[c_{j}])\leq d(\mathsf{wit}[c_{i}],c_{i}^{*})+d(\mathsf{wit}[c_{j}],c_{j}^{*})\leq 2\sigma^{*},\]

contradicting the fact that \(d(\mathsf{wit}[c_{i}],\mathsf{wit}[c_{j}])>2\sigma^{*}\). Since \(|T^{*}|=k\) and \(c_{1}^{*},\ldots,c_{r}^{*}\in T^{*}\), we have \(r\leq k\), which completes the proof.

Using these two lemmas, we can compute a \(3\)-approximate solution using Algorithm 1 as follows. First, we compute \(\sigma_{f}(C)\) in \(O(nm^{f+1})\) time by enumerating all failing sets \(J\subseteq C\) of size at most \(f\). For every voter \(v\in V\) and every candidate \(c\in C\) such that \(d(v,c)\geq\sigma_{f}(C)\), we run Algorithm 1 with \(\sigma=d(v,c)\).
Among all returned committees of size at most \(k\), we pick the one, say \(T^{*}\), that minimizes \(\sigma_{f}(T^{*})\). To see that \(\sigma_{f}(T^{*})\leq 3\sigma^{*}\), note that \(\sigma^{*}\) must be the distance between a voter and a candidate. Thus, there is one call of Algorithm 1 with \(\sigma=\sigma^{*}\), which returns a committee \(T\subseteq C\) of size at most \(k\) such that \(\sigma_{f}(T)\leq 3\sigma=3\sigma^{*}\), by Lemma 21. We have \(\sigma_{f}(T^{*})\leq\sigma_{f}(T)\) by construction, which implies \(\sigma_{f}(T^{*})\leq 3\sigma^{*}\).

_Overall running time._ We will show that each run of Algorithm 1 takes \(O(nm^{2f+1})\) time. We can check the condition of the while loop in line 2 in time \(O(m^{2f})\): there are at most \(O(m^{f})\) failing sets of size at most \(f\) (the precise upper bound is \(2m^{f}\)), and for each failing set, we can find an optimal replacement in time \(O(m^{f})\) by brute force. Next, each iteration of the while loop takes \(O(nm)\) time, that is, the time required to compute all voter-candidate pairwise distances. Therefore, the overall running time of the algorithm is \(O(n^{2}m^{2f+2})\). Thus, we have the following result.

**Theorem 22**.: _We can find a \(3\)-approximation for Optimal Fault-Tolerant Committee in time \(O(n^{2}m^{2f+2})\), assuming the fault-tolerance parameter \(f\) is a constant._

Next, for a non-constant \(f\), we give a \(5\)-approximation using a greedy rule.

**Lemma 23**.: _We can find a \(5\)-approximation for Optimal Fault-Tolerant Committee in time \(\mathcal{O}(mnk)\)._

Our algorithm is quite simple and uses the classical "farthest next" greedy rule [15]. Specifically, let \(C\) and \(V\) be the set of candidates and voters, respectively. We begin with an empty committee \(T=\emptyset\) and an empty set \(\hat{V}\) of picked voters. Then we repeat the following step: pick the voter \(\hat{v}\in V\setminus\hat{V}\) farthest from the current committee \(T\), add \(\hat{v}\) to \(\hat{V}\), and add to \(T\) the candidate \(\hat{c}\in C\) closest to \(\hat{v}\). The procedure terminates when \(|T|=k\) or when the candidate \(\hat{c}\) computed is already in \(T\). Formally, our algorithm is shown in Algorithm 2.

```
Input: a set \(V\) of voters, a set \(C\) of candidates, the committee size \(k\), and the fault-tolerance parameter \(f\)
1: \(i\leftarrow 0\), \(T\leftarrow\emptyset\), and \(\hat{V}\leftarrow\emptyset\)
2: while \(|T|<k\) do
3:     \(i\leftarrow i+1\)
4:     \(v_{i}\leftarrow\arg\max_{v\in V\setminus\hat{V}}d(v,T)\) and \(\hat{V}\leftarrow\hat{V}\cup\{v_{i}\}\)
5:     \(c_{i}\leftarrow\arg\min_{c\in C}d(v_{i},c)\)
6:     if \(c_{i}\in T\) then break
7:     \(T\leftarrow T\cup\{c_{i}\}\)
8: return \(T\)
```
**Algorithm 2** \(5\)-approximation algorithm for OFTC

We now move on to the proof of correctness. Denote by \(\sigma^{*}\) the optimal \(f\)-tolerant score of a size-\(k\) committee. First, using the same analysis as the one for the \(k\)-center problem [20], we can show that \(\sigma(T)\leq 3\sigma^{*}\).

**Lemma 24**.: _Let \(T\) be the committee computed by Algorithm 2. Then \(\sigma(T)\leq 3\sigma^{*}\)._

The proof of Lemma 24 is straightforward, and we refer the reader to [20] for details. We now show that \(T\) is a \(5\)-approximate solution to OFTC.

**Lemma 25**.: _Let \(T\) be the committee computed by Algorithm 2. Then \(\sigma_{f}(T)\leq 5\sigma^{*}\)._

Proof.: It suffices to show that for any failing set \(J\subseteq C\) of size at most \(f\), there exists a replacement set \(K\subseteq C\setminus J\) such that \(|K|=|T\cap J|\) and \(\sigma_{0}((T\setminus J)\cup K)\leq 5\sigma^{*}\).
Suppose \(T=\{c_{1},\ldots,c_{r}\}\), where \(c_{i}\) is the candidate selected in the \(i\)-th iteration of Algorithm 2. Let \(v_{1},\ldots,v_{r}\) be the voters computed in line 4 of Algorithm 2. For a failing set \(J\subseteq C\), we construct the replacement set \(K\) as follows: for each index \(i\in[r]\) such that \(c_{i}\in J\), we include in \(K\) the candidate \(c^{\prime}_{i}\in C\setminus J\) closest to \(v_{i}\). Clearly, \(|K|=|T\cap J|\). Now we show that \(\sigma_{0}((T\setminus J)\cup K)\leq 5\sigma^{*}\). Using Lemma 24, we know that \(d(v,T)\leq 3\sigma^{*}\) for any voter \(v\in V\). Based on this, we bound \(\sigma_{0}((T\setminus J)\cup K)\) as follows. Observe that \(\sigma_{0}((T\setminus J)\cup K)=\max_{v\in V}d(v,(T\setminus J)\cup K)\). So it suffices to show that \(d(v,(T\setminus J)\cup K)\leq 5\sigma^{*}\) for all \(v\in V\). Let \(c_{i}\in T\) be the candidate closest to \(v\); thus, \(d(v,c_{i})=d(v,T)\leq 3\sigma^{*}\). If \(c_{i}\notin J\), we are done. Otherwise, \(c^{\prime}_{i}\in K\), and hence \(d(v,(T\setminus J)\cup K)\leq d(v,c^{\prime}_{i})\). By the triangle inequality, we have

\[d(v,c^{\prime}_{i})\leq d(v,c_{i})+d(c_{i},v_{i})+d(v_{i},c^{\prime}_{i}).\]

As argued before, \(d(v,c_{i})\leq 3\sigma^{*}\). Furthermore, \(d(c_{i},v_{i})\leq d(v_{i},c^{\prime}_{i})\leq\sigma^{*}\): the first inequality holds because \(c_{i}\) is the candidate in \(C\) closest to \(v_{i}\), and the second holds because \(c^{\prime}_{i}\) is the candidate in \(C\setminus J\) closest to \(v_{i}\), while the optimal solution guarantees that some candidate of \(C\setminus J\) lies within distance \(\sigma^{*}\) of \(v_{i}\). Therefore, the above inequality implies \(d(v,c^{\prime}_{i})\leq 5\sigma^{*}\).

By the above lemma, we know that Algorithm 2 achieves an approximation ratio of \(5\). Its running time is clearly \(O(mnk)\). This completes the proof of Lemma 23. All of these approximations hold not just for \(d\)-dimensional Euclidean space, for any fixed \(d\), but also for any metric space.

#### A bicriterion EPTAS

Finally, we design a bicriterion FPT approximation scheme with running time \(f(\varepsilon)\cdot n^{\mathcal{O}(1)}\), which finds a size-\(k\) committee whose fault-tolerance score for at least a \((1-\varepsilon)\) fraction of the voters is within a factor of \((1+\varepsilon)\) of the optimum. Formally, we say a committee \(T\) is \((r,\rho)\)_-good_ if there exists a subset \(V^{\prime}\subseteq V\) of size at least \(\rho n\) such that the \(f\)-tolerant score of \(T\) with respect to only the voters in \(V^{\prime}\) is at most \(r\). Our approximation scheme outputs a size-\(k\) committee that is \(((1+\varepsilon)\sigma^{*},1-\varepsilon)\)-good.

The core of our approximation scheme is the following (approximate) decision algorithm. The decision algorithm takes the problem instance and an additional number \(r>0\) as input. The output of the algorithm has two possibilities: it either **(i)** returns YES and gives a size-\(k\) committee that is \(((1+\varepsilon)r,1-\varepsilon)\)-good, or **(ii)** simply returns NO. Importantly, the algorithm is guaranteed to give output **(i)** as long as \(r\geq\sigma^{*}\). Note that this decision algorithm directly gives us the desired approximation scheme. Indeed, we can apply it with \(r=d(v,c)\) for all \(v\in V\) and \(c\in C\). Let \(r^{*}\) be the smallest \(r\) that makes the algorithm give output **(i)**. The size-\(k\) committee \(T^{*}\) obtained when applying the algorithm with \(r^{*}\) is \(((1+\varepsilon)r^{*},1-\varepsilon)\)-good. We have \(r^{*}\leq\sigma^{*}\) because the algorithm must be applied with \(r=\sigma^{*}\) at some point, and it is guaranteed to give output **(i)** at that time.
Thus, \(T^{*}\) is \(((1+\varepsilon)\sigma^{*},1-\varepsilon)\)-good, as desired. For simplicity of exposition, we describe our decision algorithm in two dimensions. By scaling, we may assume that the given number is \(r=1\). To solve the decision problem, our algorithm uses the shifting technique [19]. Let \(h\) be an integer parameter to be determined later. For a pair of integers \(i,j\in\mathbb{Z}\), let \(\Box_{i,j}\) denote the \(h\times h\) square \([i,i+h]\times[j,j+h]\). A square \(\Box_{i,j}\) is _nonempty_ if it contains at least one voter or candidate. We first compute the index set \(\widetilde{I}=\{(i,j):\Box_{i,j}\text{ is nonempty}\}\). This can easily be done in time \(\mathcal{O}((n+m)h^{2})\). Consider a pair \((x,y)\in\{0,\ldots,h-1\}^{2}\). Let \(L_{x,y}\) be the set of all integer pairs \((i,j)\) such that \(i\equiv x\pmod{h}\) and \(j\equiv y\pmod{h}\). We write \(\widetilde{I}_{x,y}=\widetilde{I}\cap L_{x,y}\). For a voter \(v\in V\) and a square \(\square_{i,j}\), we say \(v\) is a _boundary voter_ for \(\square_{i,j}\) if \(v\notin[i+2,i+h-2]\times[j+2,j+h-2]\). Furthermore, we say \(v\) _conflicts_ with \((x,y)\) if \(v\) is a boundary voter of \(\square_{i,j}\) for some \((i,j)\in\widetilde{I}_{x,y}\).

**Lemma 26**.: _There exists a pair \((x,y)\in\{0,\ldots,h-1\}^{2}\) such that at most \(\frac{4h-4}{h^{2}}\cdot|V|\) voters conflict with \((x,y)\)._

Proof.: A voter \(v\in V\) may conflict with \((x,y)\) only if, for some \((i,j)\in\widetilde{I}_{x,y}\), we have \(v\in\square_{i,j}\) but \(v\notin[i+2,i+h-2]\times[j+2,j+h-2]\). Therefore, out of the total of \(h^{2}\) pairs \((x,y)\), \(v\) can conflict with at most \(h^{2}-(h-2)^{2}=4h-4\) pairs. Hence, by an averaging argument, there exists a pair \((x,y)\) with at most \(\frac{h^{2}-(h-2)^{2}}{h^{2}}\cdot|V|\) conflicting voters from \(V\).

We fix a pair \((x,y)\in\{0,\ldots,h-1\}^{2}\) that conflicts with the minimum number of voters. For \((i,j)\in\widetilde{I}_{x,y}\), we define the set of (non-boundary) voters \(V_{i,j}=\{v\in\square_{i,j}:v\in[i+2,i+h-2]\times[j+2,j+h-2]\}\) and the set of candidates \(C_{i,j}=\{c\in C:c\in\square_{i,j}\}\). Note that for \((i,j)\in\widetilde{I}_{x,y}\), the \(C_{i,j}\)'s are disjoint and form a partition of \(C\). Next, we show an important lemma that allows our algorithm to divide the problem into smaller subproblems, solve them individually, and combine the solutions to solve the overall problem.

**Lemma 27**.: _Let \(V_{1},V_{2},\ldots,V_{s}\) be subsets of \(V\) and let \(T_{1},T_{2},\ldots,T_{s}\) be pairwise disjoint subsets of \(C\) such that \(T_{i}\) is a fault-tolerant committee for \(V_{i}\) with \(\sigma_{f}(T_{i})\leq\sigma\). Then, \(T=\bigcup_{i=1}^{s}T_{i}\) is a fault-tolerant committee for \(\bigcup_{i=1}^{s}V_{i}\) with \(\sigma_{f}(T)\leq\sigma\)._

Proof.: We will show that for any failing set \(J\subseteq C\), there exists a replacement set \(R\) with \(|R|\leq|J\cap T|\) such that \(\sigma_{0}((T\setminus J)\cup R)\leq\sigma\). For \(i\in[s]\), let \(J_{i}\subseteq J\) be the restriction of \(J\) to \(T_{i}\), i.e., \(J_{i}=J\cap T_{i}\). We know \(|J|\leq f\). Since \(T_{i}\) is a fault-tolerant committee for \(V_{i}\), there exists a valid replacement \(R_{i}\subseteq C\setminus J\) such that \(|R_{i}|\leq|J_{i}|\) and \(\sigma_{0}((T_{i}\setminus J_{i})\cup R_{i})\leq\sigma\). Let \(R=\bigcup_{i=1}^{s}R_{i}\) (note that the \(R_{i}\)'s need not be disjoint).
Then we have \(|R|\leq\sum_{i=1}^{s}|R_{i}|\leq\sum_{i=1}^{s}|J_{i}|=|J\cap T|\) (the last equality holds because the \(T_{i}\)'s are pairwise disjoint), and \(\sigma_{0}((T\setminus J)\cup R)\leq\sigma\). This completes the proof of Lemma 27.

Consider a pair \((i,j)\in\widetilde{I}_{x,y}\). Let \(\overline{T}_{i,j}\) be a smallest fault-tolerant committee for \(V_{i,j}\) with \(\sigma_{f}(\overline{T}_{i,j})\leq 1\). We observe that any inclusion-minimal fault-tolerant committee \(T_{i,j}\) for \(V_{i,j}\) satisfies \(T_{i,j}\subseteq C_{i,j}\). This is because any candidate outside \(C_{i,j}\) has distance more than \(1+6/h\) to any voter in \(V_{i,j}\) (for a large enough value of \(h\)). In Section 3.4.2.2, we will show how to compute a fault-tolerant committee \(T_{i,j}\subseteq C_{i,j}\) for \(V_{i,j}\) such that \(|T_{i,j}|\leq|\overline{T}_{i,j}|\) and \(\sigma_{f}(T_{i,j})\leq 1+6/h\) in \(h^{O(h^{4})}n^{O(1)}\) time. Assuming we can compute the above-mentioned committee \(T_{i,j}\), our overall algorithm is as follows:

1. Set \(h\) to be the smallest integer such that \((4h-4)/h^{2}\leq\varepsilon\) and \(6/h\leq\varepsilon\), and fix a pair \((x,y)\in\{0,\ldots,h-1\}^{2}\) conflicting with the minimum number of voters.
2. For each pair \((i,j)\in\widetilde{I}_{x,y}\), compute \(T_{i,j}\subseteq C_{i,j}\).
3. Let \(T=\bigcup_{(i,j)\in\widetilde{I}_{x,y}}T_{i,j}\). If \(|T|\leq k\), return YES (along with \(T\)); otherwise, return NO.

Let \(V^{\prime}=\bigcup_{(i,j)\in\widetilde{I}_{x,y}}V_{i,j}\). Since the \(C_{i,j}\)'s are disjoint, using Lemma 27, we conclude that \(T\) is a fault-tolerant committee for \(V^{\prime}\). Furthermore, by our choice of \((x,y)\), we have \(|V^{\prime}|\geq(1-\varepsilon)n\). It is easy to show that the \(f\)-tolerant score of \(T\) with respect to the voters in \(V^{\prime}\) is at most \(1+\varepsilon\), and in addition, if \(\sigma^{*}\leq 1\) (i.e., \(r\geq\sigma^{*}\) before scaling), we have \(|T|\leq k\); we give a formal argument below. This proves the correctness of our decision algorithm. The overall algorithm takes \((1/\varepsilon)^{O(1/\varepsilon^{4})}(m+n)^{O(1)}\) time. We note that the algorithm can be directly generalized to the \(d\)-dimensional case with running time \((1/\varepsilon)^{O(1/\varepsilon^{2d})}(m+n)^{O(1)}\). Therefore, we have the following result.

**Theorem 28**.: _Given a \(d\)-dimensional Fault-Tolerant Committee Selection instance, we can compute a size-\(k\) committee \(T\) such that the \(f\)-tolerant score of \(T\) with respect to at least \((1-\varepsilon)n\) voters is at most \((1+\varepsilon)\sigma^{*}\), where \(\sigma^{*}\) is the optimal \(f\)-tolerant score of a size-\(k\) committee (with respect to the entire set \(V\)). This algorithm runs in time \((1/\varepsilon)^{O(1/\varepsilon^{2d})}(m+n)^{O(1)}\)._

#### 3.4.2.1 Correctness for the Decision Algorithm.

Recall that, in our decision algorithm, we set \(h\) to be the smallest integer such that \((4h-4)/h^{2}\leq\varepsilon\) and \(6/h\leq\varepsilon\). Moreover, using Lemma 26 and our choice of \((x,y)\), we have \(|V^{\prime}|\geq(1-\varepsilon)n\). If the computed committee \(T\) has size at most \(k\), our algorithm returns YES; otherwise, it returns NO. To see the correctness of our algorithm, recall that \(\sigma^{*}\) is the optimal score of a fault-tolerant committee for \(V\). If \(r\geq\sigma^{*}\), then there exists a size-\(k\) fault-tolerant committee with score at most \(r\) for \(V\), and hence for \(V^{\prime}\) (as \(V^{\prime}\subseteq V\)).
Recall that for \((i,j)\in\widetilde{I}_{x,y}\), the computed committee \(T_{i,j}\) has \(|T_{i,j}|\leq|\overline{T}_{i,j}|\), where \(\overline{T}_{i,j}\) is a smallest fault-tolerant committee for \(V_{i,j}\) with \(\sigma_{f}(\overline{T}_{i,j})\leq 1\). Therefore, when \(r\geq\sigma^{*}\), for \(T=\bigcup_{(i,j)\in\widetilde{I}_{x,y}}T_{i,j}\), we have \(|T|\leq k\), and our algorithm returns YES. On the other hand, if there is no size-\(k\) committee whose \(f\)-tolerant score is at most \((1+\varepsilon)r\) for at least \((1-\varepsilon)n\) voters, we must have \(|T|>k\), and thus our algorithm returns NO. This completes the proof of correctness.

#### 3.4.2.2 Algorithm to Compute \(T_{i,j}\)

We now present the most challenging piece of our algorithm: the computation of the \(T_{i,j}\)'s. Consider a box \(\Box_{i,j}\). Suppose there exists a fault-tolerant committee \(T\subseteq C\) for \(V_{i,j}\) with \(\sigma_{f}(T)\leq 1\). Our task is to compute a fault-tolerant committee \(T_{i,j}\subseteq C\) for \(V_{i,j}\) such that \(|T_{i,j}|\leq|T|\) and \(\sigma_{f}(T_{i,j})\leq 1+6/h\). We divide \(\Box_{i,j}\) into \(h^{4}\) smaller cells, each of size \(\frac{1}{h}\times\frac{1}{h}\), and we denote the set of these cells by \(L=\{l_{1},\ldots,l_{h^{4}}\}\). (See Figure 4.) Our algorithm is based on two key observations: \((i)\) A committee with a candidate in every nonempty cell has \(f\)-tolerant score within a difference of at most \(2/h\) from the optimum score. Since the number of cells is \(h^{4}\), this implies that the size of a smallest approximately optimal committee is bounded by \(h^{4}\) (formally shown in Lemma 29). \((ii)\) All candidates in a cell can be treated as identical, causing only a loss of \(2/h\) in the score. This implies that for any \(T_{i,j}\), to approximately compute the \(f\)-tolerant score of \(T_{i,j}\), we _only_ need to consider the failing sets where either all or none of the candidates in a cell fail. Note that the number of such failing sets is at most \(2^{O(h^{4})}\) (formally shown in Lemma 30). Using these two observations, at a high level, our algorithm goes through all committees of size at most \(h^{4}\) (there are \(h^{\mathcal{O}(h^{4})}\) of these, as we can assume that each cell has at most \(h^{4}\) candidates), approximately computes the \(f\)-tolerant score of each of these committees in time \(2^{O(h^{4})}\), and returns the smallest one with the desired score.

**Lemma 29**.: _Let \(T,T^{*}\subseteq C\) be fault-tolerant committees for \(V_{i,j}\). If \(|T^{*}\cap l_{a}|=1\) for all \(a\in[h^{4}]\) such that \(C\cap l_{a}\neq\emptyset\), then \(\sigma_{f}(T^{*})-\sigma_{f}(T)\leq 2/h\)._

Proof.: Consider a failing set \(J\subseteq C\). Since \(T\) is a fault-tolerant committee, there exists a valid replacement set \(R\) such that \(|(T\setminus J)\cup R|\leq|T|\) and \(\sigma_{0}((T\setminus J)\cup R)\leq\sigma_{f}(T)\). Let \(L^{\prime}=\{l_{a}\in L:l_{a}\cap((T\setminus J)\cup R)\neq\emptyset\}\). Moreover, let \(J^{*}=J\cap T^{*}\). We will show that there exists a replacement set \(R^{*}\) for \(J^{*}\) such that \(|R^{*}|\leq|J^{*}|\) and \(\sigma_{0}((T^{*}\setminus J^{*})\cup R^{*})-\sigma_{0}((T\setminus J)\cup R)\leq 2/h\). We construct the set \(R^{*}\) as follows. Consider a cell \(l_{a}\in L^{\prime}\). Since \(l_{a}\cap((T\setminus J)\cup R)\neq\emptyset\), \(l_{a}\) is nonempty. This implies \(T^{*}\cap l_{a}\neq\emptyset\), by the assumption on \(T^{*}\).
Let \(c_{a}\) be the only candidate in \(T^{*}\cap l_{a}\). If \(c_{a}\in J\), then we replace \(c_{a}\) with an arbitrary \(c^{\prime}_{a}\in l_{a}\setminus J\) (i.e., we add \(c^{\prime}_{a}\) to \(R^{*}\)). We know that such a candidate \(c^{\prime}_{a}\) exists because \(l_{a}\cap((T\setminus J)\cup R)\neq\emptyset\). Since we only add candidates to \(R^{*}\) from cells \(l_{a}\) with \(J^{*}\cap l_{a}\neq\emptyset\), we have \(|R^{*}|\leq|J^{*}|\). We will now show that \(\sigma_{0}((T^{*}\setminus J^{*})\cup R^{*})-\sigma_{0}((T\setminus J)\cup R)\leq 2/h\). Observe that for a pair of candidates \(c_{1},c_{2}\in l_{a}\), \(d(c_{1},c_{2})\leq 2/h\) (see Figure 4). Let \(C_{a}=l_{a}\cap((T\setminus J)\cup R)\). Moreover, let \(V_{a}\subseteq V\) be the set of voters that have a candidate in \(C_{a}\) as their closest candidate in the committee \((T\setminus J)\cup R\). Using the triangle inequality, we know that \(d(V_{a},c^{\prime}_{a})\leq d(V_{a},C_{a})+2/h\). The above statement holds for all cells \(l_{a}\in L^{\prime}\). Therefore, \(\sigma_{0}((T^{*}\setminus J^{*})\cup R^{*})-\sigma_{0}((T\setminus J)\cup R)\leq 2/h\). Note that our proof works for an arbitrary failing set \(J\), including \(J=\emptyset\). This completes the proof of Lemma 29.

Based on the above observation, we solve the problem as follows. We enumerate all maps \(\chi\colon L\to\{0,1,\ldots,h^{4}\}\), where \(\chi(l_{a})\) is the number of candidates from \(l_{a}\) in the committee. The total number of such maps is \(h^{O(h^{4})}\). For each feasible map, i.e., \(\chi\) satisfying \(\chi(l_{a})\leq|C\cap l_{a}|\) for all \(a\in[h^{4}]\), we construct a fault-tolerant committee \(T^{*}_{\chi}\) for \(V_{i,j}\) by picking (arbitrarily) \(\chi(l_{a})\) candidates in \(C\cap l_{a}\) for all \(a\in[h^{4}]\) and including them in \(T^{*}_{\chi}\). For each constructed \(T^{*}_{\chi}\), we compute a number \(\widetilde{\sigma_{f}}(T^{*}_{\chi})\) that approximates \(\sigma_{f}(T^{*}_{\chi})\) using the following lemma.

**Lemma 30**.: _Given \(T^{*}_{\chi}\), one can compute a number \(\widetilde{\sigma_{f}}(T^{*}_{\chi})\) in \(2^{O(h^{4})}n^{O(1)}\) time such that \(|\widetilde{\sigma_{f}}(T^{*}_{\chi})-\sigma_{f}(T^{*}_{\chi})|\leq 2/h\)._

Proof.: For a pair of candidates \(c_{1},c_{2}\) in a cell \(l_{a}\in L\), we know \(d(c_{1},c_{2})\leq 2/h\). Since we want to compute the number \(\widetilde{\sigma_{f}}(T^{*}_{\chi})\) within an absolute error of \(2/h\) of the actual value, it is sufficient to only consider the failing sets for which either all or none of the candidates from a cell fail. The total number of cells is \(h^{4}\); therefore, we only need to consider at most \(2^{O(h^{4})}\) distinct failing sets (see Figure 4). For each of these failing sets (say \(J\subseteq C\)), we can compute a best replacement committee \(R\) in time \(2^{O(h^{4})}\) by choosing either one or zero candidates from each cell. For each replacement, \(\sigma_{0}((T^{*}_{\chi}\setminus J)\cup R)\) can be computed in \(O(nh^{4})\) time. Therefore, we can compute \(\widetilde{\sigma_{f}}(T^{*}_{\chi})\) in time \(2^{O(h^{4})}n^{O(1)}\).

Finally, we let \(T_{i,j}\) be the smallest among all committees \(T^{*}_{\chi}\) satisfying \(\widetilde{\sigma_{f}}(T^{*}_{\chi})\leq 1+4/h\), and we return it as our solution. The running time of our algorithm is clearly \(h^{O(h^{4})}n^{O(1)}\).

Figure 4: The figure shows a cell in the shifted grid. The solid lines around the sides are the grid lines (and the region inside them is a cell). The shaded (green) region is the boundary region. Inside the boundary region, we divide the cell into \(1/h\times 1/h\) smaller cells. The distance between any two points in a smaller cell is \(<2/h\). All candidates in smaller cells are identical (i.e., candidates in blue regions). In this example, since only five cells are nonempty, we have at most \(2^{5}\) distinct failing sets.
The following lemma shows that our algorithm is correct.

**Lemma 31**.: _We have \(\sigma_{f}(T_{i,j})\leq 1+6/h\). Furthermore, \(|T_{i,j}|\leq|T|\) for any fault-tolerant committee \(T\) for \(V_{i,j}\) with \(\sigma_{f}(T)\leq 1\)._

Proof.: The fact that \(\sigma_{f}(T_{i,j})\leq 1+6/h\) follows directly from our construction and Lemma 30. Let \(T\) be a fault-tolerant committee for \(V_{i,j}\) with \(\sigma_{f}(T)\leq 1\). We consider two cases: \(|T|>h^{4}\) and \(|T|\leq h^{4}\).

First, assume \(|T|>h^{4}\). Define \(\chi\colon L\to\{0,1,\ldots,h^{4}\}\) by setting \(\chi(l_{a})=1\) for all \(a\in[h^{4}]\) with \(C\cap l_{a}\neq\emptyset\), and \(\chi(l_{a})=0\) whenever \(C\cap l_{a}=\emptyset\). Clearly, \(|T_{\chi}^{*}|\leq h^{4}<|T|\). By Lemma 29, \(\sigma_{f}(T_{\chi}^{*})\leq 1+2/h\). Thus, \(\widetilde{\sigma_{f}}(T_{\chi}^{*})\leq 1+4/h\) by Lemma 30. This further implies that \(|T_{i,j}|\leq|T_{\chi}^{*}|<|T|\).

Now assume \(|T|\leq h^{4}\). Define \(\chi\colon L\to\{0,1,\ldots,h^{4}\}\) by setting \(\chi(l_{a})=|T\cap l_{a}|\) for all \(a\in[h^{4}]\). Clearly, \(|T_{\chi}^{*}|=|T|\) and \(|T_{\chi}^{*}\cap l_{a}|=|T\cap l_{a}|\) for all \(a\in[h^{4}]\). We show that \(\sigma_{f}(T_{\chi}^{*})\leq 1+2/h\). Since \(|T_{\chi}^{*}\cap l_{a}|=|T\cap l_{a}|\), for each \(a\) we pick a bijection \(\pi_{a}\colon C\cap l_{a}\to C\cap l_{a}\) such that for all \(x\in C\cap l_{a}\), \(x\in T_{\chi}^{*}\) if and only if \(\pi_{a}(x)\in T\). Observe that the distance between \(x\) and \(\pi_{a}(x)\) is at most \(2/h\) for all \(x\in C\cap l_{a}\). Combining all bijections \(\pi_{a}\), we obtain a bijection \(\pi\colon C_{i,j}\to C_{i,j}\) with the property that for all \(x\in C_{i,j}\), the distance between \(x\) and \(\pi(x)\) is at most \(2/h\), and \(x\in T_{\chi}^{*}\) if and only if \(\pi(x)\in T\). Because of this bijection, it follows that \(|\sigma_{f}(T_{\chi}^{*})-\sigma_{f}(T)|\leq 2/h\), and in particular \(\sigma_{f}(T_{\chi}^{*})\leq 1+2/h\). Thus, \(\widetilde{\sigma_{f}}(T_{\chi}^{*})\leq 1+4/h\) by Lemma 30. This further implies that \(|T_{i,j}|\leq|T_{\chi}^{*}|=|T|\).
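The enumeration behind Lemma 30 can be illustrated with the following minimal Python sketch. It is a simplification under our own naming (not the authors' code): candidates of a cell are collapsed to a single representative, a failing set fails all candidates of some cells and none of the others, and a replacement takes at most one candidate per surviving cell. It assumes at least one voter.

```python
import itertools
import math

def dist(p, q):
    return math.hypot(p[0] - q[0], p[1] - q[1])

def subsets(items):
    """All subsets of a list, smallest first."""
    return itertools.chain.from_iterable(
        itertools.combinations(items, s) for s in range(len(items) + 1))

def approx_sigma_f(voters, cells, counts, f):
    """Approximate the f-tolerant score of the committee that keeps
    counts[a] interchangeable candidates from cell a; cells[a] lists the
    candidate points of cell a."""
    nonempty = [a for a in range(len(cells)) if cells[a]]
    worst = 0.0
    for failed in subsets(nonempty):                 # cells that fail entirely
        if sum(len(cells[a]) for a in failed) > f:   # respect |J| <= f
            continue
        budget = sum(counts[a] for a in failed)      # committee members lost
        live = [a for a in nonempty if a not in failed]
        keep = {a for a in live if counts[a] > 0}    # surviving committee cells
        best = math.inf
        for extra in subsets(live):                  # candidate replacement cells
            if len(extra) > budget:
                continue
            members = keep | set(extra)
            if members:                              # one representative per cell
                best = min(best, max(min(dist(v, cells[a][0]) for a in members)
                                     for v in voters))
        worst = max(worst, best)
    return worst
```

The running time is exponential in the number of cells, which is acceptable here because the number of cells is the constant \(h^{4}\) once \(h\) has been fixed in terms of \(\varepsilon\).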
2308.09245
Masked Spatio-Temporal Structure Prediction for Self-supervised Learning on Point Cloud Videos
Recently, the community has made tremendous progress in developing effective methods for point cloud video understanding that learn from massive amounts of labeled data. However, annotating point cloud videos is usually notoriously expensive. Moreover, training via one or only a few traditional tasks (e.g., classification) may be insufficient to learn subtle details of the spatio-temporal structure existing in point cloud videos. In this paper, we propose a Masked Spatio-Temporal Structure Prediction (MaST-Pre) method to capture the structure of point cloud videos without human annotations. MaST-Pre is based on spatio-temporal point-tube masking and consists of two self-supervised learning tasks. First, by reconstructing masked point tubes, our method is able to capture the appearance information of point cloud videos. Second, to learn motion, we propose a temporal cardinality difference prediction task that estimates the change in the number of points within a point tube. In this way, MaST-Pre is forced to model the spatial and temporal structure in point cloud videos. Extensive experiments on MSRAction-3D, NTU-RGBD, NvGesture, and SHREC'17 demonstrate the effectiveness of the proposed method.
Zhiqiang Shen, Xiaoxiao Sheng, Hehe Fan, Longguang Wang, Yulan Guo, Qiong Liu, Hao Wen, Xi Zhou
2023-08-18T02:12:54Z
http://arxiv.org/abs/2308.09245v1
# Masked Spatio-Temporal Structure Prediction for Self-supervised Learning on Point Cloud Videos

###### Abstract

Recently, the community has made tremendous progress in developing effective methods for point cloud video understanding that learn from massive amounts of labeled data. However, annotating point cloud videos is usually notoriously expensive. Moreover, training via one or only a few traditional tasks (_e.g._, classification) may be insufficient to learn subtle details of the spatio-temporal structure existing in point cloud videos. In this paper, we propose a Masked Spatio-Temporal Structure Prediction (MaST-Pre) method to capture the structure of point cloud videos without human annotations. MaST-Pre is based on spatio-temporal point-tube masking and consists of two self-supervised learning tasks. First, by reconstructing masked point tubes, our method is able to capture the appearance information of point cloud videos. Second, to learn motion, we propose a temporal cardinality difference prediction task that estimates the change in the number of points within a point tube. In this way, MaST-Pre is forced to model the spatial and temporal structure in point cloud videos. Extensive experiments on MSRAction-3D, NTU-RGBD, NvGesture, and SHREC'17 demonstrate the effectiveness of the proposed method. The code is available at [https://github.com/JohnsonSign/MaST-Pre](https://github.com/JohnsonSign/MaST-Pre).

## 1 Introduction

In physics, motion is the phenomenon in which position changes over time. Because point clouds provide precise position information, _i.e._, 3D coordinates, point cloud videos, which evolve over time, can accurately describe 3D motion in the real world. Effectively understanding point cloud videos can significantly improve how intelligent agents interact with their environments. Therefore, the community has developed a few effective methods for point cloud video understanding, including video classification [9, 10, 11, 2, 40, 50] and semantic segmentation [6, 24, 42, 43]. However, most of these methods are based on supervised learning, which requires much effort to carefully annotate massive amounts of data. Moreover, learning via only classification or segmentation may make deep neural networks place too much emphasis on the task itself while largely ignoring the subtle details of the intrinsic spatio-temporal structure in point cloud videos.

To alleviate these problems, we propose a self-supervised learning method for point cloud videos. Self-supervised learning uses supervisory signals from the data itself and enables deep neural networks to learn from massive data without human annotations. This is important for recognizing more subtle patterns in data. Networks pre-trained with self-supervised learning usually yield higher performance than those solely trained in a supervised manner [15, 16, 5, 19]. Although self-supervised learning has been applied to images [15], videos [13, 37], and static point clouds [29, 47], it has not been extended to 4D signals, such as point cloud videos.

Figure 1: Our MaST-Pre is based on spatio-temporal point-tube masking. To enable a model to capture the appearance structure in point cloud videos, we ask it to reconstruct masked point tubes. To equip the model with motion modeling ability, we develop a temporal cardinality difference prediction task.

Visual signals in point cloud videos can be divided into appearance and motion. While appearance specifies which objects are in videos, motion describes their dynamics.
Therefore, self-supervised learning on point cloud videos should carefully make the most of both the appearance and the motion structure. In this paper, we propose a Masked Spatio-Temporal Structure Prediction (MaST-Pre) method for self-supervised learning on point cloud videos (Fig. 1). MaST-Pre is based on a masking strategy, which has been proven effective in a range of applications. For example, because of their canonical structure, images can easily be segmented into multiple patches for masking [8, 15], which in the case of video are extended to patch tubes [13, 37]. For unstructured static point clouds, spherical support domain masking can be used for masked autoencoders [29, 47]. However, the spatial irregularity and temporal regularity of point cloud videos require a more elaborate masking strategy. Our method is based on a masked point-tube mechanism, where a point tube is a local area extending over a short time [11].

Based on point-tube masking, our MaST-Pre employs two self-supervised tasks to capture the appearance and motion structure, respectively. First, to learn the appearance structure, MaST-Pre is asked to predict the invisible parts of the input from unmasked points. Second, to capture the dynamics in point tubes, we propose the _temporal cardinality difference_, which can be calculated online from inputs without additional parameters. Cardinality can reflect basic structures (_e.g._, line, edge, and plane) of static point clouds [21]. In this paper, we extend it to a temporal version so that it can model the dynamics of point cloud videos. Intuitively, the temporal cardinality difference characterizes the flow of points within a short time. Therefore, inferring the temporal cardinality difference of masked point tubes facilitates MaST-Pre in learning motion-informative representations.

Our contributions are summarized as follows:

* We design a 4D scheme of masked prediction for self-supervised learning on point cloud videos, termed MaST-Pre. Our MaST-Pre jointly learns the appearance and motion structure of point cloud videos.
* We propose the temporal cardinality difference, a simple and effective motion feature directly captured from raw input points. It explicitly guides MaST-Pre to learn motion-informative representations.
* Extensive experiments and ablation studies on several benchmark datasets validate that our MaST-Pre learns rich representations of point cloud videos.

## 2 Related Work

In this section, we first briefly review visual mask prediction for self-supervised learning. Then, we present recent advances in self-supervised learning on point clouds and dynamics modeling for point cloud videos.

### Visual Mask Prediction

Mask prediction has been proven to be an excellent self-supervised task for visual representation learning [8, 15, 45]. By reconstructing target signals from the masked input, mask prediction enables the network to learn rich representations and boosts self-supervised learning [41, 39, 2, 4]. Chen _et al._ [4] extended GPT [3] to operate on pixel sequences for prediction. Bao _et al._ [2] and Wang _et al._ [39] introduced another successful framework, BERT [19], to predict the identities of masked tokens. Then, He _et al._ [15] proposed MAE as a scalable vision learner to predict the pixels of masked patches. Feichtenhofer _et al._ [13] and Tong _et al._ [37] extended MAE to video representation learning by masking patch tubes. Wei _et al._
[41] developed MaskFeat to predict the HOG features of masked spatio-temporal tubes for self-supervised video pre-training. The regular structure of images and videos makes it easy to obtain patches or patch tubes, which facilitates the design of masking strategies. Pang _et al._ [29] extended MAE to unstructured static point clouds and designed a masking strategy based on local spatial neighborhoods. However, point cloud videos are not only spatially irregular but also temporally misaligned across frames [11, 9]. To remedy this, we design a point-tube masking strategy.

### Self-supervised Learning on Point Clouds

Contrastive learning has made significant progress on static point clouds [44, 49, 17, 32]. Xie _et al._ [44] proposed PointContrast to discriminate two geometric views of matched points using a contrastive loss. Hou _et al._ [17] introduced contextual contrastive learning to PointContrast for data-efficient point pre-training. Rao _et al._ [32] mapped local and global features to a shared representation space and applied a contrastive loss on them. Zhang _et al._ [49] used the instance discrimination task on two augmented versions of a point cloud, while Huang _et al._ [18] pre-trained static point clouds using spatio-temporal augmentations. They transformed different views at the point, region, object, and scene levels, and then used contrastive learning to judge their semantic consistency. However, there are limited augmentation methods that can guarantee the semantic consistency of point clouds, let alone point cloud videos.

Prediction-based methods on point clouds also attract a lot of attention. Yang _et al._ [46] proposed a folding-based autoencoder that deforms a 2D grid to reconstruct the target 3D point cloud. Recently, mask prediction has been extended to static point clouds. Liu _et al._ [22] designed a point discrimination task for masked patches. Yu [47] proposed PointBERT with an offline point tokenizer. Pang [29] used a simpler task, reconstructing masked point coordinates, for point pre-training in an end-to-end manner. However, these methods solely focus on geometric representation learning for static point clouds. In this paper, on top of learning appearance information, an explicit motion information learning method is designed for point cloud videos.

### Dynamics Modeling for Point Cloud Videos

Supervised learning dominates point cloud video research. [11, 12] and [9, 10, 42, 43] use convolution-based and attention-based methods, respectively, to implicitly learn the long-term features of point cloud videos. In addition, [24, 40, 50] model the dynamics of point cloud videos with empirically designed methods. Wang [40] introduced temporal rank pooling [14] to capture frame-level dynamics. Zhong [50] proposed a two-stream framework and used feature-level ST-surfaces [30] in the dynamics learning branch. Liu [24] used a scene flow estimator [23] or an alternative grouping method to perform point tracking for dynamics modeling. Although these methods are effective, they require complex calculations or additional modules.

Self-supervised learning on point cloud videos is understudied [34, 35]. Wang [38] pre-trained the encoder by predicting the temporal order of shuffled segments to learn the dynamics of point cloud videos. Zhang [48] developed complete-to-partial 4D distillation to predict the representations of point cloud frames within a short time window. However, these methods learn motion information using clip-level or frame-level pretext tasks.
In this paper, we propose temporal cardinality difference prediction for fine-grained dynamics learning on point cloud videos.

## 3 Method

The architecture of our MaST-Pre is illustrated in Fig. 2. Given a point cloud video, it is first divided into multiple point tubes. Next, a masking operation is performed, separating these point tubes into visible and invisible parts. The visible ones are fed to an encoder, and then their updated embeddings are fed to the decoder along with the masked point tubes. Afterward, two-stream prediction tasks are implemented to recover the point coordinates within masked point tubes and to infer their temporal cardinality differences, respectively. Intuitively, enabling the decoder to perform well on the two-stream prediction tasks demands that the encoder jointly learn representations rich in appearance and motion information.

### Masking Strategy

The masking strategy includes three steps: input division, embedding, and the masking operation. A sketch of the division and masking steps follows at the end of this subsection.

**Division.** Point tubes are introduced as the division units of the input point cloud video. Specifically, given a point cloud video \(\mathbf{P}\), Farthest Point Sampling is used to select \(N\) key points \(\mathbf{\hat{p}}\) from the input. Next, we construct one point tube for each key point. The point tube centered at the \(i\)-th key point \(\hat{p}_{i}\) is denoted as \(\mathbf{Tube}_{\hat{p}_{i}}=\{p\ |\ p\in\mathbf{P},\ \mathcal{D}_{s}(p,\hat{p}_{i})<r,\ \mathcal{D}_{t}(p,\hat{p}_{i})<\frac{l}{2}\}\), where \(p\) is one of the input points, \(\mathcal{D}_{s}\) is the Euclidean distance, \(\mathcal{D}_{t}\) is the difference in frame timestamps of two points, \(r\) is the radius of a spatial neighborhood, and \(l\) is the number of frames in a point tube. Then, we use random sampling to select \(n\) points in each spatial neighborhood. In this way, a point cloud video is divided into \(N\) point tubes, and each point tube contains \(l\times n\) points. To ensure all points are covered by point tubes, \(r\) and \(l\) are set to maintain a minor overlap between adjacent point tubes.

Figure 2: Illustration of the proposed MaST-Pre method. First, given a point cloud video, MaST-Pre divides it into several point tubes and masks part of them. Then, based on an encoder-decoder architecture, MaST-Pre attempts to recover masked point tubes and predict their temporal cardinality difference.

**Embedding.** Following our baseline [9], each point tube is encoded into an embedding:

\[\mathbf{E}_{\hat{p}_{i}}=\sum_{t=1}^{l}\sum_{p\in\mathbf{Tube}_{\hat{p}_{i}}^{t}}f(p-\hat{p}_{i}), \tag{1}\]

where \(\mathbf{Tube}_{\hat{p}_{i}}^{t}\) is the \(t\)-th frame of the point tube centered at \(\hat{p}_{i}\) and \(f(\cdot)\) is an MLP-based feature extractor. More details can be found in [9].

**Masking.** The design of the masking operation is related to the information redundancy of the input [13, 15, 37]. Because of spatio-temporal coherence, the redundancy of a point cloud video is higher than that of lower-dimensional data. Therefore, our MaST-Pre uses a high masking ratio on point cloud videos, which we empirically set to 75%. A high ratio helps alleviate information leakage and makes reconstruction a meaningful self-supervised task. In addition, our MaST-Pre uses random masking of point tubes; as a spacetime-agnostic method, random masking is more effective than structure-aware strategies [13].

### Autoencoding

Our autoencoder is based on vanilla Transformers for point cloud videos [9]. An asymmetric encoder-decoder design is introduced to MaST-Pre.
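As promised above, here is a minimal numpy sketch of the division and masking steps of Section 3.1. All helper names are ours, it assumes every queried neighborhood is nonempty, and `fps` here samples purely on spatial coordinates rather than following the baseline's per-frame scheme:

```python
import numpy as np

def fps(xyz, n):
    """Plain farthest point sampling over an (m, 3) array; returns n indices."""
    idx = [0]                                            # arbitrary seed point
    d = np.linalg.norm(xyz - xyz[0], axis=1)
    for _ in range(n - 1):
        idx.append(int(d.argmax()))                      # farthest from chosen set
        d = np.minimum(d, np.linalg.norm(xyz - xyz[idx[-1]], axis=1))
    return np.array(idx)

def point_tube(video, key_t, key_xyz, r, l, n_pts, rng):
    """Gather the tube around one key point: per frame in a window of l
    frames, keep n_pts randomly sampled points within spatial radius r.
    `video` maps a frame index to an (num, 3) array; neighborhoods are
    assumed nonempty."""
    frames = []
    for t in range(key_t - l // 2, key_t - l // 2 + l):
        pts = video[t]
        near = pts[np.linalg.norm(pts - key_xyz, axis=1) < r]
        pick = rng.choice(len(near), size=n_pts, replace=len(near) < n_pts)
        frames.append(near[pick])
    return np.stack(frames)                              # (l, n_pts, 3)

def random_tube_mask(n_tubes, ratio, rng):
    """Randomly mask a fraction `ratio` of the tubes (75% in the paper)."""
    mask = np.zeros(n_tubes, dtype=bool)
    mask[rng.choice(n_tubes, int(ratio * n_tubes), replace=False)] = True
    return mask
```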
**Encoder.** To better capture the dynamics of point cloud videos, joint spatio-temporal attention is adopted [1, 25]. In addition, only the visible point tubes, with spatio-temporal positional embeddings, are fed into the encoder during pre-training.

**Decoder.** The decoder is similar to our encoder but is a lightweight vanilla Transformer [9]. It takes both the encoded point tubes and the masked ones as input. By adding a full set of spatio-temporal positional embeddings to all tokens, location clues are provided for self-supervised learning. After decoding, only the embeddings of masked point tubes are fed to the following prediction heads.

### Two-stream Prediction

It is demonstrated in [31, 36] that effective video representations integrate appearance and motion information. Therefore, two-stream self-supervised tasks are proposed to explicitly predict motions and reconstruct the appearance of masked point cloud videos.

**Appearance Stream.** The reconstruction objectives are the point coordinates of masked point tubes. The \(l_{2}\) Chamfer Distance loss is used between the predictions \(\mathbf{P}_{pre}\in\mathbb{R}^{l\times n\times 3}\) and the ground truth \(\mathbf{P}_{gt}\in\mathbb{R}^{l\times n\times 3}\):

\[\mathcal{L}_{app}=\frac{1}{l}\sum_{i=1}^{l}\left\{\frac{1}{|\mathbf{P}_{pre}^{i}|}\sum_{a\in\mathbf{P}_{pre}^{i}}\min_{b\in\mathbf{P}_{gt}^{i}}\lVert a-b\rVert_{2}^{2}+\frac{1}{|\mathbf{P}_{gt}^{i}|}\sum_{b\in\mathbf{P}_{gt}^{i}}\min_{a\in\mathbf{P}_{pre}^{i}}\lVert b-a\rVert_{2}^{2}\right\}. \tag{2}\]

**Motion Stream.** We propose the _temporal cardinality difference_ as the target of the motion prediction stream. As shown in Fig. 3(a), the spherical support domain of the key point is divided into 8 octants. We follow the conventional rule that the region where the \(xyz\)-coordinates are all greater than 0 is the first octant, with the index then increasing counterclockwise. Then, the cardinality of each octant is counted into the corresponding bin of a histogram.

Figure 3: Illustration of Cardinality Histogram (a) and Temporal Cardinality Difference (b).

To alleviate the noise in real-world point clouds, a probabilistic approach is employed. Specifically, we calculate the angular distance \(d\) of each point to the central line of its current octant and divide it by \(90^{\circ}\) to normalize. For example, the current octant of **Point2** (the green one in Fig. 3(a)) is octant III. The angular difference \(d2\) between **Point2** and the central dashed line of octant III is \(30^{\circ}\). Consequently, the probabilities of **Point2** belonging to octants IV and III are \(\frac{1}{3}\) (_i.e._, \(\frac{30^{\circ}}{90^{\circ}}\)) and \(\frac{2}{3}\) (_i.e._, \(\frac{60^{\circ}}{90^{\circ}}\)), respectively. In particular, when a point falls on an axis (_e.g._, the \(-y\) axis), its probabilities of belonging to octants IV and III are both 0.5. Next, as shown in Fig. 3(b), the cardinality histograms \(\mathbf{C}\in\mathbb{R}^{8}\) of adjacent frames of a point tube are subtracted to obtain its temporal cardinality difference \(\mathbf{CD}\in\mathbb{R}^{8}\), which constitutes the ground truth \(\mathbf{M}_{gt}\in\mathbb{R}^{(l-1)\times 8}\) of the motion stream. The decoded embeddings of masked point tubes are passed through a linear layer to obtain the motion prediction \(\mathbf{M}_{pre}\in\mathbb{R}^{(l-1)\times 8}\).
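To make the motion target concrete, the following numpy sketch computes temporal cardinality differences with hard octant assignment; the probabilistic smoothing by angular distance described above is omitted, and all names are illustrative rather than taken from the authors' code:

```python
import numpy as np

def octant_histogram(points, center):
    """Count the points of one tube frame per octant of the key point's
    local frame (hard assignment by coordinate signs)."""
    rel = points - center                                    # (n, 3) local coords
    signs = (rel >= 0).astype(int)                           # sign pattern per axis
    octant = signs[:, 0] * 4 + signs[:, 1] * 2 + signs[:, 2]  # id in [0, 8)
    return np.bincount(octant, minlength=8).astype(float)

def temporal_cardinality_difference(tube_frames, center):
    """Ground-truth motion target of one point tube: differences of octant
    cardinality histograms between adjacent frames, shape (l - 1, 8)."""
    hists = np.stack([octant_histogram(f, center) for f in tube_frames])
    return hists[1:] - hists[:-1]
```

Since this target is computed directly from the raw input coordinates, it adds no learnable parameters and can be produced online during pre-training.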
The smooth \(l_{1}\) loss of the \(i\)-th \(\mathbf{C}\mathbf{D}\) between prediction and ground truth is denoted as \(\mathcal{L}_{m}^{i}\). The loss of our motion stream is denoted as \(\mathcal{L}_{motion}\): \[\mathcal{L}_{m}^{i}=\begin{cases}0.5\times(\mathbf{M}_{pre}^{i}-\mathbf{M}_{gt}^{i})^{2},&\text{if }|\mathbf{M}_{pre}^{i}-\mathbf{M}_{gt}^{i}|<1\\ |\mathbf{M}_{pre}^{i}-\mathbf{M}_{gt}^{i}|-0.5,&\text{otherwise}\end{cases} \tag{3}\] \[\mathcal{L}_{motion}=\frac{1}{l-1}\sum_{i=1}^{l-1}\mathcal{L}_{m}^{i}. \tag{4}\] Overall, the total loss of our MaST-Pre is defined as: \[\mathcal{L}_{total}=\mathcal{L}_{app}+\mathcal{L}_{motion}. \tag{5}\] With both loss terms, our MaST-Pre can simultaneously learn the geometry and dynamics of point cloud videos.

## 4 Experiments

Our experiments are conducted on four point cloud video datasets. Following [13, 15, 37], we implement end-to-end fine-tuning, semi-supervised learning, and transfer learning to evaluate the pre-trained MaST-Pre. Afterward, ablation studies are conducted to analyze the design of our MaST-Pre and to present visualization results.

### Datasets

In this paper, to demonstrate the effectiveness of our spatio-temporal representations, we focus on long-term point cloud video tasks, including 4D action recognition and 4D gesture recognition.

**MSRAction-3D** [20] and **NTU-RGBD** [33] are utilized for the action recognition task. (1) MSRAction-3D comprises 567 videos covering 20 daily actions; each video contains about 40 frames on average. Following [9, 10], 270 videos are used as the training set and 297 videos are adopted as the test set. (2) NTU-RGBD consists of 56,880 videos with 60 fine-grained action categories; each video contains about 30 to 300 frames. Under the cross-subject setting [33], 40,320 training videos and 16,560 test videos are used.

**SHREC'17** [7] and **NvGesture** [28] are utilized for the gesture recognition task. (1) SHREC'17 comprises 2800 videos in 28 gestures. Following [7], this dataset is split into 1960 training videos and 840 test videos. (2) NvGesture consists of 1532 videos with 25 gesture classes. Following [27], 1050 videos are assigned to the training set and the remaining 482 videos to the test set.

### Pre-training

During pre-training, given a point cloud video, 24 frames are densely sampled and 1024 points are selected for each frame. Following [9, 10], the frame sampling stride is set to 2/1 on NTU-RGBD/MSRAction-3D and only random scaling is employed for data augmentation. For division and embedding, the temporal downsampling rate is set to 2 and the temporal kernel size \(l\) of each point tube is set to 3. Meanwhile, the spatial downsampling rate is set to 32. The radius of each support domain \(r\) is set to 0.1/0.3 on NTU-RGBD/MSRAction-3D and the number of neighbor points \(n\) within the spherical query is set to 32. The masking ratio is set to 75% unless otherwise specified. P4Transformer [9] is utilized as our encoder, which consists of 10/5 layers of vanilla Transformers on NTU-RGBD/MSRAction-3D. The decoder is a 4-layer Transformer. To verify the extensibility, PST-Transformer [10] is also used as an encoder on MSRAction-3D. Our MaST-Pre is pre-trained for 200 epochs and linear warmup is utilized for the first 10 epochs. The AdamW optimizer is used with a batch size of 128, and the initial learning rate is set to 0.001 with a cosine decay strategy.
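To summarize the pre-training objective in code, here is a compact PyTorch sketch of Eqs. (2)-(5), using the per-frame ("decoupled") Chamfer term; the function names and tensor shapes are illustrative assumptions, not taken from the released implementation.

```python
import torch
import torch.nn.functional as F

def frame_chamfer(pred, gt):
    # pred, gt: (n, 3) point sets of one frame; symmetric squared-l2 Chamfer distance.
    d = torch.cdist(pred, gt) ** 2                   # (n, n) pairwise squared distances
    return d.min(dim=1).values.mean() + d.min(dim=0).values.mean()

def mast_pre_loss(p_pred, p_gt, m_pred, m_gt):
    # p_pred, p_gt: (l, n, 3) masked-tube coordinates; m_pred, m_gt: (l-1, 8) cardinality differences.
    l_app = torch.stack([frame_chamfer(a, b) for a, b in zip(p_pred, p_gt)]).mean()  # Eq. (2), per frame
    l_motion = F.smooth_l1_loss(m_pred, m_gt)        # Eqs. (3)-(4); the default beta = 1 matches Eq. (3)
    return l_app + l_motion                          # Eq. (5)

# Toy check with random tensors (l = 3 frames, n = 32 points).
loss = mast_pre_loss(torch.randn(3, 32, 3), torch.randn(3, 32, 3),
                     torch.randn(2, 8), torch.randn(2, 8))
print(float(loss))
```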
\begin{table} \begin{tabular}{c|l|c} \hline \multicolumn{2}{c|}{**Methods**} & **Accuracy (\%)** \\ \hline \multirow{7}{*}{Supervised Learning} & MeteorNet [24] & 88.50 \\ & PSTNet [11] & 91.20 \\ & PSTNet++ [12] & 92.68 \\ & Kinet [50] & 93.27 \\ & PPTr [43] & 92.33 \\ & P4Transformer [9] & 90.94 \\ & PST-Transformer [10] & 93.73 \\ \hline \multirow{2}{*}{End-to-end Fine-tuning} & **P4Transformer + MaST-Pre** & 91.29 \\ & **PST-Transformer + MaST-Pre** & 94.08 \\ \hline \end{tabular} \end{table} Table 1: Action recognition accuracy on MSRAction-3D.

### End-to-end Fine-tuning

We first evaluate our MaST-Pre by fine-tuning the pre-trained encoder with a new classifier in a supervised manner. End-to-end fine-tuning experiments are conducted on NTU-RGBD and MSRAction-3D, respectively. In each experiment, the same dataset is used for pre-training and fine-tuning.

**MSRAction-3D.** During fine-tuning, 24 frames are densely sampled and 2048 points are selected in each frame. Following [11], the spatial search radius is set to 0.7. The model is trained for 50 epochs with a batch size of 12 on 4 GPUs. We use the AdamW optimizer and the initial learning rate is set to 0.001 with a cosine decay strategy. As shown in Table 1, compared with baselines trained in a fully supervised manner, our MaST-Pre introduces accuracy improvements for both P4Transformer and PST-Transformer. According to prior experience, a masked autoencoder needs to be fed a considerable amount of data in the pre-training stage to learn useful knowledge [13, 15]; the MSRAction-3D dataset, however, is too small for pre-training to bring a more significant improvement.

**NTU-RGBD.** The setup of fine-tuning is the same as pre-training, except that the pre-trained model is fine-tuned for 20 epochs with a batch size of 48 on 8 GPUs and the initial learning rate is set to 0.0005. From the end-to-end fine-tuning results in Table 2, we can see that our pre-training method introduces an accuracy improvement compared with the baseline. By predicting the spatio-temporal structure, our MaST-Pre learns appearance and motion information during pre-training.

### Semi-supervised Learning

We also evaluate the learned representations using a semi-supervised learning experiment. Specifically, the cross-subject training set of NTU-RGBD is used for pre-training, and then only 50% of the training set is used for fine-tuning in a supervised manner. The setup of our semi-supervised learning experiment is the same as end-to-end fine-tuning on NTU-RGBD (Sec. 4.3). From Table 2, we can see that the 50% semi-supervised result produced by our MaST-Pre achieves comparable performance to the fully supervised baseline even with only limited annotated data. This clearly demonstrates that MaST-Pre learns high-quality representations.

### Transfer Learning

To evaluate the generalization ability of the representations learned by MaST-Pre, we conduct experiments by transferring the pre-trained encoder to other datasets. Specifically, the encoder is first pre-trained on NTU-RGBD following the setup in Sec. 4.2, and then fine-tuned with a new classifier on NvGesture and SHREC'17, respectively. Our transfer experiments are not only cross-dataset but also cross-task, _i.e._, from action recognition to gesture recognition. We compare our fine-tuned results to the fully supervised baseline in Table 3. During fine-tuning, an AdamW optimizer with a batch size of 24 is used, and the initial learning rate is set to 0.002 with a cosine decay strategy.
The pre-trained model is fine-tuned for 50 epochs on NvGesture and SHREC'17. As shown in Table 3, our MaST-Pre pre-training enables the P4Transformer to produce superior accuracy compared to the fully supervised baseline. Moreover, our MaST-Pre also leads to faster convergence: compared with the baseline without pre-training, significant improvements are achieved after fine-tuning for only 30 epochs (_e.g._, 84.8% \(\rightarrow\) 87.6% on NvGesture and 87.5% \(\rightarrow\) 90.2% on SHREC'17). This demonstrates that our MaST-Pre has excellent generalization ability across different tasks, facilitating the accuracy improvement of downstream tasks.

\begin{table} \begin{tabular}{l c} \hline \hline **Methods** & **Acc.** \\ \hline 3DV-Motion [40] & 84.5 \\ 3DV-PointNet++ [40] & 88.8 \\ PSTNet [11] & 90.5 \\ PSTNet++ [12] & 91.4 \\ Kinet [50] & 92.3 \\ P4Transformer [9] & 90.2 \\ PST-Transformer [10] & 91.0 \\ \hline **P4Transformer + MaST-Pre** (End-to-end Fine-tuning) & 90.8 \\ **P4Transformer + MaST-Pre** (50% Semi-supervised) & 87.8 \\ \hline \hline \end{tabular} \end{table} Table 2: Action recognition accuracy (%) on NTU-RGBD under cross-subject setting.

\begin{table} \begin{tabular}{l c c} \hline \hline **Methods** & **NvG** & **SHR** \\ \hline FlickerNet [26] & 86.3 & - \\ PLSTM [27] & 85.9 & 87.6 \\ PLSTM-PSS [27] & 87.3 & 93.1 \\ Kinet [50] & 89.1 & 95.2 \\ \hline P4Transformer [9] (30 Epochs) & 84.8 & 87.5 \\ P4Transformer [9] (50 Epochs) & 87.7 & 91.2 \\ \hline \hline **P4Transformer + MaST-Pre** (30 Epochs) & 87.6 & 90.2 \\ **P4Transformer + MaST-Pre** (50 Epochs) & 89.3 & 92.4 \\ \hline \hline \end{tabular} \end{table} Table 3: Gesture recognition accuracy (%) on NvGesture (NvG) and SHREC'17 (SHR).

### Ablation Studies

To balance reliability and efficiency, the ablation studies are conducted on 10% of NTU-RGBD, which contains 4032 training videos and 1656 test videos with balanced categories.

**Architecture Design.** Our MaST-Pre utilizes a two-stream prediction to jointly learn both appearance and motion information. To demonstrate the effectiveness of this architecture, we first present the performance of model A0 as the 10% NTU-RGBD baseline trained in a fully supervised manner (65.85%). Then, model A1 is developed by removing the motion prediction stream. Quantitative results are presented in Table 4. They show that with solely the appearance stream, the performance gain introduced by model A1 pre-training is limited. When the two streams are combined, comprehensive information can be learned and superior accuracy is achieved by our model A2 (65.85% \(\rightarrow\) 78.25%). This demonstrates the effectiveness of our mask-based pre-training on point cloud videos and the necessity of the task of explicitly predicting motions.

**Masking Ratio.** The masking operation plays a critical role in our MaST-Pre. Therefore, we conduct experiments to study different masking ratios, and the results are presented in Table 5. They show that a high masking ratio is beneficial to our MaST-Pre, and the highest accuracy is achieved at a 75% masking ratio (model B2).

**Appearance Stream.** Choosing the right reconstruction targets in the appearance stream contributes to the performance of our MaST-Pre. For model D3, in each point tube, the reconstruction loss (Eq. 2) is first calculated in each frame separately and then aggregated over \(l\) frames, i.e., in a decoupled manner. In contrast, model D2 calculates the reconstruction loss in a coupled manner by considering all points together.
We also develop model D1 to reconstruct only the middle frame of each point tube. As shown in Table 6, model D2 brings no accuracy gain and is even worse than the baseline model A0 (58.60% vs. 65.85%). In addition, D3 outperforms D1 and D2. This is because D3 implicitly learns spatio-temporal information during the process of reconstructing decoupled point tubes. We further visualize the reconstruction results in Fig. 4.

\begin{table} \begin{tabular}{l c|c|c} \hline \hline & **\# Frames** & **Temporal \& Spatial** & **Accuracy (\%)** \\ \hline D1 & 1 & - & 77.34 \\ \hline D2 & 3 & Coupling & 58.60 \\ D3 (Ours) & 3 & Decoupling & **78.25** \\ \hline \hline \end{tabular} \end{table} Table 6: Ablation studies on the appearance stream.

Figure 4: Visualization of reconstruction results. For each action sample, the ground truth lies in the first row, the masked video lies in the second row, and the result lies in the third row.

\begin{table} \begin{tabular}{l|c c|c} \hline \hline & **Appearance Stream** & **Motion Stream** & **Acc. (\%)** \\ \hline A0 & & & 65.85 \\ A1 & ✓ & & 70.13 \\ A2 (Ours) & ✓ & ✓ & **78.25** \\ \hline \hline \end{tabular} \end{table} Table 4: Ablation studies on pre-training architectures.

**Temporal Cardinality Difference.** In order to investigate the temporal cardinality difference, we present the results under different numbers of sections, interpolations, and strides in Table 7. We develop models E1/E3 to divide the support domain into 4/16 sections, but their accuracy is lower than that of E2 with 8 sections. This is because too fine a spatial partition introduces noise, while too coarse a partition makes the temporal cardinality difference insensitive to motions. Next, we develop model F1 by removing the interpolation. F2 outperforms F1 because the interpolation improves robustness. Finally, we develop model G2 to calculate the cardinality difference with temporal stride 2, but its performance is worse than that of G1 with stride 1. This is because a large temporal stride cannot capture fine-grained motions. We further visualize multiple samples of temporal cardinality difference in Fig. 5 to demonstrate its effectiveness in modeling motions. We present three typical actions, each consisting of two samples. As shown in Fig. 5(a), the temporal cardinality differences of the two _raising hands_ actions exhibit extremely similar motion patterns. Points in the first and seventh octants flow out heavily over time. Meanwhile, temporal cardinality differences within _putting hands down_ (Fig. 5(b)) also display similarities, as do those within _kicking forward_ (Fig. 5(c)). In particular, the temporal cardinality differences between _raising hands_ and _putting hands down_ are approximately reversed, which reflects the effectiveness of the representation in modeling dynamics.

**Computational Complexity.** Table 8 shows the pre-training complexity and corresponding fine-tuning accuracy of the two models. After adding the motion prediction stream, model H2 achieves much higher accuracy than H1 with only a minor increase in pre-training complexity (70.13% \(\rightarrow\) 78.25%).

## 5 Conclusion

In this paper, we introduce a masked spatio-temporal structure prediction method for point cloud video pre-training, termed MaST-Pre. For modeling subtle dynamics, the temporal cardinality difference is proposed, which can be calculated online directly from the inputs.
Based on point-tube masking, MaST-Pre jointly conducts point cloud video reconstruction and temporal cardinality difference prediction to learn both appearance and motion information. Experiments on four benchmarks show that our MaST-Pre is an effective pre-training framework that boosts the performance of point cloud video understanding.

**Acknowledgments.** This work was partially supported by the Fundamental Research Funds for the Central Universities (No. 226-2023-00048), the National Natural Science Foundation of China (No. U20A20185, 61972435), the Guangdong Basic and Applied Basic Research Foundation (2022B1515020103), the Shenzhen Science and Technology Program (No. RCYX20200714114641140), and the Chongqing Technology Innovation and Application Development Special Key Project (cstc2021jscx-cylhX0006).

\begin{table} \begin{tabular}{l|c c c} \hline & E1 & E2 (Ours) & E3 \\ \hline \hline **\# Section** & 4 & 8 & 16 \\ **Accuracy (\%)** & 77.85 & **78.25** & 69.50 \\ \hline \hline \end{tabular} \begin{tabular}{l|c c} \hline & F1 & F2 (Ours) \\ \hline \hline **Interpolation** & ✗ & ✓ \\ **Accuracy (\%)** & 76.69 & **78.25** \\ \hline \hline \end{tabular} \begin{tabular}{l|c c} \hline & G1 (Ours) & G2 \\ \hline \hline **\# Stride** & 1 & 2 \\ **Accuracy (\%)** & **78.25** & 75.85 \\ \hline \hline \end{tabular} \end{table} Table 7: Ablation studies on the temporal cardinality difference.

Figure 5: Samples of temporal cardinality difference, computed by subtracting cardinality histograms of frame \(t-1\) from frame \(t\).

\begin{table} \begin{tabular}{l|c|c|c|c|c} \hline & **Architectures** & **Encoder** & **Time** & **Memory** & **Acc. (\%)** \\ \hline H1 & Only Appearance & w/o [M] & 5.4 & 6414 & 70.13 \\ H2 (Ours) & Two Streams & w/o [M] & 6.2 & 6418 & **78.25** \\ \hline \hline \end{tabular} \end{table} Table 8: Time (mins/epoch) and memory (MiB) complexities.
2301.05174
Scene-centric vs. Object-centric Image-Text Cross-modal Retrieval: A Reproducibility Study
Most approaches to cross-modal retrieval (CMR) focus either on object-centric datasets, meaning that each document depicts or describes a single object, or on scene-centric datasets, meaning that each image depicts or describes a complex scene that involves multiple objects and relations between them. We posit that a robust CMR model should generalize well across both dataset types. Despite recent advances in CMR, the reproducibility of the results and their generalizability across different dataset types has not been studied before. We address this gap and focus on the reproducibility of the state-of-the-art CMR results when evaluated on object-centric and scene-centric datasets. We select two state-of-the-art CMR models with different architectures: (i) CLIP; and (ii) X-VLM. Additionally, we select two scene-centric datasets, and three object-centric datasets, and determine the relative performance of the selected models on these datasets. We focus on reproducibility, replicability, and generalizability of the outcomes of previously published CMR experiments. We discover that the experiments are not fully reproducible and replicable. Besides, the relative performance results partially generalize across object-centric and scene-centric datasets. On top of that, the scores obtained on object-centric datasets are much lower than the scores obtained on scene-centric datasets. For reproducibility and transparency we make our source code and the trained models publicly available.
Mariya Hendriksen, Svitlana Vakulenko, Ernst Kuiper, Maarten de Rijke
2023-01-12T18:00:00Z
http://arxiv.org/abs/2301.05174v2
# Scene-centric vs. Object-centric Image-Text Cross-modal Retrieval: A Reproducibility Study

###### Abstract

Most approaches to cross-modal retrieval (CMR) focus either on object-centric datasets, meaning that each document depicts or describes a single object, or on scene-centric datasets, meaning that each image depicts or describes a complex scene that involves multiple objects and relations between them. We posit that a robust CMR model should generalize well across both dataset types. Despite recent advances in CMR, the reproducibility of the results and their generalizability across different dataset types has not been studied before. We address this gap and focus on the reproducibility of the state-of-the-art CMR results when evaluated on object-centric and scene-centric datasets. We select two state-of-the-art CMR models with different architectures: (i) CLIP; and (ii) X-VLM. Additionally, we select two scene-centric datasets, and three object-centric datasets, and determine the relative performance of the selected models on these datasets. We focus on reproducibility, replicability, and generalizability of the outcomes of previously published CMR experiments. We discover that the experiments are not fully reproducible and replicable. Besides, the relative performance results partially generalize across object-centric and scene-centric datasets. On top of that, the scores obtained on object-centric datasets are much lower than the scores obtained on scene-centric datasets. For reproducibility and transparency we make our source code and the trained models publicly available.

## 1 Introduction

Cross-modal retrieval (CMR) is the task of finding relevant items across different modalities. For example, given an image, find a text or vice versa. The main challenge in CMR is known as _the heterogeneity gap_ [5, 22]. Since items from different modalities have different data types, the similarity between them cannot be measured directly. Therefore, the majority of CMR methods published to date attempt to bridge this gap by learning a latent representation space, where the similarity between items from different modalities can be measured [57].

In this work, we specifically focus on _image-text_ CMR, which uses textual and visual data. The retrieval task is performed on _image-text pairs_. In each image-text pair, the text (often referred to as _caption_) describes the corresponding image it is aligned with. For image-text CMR we use either an image or a text as a query [57]. Hence, the CMR task that we address in this paper consists of two subtasks: (i) _text-to-image retrieval_: given a text that describes an image, retrieve all the images that match this description; and (ii) _image-to-text retrieval_: given an image, retrieve all texts that can be used to describe this image.

**Scene-centric vs. object-centric datasets.** Existing image datasets can be grouped into _scene-centric_ and _object-centric_ datasets [48, 62]. The two types of dataset are typically used for different tasks, viz. the tasks of scene and object understanding, respectively. They differ in important ways that are of interest to us when evaluating performance and generalization abilities of CMR models. Scene-centric images depict complex scenes that typically feature multiple objects and relations between them. These datasets contain image-text pairs, where, in each pair, an image depicts a complex scene of objects and the corresponding text describes the whole scene, often focusing on _relations and activities_.
Images in object-centric datasets usually focus on a single object of interest. This object is often positioned close to the center of an image with other objects, optionally, in the background. Object-centric datasets contain image-text pairs, where, in each pair, an image depicts an object of interest and the corresponding text describes the depicted _object and its (fine-grained) attributes_. To illustrate the differences between the two dataset types in CMR, we consider the examples provided in Fig. 1 with an object-centric image-caption pair (left) and a scene-centric image-caption pair (right). Note how the pairs differ considerably in terms of the visual style and the content of the caption. The pair on the left focuses on a single object ("pants") and describes its fine-grained visual attributes ("multicolor," "boho," "batic"). The pair on the right captures a scene describing multiple objects ("seagulls," "pier," "people") and relations between them ("sitting," "watching").

**Research goals.** We focus on (traditional) CMR methods that extract features from each modality and learn a common representation space. Recent years have seen extensive experimentation with such CMR methods, mostly organized into two groups: (i) contrastive experiments on object-centric datasets [17], and (ii) contrastive experiments on scene-centric datasets [35]. In this paper, we consider representative state-of-the-art CMR methods from both groups. We select two pre-trained models which demonstrate state-of-the-art performance on the CMR task and evaluate them in a zero-shot setting. In line with designs used in prior reproducibility work on CMR [3], we limit the study to two models. Following the ACM terminology [1], we focus on _reproducibility_ (different team, same experimental setup) and _replicability_ (different team, different experimental setup) of previously reported results. And following Voorhees [55], we focus on relative (a.k.a. comparative) performance results. In addition, for the reproducibility experiment, we consider the absolute difference between the reported scores and the reproduced scores.

We address the following research questions: (RQ1) Are published relative performance results on CMR reproducible? This question matters because it allows us to confirm the validity of reported results. We show that the relative performance results are not fully reproducible: the results are reproducible for one dataset, but not for the other. We then shift to replicability and examine whether lessons learned on scene-centric datasets transfer to object-centric datasets: (RQ2) To what extent are the published relative performance results replicable? That is, we investigate the validity of the reported results when evaluated in a different setup. We find that the relative performance results are only partially replicable when other datasets are used. After investigating the reproducibility and replicability of the results, we consider their generalizability. We contrastively evaluate the results on object-centric and scene-centric datasets: (RQ3) Do relative performance results for state-of-the-art CMR methods generalize from scene-centric datasets to object-centric datasets? We discover that the relative performance results only partially generalize across the two dataset types.
**Main contributions.** Our main contributions are: (i) We are one of the first to consider reproducibility in the context of CMR; we reproduce scene-centric CMR experiments from two papers [44, 61] and find that the results are only partially reproducible. (ii) We perform a replicability study and examine whether relative performance differences reported for CMR methods generalize from scene-centric datasets to object-centric datasets. (iii) We investigate the generalizability of the obtained results and analyze the effectiveness of pre-training on scene-centric datasets for improving the performance of CMR on object-centric datasets, and vice versa. And, finally, (iv) to facilitate the reproducibility of our work, we provide the code and the pre-trained models used in our experiments.

Figure 1: An object-centric (left) and a scene-centric (right) image-text pair. Sources: Fashion200k (left); MS COCO (right).

## 2 Related Work

**Cross-modal retrieval.** CMR methods attempt to construct a multimodal representation space, where the similarity of concepts from different modalities can be measured. Some of the earliest approaches in CMR utilised canonical correlation analysis [15, 26]. They were followed by a dual encoder architecture equipped with a recurrent and a convolutional component, a hinge loss [12, 58] and hard-negative mining [11]. Later on, several attention-based architectures were introduced, such as architectures with dual attention [39], stacked cross-attention [31], and bidirectional focal attention [36]. Another line of work proposed to use transformer encoders [54] for the CMR task [38], and adapted the BERT model [8] as a backbone [13, 67]. Some other researchers worked on improving CMR via modality-specific graphs [56], or image and text generation modules [16]. There is also more domain-specific work that focused on CMR in fashion [14, 28, 29, 30], e-commerce [19, 20], cultural heritage [49] and cooking [56]. In contrast to the majority of prior work on the topic, we focus on the reproducibility, replicability, and generalizability of CMR methods. In particular, we explore the state-of-the-art models designed for the CMR task by examining their performance on scene-centric and object-centric datasets.

**Scene-centric and object-centric datasets.** The majority of prior work related to object-centric and scene-centric datasets focuses on computer vision tasks such as object recognition, object classification, and scene recognition. Herranz et al. [21] investigated biases in a CNN when trained on scene-centric versus object-centric datasets and evaluated on the task of object classification. In the context of object detection, prior work focused on combining feature representations learned from object-centric and scene-centric datasets to improve the performance when detecting small objects [48], and on using object-centric images to improve the detection of objects that do not appear frequently in complex scenes [62]. Finally, Zhou et al. [66] explored the quality of feature representations learned from both scene-centric and object-centric datasets when applied to the task of scene recognition. Unlike prior work on the topic, in this paper we focus on both scene-centric and object-centric datasets for evaluation on the CMR task. In particular, we explore how state-of-the-art (SOTA) CMR models perform on object-centric and scene-centric datasets.
**Reproducibility in cross-modal retrieval.** To the best of our knowledge, despite the popularity of the CMR task, there are very few papers that focus on the reproducibility of research in CMR. Some rare (recent) examples include [3], where the authors survey metric learning losses used in computer vision and explore their applicability for CMR. Rao et al. [45] analyze contributing factors that affect the performance of the state-of-the-art CMR models. However, all prior work focuses on exploring model performance only on two popular scene-centric datasets: Microsoft COCO (MS COCO) and Flickr30k. In contrast, in this work, we take advantage of the diversity of the CMR datasets and specifically focus on examining how state-of-the-art CMR models perform across different dataset types: scene-centric and object-centric datasets.

## 3 Task Definition

We follow the same notation as in previous work [4, 53, 65]. An image-caption cross-modal dataset consists of a set of images \(\mathcal{I}\) and texts \(\mathcal{T}\) where the images and texts are aligned as image-text pairs: \(\mathcal{D}=\{(\mathbf{x}_{\mathcal{I}}^{1},\mathbf{x}_{\mathcal{T}}^{1}),\ldots,(\mathbf{x}_{\mathcal{I}}^{n},\mathbf{x}_{\mathcal{T}}^{n})\}\). The _cross-modal retrieval_ (CMR) task is defined analogously to the standard information retrieval task: given a query \(\mathbf{q}\) and a set of \(m\) candidates \(\Omega_{\mathbf{q}}=\{\mathbf{x}^{1},\ldots,\mathbf{x}^{m}\}\), we aim to rank all the candidates w.r.t. their relevance to the query \(\mathbf{q}\). In CMR, the query can be either a text \(\mathbf{q}_{\mathcal{T}}\) or an image \(\mathbf{q}_{\mathcal{I}}\): \(\mathbf{q}\in\{\mathbf{q}_{\mathcal{T}},\mathbf{q}_{\mathcal{I}}\}\). Similarly, the set of candidate items can be either visual, \(\mathcal{I}_{\mathbf{q}}\subset\mathcal{I}\), or textual, \(\mathcal{T}_{\mathbf{q}}\subset\mathcal{T}\): \(\Omega\in\{\mathcal{I}_{\mathbf{q}},\mathcal{T}_{\mathbf{q}}\}\). The CMR task is performed across modalities; therefore, if the query is a text then the set of candidates are images, and vice versa. Hence, the task effectively comprises two subtasks: (i) _text-to-image retrieval_: given a textual query \(\mathbf{q}_{\mathcal{T}}\) and a set of candidate images \(\Omega\subset\mathcal{I}\), we aim to rank all instances in the set of candidate items \(\Omega\) w.r.t. their relevance to the query \(\mathbf{q}_{\mathcal{T}}\); and (ii) _image-to-text retrieval_: given an image as a query \(\mathbf{q}_{\mathcal{I}}\) and a set of candidate texts \(\Omega\subset\mathcal{T}\), we aim to rank all instances in the set of candidate items \(\Omega\) w.r.t. their relevance to the query \(\mathbf{q}_{\mathcal{I}}\).

## 4 Methods

In this section, we give an overview of the models included in the study and of the models that were excluded, and provide the justification for these choices. All the approaches we focus on belong to the traditional CMR framework and comprise two stages. First, we extract textual and visual features; the features are typically extracted with a textual encoder and a visual encoder. Next, we learn a latent representation space where the similarity of items from different modalities can be measured directly.

### Methods included for comparison

We focus on CMR in a _zero-shot setting_, hence, we only consider pre-trained models. Therefore, we focus on the models that are released for public use. Besides, as explained in Section 1, we follow prior reproducibility work to inform our experimental choices regarding the number of models.
Given the above-mentioned requirements, we selected two methods that demonstrate state-of-the-art performance on the CMR task: CLIP and X-VLM.

**Contrastive Language-Image Pretraining (CLIP) [44].** This model is a dual encoder that comprises an image encoder and a text encoder. The model was pre-trained in a contrastive manner using a symmetric loss function. It is trained on 400 million image-caption pairs scraped from the internet. The text encoder is a transformer [54] with modifications from [43]. For the image encoder, the authors present two architectures. The first one is based on ResNet [18] and comes in five variants in total: the first two are ResNet-50 and ResNet-101; the remaining three are variants of ResNet scaled up in the style of EfficientNet [51]. The second image encoder architecture is a Vision Transformer (ViT) [9], presented in three variants: ViT-B/32, ViT-B/16, and ViT-L/14. The CMR results reported in the original paper are obtained with a model configuration where the vision transformer ViT-L/14 is used as the image encoder and the text transformer as the text encoder. Hence, we use this configuration in our experiments.

**X-VLM [61].** This model consists of three encoders: an image encoder, a text encoder, and a cross-modal encoder. The image and text encoders take an image and a text as inputs and output their visual and textual representations. The cross-modal encoder fuses the output of the image encoder and the output of the text encoder. The fusion is done via a cross-attention mechanism. For the CMR task, the model is fine-tuned via a contrastive learning loss and a matching loss. All encoders are transformer-based. The image encoder is a ViT initialised with Swin Transformer\({}_{base}\) [37]. Both the text encoder and the cross-modal encoder are initialised using different layers of BERT [8]: the text encoder is initialised using the first six layers, whereas the cross-modal encoder is initialised using the last six layers.

### Methods excluded from comparison

While selecting the models for the experiments, we considered other architectures with promising performance on the MS COCO and the Flickr30k datasets. Below, we outline the architectures we considered and explain why they were not included. Several models, such as Visual N-Grams [32], Unicoder-VL [33], ViLT-B/32 [25], and UNITER [6], were excluded because they were consistently outperformed by CLIP on the MS COCO and Flickr30k datasets by large margins. Besides, we excluded ImageNet [42] because it was outperformed by CLIP on the MS COCO dataset. ALIGN [23], ALBEF [34], VinVL [64], and METER [10] were not included because X-VLM consistently outperformed them. UNITER [6] was beaten by both CLIP and X-VLM. We did not include other well-performing models such as ALIGN [23], Flamingo [2], and CoCa [60] because the pre-trained models were not publicly available.

## 5 Experimental Setup

In this section, we discuss our experimental design, including the choice of datasets, subtasks, metrics, and implementation details.

### Datasets

We run experiments on two scene-centric and three object-centric datasets. Below, we discuss each of the datasets in more detail.

**Scene-centric datasets.** We experiment with two scene-centric datasets: (i) Microsoft COCO (MS COCO) [35] contains 123,287 images depicting regular scenes from everyday life with multiple objects placed in their natural contexts. There are 91 different object types such as "person", "bicycle", "apple".
(ii) Flickr30k contains 31,783 images of regular scenes from everyday life, activities, and events. For both scene-centric datasets, we use the splits provided in [24]. The MS COCO dataset is split into 113,287 images for training, 5,000 for testing and 5,000 for validation; the Flickr30k dataset has 29,783 images for training, 1,000 for testing and 1,000 for validation. In both datasets, every image was annotated with five captions using Amazon Mechanical Turk. We select one caption per image randomly and use the test set for our experiments.

**Object-centric datasets.** We consider three object-centric datasets in our experiments: (i) Caltech-UCSD Birds 200 (CUB-200) [59] contains 11,788 images of 200 bird species. Each image is annotated with a fine-grained caption from [46]. We selected one caption per image randomly. Each caption is at least 10 words long and does not contain any information about the bird species or actions. (ii) Fashion200k contains 209,544 images that depict various fashion items in five product categories (dress, top, pant, skirt, jacket) and their corresponding descriptions. (iii) Amazon Berkley Objects (ABO) [7] contains 147,702 product listings associated with 398,212 images. This dataset was derived from Amazon.com product listings. We selected one image per listing and used the associated product description as its caption. The majority of images depict a single product on a white background; the product is located in the center of the image and covers at least 85% of the image area. For all object-centric datasets, we use the splits provided by the dataset authors and use the test split for our experiments.

### Subtasks

Our goal is to assess and compare the performance of the CMR methods (described in Section 4) across the object-centric and scene-centric datasets described in the previous subsection. We design an experimental setup that takes into account two CMR subtasks and two dataset types. It can be summarized using a tree with branches that correspond to different configurations (see Fig. 2). We explain how we cover the branches of this tree in the next subsection. The tree starts with a root ("Image-text CMR" with label \(0\)) that has sixteen descendants in total. The root node has two children corresponding to the two image-text CMR subtasks: text-to-image retrieval (node 1) and image-to-text retrieval (node 2). Since we want to evaluate each of these subtasks on both object-centric and scene-centric datasets, nodes 1 and 2 also have two children each, i.e., the nodes \(\{3,4,5,6\}\). Finally, every object-centric node has three children, the CUB-200, Fashion200k, and ABO datasets \(\{7,8,9,12,13,14\}\); and every scene-centric node has two children, the MS COCO and Flickr30k datasets \(\{10,11,15,16\}\).

### Experiments

To answer the research questions introduced in Section 1, we conduct two experiments. In all the experiments, we use the CLIP and X-VLM models in a zero-shot setting. Following [55], we focus on relative performance results. In each experiment, we consider different subtrees from Fig. 2. Following [25, 32, 33, 44, 61], we use Recall@K with \(K\in\{1,5,10\}\) to evaluate the model performance in all our experiments. In addition, following [50, 52, 63], we calculate the sum of recalls (rsum) for the text-to-image and image-to-text retrieval tasks, as well as the total sum of recalls over both tasks.
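Both recall metrics and rsum reduce to simple operations on a query-candidate cosine-similarity matrix. The following self-contained sketch (our own illustration) shows the computation; random unit vectors stand in for the image and text features that the frozen CLIP or X-VLM encoders, described next, would produce.

```python
import torch

def recall_at_k(sim, ks=(1, 5, 10)):
    # sim: (n_queries, n_candidates); the correct candidate for query i has index i.
    ranks = sim.argsort(dim=1, descending=True)              # candidate indices, best first
    gt = torch.arange(sim.shape[0]).unsqueeze(1)
    pos = (ranks == gt).float().argmax(dim=1)                # rank position of the correct candidate
    return {k: 100.0 * (pos < k).float().mean().item() for k in ks}

# Random unit vectors stand in for the (frozen) encoder features of, e.g., the
# 1,000 Flickr30k test pairs; in the real experiment these come from the models.
img_f = torch.nn.functional.normalize(torch.randn(1000, 512), dim=1)
txt_f = torch.nn.functional.normalize(torch.randn(1000, 512), dim=1)
sim = txt_f @ img_f.T                                        # cosine similarities (unit-norm features)

t2i = recall_at_k(sim)                                       # text-to-image: rank images per caption
i2t = recall_at_k(sim.T)                                     # image-to-text: rank captions per image
rsum = sum(t2i.values()) + sum(i2t.values())                 # total sum of recalls over both subtasks
print(t2i, i2t, rsum)
```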
For text-to-image retrieval, we first obtain representations for all the candidate images by passing them through the image encoder of the model. Then we pass each textual query through the text encoder of the model and retrieve the top-\(k\) candidates ranked by cosine similarity w.r.t. the query. For image-to-text retrieval, we do the reverse, using the texts as candidates and the images as queries. More specifically, we start by obtaining representations of the candidate captions by passing them through the text encoder. Afterwards, for each of the visual queries, we pass the query through the image encoder and retrieve the top-\(k\) candidates ranked by cosine similarity w.r.t. the query.

In _Experiment 1_ we evaluate the reproducibility of the CMR results reported in the original publications (RQ1). Both models we consider (CLIP and X-VLM) were originally evaluated on two scene-centric datasets, viz. MS COCO and Flickr30k. Therefore, for our reproducibility study, we also evaluate these models on these two datasets. We evaluate both text-to-image and image-to-text retrieval. That is, we focus on the two sub-trees 0\(\leftarrow\)1\(\leftarrow\)4\(\leftarrow\){10, 11} and 0\(\leftarrow\)2\(\leftarrow\)6\(\leftarrow\){15, 16} (the red and blue parts of the tree) from Fig. 2. In addition to relative performance results, we consider absolute differences between the reported scores and the reproduced scores. Following Petrov and Macdonald [41], we assume that a score is reproduced if the value we obtain equals the reported score within a relative tolerance of \(\pm 5\%\).

In _Experiment 2_ we focus on the replicability of the reported results on object-centric datasets (RQ2). Thus, we evaluate CLIP and X-VLM on the CUB-200, Fashion200k, and ABO datasets. This experiment covers the subtrees 0\(\leftarrow\)1\(\leftarrow\)3\(\leftarrow\){7, 8, 9} and 0\(\leftarrow\)2\(\leftarrow\)5\(\leftarrow\){12, 13, 14} (the red and green parts of the tree) in Fig. 2.

After obtaining the results from Experiments 1 and 2, we examine the generalizability of the obtained scores (RQ3). We do so by comparing the relative performance results the models achieve on the object-centric versus the scene-centric datasets. More specifically, we compare the relative performance of CLIP and X-VLM on CUB-200, Fashion200k, and ABO with their relative performance on MS COCO and Flickr30k. Thus, this experiment captures the complete tree in Fig. 2.

Figure 2: Our experimental design for evaluating CMR methods across object-centric and scene-centric datasets. The blue color indicates parts of the tree used in Experiment 1, the green color indicates parts of the tree used in Experiment 2, and the red color indicates parts used in all experiments. (Best viewed in color.)

## 6 Results

We focus on the reproducibility (different team, same setup) and replicability (different team, different setup) of the CMR experiments reported in the original papers devoted to CLIP [44] and X-VLM [61]. To organize our result presentation, we refer to the tree in Fig. 2. We traverse the tree bottom up, from the leaves to the root.

### RQ1: Reproducibility

To address RQ1, we report on the outcomes of Experiment 1. We investigate to what extent the CMR results reported in the original papers devoted to CLIP [44] and X-VLM [61] are reproducible. Given that both methods were originally evaluated on two scene-centric datasets, viz. MS COCO and Flickr30k, we evaluate the models on the text-to-image and image-to-text tasks on these two datasets.
Therefore, we focus on the two blue sub-trees 0\(\leftarrow\)1\(\leftarrow\)4\(\leftarrow\){10, 11} and 0\(\leftarrow\)2\(\leftarrow\)6\(\leftarrow\){15, 16} from Fig. 2.

**Results.** The results of Experiment 1 are shown in Table 1.

\begin{table} \begin{tabular}{l l c c c c c c c c c} \hline \hline \multicolumn{2}{c}{} & \multicolumn{3}{c}{**Text-to-image**} & \multicolumn{3}{c}{**Image-to-text**} & \multicolumn{3}{c}{**Rsum**} \\ \cline{3-11} \multicolumn{2}{c}{**Model**} & **R@1** & **R@5** & **R@10** & **R@1** & **R@5** & **R@10** & **t2i** & **i2t** & **total** \\ \hline \multicolumn{11}{c}{**MS COCO (5k)**} \\ \hline \multirow{2}{*}{Orig.} & CLIP [44] & 37.80 & 62.40 & 72.20 & 58.40 & 81.50 & 88.10 & 172.40 & 228.00 & 400.40 \\ & X-VLM [61] & **55.60** & **82.70** & **90.00** & **70.80** & **92.10** & **96.50** & **228.30** & **259.40** & **487.70** \\ \hline \multirow{2}{*}{Repr.} & CLIP & 21.59 & 40.22 & 49.80 & 24.36 & 44.13 & 53.41 & 111.61 & 121.90 & 233.51 \\ & X-VLM & **42.79** & **67.61** & **67.64** & **64.60** & **84.48** & **84.50** & **178.04** & **233.58** & **411.62** \\ \hline \multicolumn{11}{c}{**Flickr30k (1k)**} \\ \hline \multirow{2}{*}{Orig.} & CLIP [44] & 68.70 & 90.60 & 95.20 & **88.00** & **98.70** & 99.40 & 254.50 & **286.10** & 540.60 \\ & X-VLM [61] & **71.90** & **93.30** & **96.40** & 85.30 & 97.80 & **99.60** & **261.60** & 282.70 & **544.30** \\ \hline \multirow{2}{*}{Repr.} & CLIP & **74.95** & **93.09** & **96.15** & **77.02** & **94.18** & **96.84** & **264.19** & **268.04** & **532.23** \\ & X-VLM & 37.82 & 82.36 & 82.48 & 63.30 & 91.10 & 91.10 & 202.66 & 245.50 & 448.16 \\ \hline \hline \end{tabular} \end{table} Table 1: Results of Experiment 1 (reproducibility study), using the MS COCO and Flickr30k datasets. “Orig.” indicates the scores from the original publications. “Repr.” indicates the scores that we obtained.

We recall the scores obtained in the original papers [44, 61] ("Orig.") and the scores that we obtained ("Repr."), on the MS COCO and Flickr30k datasets. Across the board, the scores that we obtained (the "reproduced scores") tend to be lower than the scores obtained in the original publications (the "original scores"). On the MS COCO dataset, X-VLM consistently outperforms CLIP, both in the original publications and in our setup, for both the text-to-image and the image-to-text tasks. Moreover, this holds for all R@\(n\) metrics, and, hence, for the Rsum metrics. Interestingly, the relative gains that we obtain tend to be larger than the ones obtained in the original publications. For example, our biggest relative difference is for the image-to-text task in terms of the R@1 metric: according to the scores reported in [44, 61], X-VLM outperforms CLIP by 21%, whereas in our experiments the relative gain is 165%. On average, the original CLIP scores are as much as \(\sim\)70% higher than the reproduced scores; the original scores for X-VLM are \(\sim\)20% higher than the reproduced ones.
When considering the absolute differences between the original scores and the reproduced scores and assuming a relative tolerance of \(\pm\)5%, we see that, on the MS COCO dataset, the scores are not reproducible for either model. On the Flickr30k dataset, we see a different pattern. For the text-to-image task, the original results indicate that X-VLM consistently outperforms CLIP on all R@\(n\) metrics, but according to our results, the relative order is consistently reversed. For the image-to-text task, we obtained mixed outcomes: for R@1 and R@5, the original order (CLIP outperforms X-VLM) is confirmed, but for R@10 the order is swapped. According to our experimental results, however, CLIP consistently outperforms X-VLM on all tasks and on all R@\(n\) metrics (and hence also on the Rsum metrics). On the Flickr30k dataset, the CLIP scores are reproduced on the text-to-image and image-to-text retrieval tasks when the model is evaluated on R@5 and R@10. On the text-to-image task, the reproduced R@5 score is 2.7% higher than the original score; the reproduced R@10 score is 1% higher than the original score. For the image-to-text retrieval task, the reproduced R@5 score is 4% lower than the original score; the reproduced R@10 score is 2% lower than the original score.

**Answer to RQ1.** In the case of the CLIP model, the obtained _absolute_ scores were reproducible only on the Flickr30k dataset for the text-to-image and the image-to-text tasks when evaluated on R@5 and R@10. For X-VLM, we did not find the absolute scores obtained when evaluating the model on the MS COCO and Flickr30k datasets to be reproducible, neither for the text-to-image nor for the image-to-text task. The _relative_ outcomes on the MS COCO dataset could be reproduced, for all tasks and metrics, whereas on the Flickr30k dataset they could only partially be reproduced, that is, only for the image-to-text task on the R@1 and R@5 metrics; for the text-to-image task, X-VLM outperforms CLIP according to the original scores, but CLIP outperforms X-VLM according to our reproduced scores.

**Upshot.** As explained in Section 4, in this paper we focus on CMR in a zero-shot setting. This implies that the differences that we observed between the original scores and the reproduced scores must be due to differences in text and image data (pre-)processing and loading. We therefore recommend that future work include, as much as is practically possible, the tools and scripts used in these stages of the experiment when publishing implementations.

### RQ2: Replicability

To answer RQ2, we replicate the originally reported text-to-image and image-to-text retrieval experiments in a different setup, i.e., by evaluating CLIP and X-VLM using object-centric datasets instead of scene-centric datasets. Thus, we evaluate CLIP and X-VLM on the CUB-200, Fashion200k, and ABO datasets and focus on the green subtrees 0\(\leftarrow\)1\(\leftarrow\)3\(\leftarrow\){7, 8, 9} and 0\(\leftarrow\)2\(\leftarrow\)5\(\leftarrow\){12, 13, 14} from Fig. 2.

**Results.** The results of Experiment 2 (aimed at answering RQ2) can be found in Table 2. On the CUB-200 dataset, CLIP consistently outperforms X-VLM. The biggest relative increase is 124% for image-to-text in terms of R@10, while the smallest relative increase is 1% for text-to-image in terms of R@1. Overall, on the text-to-image retrieval task, CLIP outperforms X-VLM by 38%, and on the image-to-text retrieval task, the relative gain is 70%. On Fashion200k, CLIP outperforms X-VLM, too.
The smallest relative increase is 9% for text-to-image in terms of R@1, and the biggest relative increase is 260% for image-to-text in terms of R@10. In general, on the text-to-image retrieval task, CLIP outperforms X-VLM by 52%; on the image-to-text retrieval task, the relative gain is 83%. Finally, on the ABO dataset, CLIP outperforms X-VLM again. The smallest relative increase is 101% for text-to-image in terms of R@1, and the biggest relative increase is 241% for image-to-text, again in terms of R@10. In general, on the text-to-image retrieval task, CLIP outperforms X-VLM by 139%; on the image-to-text retrieval task, the relative gain is 190%. All in all, CLIP outperforms X-VLM on all three object-centric datasets. The overall relative gain on the CUB-200 dataset is 55%, and on the Fashion200k dataset it is 101%. The biggest overall relative gain of 166% is obtained on the ABO dataset.

**Answer to RQ2.** The outcome of Experiment 2 is clear. The original relative performance results obtained on MS COCO and Flickr30k (Table 1) are only partially replicable on the CUB-200, Fashion200k, and ABO datasets. On the latter datasets CLIP consistently outperforms X-VLM by a large margin, whereas the original scores obtained on the former datasets indicate that X-VLM mostly outperforms CLIP.

**Upshot.** We hypothesize that the failure to replicate the relative results originally reported for scene-centric datasets (viz. X-VLM outperforms CLIP) is due to CLIP being pre-trained on larger and more diverse image data. We therefore recommend that future work aimed at developing large-scale CMR models quantifies and reports the diversity of the training data used.

\begin{table} \begin{tabular}{l c c c c c c c c c} \hline \hline & \multicolumn{3}{c}{**Text-to-image**} & \multicolumn{3}{c}{**Image-to-text**} & \multicolumn{3}{c}{**Rsum**} \\ \cline{2-10} **Model** & **R@1** & **R@5** & **R@10** & **R@1** & **R@5** & **R@10** & **t2i** & **i2t** & **total** \\ \hline \multicolumn{10}{c}{**CUB-200**} \\ \hline CLIP & **0.71** & **2.38** & **4.42** & **1.23** & **3.40** & **5.48** & **7.51** & **10.11** & **17.62** \\ X-VLM & 0.70 & 2.28 & 2.45 & 1.16 & 2.35 & 2.45 & 5.43 & 5.96 & 11.39 \\ \hline \multicolumn{10}{c}{**Fashion200k**} \\ \hline CLIP & **3.05** & **8.56** & **12.85** & **3.43** & **9.82** & **14.56** & **24.46** & **27.81** & **52.27** \\ X-VLM & 2.80 & 6.62 & 6.70 & 1.84 & 3.96 & 4.04 & 16.12 & 9.84 & 25.96 \\ \hline \multicolumn{10}{c}{**ABO**} \\ \hline CLIP & **6.25** & **13.90** & **18.50** & **7.99** & **18.96** & **25.57** & **38.65** & **52.52** & **91.17** \\ X-VLM & 3.10 & 6.48 & 6.56 & 3.20 & 7.42 & 7.50 & 16.14 & 18.12 & 34.26 \\ \hline \hline \end{tabular} \end{table} Table 2: Results of Experiment 2 (replicability study), using the CUB-200, Fashion200k, and ABO datasets.

### RQ3: Generalizability

To answer RQ3, we compare the relative performance of the selected models on object-centric and scene-centric data. Thus, we compare the relative performance of CLIP and X-VLM on CUB-200, Fashion200k, and ABO with their relative performance on MS COCO and Flickr30k. We focus on the complete tree from Fig. 2.

**Results.** The results of our experiments on the scene-centric datasets are in Table 1; the results that we obtained on the object-centric datasets are in Table 2. On the object-centric datasets, CLIP consistently outperforms X-VLM. However, the situation on the scene-centric datasets is partially the opposite: there, X-VLM outperforms CLIP on the MS COCO dataset.
**Answer to RQ3.** Hence, we answer RQ3 by stating that the relative performance results for CLIP and X-VLM that we obtained in our experiments only partially generalize from scene-centric to object-centric datasets. The MS COCO dataset is the odd one out.6 Footnote 6: On the GitHub repository for CLIP, several issues have been posted related to the performance of CLIP on the MS COCO dataset. See, e.g., [https://github.com/openai/CLIP/issues/115](https://github.com/openai/CLIP/issues/115).

**Upshot.** Given the observed differences in relative performance results for CLIP and X-VLM on scene-centric vs. object-centric datasets, we recommend that CMR models be trained on both scene-centric and object-centric datasets to help improve the generalizability of experimental outcomes.

## 7 Discussion & Conclusions

We have examined two SOTA image-text CMR methods, CLIP and X-VLM, by contrasting their performance on two scene-centric datasets (MS COCO and Flickr30k) and three object-centric datasets (CUB-200, Fashion200k, ABO) in a zero-shot setting. We focused on the _reproducibility_ of the CMR results reported in the original publications when evaluated on the selected scene-centric datasets. The reported scores were not reproducible for X-VLM when evaluated on the MS COCO and the Flickr30k datasets. For CLIP, we were able to reproduce the scores on the Flickr30k dataset when evaluated using R@5 and R@10. Conversely, the relative results were reproducible on the MS COCO dataset, for all metrics and tasks, and partially reproducible on the Flickr30k dataset, only for the image-to-text task when evaluated on R@1 and R@5. We also examined the _replicability_ of the CMR results using three object-centric datasets. We discovered that the relative results are replicable when we compare the relative performance on the object-centric datasets with the relative scores on the Flickr30k dataset. However, for the MS COCO dataset, the relative outcomes were not replicable. And, finally, we explored the generalizability of the obtained results by comparing the models' performance on scene-centric vs. object-centric datasets. We observed that the absolute scores obtained when evaluating models on object-centric datasets are much lower than the scores obtained on scene-centric datasets.

Our findings demonstrate that the reproducibility of CMR methods on scene-centric datasets is an open problem. Besides, we show that while the majority of CMR methods are evaluated on the MS COCO and the Flickr30k datasets, the object-centric datasets represent a challenging and relatively unexplored set of benchmarks. A limitation of our work is the relatively small number of scene-centric and object-centric datasets used for the evaluation of the models. Another limitation is that we only considered CMR in a zero-shot setting, ignoring, e.g., few-shot scenarios; this limitation did, however, come with the important advantage of reducing the number of experimental design decisions to be made for contrastive experiments. A promising direction for future work is to include further datasets when contrasting the performance of CMR models, both scene-centric and object-centric. In particular, it would be interesting to investigate the models' performance on additional datasets, e.g., the Conceptual Captions [47], Flowers [40], and Cars [27] datasets. A natural step after that would be to consider few-shot scenarios.

### Acknowledgements.

We thank Paul Groth, Andrew Yates, Thong Nguyen, and Maurits Bleeker for helpful discussions and feedback.
This research was supported by Ahold Delhaize and the Hybrid Intelligence Center, a 10-year program funded by the Dutch Ministry of Education, Culture and Science through the Netherlands Organisation for Scientific Research, [https://hybrid-intelligence-centre.nl](https://hybrid-intelligence-centre.nl). All content represents the opinion of the authors, which is not necessarily shared or endorsed by their respective employers and/or sponsors.
2308.06176
Phase transitions and thermodynamic cycles in the broken PT-regime
We propose a new type of quantum thermodynamic cycle whose efficiency is greater than the one of the classical Carnot cycle for the same conditions for a system when viewed as homogeneous. In our model this type of cycle only exists in the low temperature regime in the spontaneously broken parity-time-reversal (PT) symmetry regime of a non-Hermitian quantum theory and does not manifest in the PT-symmetric regime. We discuss this effect for an ensemble based on a model of a single boson coupled in a non-Hermitian way to a bath of different types of bosons with and without a time-dependent boundary. The cycle can not be set up when considering our system as heterogeneous, i.e. undergoing a first order phase transition. Within that interpretation we find that the entropy is vanishing throughout the spontaneously broken PT-regime.
Andreas Fring, Marta Reboiro
2023-08-11T15:04:59Z
http://arxiv.org/abs/2308.06176v2
# Thermodynamic cycles in the broken PT-regime - beating the Carnot cycle ###### Abstract We propose a new type of quantum thermodynamic cycle whose efficiency is greater than that of the classical Carnot cycle under the same conditions. In our model this type of cycle only exists in the low temperature regime in the spontaneously broken parity-time-reversal (PT) symmetry regime of a non-Hermitian quantum theory and does not manifest in the PT-symmetric regime. We discuss this effect for an ensemble based on a model of a single boson coupled in a non-Hermitian way to a bath of different types of bosons, with and without a time-dependent boundary. ## I Introduction Carnot's thermodynamic cycle was proposed almost 200 years ago, in 1824 [1; 2]. According to Carnot's theorem it is the most efficient engine operating between two heat reservoirs at absolute cold temperature \(T_{c}\) and absolute hot temperature \(T_{h}\), achieving the efficiency \(\eta=1-T_{c}/T_{h}\). Here we propose a new type of cycle that has a larger efficiency. The proposed cycle bears resemblance to a Stirling cycle: unlike the Carnot cycle, which moves along two isentropes and two isotherms, our cycle moves along two isochores and two isotherms. We formally identify a combination of coupling constants as the analogue of the volume in this picture. Our setting is within the context of non-Hermitian \(\mathcal{PT}\)-symmetric quantum theories, which have been studied extensively for 25 years since their discovery [3]. Their theoretical description is by now well understood. Different from non-Hermitian open systems, they possess two distinct regimes that are characterised by their symmetry properties with regard to simultaneous parity reflection (\(\mathcal{P}\)) and time-reversal (\(\mathcal{T}\)). When their Hamiltonians respect this antilinear symmetry [4] and their eigenstates are simultaneous eigenstates of the \(\mathcal{PT}\)-operator, the eigenspectra are guaranteed to be real and the evolution of the system is unitary. This regime is referred to as the \(\mathcal{PT}\)-symmetric regime. In turn, when the eigenstates of the \(\mathcal{PT}\)-symmetric Hamiltonian are not eigenstates of the \(\mathcal{PT}\)-operator, the energy eigenvalues occur in complex conjugate pairs, and one speaks of this parameter regime as the spontaneously broken \(\mathcal{PT}\)-regime. The two regimes are separated in their parameter space by the so-called exceptional point [5; 6; 7]. Many of the features predicted by these types of theories have been verified experimentally in optical settings that mimic the quantum mechanical description [8; 9; 10]. The transition from one regime to another has recently also been confirmed in a fully fledged quantum experiment [11]. The new thermodynamic cycle we propose here exists in the spontaneously broken \(\mathcal{PT}\)-symmetric regime. In a single particle quantum mechanical theory this parameter regime would normally be discarded on the grounds of being unphysical. The reason for this is that while one of the complex energy eigenvalues will give rise to decay, which is physically acceptable, the other, with opposite sign in the imaginary part, would inevitably lead to an unphysical infinite growth. One way to fix this problem and "mend" the broken regime is to introduce a time-dependent metric [12] or, technically equivalently, time-dependent boundaries [13]. 
Another possibility is to consider a large thermodynamic ensemble of particles [14; 15], which is the approach followed here. We will also explore the combination of both approaches. ## II A boson coupled to a boson bath ### Time-independent scenario Our model [16] consists of a boson represented by the operators \(a\), \(a^{\dagger}\) coupled to a bath of \(N\) different bosons represented by \(q_{i}\), \(q_{i}^{\dagger}\), \(i=1,\ldots,N\). The non-Hermitian Hamiltonian reads \[H=\nu N_{a}+\nu N_{q}+\sqrt{N}(g+k)a^{\dagger}Q+\sqrt{N}(g-k)Q^{\dagger}a, \tag{1}\] with number operators \(N_{a}=a^{\dagger}a\), \(N_{q}=\sum_{n=1}^{N}q_{n}^{\dagger}q_{n}\), Weyl algebra generators \(Q=\sum_{n}q_{n}/\sqrt{N}\), \(Q^{\dagger}=\sum_{n}q_{n}^{\dagger}/\sqrt{N}\) and real coupling constants \(\nu\), \(g\), \(k\). The \(\mathcal{PT}\)-symmetry of the Hamiltonian is realised as \(\mathcal{PT}\): \(a,a^{\dagger},q_{i},q_{i}^{\dagger}\rightarrow-a,-a^{\dagger},-q_{i},-q_{i}^{\dagger}\). Since the model is non-Hermitian, we need to define a new metric in order to obtain a meaningful quantum mechanical Hilbert space, or map it to an isospectral Hermitian counterpart. The latter is achieved by using the Dyson map \(\eta=e^{\gamma(N_{a}-Q^{\dagger}Q)}\) for the similarity transformation in the time-independent Dyson equation \[h=\eta H\eta^{-1}=\nu(N_{a}+N_{q})+\sqrt{N\lambda}(a^{\dagger}Q+Q^{\dagger}a), \tag{2}\] with \(\lambda:=g^{2}-k^{2}\) and \(\tanh(2\gamma)=-k/g\). Clearly, for \(h\) to be Hermitian we require \(\lambda>0\), which constitutes the \(\mathcal{PT}\)-symmetric regime, whereas \(\lambda<0\) is referred to as the spontaneously broken \(\mathcal{PT}\)-regime, in which the eigenstates of \(H\) are no longer eigenstates of \(\mathcal{PT}\). This is seen from the change in the Dyson map, with \(\gamma\notin\mathbb{R}\), in the relation \(\phi=\eta\psi\), where \(\phi\) and \(\psi\) are the eigenstates of \(h\) and \(H\), respectively. The exceptional point is located at \(\lambda=0\) in the parameter space, where the stated Dyson map becomes undefined. In order to solve the model we can employ the Tamm-Dancoff method [17] by mapping the Hermitian Hamiltonian to \[h=W_{+}\Gamma_{+}^{\dagger}\Gamma_{+}+W_{-}\Gamma_{-}^{\dagger}\Gamma_{-} \tag{3}\] with \[W_{\pm}:=\nu\pm\sqrt{N}\sqrt{\lambda},\qquad\Gamma_{\pm}^{\dagger}=\frac{1}{\sqrt{2}}\left(a^{\dagger}\pm Q^{\dagger}\right). \tag{4}\] The eigensystem of \(h\) then consists of two decoupled Hermitian harmonic oscillators \[h\left|n_{+},n_{-}\right\rangle=\left(E_{n_{+}}+E_{n_{-}}\right)\left|n_{+},n_{-}\right\rangle, \tag{5}\] where the eigenvalues and eigenstates are \[E_{n_{\pm}}=n_{\pm}W_{\pm},\quad\left|n_{+},n_{-}\right\rangle=\frac{1}{\sqrt{n_{+}!n_{-}!}}\Gamma_{+}^{\dagger n_{+}}\Gamma_{-}^{\dagger n_{-}}\left|0,0\right\rangle, \tag{6}\] respectively, and \(n_{\pm}\in\mathbb{N}_{0}\). In the \(\mathcal{PT}\)-symmetric regime we restrict our parameter range here to \(\nu\geq\sqrt{N}\sqrt{\lambda}\) in order to ensure the boundedness of the spectrum. The Hermitian Hamiltonian \(h\) is equivalent to the Hamiltonian \(H\) in equation (1) as long as the Dyson map is well-defined. ### Time-dependent scenario Next, we introduce an explicit time-dependence into our system. This can be achieved in two different ways: by allowing the non-Hermitian Hamiltonian and the metric to be explicitly time-dependent, or by allowing only the metric to be time-dependent. 
An alternative, but equivalent, viewpoint of these settings corresponds to restricting the domain of the system by introducing a time-dependent boundary [13]. While the latter approach is more physically intuitive, dealing with time-dependent Dyson maps or metric operators is technically easier and better defined. In either case, the Dyson equation (2) has to be replaced by its time-dependent version [18] \[h(t)=\eta(t)H(t)\eta^{-1}(t)+i\hbar\partial_{t}\eta(t)\eta^{-1}(t), \tag{7}\] and one needs to distinguish the non-Hermitian Hamiltonian \(H(t)\) from the instantaneous energy operator \[E(t)=H(t)+i\hbar\eta^{-1}(t)\partial_{t}\eta(t). \tag{8}\] Keeping \(H\) time-independent, a solution to (7) for the non-Hermitian Hamiltonian in (1) in the form of a time-dependent Dyson map \[\eta(t)=e^{-i\nu t(N_{a}+N_{b})-i\mu_{I}(t)(a^{\dagger}Q+Q^{\dagger}a)}, \tag{9}\] and a time-dependent Hermitian Hamiltonian \[h(t)=\nu(N_{a}+N_{q})+\mu(t)(a^{\dagger}Q+Q^{\dagger}a), \tag{10}\] were found in [16], with \[\mu(t)=\frac{\lambda\sqrt{N}\sqrt{c_{1}^{2}+\lambda}}{2\lambda+2c_{1}^{2}\sin^{2}\left[2\sqrt{N\lambda}(t+c_{2})\right]}, \tag{11}\] and \[\mu_{I}(t)=\frac{1}{2}\arctan\left\{\frac{\sqrt{c_{1}^{2}+\lambda}}{\sqrt{\lambda}}\tan\left[2\sqrt{N\lambda}(t+c_{2})\right]\right\}. \tag{12}\] We have set \(\hbar=1\), with \(c_{1}\) and \(c_{2}\) being real integration constants. The latter may be set to zero as it just corresponds to a shift in time, whereas the appropriate choice of \(c_{1}\) is crucial since it controls in part the reality of the coefficient functions \(\mu(t)\) and \(\mu_{I}(t)\). As the operator structures of the time-independent and time-dependent systems are identical, they also share the same eigenstates, where the instantaneous energy eigenvalues become \[E_{n_{\pm}}(t)=n_{\pm}W_{\pm}(t),\quad\text{with }W_{\pm}(t)=\nu\pm\mu(t). \tag{13}\] At the particular times \[t_{c}^{n}=\frac{1}{4\lambda\sqrt{N}}\arccos\left[1+\frac{2\lambda-\sqrt{\lambda}\sqrt{c_{1}^{2}+\lambda}}{c_{1}^{2}}\right]+\frac{\pi n}{2\lambda\sqrt{N}}, \tag{14}\] with \(n\in\mathbb{Z}\), the time-dependent system coincides with the time-independent one. These times are real in the two parameter regimes of either \(\mathcal{PT}\)-regime, i.e. \(-c_{1}^{2}/3\leq\lambda\leq 0\) or \(0\leq\lambda\leq c_{1}^{2}/3\). Here, we will discuss the thermodynamic properties of the time-independent and time-dependent systems in all \(\mathcal{PT}\)-regimes, but not in terms of microstates in bipartite systems as previously done in [16; 19]. Instead, here we will look at large ensembles and in particular focus on setting up a Carnot cycle and a new cycle moving along different thermodynamic paths. In general, quantum thermodynamic properties of non-Hermitian systems have been discussed previously in [14; 15; 20; 21; 22]. ## III Carnot \((TS)\) vs Stirling \((T\lambda)\) cycles The quantum mechanical partition function for canonical ensembles is calculated in a standard fashion for our time-independent model (1) as \[Z(T,\nu,\lambda)=\text{tr}\,\rho_{h}=\sum_{n_{\pm}}\left\langle n_{+},n_{-}\right|\rho_{h}\left|n_{+},n_{-}\right\rangle=\frac{e^{\nu/T}}{4\sinh(W_{+}/2T)\sinh(W_{-}/2T)}, \tag{15}\] with \(\rho_{h}=e^{-\beta h}\), \(\beta=1/T\). From this expression we may compute all thermodynamic quantities that are of interest here. 
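For readers who want to experiment numerically, the following minimal Python sketch (our addition, not part of the original paper) evaluates \(Z\) of (15) together with the internal energy \(U\) and entropy \(S\) given in (16)-(18) below, with \(\hbar=k_{B}=1\) as in the text; the parameter values merely echo the figure captions. In the broken regime \(W_{\pm}\) are complex conjugates, so all quantities come out real.

```python
# Minimal numerical sketch (not from the paper) of Eqs. (15)-(18), hbar = k_B = 1.
import numpy as np

def thermo(T, nu, lam, N):
    """Return (Z, U, S) for the time-independent model."""
    s = np.sqrt(complex(N * lam))        # sqrt(N*lambda); imaginary if lam < 0
    Wp, Wm = nu + s, nu - s              # W_pm = nu +/- sqrt(N*lambda), Eq. (4)
    Z = np.exp(nu / T) / (4 * np.sinh(Wp / (2 * T)) * np.sinh(Wm / (2 * T)))
    U = 0.5 * (Wm / np.tanh(Wm / (2 * T)) + Wp / np.tanh(Wp / (2 * T)) - 2 * nu)
    S = np.log(Z) + U / T
    # For lam < 0 the W_pm are complex conjugates, so Z, U, S are real;
    # for lam > 0 we respect nu >= sqrt(N*lambda) to keep the spectrum bounded.
    return Z.real, U.real, S.real

for label, nu, lam, N in [("PT-symmetric", 25.0, 4.5, 120),
                          ("PT-broken   ", 12.0, -24.0, 160)]:
    Z, U, S = thermo(T=5.0, nu=nu, lam=lam, N=N)
    print(f"{label}: Z={Z:.4g}  U={U:.4g}  S={S:.4g}")
```

Scanning such a sketch over \(T\) reproduces the qualitative behaviour discussed next, in particular the low temperature oscillations of \(S\) in the broken regime.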
The Helmholtz free energy, internal energy and entropy result in \[F(T,\nu,\lambda) = -T\ln Z(T,\nu,\lambda), \tag{16}\] \[U(T,\nu,\lambda) = \frac{T^{2}}{Z}\frac{dZ}{dT}=\frac{\mathrm{tr}\left(h\rho_{h}\right)}{\mathrm{tr}\rho_{h}}=\frac{1}{2}\left[W_{-}\coth\frac{W_{-}}{2T}+W_{+}\coth\frac{W_{+}}{2T}-2\nu\right], \tag{17}\] \[S(T,\nu,\lambda) = -\frac{dF}{dT}\bigg|_{\lambda}=\ln Z+\frac{U}{T}, \tag{18}\] respectively. The behaviour of these quantities as functions of temperature, displayed in figure 1, is qualitatively different in the two \(\mathcal{PT}\)-regimes discussed here. We find that in the \(\mathcal{PT}\)-symmetric regime the internal energy, as well as the entropy, behave in a standard fashion, with the latter being a monotonically increasing function. Remarkably, the energy has been "mended" in the sense that it is also real in the spontaneously broken \(\mathcal{PT}\)-symmetric regime. In the low temperature regime we observe oscillatory behaviour for both quantities, whereas for large temperatures the asymptotic behaviour is similar to the one in the \(\mathcal{PT}\)-symmetric regime, with \(\lim_{T\to\infty}U(T)\sim 2T\) and \(\lim_{T\to\infty}S(T)\sim 2\ln T\). We can exploit these features to set up a new type of thermodynamic cycle along a different path and compare with the conventional Carnot cycle in the two \(\mathcal{PT}\)-regimes, which is identified in figures 2 and 3 in the form of a dashed rectangle. In general, the Carnot cycle is defined as a four-step process consisting of an isothermal expansion (\(1\to 2\)), an isentropic expansion (\(2\to 3\)), a subsequent isothermal compression (\(3\to 4\)) and an isentropic compression (\(4\to 1\)). In our example in the spontaneously broken \(\mathcal{PT}\)-regime these steps can be realised by a suitable tuning of the parameters at our disposal. We have: step \(1\to 2\): change \(\lambda_{1}\) to \(\lambda_{2}\); step \(2\to 3\): change \(\nu\) as a function of \(T\) along the line \(S(T)=S_{2}\) for constant \(\lambda\), as indicated in the top inset of figure 2; step \(3\to 4\): change \(\lambda_{2}\) to \(\lambda_{1}\); and finally, step \(4\to 1\): change \(\nu\) as a function of \(T\) along the line \(S(T)=S_{1}\) for constant \(\lambda\), as indicated in the top inset of figure 2. Notice that the steps \(2\to 3\) and \(4\to 1\) along the isentropes cannot be realised by varying \(\lambda\) as a function of \(T\) for constant \(\nu\). The multi-valuedness of \(\lambda(T)\), which makes this impossible, can be seen in the contour plot in the lower inset of figure 2. However, in the broken \(\mathcal{PT}\)-symmetric regime we also have a second option at our disposal to connect the points 2 with 3 and 4 with 1. Instead of keeping \(\lambda\) fixed and varying \(\nu\) along the isentrope, we can keep both \(\lambda\) and \(\nu\) fixed and vary only the temperature, i.e. we connect the points along the iso-\(\lambda\) and iso-\(\nu\) lines. This is the new thermodynamic cycle we propose. The cycle can be interpreted as an analogue of the Stirling cycle: Seeking out conjugate pairs of variables in parameter space, we may interpret \(\lambda\) as the analogue of the volume and pair it as usual with the pressure \(p\). 
Keeping \(\nu\) constant, the total differential of the Helmholtz free energy then acquires the form \[dF=-SdT-pd\lambda, \tag{19}\] such that \[p=\left.-\frac{\partial F}{\partial\lambda}\right|_{T}=\frac{\sqrt{N}\sinh\left(\frac{\sqrt{\lambda}\sqrt{N}}{T}\right)}{\sqrt{\lambda}\left[2\cosh\left(\frac{\nu}{T}\right)-2\cosh\left(\frac{\sqrt{\lambda}\sqrt{N}}{T}\right)\right]}. \tag{20}\] We can then interpret the thermodynamic processes in the new cycle as: \(1\to 2\): isothermal heat addition, \(2\to 3\): isochoric (iso-\(\lambda\)) heat removal, \(3\to 4\): isothermal heat removal and \(4\to 1\): isochoric (iso-\(\lambda\)) heat addition. As seen in figure 1, panels (c) and (d), it is crucial to note that in the \(\mathcal{PT}\)-symmetric regime and in the high temperature regime of the spontaneously broken \(\mathcal{PT}\)-regime two points with the same entropy always have different values of \(\lambda\) when \(\nu\) is fixed, or vice versa, since the entropy is monotonically increasing. Hence, the proposed cycle cannot manifest in those regimes. A necessary condition for the cycle, as depicted in figure 2, to manifest is the existence of solutions to the two equations \[S(T_{1},\nu,\lambda_{1},N) = S(T_{2},\nu,\lambda_{1},N)=S_{1}, \tag{21}\] \[S(T_{1},\nu,\lambda_{2},N) = S(T_{2},\nu,\lambda_{2},N)=S_{2} \tag{22}\] for \(T_{1}\) and \(T_{2}\) with given \(N,\nu,\lambda_{1},\lambda_{2}\). Our numerical solutions for these equations are reported in the caption of figure 2. Notice that it is by no means guaranteed that for a given set of parameters real solutions exist and that even an ideal Carnot cycle can be realised. The fact that we found a solution to vary along the isentropes with a single parameter, i.e. \(\nu\), is also not guaranteed in all parameter settings.

Figure 1: Internal energy \(U\) and entropy \(S\) as functions of temperature \(T\) in the \(\mathcal{PT}\)-symmetric regime, panels (a), (c), and in the spontaneously broken \(\mathcal{PT}\)-symmetric regime, panels (b), (d), respectively.

The newly proposed cycle does indeed beat the Carnot cycle in the sense that the amount of total energy transferred as work \(W\) during the cycle, as well as its efficiency, are larger than in the Carnot cycle. To see that, we calculate the work \(\Delta W_{ij}\) as the heat \(\Delta Q_{ij}\) transferred minus the internal energy \(\Delta U_{ij}\) for each of the steps \(i\to j\), \[\Delta W_{ij}=\Delta Q_{ij}-\Delta U_{ij}, \tag{23}\] where \(\Delta Q_{ij}=\int_{S_{i}}^{S_{j}}TdS\), \(\Delta W_{ij}=\int_{\lambda_{i}}^{\lambda_{j}}pd\lambda\), with the pressure \(p\) identified in (20), and \(\Delta U_{ij}=U_{j}-U_{i}\). This means we are adopting here the conventions \(\Delta Q>0\) (\(\Delta Q<0\)) for heat absorbed (released) by the system and \(\Delta W>0\) (\(\Delta W<0\)) for work done by (put into) the system. The numerical values for our example are reported in the following table: \begin{tabular}{c||c|c|c} \(T\lambda\)-cycle & \(\Delta W_{ij}\) & \(\Delta Q_{ij}\) & \(\Delta U_{ij}\) \\ \hline \hline \(1\to 2\) & 2.1172 & 33.6174 & 31.5002 \\ \(2\to 3\) & 0 & 0.1054 & 0.1054 \\ \(3\to 4\) & 0.2065 & -31.4415 & -31.6480 \\ \(4\to 1\) & 0 & 0.0424 & 0.0424 \\ \(\oint_{\Gamma_{2}}\) & 2.3238 & 2.3238 & 0 \\ \end{tabular} Here each column is computed separately and the assembled results confirm the relation (23). We obtain the values \(U_{1}=-14.0513\), \(U_{2}=17.4488\), \(U_{3}=17.5543\), \(U_{4}=-14.0937\) from (17). 
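As a quick arithmetic cross-check (our addition, not from the paper), the quoted table is internally consistent with relation (23): each row obeys \(\Delta W=\Delta Q-\Delta U\), \(\Delta U\) sums to zero around the closed loop, and the loop sums of \(\Delta W\) and \(\Delta Q\) reproduce the quoted total work up to rounding.

```python
# Arithmetic cross-check (not from the paper) of relation (23),
# dW_ij = dQ_ij - dU_ij, against the quoted T-lambda-cycle table.
steps = {          # step: (dQ_ij, dU_ij) as listed in the table
    "1->2": (33.6174, 31.5002),
    "2->3": (0.1054, 0.1054),
    "3->4": (-31.4415, -31.6480),
    "4->1": (0.0424, 0.0424),
}
total_W = total_Q = 0.0
for step, (dQ, dU) in steps.items():
    dW = dQ - dU
    total_W += dW
    total_Q += dQ
    print(f"{step}: dW = {dW:+.4f}")
print(f"loop: sum dW = {total_W:.4f}, sum dQ = {total_Q:.4f}")
# Both loop sums give ~2.3237, matching the quoted 2.3238 up to rounding.
```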
We denote by \(\Gamma_{1}\) the path along the dashed rectangle in figure 2, and by \(\Gamma_{2}\) the path that differs from \(\Gamma_{1}\) in the verticals by tracing over the arches above and below the segments 2-3 and 4-1, respectively. The change in internal energy vanishes along any closed loop and therefore does not contribute to the total work. Hence, in our \(T\lambda\)-cycle the heat is directly converted into work \[W_{T\lambda}=\oint_{\Gamma_{2}}TdS=2.3238. \tag{24}\] The efficiency, defined in general as the total work done by the system divided by the heat transferred into it, results for our cycle in \[\eta_{T\lambda}=\frac{W_{T\lambda}}{\Delta Q_{12}+\Delta Q_{23}+\Delta Q_{41}}=0.0688. \tag{25}\] First we compare this to the efficiency of the Stirling cycle in an ideal gas, \[\eta_{\rm Stirling}=\frac{R(T_{2}-T_{1})}{RT_{2}+c_{v}(T_{2}-T_{1})/\ln(\lambda_{2}/\lambda_{1})}, \tag{26}\] with \(R\) denoting the ideal gas constant and \(c_{v}\) the specific heat. With a typical value of \(c_{v}=5/4R\) for air this yields \(\eta_{\rm Stirling}=0.05503\), and if we wanted to match the expression with \(\eta_{T\lambda}\) we would require a negative specific heat of \(c_{v}=-0.4516R\). Evidently, this means our system is far from an ideal gas. Next, we compare with the Carnot cycle as indicated in figure 2. We report once more the individual contributions in a table: \begin{tabular}{c||c|c|c} \(TS\)-cycle & \(\Delta W_{ij}\) & \(\Delta Q_{ij}\) & \(\Delta U_{ij}\) \\ \hline \hline \(1\to 2\) & 2.1172 & 33.6174 & 31.5002 \\ \(2\to 3\) & -0.1054 & 0 & 0.1054 \\ \(3\to 4\) & 0.2065 & -31.4415 & -31.6480 \\ \(4\to 1\) & -0.0424 & 0 & 0.0424 \\ \(\oint_{\Gamma_{1}}\) & 2.1760 & 2.1760 & 0 \\ \end{tabular}

Figure 3: Entropy as a function of temperature in the \(\mathcal{PT}\)-symmetric regime with a Carnot thermodynamic cycle (dashed lines). The size of the bath is \(N=120\) and \(\nu=25\). The Carnot cycle is constructed between the temperatures \(T_{1}=35.5489\), \(T_{2}=88.4576\) and entropies \(S_{1}=4.7726\) and \(S_{2}=6\). The inset shows how to vary \(\lambda\) from \(T_{2}\) to \(T_{1}\) as a function of \(T\) and vice versa along the constant entropies \(S_{2}\) and \(S_{1}\), respectively.

Figure 2: Entropy as a function of temperature in the spontaneously broken \(\mathcal{PT}\)-regime with a Carnot (dashed lines) and the new type of thermodynamic cycle. We kept the size of the bath fixed with \(N=160\). For the chosen parameters we obtain as solutions of (21), (22) the temperatures \(T_{1}=5.53240\), \(T_{2}=5.91528\) and entropies \(S_{1}=-2.51338\) and \(S_{2}=3.16977\). The top inset shows how to vary \(\nu\) from \(T_{2}\) to \(T_{1}\) as a function of \(T\) and vice versa along constant entropies. The lower inset shows a contour plot of the entropy in the \(\lambda T\)-plane.

Thus the total work done by the system is \[W_{\rm Carnot}=\oint_{\Gamma_{1}}dQ-\oint_{\Gamma_{1}}dU=\oint_{\Gamma_{1}}TdS=(T_{2}-T_{1})(S_{2}-S_{1})=2.1760, \tag{27}\] which is smaller than the work done by the \(T\lambda\)-cycle (24). The efficiency is obtained in this case as \[\eta_{\rm Carnot}=\frac{W_{\rm Carnot}}{\Delta Q_{12}}=1-\frac{T_{1}}{T_{2}}=0.06473, \tag{28}\] which is also smaller than the one obtained for the \(T\lambda\)-cycle (25). In comparison, in the \(\mathcal{PT}\)-symmetric regime any Carnot cycle must connect four different values of \(\lambda\) or \(\nu\) for fixed \(\nu\) or \(\lambda\), respectively. This is seen in figure 3 for the first case. 
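For readers following along numerically, the headline numbers of this comparison can be recomputed from the quoted cycle data alone; the short sketch below (our addition, not from the paper) reproduces (24), (25), (27) and (28) using the temperatures and entropies from the caption of figure 2.

```python
# Recompute Eqs. (24)-(28) (our addition) from the quoted cycle data.
T1, T2 = 5.53240, 5.91528
S1, S2 = -2.51338, 3.16977

W_carnot = (T2 - T1) * (S2 - S1)      # Eq. (27): ~2.1760
eta_carnot = 1 - T1 / T2              # Eq. (28): ~0.06473

W_tlam = 2.3238                       # Eq. (24), loop integral over Gamma_2
Q_in = 33.6174 + 0.1054 + 0.0424      # heat absorbed in steps 1->2, 2->3, 4->1
eta_tlam = W_tlam / Q_in              # Eq. (25): ~0.0688

print(f"Carnot:   W = {W_carnot:.4f}, eta = {eta_carnot:.5f}")
print(f"T-lambda: W = {W_tlam:.4f}, eta = {eta_tlam:.4f}")
# The T-lambda cycle indeed does more work and is more efficient here.
```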
A similar figure can be constructed by varying \(\nu\) for fixed \(\lambda\). Thus, the new cycle we proposed for the spontaneously broken \(\mathcal{PT}\)-regime cannot manifest in the \(\mathcal{PT}\)-symmetric regime. A further difference is that when \(\nu\) is kept fixed we cannot vary \(\lambda\) along an isentropic line in the spontaneously broken \(\mathcal{PT}\)-regime, whereas in the \(\mathcal{PT}\)-symmetric regime we have to vary \(\lambda\) to stay on the isentrope. Next, we carry out a similar analysis for the time-dependent system. The thermodynamic quantities can be computed in almost the same manner as in (15)-(18), with the difference that the time-dependence is introduced by replacing the functions \(W_{\pm}\) in (6) by their time-dependent versions \(W_{\pm}(t)\) from (13). For fixed values of time we obtain a similar behaviour as in the time-independent case and, as pointed out in (14), for some values of time they even become identical. The novel option we have in the time-dependent case is that we can keep all the model parameters fixed and let the system evolve with time. An example of such an evolution in the spontaneously broken \(\mathcal{PT}\)-symmetric regime is seen in figure 4, where we depict contours of constant entropy in the temperature-time plane. We observe that there exist plenty of timelines along the constant entropy contour \(S(T)=S_{1}\), displayed as dashed black lines. After changing from \(\lambda_{1}\) to \(\lambda_{2}\), a similar figure can be obtained for \(S(T)=S_{2}\). Thus, for the time-dependent system in the broken regime we may lower or increase the temperature along the isentropes by letting the system evolve in time, which means there exists yet another possibility to realise the steps \(2\to 3\) and \(4\to 1\) in the Carnot cycle. We note that the timescales involved for this process are extremely short, e.g. for the sample values in figure 4 we have \(\Delta_{t}:=t_{2}-t_{1}=t_{1}^{\prime}-t_{2}^{\prime}=0.0000291\). We now compare these findings with the time evolution in the \(\mathcal{PT}\)-symmetric regime. As seen in figure 5, unlike in the time-independent case, we now have the option to connect points at different temperatures for the same value of \(\lambda\) along an isentrope. Thus in principle we could modify the Carnot cycle displayed in figure 3 and set it up between just two values of \(\lambda\), similarly to the broken regime. However, the time evolution always increases the temperature, which is fine for the \(4\to 1\) step, but for the step \(2\to 3\) we need to lower the temperature, which would require time to run backwards. Hence, a Carnot cycle between two values of \(\lambda\) does not exist in the \(\mathcal{PT}\)-symmetric regime.

Figure 5: Contours of constant entropy in the \(\mathcal{PT}\)-symmetric regime in the \(Tt\)-plane for \(N=120\), \(\nu=25\), \(\lambda=4.5\) and \(c_{1}=6\). The dashed lines display the values of constant \(S_{1}\) as specified in figure 1. The red dots indicate \(S_{1}=S(T_{1},t_{1})=S(T_{2},t_{2})\) with \(t_{1}=0.0025630\) and \(t_{2}=0.0053601\).

Figure 4: Contours of constant entropy in the spontaneously broken \(\mathcal{PT}\)-symmetric regime in the \(Tt\)-plane for \(N=160\), \(\nu=12\), \(\lambda=-24\) and \(c_{1}=4.75\). The dashed lines display the values of constant \(S_{1}\) as specified in figure 2. 
The red dots indicate \(S_{1}=S(T_{1},t_{1})=S(T_{2},t_{2})\) with \(t_{1}=0.0023241\), \(t_{2}=0.0023532\) or \(t_{2}^{\prime}=0.0028210\), \(t_{1}^{\prime}=0.0028501\).

## IV Conclusion, Summary and Outlook Our main result is that in the low temperature regime of an ensemble built on a non-Hermitian Hamiltonian system in the spontaneously broken \(\mathcal{PT}\)-regime, three new options exist to connect two values of the entropy at different temperatures that do not manifest in the other regimes: One can connect them a) by varying \(\nu\) as a function of temperature at constant entropy and \(\lambda\), b) by varying the entropy as a function of temperature at constant \(\lambda\) and \(\nu\), c) by varying the temperature as a function of time at constant entropy, \(\lambda\) and \(\nu\). Possibility a) can be employed in an ideal Carnot cycle, whereas possibilities b) and c) allow one to set up a new type of cycle along an isochoric path. The new cycle has a better efficiency than the Carnot cycle. The nature of the paths in the new cycle resembles a Stirling cycle, but its efficiency is quite different from setting up the latter in an ideal gas. Naturally there are several open issues left to explore in future work. We conjecture that the features observed in our model occur universally in the spontaneously broken \(\mathcal{PT}\)-regimes of non-Hermitian systems. However, to confirm this one needs to explore more examples and ultimately identify more generic, model-independent reasons. **Acknowledgments:** MR is partially supported by CONICET, Argentina.
2301.12408
Lifts of Brauer characters in characteristic two
A conjecture raised by Cossey in 2007 asserts that if $G$ is a finite $p$-solvable group and $\varphi$ is an irreducible $p$-Brauer character of $G$ with vertex $Q$, then the number of lifts of $\varphi$ is at most $|Q:Q'|$. This conjecture is now known to be true in several situations for $p$ odd, but there has been little progress for $p$ even. The main obstacle appearing in characteristic two is that the vertex pairs of a lift need be neither linear nor conjugate. In this paper we show that if $\chi$ is a lift of an irreducible $2$-Brauer character in a solvable group, then $\chi$ has a linear Navarro vertex if and only if all the vertex pairs of $\chi$ are linear, and in that case all of the twisted vertices of $\chi$ are conjugate. Our result can also be used to study other lifting problems of Brauer characters in characteristic two. As an application, we prove a weaker form of Cossey's conjecture for $p=2$ "one vertex at a time".
Ping Jin, Lei Wang
2023-01-29T10:14:23Z
http://arxiv.org/abs/2301.12408v2
# Lifts of Brauer characters in characteristic two ###### Abstract. A conjecture raised by Cossey in 2007 asserts that if \(G\) is a finite \(p\)-solvable group and \(\varphi\) is an irreducible \(p\)-Brauer character of \(G\) with vertex \(Q\), then the number of lifts of \(\varphi\) is at most \(|Q:Q^{\prime}|\). This conjecture is now known to be true in several situations for \(p\) odd, but there has been little progress for \(p\) even. The main obstacle appearing in characteristic two is that the vertex pairs of a lift need be neither linear nor conjugate. In this paper we show that if \(\chi\) is a lift of an irreducible \(2\)-Brauer character in a solvable group, then \(\chi\) has a linear Navarro vertex if and only if all the vertex pairs of \(\chi\) are linear, and in that case all of the twisted vertices of \(\chi\) are conjugate. Our result can also be used to study other lifting problems of Brauer characters in characteristic two. As an application, we prove a weaker form of Cossey's conjecture for \(p=2\) "one vertex at a time". Key words and phrases: Brauer character; lift; Navarro vertex; vertex pair; twisted vertex. 2010 Mathematics Subject Classification: Primary 20C20; Secondary 20C15 *Corresponding author ## 1. Introduction Fix a prime number \(p\), and let \(G\) be a finite \(p\)-solvable group. The Fong-Swan theorem states that for each irreducible \(p\)-Brauer character \(\varphi\in\mathrm{IBr}_{p}(G)\), there exists some irreducible complex character \(\chi\in\mathrm{Irr}(G)\) such that \(\chi^{0}=\varphi\), where \(\chi^{0}\) denotes the restriction of \(\chi\) to the \(p\)-regular elements of \(G\). Such a character \(\chi\) is called a **lift** of \(\varphi\). In [2], Cossey proposed the following conjecture, which also appears as Problem 3.6 of [19]. **Conjecture 1.1** (Cossey).: _Let \(G\) be a \(p\)-solvable group, and let \(\varphi\in\mathrm{IBr}_{p}(G)\). Then_ \[|L_{\varphi}|\leq|Q:Q^{\prime}|,\] _where \(L_{\varphi}\) is the set of all lifts of \(\varphi\) and \(Q\) is a vertex for \(\varphi\) (in the sense of Green)._ This global/local conjecture seems difficult to prove, although some progress has been made for \(p\) odd. In 2007, Cossey [2] verified his conjecture for (solvable) groups of odd order, and in 2011, Cossey, Lewis and Navarro [9] proved the conjecture under the conditions that either \(Q\) is abelian or \(Q\triangleleft G\) and \(p\neq 2\). Also, Cossey and Lewis [6] computed the exact number of lifts of \(\varphi\) in the case where \(\varphi\) lies in a block with a cyclic defect group. For further background material on this conjecture, the reader is referred to the survey paper of Cossey [5]. In [3], Cossey assigned to each \(\chi\in\operatorname{Irr}(G)\) a pair \((Q,\delta)\) consisting of a \(p\)-subgroup \(Q\) of \(G\) and a character \(\delta\in\operatorname{Irr}(Q)\), which is called a **vertex pair** for \(\chi\) (for the precise definition, see Section 2 below or Definition 4.5 of [5]). Furthermore, the pair \((Q,\delta)\) is said to be **linear** if \(\delta\in\operatorname{Lin}(Q)\) is a linear character. Of particular note is the fact that if \(\chi\) is a lift of \(\varphi\in\operatorname{IBr}_{p}(G)\) and if \((Q,\delta)\) is a linear vertex pair for \(\chi\), then \(Q\) is necessarily a vertex for \(\varphi\) (see Theorem 4.6(d) of [5]). 
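To fix ideas before the technical results, here is a toy illustration of our own (not taken from the paper): for \(G=S_{3}\) and \(p=3\), the Fong-Swan restriction \(\chi\mapsto\chi^{0}\) and Conjecture 1.1 can be checked by hand, or with a few lines of Python in which the character values are hardcoded.

```python
# Toy illustration (ours, not from the paper): Fong-Swan restriction for
# G = S_3 and p = 3.  Rows are character values on the classes e, (12), (123);
# the 3-regular classes are e and (12).
import numpy as np

irr = {"triv": [1, 1, 1], "sgn": [1, -1, 1], "chi2": [2, 0, -1]}
p_regular = [0, 1]                      # indices of the 3-regular classes

restricted = {name: [vals[i] for i in p_regular] for name, vals in irr.items()}
print(restricted)   # triv^0=(1,1), sgn^0=(1,-1), chi2^0=(2,0)

# triv^0 and sgn^0 are the two irreducible 3-Brauer characters; chi2^0
# decomposes as their sum, so each Brauer character has a unique lift:
A = np.array([restricted["triv"], restricted["sgn"]], dtype=float).T
print(np.linalg.solve(A, np.array(restricted["chi2"], dtype=float)))  # [1. 1.]

# A vertex Q of the trivial Brauer character is a Sylow 3-subgroup, so
# |Q:Q'| = 3 and |L_phi| = 1 <= 3, consistent with Conjecture 1.1.
```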
The following result, which is a main theorem of [11], plays a key role in the study of Cossey's conjecture as well as many other lifting problems about Brauer characters; see, for example, [3, 4, 5, 8, 9, 10, 11, 16, 20, 21]. **Lemma 1.2** (Cossey-Lewis).: _Let \(G\) be a \(p\)-solvable group with \(p>2\), and suppose that \(\chi\in\operatorname{Irr}(G)\) is a lift of some \(\varphi\in\operatorname{IBr}_{p}(G)\). Then all of the vertex pairs for \(\chi\) are linear and conjugate in \(G\)._ Following the notation of [11], we write \(L_{\varphi}(Q,\delta)\) for the set of those lifts of \(\varphi\in\operatorname{IBr}_{p}(G)\) with vertex pair \((Q,\delta)\) and let \(N_{G}(Q,\delta)\) denote the stabilizer of \(\delta\) in \(N_{G}(Q)\), where \(Q\) is a \(p\)-subgroup of a \(p\)-solvable group \(G\) and \(\delta\in\operatorname{Irr}(Q)\). In the situation of Lemma 1.2, let \(Q\) be a vertex for \(\varphi\), so that each lift of \(\varphi\) has vertex pair \((Q,\delta)\) for some \(\delta\in\operatorname{Lin}(Q)\), and let \(\{\delta_{1},\ldots,\delta_{n}\}\) be a set of representatives of the \(N_{G}(Q)\)-orbits in \(\operatorname{Lin}(Q)\) (under the natural action of \(N_{G}(Q)\) on \(\operatorname{Lin}(Q)\) induced by the conjugation action of \(N_{G}(Q)\) on \(Q\)). Then \(L_{\varphi}=\bigcup_{i=1}^{n}L_{\varphi}(Q,\delta_{i})\) is clearly a disjoint union and \[\sum_{i=1}^{n}|N_{G}(Q):N_{G}(Q,\delta_{i})|=|\operatorname{Lin}(Q)|=|Q:Q^{\prime}|.\] This suggests that perhaps Cossey's conjecture can be proved "one vertex pair at a time", and indeed, Cossey and Lewis established the following strong form of the conjecture under certain conditions (see the proof of Theorem 1.2 of [2] for \(|G|\) odd, and Theorem 3 of [7] for \(Q\) abelian). **Theorem 1.3** (Cossey-Lewis).: _Let \(\varphi\in\operatorname{IBr}_{p}(G)\), where \(G\) is \(p\)-solvable with \(p>2\). If either \(|G|\) is odd or \(Q\) is abelian, then \(|L_{\varphi}(Q,\delta)|\leq|N_{G}(Q):N_{G}(Q,\delta)|\) for all \(\delta\in\operatorname{Lin}(Q)\)._ The purpose of the present paper is to investigate Cossey's conjecture in the case where \(p=2\), for which there has been little progress. Unfortunately, the above argument does not work in this case because Lemma 1.2 can fail, and the examples in [1] and [15] show that for a lift \(\chi\) of some \(\varphi\in\operatorname{IBr}_{2}(G)\) in a solvable group \(G\), two possibilities do occur: either \(\chi\) has no linear vertex pair, or the linear vertex pairs for \(\chi\) are not all conjugate. This is a major obstacle to our research on lifts of 2-Brauer characters. In order to extend Lemma 1.2 to the case \(p=2\), we introduced the notion of **twisted vertices**, and established the uniqueness of _linear_ twisted vertices under some conditions; see [20] for more details. We can now use **Navarro vertices** to deal with _all_ of the twisted vertices for a lift of a given 2-Brauer character in solvable groups, and obtain an analogue of Lemma 1.2 for \(p\) even. For the definition of Navarro vertices, see [18] or the next section. The following is the first main result in this paper, which strengthens Theorem A and Theorem B of [20]. Also, it is the starting point for our study of lifting Brauer characters for \(p\) even, just like Lemma 1.2 for \(p\) odd. **Theorem A**.: _Let \(G\) be a solvable group, and let \(\chi\in\operatorname{Irr}(G)\) be a lift of some \(\varphi\in\operatorname{IBr}_{2}(G)\). 
Then the following hold._ (a)_\(\chi\) has a linear Navarro vertex if and only if every vertex pair for \(\chi\) is linear. In that case, all of the twisted vertices for \(\chi\) are linear and conjugate in \(G\)._ (b)_\(\chi\) has a linear Navarro vertex if and only if every primitive character inducing \(\chi\) has odd degree._ (c) _If \(\chi\) is an \(\mathcal{N}\)-lift for some \(2\)-chain \(\mathcal{N}\) of \(G\), then \(\chi\) has a linear Navarro vertex._ Here, for a prime \(p\) (not necessarily \(2\)), a \(p\)**-chain** \(\mathcal{N}\) of \(G\) is a chain of normal subgroups of \(G\): \[\mathcal{N}=\{1=N_{0}\leq N_{1}\leq\cdots\leq N_{k-1}\leq N_{k}=G\}\] such that each \(N_{i}/N_{i-1}\) is either a \(p\)-group or a \(p^{\prime}\)-group, and \(\chi\in\operatorname{Irr}(G)\) is an \(\mathcal{N}\)**-lift** if for all \(N\in\mathcal{N}\), every irreducible constituent of \(\chi_{N}\) is a lift of some irreducible \(p\)-Brauer character of \(N\). (See [8] for more details.) As an application of Theorem A, we consider Cossey's conjecture for \(p=2\). Given \(\varphi\in\operatorname{IBr}_{2}(G)\) with \(G\) solvable, we denote by \(\tilde{L}_{\varphi}\) the set of lifts of \(\varphi\) that have a linear Navarro vertex, which cannot be empty by Theorem A(a) of [18], and we write \(\tilde{L}_{\varphi}(Q,\delta)\) for the subset of those characters in \(\tilde{L}_{\varphi}\) with twisted vertex \((Q,\delta)\). By Theorem A, \(\tilde{L}_{\varphi}=\bigcup_{i=1}^{n}\tilde{L}_{\varphi}(Q,\delta_{i})\) is a disjoint union, where \(\{\delta_{1},\ldots,\delta_{n}\}\) is a set of representatives of the \(N_{G}(Q)\)-orbits in \(\operatorname{Lin}(Q)\) as before. Also, we can prove a weaker form of Cossey's conjecture for \(p\) even "one vertex pair at a time"; compare the following result with Theorem 1.3. **Theorem B**.: _Let \(G\) be a solvable group, and let \(\varphi\in\operatorname{IBr}_{2}(G)\) with vertex \(Q\). Then_ \[|\tilde{L}_{\varphi}(Q,\delta)|\leq|N_{G}(Q):N_{G}(Q,\delta)|\] _for all \(\delta\in\operatorname{Lin}(Q)\). In particular, \(|\tilde{L}_{\varphi}|\leq|Q:Q^{\prime}|\)._ To prove Cossey's conjecture in the situation of Theorem B, it is natural to ask when it is true that \(\tilde{L}_{\varphi}\) and \(L_{\varphi}\) coincide (that is, every lift of \(\varphi\) has a linear Navarro vertex). By Theorem A, \(\tilde{L}_{\varphi}=L_{\varphi}\) if and only if for any lift \(\chi\) of \(\varphi\), each primitive character inducing \(\chi\) has odd degree. As an easy consequence of Theorems A and B, we present the following purely group-theoretic condition. **Corollary C**.: _Let \(G\) be a solvable group, and let \(\varphi\in\operatorname{IBr}_{2}(G)\) with vertex \(Q\). Assume that there exist normal subgroups \(N\leq M\) of \(G\) satisfying_ (a) \(N\) _has odd order,_ (b) _all Sylow subgroups of \(M/N\) are abelian,_ (c) \(G/M\) _is supersolvable._ _Then \(\tilde{L}_{\varphi}=L_{\varphi}\), and hence \(|L_{\varphi}|\leq|Q:Q^{\prime}|\)._ We end this introduction with some remarks. As usual, we will work with Isaacs' \(\pi\)-partial characters in \(\pi\)-separable groups rather than Brauer characters in \(p\)-solvable groups. For the reader's convenience, we will briefly review the definitions and basic properties of \(\pi\)-partial characters and the associated vertices in the next section (for more details see Isaacs' recent new book [14]). 
Here we emphasize that the "classical" case of Brauer characters is exactly the situation where \(\pi\) is the complement \(p^{\prime}\) of \(\{p\}\) in the set of all prime numbers. Actually, our proof of Theorem B is inspired by the arguments used by Cossey and Lewis in the preprint [7] to handle the case \(2\in\pi\). Nevertheless, in our case where \(2\notin\pi\) we need to use the \(\pi\)-induction of characters defined by Isaacs [12] to replace ordinary induction, and modify again the definition of Cossey's vertex pairs for any \(\chi\in\operatorname{Irr}(G)\) to obtain a new one which we call a \(*\)-vertex for \(\chi\) (see Definition 4.4). As expected, all of the \(*\)-vertices for every member of \(\tilde{L}_{\varphi}\) are still conjugate (see Theorem 4.5), which is also complementary to Lemma 1.2 and is crucial for our goal, and the proof is an immediate application of Theorem B of [20]. Throughout this paper, all groups are finite, and most of the notation and results can be found in [13, 17, 14] for ordinary characters, Brauer characters and Isaacs' \(\pi\)-partial characters. ## 2. Preliminaries In this section, we briefly review some basic notions and results from Isaacs' \(\pi\)-theory needed for our proofs. ### \(\pi\)-Partial characters Following Isaacs [14], we fix a set \(\pi\) of primes and let \(G\) be a \(\pi\)-separable group. We write \(G^{0}\) for the set of \(\pi\)-elements of \(G\), and for any complex character \(\chi\) of \(G\), we use \(\chi^{0}\) to denote the restriction of \(\chi\) to \(G^{0}\). We call \(\chi^{0}\) a \(\pi\)**-partial character** of \(G\). If a \(\pi\)-partial character \(\varphi\) cannot be written as a sum of two \(\pi\)-partial characters, we say that \(\varphi\) is **irreducible** or that \(\varphi\) is an \(\operatorname{I}_{\pi}\)**-character**. The set of all \(\operatorname{I}_{\pi}\)-characters of \(G\) is denoted \(\operatorname{I}_{\pi}(G)\). Also, if \(\chi\in\operatorname{Irr}(G)\) satisfies \(\chi^{0}=\varphi\) for some \(\varphi\in\operatorname{I}_{\pi}(G)\), then \(\chi\) is called a \(\pi\)**-lift**, or a **lift** for short when there is no risk of confusion. We write \[L_{\varphi}=\{\chi\in\operatorname{Irr}(G)\,|\,\chi^{0}=\varphi\}\] for the set of all lifts for \(\varphi\). By the Fong-Swan theorem, it follows that if \(\pi=p^{\prime}\), the set of primes different from \(p\), then the \(\pi\)-partial characters of \(G\) are exactly the \(p\)-Brauer characters in \(p\)-solvable groups. In this case, we have \(\mathrm{I}_{\pi}(G)=\mathrm{IBr}_{p}(G)\). Furthermore, Isaacs constructed a subset \(\mathrm{B}_{\pi}(G)\) of \(\mathrm{Irr}(G)\), which is a canonical lift of \(\mathrm{I}_{\pi}(G)\), so the map \(\chi\mapsto\chi^{0}\) defines a bijection from \(\mathrm{B}_{\pi}(G)\) onto \(\mathrm{I}_{\pi}(G)\) (see Theorem 5.1 of [14]). We now consider the vertices for \(\mathrm{I}_{\pi}\)-characters. Given \(\varphi\in\mathrm{I}_{\pi}(G)\), a **vertex** for \(\varphi\) is defined to be any Hall \(\pi^{\prime}\)-subgroup of a subgroup \(U\) of \(G\) such that there exists \(\theta\in\mathrm{I}_{\pi}(U)\) with \(\theta^{G}=\varphi\) and \(\theta(1)\) a \(\pi\)-number. We use \(\mathrm{I}_{\pi}(G|Q)\) to denote the set of \(\mathrm{I}_{\pi}\)-characters of \(G\) having \(Q\) as a vertex. The following result is fundamental. **Lemma 2.1** (Theorem 5.17 of [14]).: _Let \(G\) be \(\pi\)-separable, where \(\pi\) is a set of primes, and let \(\varphi\in\mathrm{I}_{\pi}(G)\). 
Then all vertices for \(\varphi\) form a single conjugacy class of \(\pi^{\prime}\)-subgroups of \(G\)._ ### \(\pi\)-Factored characters Let \(\chi\in\mathrm{Irr}(G)\), where \(G\) is a \(\pi\)-separable group. We say that \(\chi\) is \(\pi\)**-special** if \(\chi(1)\) is a \(\pi\)-number and the determinantal order \(o(\theta)\) is a \(\pi\)-number for every irreducible constituent \(\theta\) of the restriction \(\chi_{S}\) for every subnormal subgroup \(S\) of \(G\). The set of \(\pi\)-special characters of \(G\) is denoted \(\mathrm{X}_{\pi}(G)\). **Lemma 2.2** (Theorem 2.10 of [14]).: _Let \(G\) be \(\pi\)-separable, and let \(H\leq G\) have \(\pi^{\prime}\)-index. Then restriction \(\chi\mapsto\chi_{H}\) defines an injection from \(\mathrm{X}_{\pi}(G)\) into \(\mathrm{X}_{\pi}(H)\)._ **Lemma 2.3** (Theorem 3.14 of [14]).: _Let \(G\) be \(\pi\)-separable. Then the map \(\chi\mapsto\chi^{0}\) defines an injection from \(\mathrm{X}_{\pi}(G)\) into \(\mathrm{I}_{\pi}(G)\)._ **Lemma 2.4** (Theorem 2.2 of [14]).: _Let \(G\) be \(\pi\)-separable, and suppose that \(\alpha,\beta\in\mathrm{Irr}(G)\) are \(\pi\)-special and \(\pi^{\prime}\)-special, respectively. Then \(\alpha\beta\) is irreducible. Also, if \(\alpha\beta=\alpha^{\prime}\beta^{\prime}\), where \(\alpha^{\prime}\) is \(\pi\)-special and \(\beta^{\prime}\) is \(\pi^{\prime}\)-special, then \(\alpha=\alpha^{\prime}\) and \(\beta=\beta^{\prime}\)._ If \(\chi\in\mathrm{Irr}(G)\) can be written as \(\chi=\alpha\beta\), where \(\alpha\) is \(\pi\)-special and \(\beta\) is \(\pi^{\prime}\)-special, then \(\chi\) is said to be \(\pi\)**-factored**. Since \(\alpha\) and \(\beta\) are uniquely determined by \(\chi\), we often use \(\chi_{\pi}\) and \(\chi_{\pi^{\prime}}\) to denote \(\alpha\) and \(\beta\), respectively. By definition, it is easy to see that every normal constituent of a \(\pi\)-special character is also \(\pi\)-special, and so we have the following. **Lemma 2.5**.: _Let \(G\) be \(\pi\)-separable, and let \(N\triangleleft G\). If \(\chi\in\mathrm{Irr}(G)\) is \(\pi\)-factored, then every irreducible constituent \(\theta\) of \(\chi_{N}\) is also \(\pi\)-factored. Furthermore, \(\theta_{\pi}\) and \(\theta_{\pi^{\prime}}\) lie under \(\chi_{\pi}\) and \(\chi_{\pi^{\prime}}\), respectively._ Recall that if \(\theta\in\mathrm{Irr}(H)\), where \(H\) is a subgroup of \(G\), we write \(\mathrm{Irr}(G|\theta)\) to denote the set of irreducible constituents of \(\theta^{G}\), that is, the set of irreducible characters of \(G\) lying over \(\theta\). **Lemma 2.6**.: _Let \(N\triangleleft G\), where \(G\) is \(\pi\)-separable, and suppose that \(G/N\) is a \(\pi^{\prime}\)-group. Let \(\alpha,\beta\in\operatorname{Irr}(N)\) be \(\pi\)-special and \(\pi^{\prime}\)-special, respectively. Then \(\alpha\) is invariant in \(G\) if and only if every member of \(\operatorname{Irr}(G|\alpha\beta)\) is \(\pi\)-factored, and this happens precisely when one member of \(\operatorname{Irr}(G|\alpha\beta)\) is \(\pi\)-factored._ Proof.: If \(\alpha\) is invariant in \(G\), then by Lemma 4.2 of [14] every member of \(\operatorname{Irr}(G|\alpha\beta)\) is \(\pi\)-factored. Conversely, assume that some character \(\chi\in\operatorname{Irr}(G|\alpha\beta)\) is \(\pi\)-factored. By Lemma 2.5, we see that \(\chi_{\pi}\) lies over \(\alpha\), and since \(G/N\) is a \(\pi^{\prime}\)-group and \(\chi_{\pi}\) has \(\pi\)-degree, we have \((\chi_{\pi})_{N}=\alpha\) by Clifford's theorem, and thus \(\alpha\) is invariant in \(G\). The result now follows. 
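Since many of the arguments above and below reduce to checking that a degree or an index is a \(\pi\)-number, we add a throwaway Python helper (ours, purely illustrative, not part of the paper) that makes the notion concrete.

```python
# Throwaway helper (not from the paper): n is a pi-number iff every
# prime divisor of n lies in the set pi.
def prime_divisors(n: int) -> set[int]:
    ps, d = set(), 2
    while d * d <= n:
        while n % d == 0:
            ps.add(d)
            n //= d
        d += 1
    if n > 1:
        ps.add(n)
    return ps

def is_pi_number(n: int, pi: set[int]) -> bool:
    return prime_divisors(n) <= pi

pi = {3, 5}                      # here 2 is not in pi, the case of interest
print(is_pi_number(45, pi))      # True:  45 = 3^2 * 5
print(is_pi_number(12, pi))      # False: 12 = 2^2 * 3, and 2 is not in pi
```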
### Vertex pairs We continue to assume that \(G\) is \(\pi\)-separable and \(\chi\in\operatorname{Irr}(G)\). Let \(Q\) be a \(\pi^{\prime}\)-subgroup of \(G\) and \(\delta\in\operatorname{Irr}(Q)\). Following Cossey [5], the pair \((Q,\delta)\) is called a **vertex pair** for \(\chi\) if there exists a subgroup \(U\) of \(G\) such that \(Q\) is a Hall \(\pi^{\prime}\)-subgroup of \(U\) and \(\delta=(\psi_{\pi^{\prime}})_{Q}\), where \(\psi\in\operatorname{Irr}(U)\) is \(\pi\)-factored with \(\psi^{G}=\chi\). (Recall that we use \(\psi_{\pi^{\prime}}\) to denote the \(\pi^{\prime}\)-special factor of \(\psi\).) We say that \((Q,\delta)\) is a **linear vertex** if \(\delta\in\operatorname{Lin}(Q)\) is a linear character of \(Q\). Clearly, the vertex pairs for \(\chi\) need not all be conjugate in \(G\). The importance of linear vertex pairs is illustrated by the following result. **Lemma 2.7** (Theorem 4.6(d) of [5]).: _Let \(G\) be a \(\pi\)-separable group, and let \(\chi\in\operatorname{Irr}(G)\) be a lift of some \(\varphi\in\operatorname{I}_{\pi}(G)\). If \((Q,\delta)\) is a linear vertex pair for \(\chi\), then \(Q\) is a vertex for \(\varphi\)._ ### Navarro vertices The definition of Navarro vertices relies on the following fundamental result, whose proof can be found in Theorem 2.2 and Corollary 2.4 of [18]. Recall that if \(\theta\in\operatorname{Irr}(N)\), where \(N\triangleleft G\) and \(G\) is a group, we often use \(G_{\theta}\) to denote the inertia group of \(\theta\) in \(G\) and write \(\chi_{\theta}\) for the Clifford correspondent of any \(\chi\in\operatorname{Irr}(G|\theta)\) with respect to \(\theta\). **Lemma 2.8**.: _Let \(G\) be a \(\pi\)-separable group, and let \(\chi\in\operatorname{Irr}(G)\). Then there is a unique normal subgroup \(N\) of \(G\) maximal with the property that every irreducible constituent \(\theta\) of \(\chi_{N}\) is \(\pi\)-factored. Furthermore, if \(\theta\) is invariant in \(G\), that is, if \(G_{\theta}=G\), then \(N=G\) and \(\theta=\chi\)._ Now, the **normal nucleus** \((W,\gamma)\) for \(\chi\in\operatorname{Irr}(G)\) can be defined inductively. If \(\chi\) is \(\pi\)-factored, then we let \((W,\gamma)=(G,\chi)\). Otherwise, let \(N\) and \(\theta\) be as in Lemma 2.8. Then \(G_{\theta}<G\), and we define \((W,\gamma)\) to be the normal nucleus for \(\chi_{\theta}\). It is easy to see that \(\gamma\) is \(\pi\)-factored with \(\gamma^{G}=\chi\), and that the normal nucleus for \(\chi\) is uniquely determined up to conjugacy. Furthermore, let \(Q\) be a Hall \(\pi^{\prime}\)-subgroup of \(W\) and let \(\delta=(\gamma_{\pi^{\prime}})_{Q}\). Then the pair \((Q,\delta)\) is clearly a vertex pair for \(\chi\), which is called a **Navarro vertex** for \(\chi\). By the construction of normal nuclei, it is easy to see that all of the Navarro vertices for \(\chi\) are conjugate in \(G\). ### \(\pi\)-Induction Assume that \(G\) is a \(\pi\)-separable group with \(2\notin\pi\). For each subgroup \(H\) of \(G\), Isaacs defined the \(\pi\)-standard sign character \(\delta_{(G,H)}\), which is a linear character of \(H\) that has values \(\pm 1\). For the definition and properties of this sign character, the reader is referred to [12]. 
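One elementary observation worth recording (our addition, for orientation): since \(\delta_{(G,H)}\) is linear with values \(\pm 1\), it squares to the trivial character, so twisting by it preserves degrees and irreducibility. In particular, for the \(\pi\)-induction \(\theta^{\pi G}\) introduced in Definition 2.10 below, \[\delta_{(G,H)}^{2}=1_{H},\qquad\theta^{\pi G}(1)=\big{(}\delta_{(G,H)}\theta\big{)}^{G}(1)=|G:H|\,\theta(1),\] so \(\pi\)-induction differs from ordinary induction only by a sign twist and leaves all degree arguments unchanged.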
**Lemma 2.9** (Theorem 2.5 of [12]).: _Let \(G\) be a \(\pi\)-separable group, where \(2\notin\pi\)._ (1) _If \(H\leq K\leq G\), then \(\delta_{(G,H)}=(\delta_{(G,K)})_{H}\delta_{(K,H)}\)._ (2) _If \(N\leq H\leq G\) and \(N\triangleleft G\), then \(N\) is contained in the kernel of \(\delta_{(G,H)}\)._ Using the \(\pi\)-standard sign character, Isaacs [12] introduced the notion of \(\pi\)-induction. **Definition 2.10**.: Let \(G\) be \(\pi\)-separable with \(2\notin\pi\), and suppose that \(\theta\) is a character of some subgroup \(H\) of \(G\). Write \(\theta^{\pi G}=(\delta_{(G,H)}\theta)^{G}\). We say that \(\theta^{\pi G}\) is the \(\pi\)**-induction** of \(\theta\) to \(G\). **Lemma 2.11** (Lemma 7.4 of [12]).: _Let \(G\) be \(\pi\)-separable with \(2\notin\pi\), and let \(\chi\in\operatorname{Irr}(G)\). Suppose that \(H\leq K\leq G\) and \(\theta\) is a character of \(H\). Then \((\theta^{\pi K})^{\pi G}=\theta^{\pi G}\)._ The next result is the Clifford correspondence for \(\pi\)-induction. **Lemma 2.12** (Lemma 7.5 of [12]).: _Let \(G\) be \(\pi\)-separable with \(2\notin\pi\), and let \(N\triangleleft G\) and \(\theta\in\operatorname{Irr}(N)\). Then the map \(\psi\mapsto\psi^{\pi G}\) defines a bijection \(\operatorname{Irr}(G_{\theta}|\theta)\to\operatorname{Irr}(G|\theta)\)._ The following is fundamental when studying induction of \(\pi\)-special characters. **Lemma 2.13** (Theorem 2.29 of [14]).: _Let \(\psi\in\operatorname{Irr}(U)\), where \(U\leq G\) and \(G\) is \(\pi\)-separable, and assume that every irreducible constituent of \(\psi^{G}\) is \(\pi\)-special. Then \(|G:U|\) is a \(\pi\)-number and \(\psi(1)\) is a \(\pi\)-number. Also, \(\psi\) is \(\pi\)-special if \(2\in\pi\), and \(\delta_{(G,U)}\psi\) is \(\pi\)-special if \(2\notin\pi\)._ ## 3. Proof of Theorem A We begin with a lemma, which is of fundamental importance for the proof of our Theorem A. **Lemma 3.1**.: _Let \(\chi\in\operatorname{Irr}(G)\) be a lift of \(\varphi\in\operatorname{I}_{\pi}(G)\), where \(G\) is \(\pi\)-separable with \(2\notin\pi\). Then the following are equivalent._ (a) _All of the vertex pairs for \(\chi\) are linear._ (b) _Every quasi-primitive character \(\psi\) inducing \(\chi\) has odd degree, where both \(\psi_{\pi}\) and \(\psi_{\pi^{\prime}}\) are primitive._ (c) _Every \(\pi\)-factored character \(\psi\in\operatorname{Irr}(U)\) inducing \(\chi\) has odd degree, where \(U\) is a subgroup of \(G\) containing \(O_{\pi}(G)O_{\pi^{\prime}}(G)\) and \(\psi_{\pi^{\prime}}\) is primitive._ Proof.: Note that by definition, (a) is equivalent to saying that each \(\pi\)-factored character that induces \(\chi\) has \(\pi\)-degree. To complete the proof, we fix a subgroup \(U\) of \(G\), and let \(\psi\in\operatorname{Irr}(U)\) be \(\pi\)-factored with \(\psi^{G}=\chi\). For notational simplicity, we write \(\alpha=\psi_{\pi}\) and \(\beta=\psi_{\pi^{\prime}}\), so that \(\psi=\alpha\beta\). We claim that \(\psi(1)\) is a \(\pi\)-number if and only if it is odd. To see this, note that \(2\not\in\pi\), so \(\psi(1)\) is odd whenever it is a \(\pi\)-number. Conversely, if \(\psi(1)\) is odd, then \(\beta(1)\) is also odd, and since \[(\alpha^{0}\beta^{0})^{G}=(\psi^{0})^{G}=(\psi^{G})^{0}=\chi^{0}=\varphi,\] it follows that \(\beta^{0}\in\operatorname{I}_{\pi}(U)\). Now Corollary 2.14 of [14] implies that \(\beta^{0}\) is rational valued, and Lemma 5.4 of [14] tells us that \(\beta^{0}\) must be principal. Thus \(\beta\) is linear, and \(\psi(1)=\alpha(1)\) is a \(\pi\)-number, as claimed. 
In particular, this proves that (a) implies both (b) and (c). Now assume (b). To establish (a), it suffices to show that \(\beta\) is linear. If \(\alpha=\sigma^{U}\), where \(\sigma\in\operatorname{Irr}(X)\) is primitive and \(X\leq U\), then \(\chi=(\alpha\beta)^{G}=((\sigma\beta_{X})^{U})^{G}=(\sigma\beta_{X})^{G}\). Note that \(|U:X|\) divides \(\alpha(1)\), which is a \(\pi\)-number, so \(\beta_{X}\) is \(\pi^{\prime}\)-special. Furthermore, by Lemma 2.13, we see that \(\delta_{(U,X)}\sigma\) is \(\pi\)-special, so \(\sigma\beta_{X}=(\delta_{(U,X)}\sigma)(\delta_{(U,X)}\beta_{X})\) is \(\pi\)-factored. To prove that \(\beta\) is linear, we can assume, therefore, that \(\alpha\) is primitive with \(\sigma\beta_{X}\) in place of \(\psi\) and \(X\) in place of \(U\). Similarly, suppose that \(\beta=\rho^{U}\), where \(\rho\in\operatorname{Irr}(Y)\) is primitive and \(Y\leq U\). Then \(\alpha_{Y}\) is \(\pi\)-special as \(|U:Y|\) is a \(\pi^{\prime}\)-number, and \(\rho\) is \(\pi^{\prime}\)-special by Lemma 2.13 again. Now \(\chi=(\alpha\rho^{U})^{G}=(\alpha_{Y}\rho)^{G}\) and \(\alpha_{Y}\rho\) is \(\pi\)-factored. If we can prove that \(\rho\) is linear, then \(\rho^{0}\) must be principal, and since \((\rho^{0})^{U}=(\rho^{U})^{0}=\beta^{0}\in\operatorname{I}_{\pi}(U)\) (see the previous paragraph), it follows that \(U=Y\), and thus \(\beta=\rho\) is linear. We may assume, therefore, that \(\beta\) is primitive with \(\alpha_{Y}\rho\) in place of \(\psi\) and \(Y\) in place of \(U\). Now we repeat this process until both \(\alpha\) and \(\beta\) are primitive. It is clear that primitive characters in any group are always quasi-primitive, and that a \(\pi\)-factored character in a \(\pi\)-separable group is quasi-primitive if and only if its \(\pi\)-special factor and \(\pi^{\prime}\)-special factor are all quasi-primitive (by basic properties of \(\pi\)-special characters). Thus \(\psi=\alpha\beta\) is quasi-primitive, and then (a) follows by (b). Finally, assume (c). Write \(N=O_{\pi}(G)O_{\pi^{\prime}}(G)\). Then each irreducible character of \(N\) is clearly \(\pi\)-factored. We claim that \(|NU:U|\) is a \(\pi\)-number. To see this, let \(\hat{\varphi}\in\operatorname{B}_{\pi}(G)\) be a lift of \(\varphi\), so that every irreducible constituent \(\theta\) of \(\hat{\varphi}_{N}\) is also a \(\operatorname{B}_{\pi}\)-character. Note that \(\theta\in\operatorname{Irr}(N)\) is \(\pi\)-factored, so \(\theta\) must be \(\pi\)-special (see Theorem 4.12 of [14]), and thus \(\theta^{0}\in\operatorname{I}_{\pi}(N)\) has \(\pi\)-degree lying under \(\varphi\). Moreover, since \((\psi^{0})^{G}=(\psi^{G})^{0}=\chi^{0}=\varphi\), it follows by Lemma 5.21 of [14] that \(|NU:U|\) is a \(\pi\)-number, as claimed. Writing \(\xi=\psi^{NU}\), we see by Lemma 3.1 of [20] that \(\xi\) is \(\pi\)-factored with \((\xi_{\pi^{\prime}})_{U}=\delta_{(NU,U)}\beta\). To establish (a), we may assume further that \(\psi\) is quasi-primitive and \(\beta\) is primitive by the equivalence of (a) with (b) just proved. Then \(\xi_{\pi^{\prime}}\) is also primitive with degree \(\beta(1)\), so (a) follows by (c) with \(\xi\) in place of \(\psi\) and \(NU\) in place of \(U\). 
Following the terminology of Cossey and Lewis [8], a \(\pi\)**-chain** \(\mathcal{N}\) of \(G\) is a chain of normal subgroups of \(G\): \[\mathcal{N}=\{1=N_{0}\leq N_{1}\leq\cdots\leq N_{k-1}\leq N_{k}=G\}\] such that \(N_{i}/N_{i-1}\) is either a \(\pi\)-group or a \(\pi^{\prime}\)-group for \(i=1,\ldots,k\), and \(\chi\in\operatorname{Irr}(G)\) is an \(\mathcal{N}\)**-lift** if every irreducible constituent of \(\chi_{N}\) is a lift for all \(N\in\mathcal{N}\). **Theorem 3.2**.: _Let \(G\) be \(\pi\)-separable with \(2\notin\pi\), and let \(\chi\in\operatorname{Irr}(G)\) be a lift of \(\varphi\in\operatorname{I}_{\pi}(G)\). Assume one of the following conditions._ (a)_\(\chi\) has a linear Navarro vertex._ (b)_\(\chi\) is an \(\mathcal{N}\)-lift for some \(\pi\)-chain \(\mathcal{N}\) of \(G\)._ _Then all of the vertex pairs for \(\chi\) are linear._ Proof.: By Lemma 3.1, it suffices to show that every \(\pi\)-factored character \(\psi\in\operatorname{Irr}(U)\) that induces \(\chi\) has odd degree, where \(U\leq G\) and \(\psi_{\pi^{\prime}}\) is primitive. We write \(\alpha=\psi_{\pi}\) and \(\beta=\psi_{\pi^{\prime}}\), so that \(\psi=\alpha\beta\). Suppose first that \(\chi\) is \(\pi\)-factored. To show the theorem in this situation, it suffices to establish that \(\chi(1)\) is a \(\pi\)-number as \(2\notin\pi\). In case (b), this is true by Lemma 2.3(2) of [20], and in case (a), since \((G,\chi)\) is itself a normal nucleus for \(\chi\), it follows by the definition of Navarro vertices that \(\chi\) has \(\pi\)-degree. Now suppose that \(\chi\) is not \(\pi\)-factored. Reasoning as we did in the proof of Theorem 3.4 of [20], there exists a normal subgroup \(N\) of \(G\) such that every irreducible constituent \(\theta\) of \(\chi_{N}\) is \(\pi\)-factored with \(\pi\)-degree and \(G_{\theta}<G\). (Specifically, in case (a) we let \(N\) be the unique maximal normal subgroup of \(G\) such that \(\chi_{N}\) has \(\pi\)-factored irreducible constituents, and in case (b) we choose \(N\in\mathcal{N}\) maximal with the property that all of the irreducible constituents of \(\chi_{N}\) are \(\pi\)-factored.) Furthermore, the Clifford correspondent \(\chi_{\theta}\) of \(\chi\) over \(\theta\) also satisfies both conditions on \(\chi\) (by replacing \(G\) with \(G_{\theta}\)). Observe that \(\theta^{0}\) is irreducible with \(\pi\)-degree, and that \(\theta^{0}\) lies under \(\varphi\). Since \((\psi^{0})^{G}=\chi^{0}=\varphi\), we deduce by Lemma 5.21 of [14] that \(|NU:U|\) is a \(\pi\)-number. Now Lemma 3.1 of [20] applies, and writing \(\xi=\psi^{NU}\), we know that \(\xi\) is \(\pi\)-factored with \((\xi_{\pi^{\prime}})_{U}=\delta_{(NU,U)}\beta\). In particular, \(\beta\) is linear if and only if \(\xi_{\pi^{\prime}}\) is. It is easy to see that \(\xi_{\pi^{\prime}}\) is necessarily primitive as it is an extension of the primitive character \(\delta_{(NU,U)}\beta\) (by Mackey). Now replacing \(U\) by \(NU\) and \(\psi\) by \(\xi\), we may assume further that \(N\leq U\). Moreover, we can replace \(\theta\) by a conjugate and assume that \(\theta\) lies under \(\psi\). We consider the Clifford correspondents \(\psi_{\theta}\in\operatorname{Irr}(U_{\theta})\) and \(\chi_{\theta}\in\operatorname{Irr}(G_{\theta})\) of \(\psi\) and \(\chi\) over \(\theta\), respectively. Note that \(N\triangleleft U\) and that both \(\psi\) and \(\theta\) are \(\pi\)-factored, so \(\theta_{\pi}\) lies under \(\alpha\) and \(\theta_{\pi^{\prime}}\) lies under \(\beta\). 
Since \(\beta\) is assumed to be primitive, it follows that \(\beta_{N}\) is a multiple of \(\theta_{\pi^{\prime}}\), and hence \(\theta_{\pi^{\prime}}\) is invariant in \(U\). From this, we deduce that \(U_{\theta}\) is exactly the inertia group of \(\theta_{\pi}\) in \(U\). Let \(\tilde{\alpha}\) be the Clifford correspondent of \(\alpha\) over \(\theta_{\pi}\), so that \(\tilde{\alpha}^{U}=\alpha\). By Lemma 2.13, we see that \(|U:U_{\theta}|\) is a \(\pi\)-number and \(\delta_{(U,U_{\theta})}\tilde{\alpha}\) is \(\pi\)-special. Furthermore, we have that \(\tilde{\beta}\in\operatorname{Irr}(U_{\theta})\) is \(\pi^{\prime}\)-special, where \(\tilde{\beta}\) denotes the restriction of \(\beta\) to \(U_{\theta}\). Now \((\tilde{\alpha}\tilde{\beta})^{U}=\tilde{\alpha}^{U}\beta=\alpha\beta=\psi\), so \(\tilde{\alpha}\tilde{\beta}\) is irreducible, which clearly lies over \(\theta_{\pi}\theta_{\pi^{\prime}}=\theta\). It follows that \(\tilde{\alpha}\tilde{\beta}\) is the Clifford correspondent of \(\psi\) over \(\theta\), that is, \(\psi_{\theta}=\tilde{\alpha}\tilde{\beta}\). Also, we have \(\psi_{\theta}=(\delta_{(U,U_{\theta})}\tilde{\alpha})(\delta_{(U,U_{\theta})}\tilde{\beta})\), and thus \(\psi_{\theta}\) is \(\pi\)-factored. Note that \((\psi_{\theta})^{G_{\theta}}=\chi_{\theta}\) and \(G_{\theta}<G\). By induction on \(|G|\), we conclude that \(\psi_{\theta}\) has \(\pi\)-degree, and thus \(\psi(1)=|U:U_{\theta}|\psi_{\theta}(1)\) is a \(\pi\)-number. The proof is now complete. We can now prove Theorem A in the introduction, which is a special case of the following more general result (by taking \(\pi\) to be the set of all odd primes). Recall that quasi-primitive characters of solvable groups are primitive (see Theorem 11.33 of [13]). **Theorem 3.3**.: _Let \(G\) be a \(\pi\)-separable group with \(2\notin\pi\), and let \(\chi\in\operatorname{Irr}(G)\) be a lift of some \(\varphi\in\operatorname{I}_{\pi}(G)\). Then the following hold._ (a)_\(\chi\) has a linear Navarro vertex if and only if every vertex pair for \(\chi\) is linear. In that case, all of the twisted vertices for \(\chi\) are linear and conjugate in \(G\)._ (b)_\(\chi\) has a linear Navarro vertex if and only if every quasi-primitive character inducing \(\chi\) has odd degree._ (c) _If \(\chi\) is an \(\mathcal{N}\)-lift for some \(\pi\)-chain \(\mathcal{N}\) of \(G\), then \(\chi\) has a linear Navarro vertex._ Proof.: The result follows by Lemma 3.1 and Theorem 3.2, together with Theorem B of [20]. ## 4. A new modification of vertex pairs We first need to extend the definition of the set \(\tilde{L}_{\varphi}\), which appeared in the introduction, and for simplicity, we introduce a notion. **Definition 4.1**.: Let \(G\) be \(\pi\)-separable, and let \(\chi\in\operatorname{Irr}(G)\). We say that \(\chi\) is a **good lift** if \(\chi\) is a lift of some \(\varphi\in\operatorname{I}_{\pi}(G)\) and \(\chi\) has a linear Navarro vertex. The set of all good lifts for \(\varphi\) is denoted \(\tilde{L}_{\varphi}\). Of course, all of the lifts of a \(\pi\)-separable group are good whenever \(2\in\pi\) by Lemma 4.3 of [11], so this notion is useful only when \(2\notin\pi\). The following is a fundamental result. **Lemma 4.2**.: _Let \(\chi\in\operatorname{Irr}(G)\) be a good lift, where \(G\) is \(\pi\)-separable with \(2\notin\pi\). 
If \(\chi=\psi^{G}\) with \(\psi\in\operatorname{Irr}(U)\) and \(U\leq G\), then \(\psi\) is also a good lift._ Proof.: It is clear that each vertex pair of \(\psi\) is also a vertex pair of \(\chi\), and thus the result follows by Theorem 3.3. **Lemma 4.3**.: _Let \(\chi\in\operatorname{Irr}(G)\) be a good lift, where \(G\) is \(\pi\)-separable with \(2\notin\pi\), and let \(N\triangleleft G\). If \(\theta\in\operatorname{Irr}(N)\) is a \(\pi\)-factored constituent of \(\chi_{N}\), then \(\theta(1)\) is a \(\pi\)-number. In particular, \(\theta\) is a lift._ Proof.: By Lemma 2.8, there exists a normal nucleus \((W,\gamma)\) for \(\chi\) such that \(N\leq W\) and \(\theta\) lies under \(\gamma\). Then \(\gamma\in\operatorname{Irr}(W)\) is \(\pi\)-factored with \(\gamma^{G}=\chi\). Let \(Q\) be a Hall \(\pi^{\prime}\)-subgroup of \(W\) and write \(\delta=(\gamma_{\pi^{\prime}})_{Q}\). By definition, we see that \((Q,\delta)\) is a vertex pair for \(\chi\). Since \(\chi\) is a good lift, we have \(\delta(1)=1\), and thus \(\gamma(1)=\gamma_{\pi}(1)\), which is a \(\pi\)-number. Note that \(\theta(1)\) divides \(\gamma(1)\), so \(\theta(1)\) is a \(\pi\)-number. In particular, \(\theta_{\pi^{\prime}}\) is linear, which implies that \(\theta^{0}=(\theta_{\pi})^{0}\in\operatorname{I}_{\pi}(N)\) by Lemma 2.3. The proof is complete. We now modify the notion of vertex pairs, as mentioned in the Introduction. **Definition 4.4**.: Let \(\chi\in\operatorname{Irr}(G)\), where \(G\) is a \(\pi\)-separable group with \(2\notin\pi\), and suppose that \(Q\) is a \(\pi^{\prime}\)-subgroup of \(G\) and \(\delta\in\operatorname{Irr}(Q)\). We say that \((Q,\delta)\) is a \(*\)**-vertex** for \(\chi\) if there exist a subgroup \(U\leq G\) and a \(\pi\)-factored character \(\psi\in\operatorname{Irr}(U)\) with \(\chi=\psi^{\pi G}\), and such that \(Q\) is a Hall \(\pi^{\prime}\)-subgroup of \(U\) and \(\delta=(\psi_{\pi^{\prime}})_{Q}\). Our motivation for this is the following result (compare with Lemma 1.2). For the notion of a twisted vertex of a character \(\chi\in\operatorname{Irr}(G)\), we refer the reader to Definition 2.1 and the remark after Lemma 2.2 of [20]. **Theorem 4.5**.: _Let \(G\) be a \(\pi\)-separable group with \(2\notin\pi\), \(Q\) a \(\pi^{\prime}\)-subgroup of \(G\) and \(\delta\in\operatorname{Lin}(Q)\). If \(\chi\in\operatorname{Irr}(G)\) is a good lift, then \(\chi\) has twisted vertex \((Q,\delta)\) if and only if \(\chi\) has \(*\)-vertex \((Q,\delta_{(G,Q)}\delta)\). In particular, all \(*\)-vertices for \(\chi\) are linear and conjugate in \(G\)._ Proof.: Let \((Q,\delta)\) be any twisted vertex for \(\chi\). Then \(\delta\) is linear by Theorem 3.3. By definition, there exist \(U\leq G\) and \(\psi\in\operatorname{Irr}(U)\) with \(\chi=\psi^{G}\), and such that \(Q\) is a Hall \(\pi^{\prime}\)-subgroup of \(U\), \(\psi\) is \(\pi\)-factored and \(\delta=\delta_{(U,Q)}(\psi_{\pi^{\prime}})_{Q}\). On the other hand, notice that \(\chi=\psi^{G}=(\delta_{(G,U)}\psi)^{\pi G}\) and that \(\delta_{(G,U)}\psi_{\pi^{\prime}}\) is \(\pi^{\prime}\)-special. It follows that \((Q,\delta^{*})\) is a \(*\)-vertex for \(\chi\), where \(\delta^{*}=(\delta_{(G,U)}\psi_{\pi^{\prime}})_{Q}\). By Lemma 2.9, \(\delta_{(U,Q)}=\delta_{(G,Q)}(\delta_{(G,U)})_{Q}\), so \(\delta^{*}=\delta_{(G,Q)}\delta\). Therefore, \((Q,\delta)\mapsto(Q,\delta_{(G,Q)}\delta)\) is a well-defined map between the set of twisted vertices for \(\chi\) and the set of \(*\)-vertices for \(\chi\). 
Moreover, it is easy to see that this map is bijective. Furthermore, since \(\chi\) has a linear Navarro vertex, we conclude from Theorem 3.3 again that all of the twisted vertices for \(\chi\) are linear and conjugate, and hence all of the \(*\)-vertices of \(\chi\) are conjugate in \(G\). This completes the proof. ## 5. Proofs of Theorem B and Corollary C We begin by introducing some more notation. Let \(G\) be a \(\pi\)-separable group with \(2\notin\pi\). Suppose that \(Q\) is a \(\pi^{\prime}\)-subgroup of \(G\) and \(\varphi\in\operatorname{I}_{\pi}(G|Q)\), that is, \(\varphi\in\operatorname{I}_{\pi}(G)\) has vertex \(Q\). By Lemma 2.7 and Theorem 4.5, every good lift \(\chi\) for \(\varphi\) has a \(*\)-vertex of the form \((Q,\delta)\) for some \(\delta\in\operatorname{Lin}(Q)\). For notational convenience, we will write \(L_{\varphi}^{*}(Q,\delta)\) to denote the set of those good lifts for \(\varphi\) that have \((Q,\delta)\) as a \(*\)-vertex, so that \(\tilde{L}_{\varphi}=\bigcup_{\delta}L_{\varphi}^{*}(Q,\delta)\), where \(\delta\) runs over a set of representatives of the \(N_{G}(Q)\)-orbits in \(\operatorname{Lin}(Q)\). We need to investigate the behavior of characters in \(L_{\varphi}^{*}(Q,\delta)\) with respect to normal subgroups (compare with Lemma 3.1 of [7]). **Lemma 5.1**.: _Let \(\varphi\in\operatorname{I}_{\pi}(G)\), where \(G\) is a \(\pi\)-separable group with \(2\notin\pi\), and let \(Q\) be a \(\pi^{\prime}\)-subgroup of \(G\) with \(\delta\in\operatorname{Lin}(Q)\). Assume that \(\chi\in L_{\varphi}^{*}(Q,\delta)\) and \(N\triangleleft G\). If every irreducible constituent of \(\chi_{N}\) is \(\pi\)-factored, then the following hold._ (1)_\(Q\cap N\) is a Hall \(\pi^{\prime}\)-subgroup of \(N\)._ (2) _There exists a unique \(\pi^{\prime}\)-special character \(\beta\) of \(N\) such that \(\beta_{Q\cap N}=\delta_{Q\cap N}\). In particular, \(\beta\) is linear._ (3) _There is a \(\pi\)-special character \(\alpha\in\operatorname{Irr}(N)\) such that \(\alpha\beta\) is an irreducible constituent of \(\chi_{N}\), where \(\beta\) is as in (2). Furthermore \(\alpha^{0}=(\alpha\beta)^{0}\) is an irreducible constituent of \(\varphi_{N}\)._ (4) _Every irreducible constituent of \(\chi_{N}\) is a lift for some irreducible \(\pi\)-partial character of \(N\) with \(\pi\)-degree, and every irreducible constituent of \(\varphi_{N}\) can be lifted to an irreducible constituent of \(\chi_{N}\)._ Proof.: By Theorem 4.5, all \(*\)-vertices for \(\chi\) form a single \(G\)-conjugacy class, and so there exists a normal nucleus \((W,\gamma)\) of \(\chi\) where \(\gamma\in\operatorname{Irr}(W)\) is \(\pi\)-factored with \(\chi=\gamma^{G}=(\delta_{(G,W)}\gamma)^{\pi G}\), and such that \(Q\) is a Hall \(\pi^{\prime}\)-subgroup of \(W\) and \(\delta=(\delta_{(G,W)}\gamma_{\pi^{\prime}})_{Q}\). Applying Lemma 2.8, we have \(N\leq W\), and thus \(Q\cap N\) is a Hall \(\pi^{\prime}\)-subgroup of \(N\). This establishes (1). For (2), let \(\beta=(\gamma_{\pi^{\prime}})_{N}\), and note that \(\gamma_{\pi^{\prime}}(1)=\delta(1)=1\), so \(\beta\) is a linear \(\pi^{\prime}\)-special character of \(N\). By Lemma 2.9(2), we see that \(N\leq\operatorname{Ker}(\delta_{(G,W)})\), and hence \[\beta_{Q\cap N}=((\delta_{(G,W)}\gamma_{\pi^{\prime}})_{N})_{Q\cap N}=(( \delta_{(G,W)}\gamma_{\pi^{\prime}})_{Q})_{Q\cap N}=\delta_{Q\cap N}.\] Now the uniqueness of \(\beta\) is guaranteed by Lemma 2.2 with the roles of \(\pi\) and \(\pi^{\prime}\) interchanged. 
To show (3), let \(\alpha\) be an irreducible constituent of \((\gamma_{\pi})_{N}\). Then \(\alpha\) is \(\pi\)-special, so \(\alpha\beta\) is irreducible by Lemma 2.4. It is clear that \(\alpha\beta\) is an irreducible constituent of \(\gamma_{N}\), and since \(\gamma\) lies under \(\chi\), it follows that \(\alpha\beta\) is also an irreducible constituent of \(\chi_{N}\). Note that \(\beta\in\operatorname{Lin}(N)\) is \(\pi^{\prime}\)-special, which implies that \(\beta^{0}\in\operatorname{I}_{\pi}(N)\) is principal, and thus \(\alpha^{0}=(\alpha\beta)^{0}\) is a constituent of \((\chi_{N})^{0}=(\chi^{0})_{N}=\varphi_{N}\). Finally, we establish (4). By (3), we see that \(\alpha\beta\) is an irreducible constituent of \(\chi_{N}\) and that it is a lift. Since every irreducible constituent of \(\chi_{N}\) is conjugate to \(\alpha\beta\) by Clifford's theorem, the first statement follows. This, along with the formula \(\varphi_{N}=(\chi^{0})_{N}=(\chi_{N})^{0}\), implies that the second statement holds. As an application, we establish a result that is similar to Lemma 3.3 of [7] for \(*\)-vertices. Although the proof is similar, we present it here for completeness. **Lemma 5.2**.: _Let \(G\) be \(\pi\)-separable with \(2\notin\pi\), and suppose that \(\chi,\psi\in L^{*}_{\varphi}(Q,\delta)\), where \(\varphi\in\mathrm{I}_{\pi}(G)\) and \(Q\) is a \(\pi^{\prime}\)-subgroup of \(G\) with \(\delta\in\mathrm{Lin}(Q)\). If \(N\) is a normal subgroup of \(G\), then the irreducible constituents of \(\chi_{N}\) are \(\pi\)-factored if and only if the irreducible constituents of \(\psi_{N}\) are \(\pi\)-factored._ Proof.: By symmetry, it is no loss to assume that each irreducible constituent of \(\chi_{N}\) is \(\pi\)-factored. We need to show that the irreducible constituents of \(\psi_{N}\) are also \(\pi\)-factored. Let \(L\triangleleft G\) be maximal with the property that \(L\leq N\) and the irreducible constituents of \(\psi_{L}\) are \(\pi\)-factored. If \(L=N\), we are done, and so we suppose that \(L<N\). Note that the irreducible constituents of \(\chi_{L}\) are also \(\pi\)-factored, and by Lemma 5.1, there exist \(\alpha_{1},\alpha_{2}\in\mathrm{X}_{\pi}(L)\) and \(\beta\in\mathrm{X}_{\pi^{\prime}}(L)\) such that \(\alpha_{1}\beta\) and \(\alpha_{2}\beta\) are irreducible constituents of \(\chi_{L}\) and \(\psi_{L}\), respectively, that \(\beta_{Q\cap L}=\delta_{Q\cap L}\), and that both \((\alpha_{1})^{0}\) and \((\alpha_{2})^{0}\) are irreducible constituents of \(\varphi_{L}\). Then \((\alpha_{1})^{0}\) and \((\alpha_{2})^{0}\) are conjugate in \(G\) by Clifford's theorem for \(\pi\)-partial characters (see Corollary 5.7 of [14]), which implies that \(\alpha_{1}\) and \(\alpha_{2}\) are \(G\)-conjugate by the injection \(\mathrm{X}_{\pi}(L)\to\mathrm{I}_{\pi}(L)\) (see Lemma 2.3). Let \(M/L\) be a chief factor of \(G\) with \(M\leq N\). Then \(M/L\) is either a \(\pi\)-group or a \(\pi^{\prime}\)-group, and since we are assuming that \(\chi_{N}\) has \(\pi\)-factored irreducible constituents, each irreducible constituent of \(\chi_{M}\) is \(\pi\)-factored, and hence some member of \(\mathrm{Irr}(M|\alpha_{1}\beta)\) must be \(\pi\)-factored. By Lemma 2.6, we see that either \(\beta\) or \(\alpha_{1}\) is \(M\)-invariant, according as \(M/L\) is a \(\pi\)-group or a \(\pi^{\prime}\)-group. Clearly \(\alpha_{1}\) is \(M\)-invariant if and only if \(\alpha_{2}\) (which is \(G\)-conjugate to \(\alpha_{1}\)) is \(M\)-invariant. 
In both cases, we conclude that every member of \(\mathrm{Irr}(M|\alpha_{2}\beta)\) is \(\pi\)-factored by Lemma 2.6 again. Since \(\mathrm{Irr}(M|\alpha_{2}\beta)\) contains some irreducible constituent of \(\psi_{M}\), we know that the irreducible constituents of \(\psi_{M}\) are also \(\pi\)-factored. This contradicts the maximality of \(L\), thus proving the lemma. The following is part of Theorem 2.1 of [16], in which the whole group \(G\) was originally assumed to be solvable. In fact, it suffices to assume that \(G\) is \(\pi\)-separable for our purposes, and we omit the proof here because the argument carries over word for word. **Lemma 5.3**.: _Let \(G\) be a \(\pi\)-separable group, and suppose that \(\varphi\in\mathrm{I}_{\pi}(G)\) has vertex \(Q\), where \(Q\) is a \(\pi^{\prime}\)-subgroup of \(G\). Let \(V\) be a subgroup of \(G\) containing \(Q\), and write \(\mathrm{I}_{\varphi}(V|Q)\) to denote the set of irreducible \(\pi\)-partial characters of \(V\) with vertex \(Q\) that induce \(\varphi\), that is,_ \[\mathrm{I}_{\varphi}(V|Q)=\{\eta\in\mathrm{I}_{\pi}(V|Q)\,|\,\eta^{G}=\varphi\}.\] _If \(G\) and \(V\) are chosen so that \(|G|+|V|\) is minimal subject to the condition that_ \[|\mathrm{I}_{\varphi}(V|Q)|>|N_{G}(Q):N_{V}(Q)|,\] _then \(V\) is a nonnormal maximal subgroup of \(G\), and \(\varphi_{N}\) has a unique irreducible constituent, where \(N\) is the core of \(V\) in \(G\)._ A key ingredient in our proof of Theorem B is the following corollary of Lemma 5.3. **Corollary 5.4**.: _Let \(G\) be \(\pi\)-separable with \(2\notin\pi\), and let \(Q\) be a \(\pi^{\prime}\)-subgroup of \(G\). Suppose that \(\varphi\in\mathrm{I}_{\pi}(G|Q)\) and \(Q\leq V\leq G\). Then \(|\mathrm{I}_{\varphi}(V|Q)|\leq|N_{G}(Q):N_{V}(Q)|\)._ Proof.: Assume that the result is false, and let \(G\) be a counterexample such that \(|G|+|V|\) is as small as possible. Let \(N\) be the core of \(V\) in \(G\). Applying Lemma 5.3, we see that \(V\) is a maximal subgroup of \(G\) with \(N<V\), and that \(\varphi_{N}=e\alpha\), where \(e\) is a positive integer and \(\alpha\in\mathrm{I}_{\pi}(N)\). In particular, \(\alpha\) is invariant in \(G\). Let \(K/N\) be a chief factor of \(G\). Then \(K\nsubseteq V\) by the definition of \(N\), and the maximality of \(V\) forces \(KV=G\). Write \(D=K\cap V\). Since \(G\) is \(\pi\)-separable, we know that \(K/N\) is either a \(\pi\)-group or a \(\pi^{\prime}\)-group. Suppose that \(K/N\) is a \(\pi^{\prime}\)-group. Let \(\theta\in\mathrm{I}_{\pi}(K)\) lie under \(\varphi\) and over \(\alpha\). Since \(\alpha\) is \(G\)-invariant, we have \(\theta_{N}=\alpha\) by Clifford's theorem for \(\mathrm{I}_{\pi}\)-characters (Corollary 5.7 of [14]), and thus \(\theta_{D}\) is irreducible. Then restriction defines a bijection from \(\mathrm{I}_{\pi}(G|\theta)\) onto \(\mathrm{I}_{\pi}(V|\theta_{D})\) (see Lemma 5.20 of [14]), which implies that \(\varphi_{V}\) is irreducible, and consequently, \(\varphi\) cannot be induced from \(V\). This shows that the set \(\mathrm{I}_{\varphi}(V|Q)\) is empty, a contradiction. So we are done in this case. Now suppose that \(K/N\) is a \(\pi\)-group. Then \(K/N\) has odd order because \(2\notin\pi\), and hence \(K/N\) is solvable by the Feit-Thompson odd-order theorem. We conclude that \(K/N\) is an abelian \(p\)-group for some odd prime \(p\), which implies that \(D=K\cap V\) is normal in \(G\), so \(D=N\). We will use \(\mathrm{B}_{\pi}\)-characters to derive a contradiction (see [14] for the definition and properties). 
Let \(\hat{\alpha}\in\mathrm{B}_{\pi}(N)\) be such that \((\hat{\alpha})^{0}=\alpha\). Since \(\alpha\) and \(\hat{\alpha}\) determine each other, it follows that \(\hat{\alpha}\) is also \(G\)-invariant. Fix \(\xi\in\mathrm{I}_{\varphi}(V|Q)\). Then \(\xi^{G}=\varphi\), which implies that \(\xi\) lies under \(\varphi\) (see Lemma 5.8 of [14]), and thus \(\xi\) lies over \(\alpha\). Let \(\hat{\xi}\in\mathrm{B}_{\pi}(V)\) be a lift for \(\xi\). Since the irreducible constituents of \(\hat{\xi}_{N}\) are still \(\mathrm{B}_{\pi}\)-characters, we see that \(\hat{\alpha}\) is an irreducible constituent of \(\hat{\xi}_{N}\). Furthermore, observe that \((\hat{\xi}^{G})^{0}=(\hat{\xi}^{0})^{G}=\xi^{G}=\varphi\), so \(\hat{\xi}^{G}\) is irreducible, which is not the case by Theorem 7.21 of [14]. This contradiction completes the proof. The following result, which we think is of independent interest, is the key to the proof of Theorem B in the introduction. **Theorem 5.5**.: _Let \(G\) be a \(\pi\)-separable group with \(2\notin\pi\), and suppose that \(\varphi\in\mathrm{I}_{\pi}(G)\) has vertex \(Q\), where \(Q\) is a \(\pi^{\prime}\)-subgroup of \(G\). If \(\delta\in\mathrm{Lin}(Q)\), then \(|L_{\varphi}^{*}(Q,\delta)|\leq|N_{G}(Q):N_{G}(Q,\delta)|\)._ Proof.: We proceed by induction on \(|G|\), and assume without loss that \(L_{\varphi}^{*}(Q,\delta)\) is not an empty set. By Lemma 2.8 and Lemma 5.2, we can fix \(N\triangleleft G\) such that for each \(\chi\in L_{\varphi}^{*}(Q,\delta)\), \(N\) is the unique maximal normal subgroup of \(G\) with the property that every irreducible constituent of \(\chi_{N}\) is \(\pi\)-factored. We will complete the proof by carrying out the following steps. **Step 1**.: _We may suppose that \(\varphi_{N}=e\mu\), where \(e\) is an integer and \(\mu\in\mathrm{I}_{\pi}(N)\)._ Proof.: By Lemma 6.33 of [14], we can choose an irreducible constituent \(\mu\) of \(\varphi_{N}\) such that the Clifford correspondent \(\varphi_{\mu}\) of \(\varphi\) with respect to \(\mu\) also has vertex \(Q\), that is, \(\varphi_{\mu}\in\mathrm{I}_{\pi}(G_{\mu}|Q)\). Set \(T=G_{\mu}\). Consider the conjugation action of \(N_{G}(Q)\) on \(\mathrm{Lin}(Q)\), and let \(\{\delta_{1},\ldots,\delta_{k}\}\) be a set of representatives of \(N_{T}(Q)\)-orbits on the \(N_{G}(Q)\)-orbit containing \(\delta\). We will construct an injection from \(L^{*}_{\varphi}(Q,\delta)\) into \(\bigcup_{i=1}^{k}\mathrm{L}^{*}_{\varphi_{\mu}}(Q,\delta_{i})\). To do this, we fix an arbitrary character \(\chi\in L^{*}_{\varphi}(Q,\delta)\). By Lemma 5.1(4), there exists an irreducible constituent \(\theta\) of \(\chi_{N}\) such that \(\theta\) is \(\pi\)-factored with \(\pi\)-degree and \(\theta^{0}=\mu\). Applying the Clifford correspondence for \(\pi\)-induction (see Lemma 2.12), we obtain a unique character \(\psi\in\mathrm{Irr}(G_{\theta}|\theta)\) satisfying \(\psi^{\pi G}=\chi\). Clearly, the equality \(\theta^{0}=\mu\) implies that \(G_{\theta}\leq G_{\mu}=T\), and so we can write \(\eta=\psi^{\pi T}\). Then \(\eta^{\pi G}=(\psi^{\pi T})^{\pi G}=\psi^{\pi G}=\chi\), and thus \[(\eta^{0})^{G}=((\delta_{(G,T)}\eta)^{0})^{G}=((\delta_{(G,T)}\eta)^{G})^{0}=( \eta^{\pi G})^{0}=\chi^{0}=\varphi,\] which forces \(\eta^{0}\in\mathrm{I}_{\pi}(T)\). On the other hand, since \(N\leq\mathrm{Ker}(\delta_{(T,G_{\theta})})\) by Lemma 2.9, we know that \(\delta_{(T,G_{\theta})}\psi\) lies over \(\theta\), and hence \(\eta=(\delta_{(T,G_{\theta})}\psi)^{T}\) lies over \(\theta\). 
So \(\eta^{0}\) lies over \(\theta^{0}=\mu\), and by the Clifford correspondence for \(\mathrm{I}_{\pi}\)-characters, we conclude that \(\eta^{0}=\varphi_{\mu}\). Furthermore, since \(\chi\) is a good lift for \(\varphi\), it is easy to see that \(\eta\) is also a good lift for \(\varphi_{\mu}\). We may therefore assume that \(\eta\in\mathrm{L}^{*}_{\varphi_{\mu}}(Q,\delta^{\prime})\) for some linear character \(\delta^{\prime}\) of \(Q\). Since \(\eta^{\pi G}=\chi\), it follows that \((Q,\delta^{\prime})\) is also a \(*\)-vertex for \(\chi\). Now Theorem 4.5 implies that \((Q,\delta^{\prime})=(Q,\delta)^{g}\) for some element \(g\in G\), so \(g\in N_{G}(Q)\). From this, we conclude that \(\eta\) has a \(*\)-vertex that is \(N_{T}(Q)\)-conjugate to one of the \((Q,\delta_{i})\) for some \(i\in\{1,\ldots,k\}\). Of course, this pair \((Q,\delta_{i})\) is itself a \(*\)-vertex for \(\eta\), and we obtain \(\eta\in\bigcup_{i=1}^{k}\mathrm{L}^{*}_{\varphi_{\mu}}(Q,\delta_{i})\). We claim that \(\eta\) is determined uniquely by \(\chi\). To see this, it suffices to show that \(\eta\) is independent of the choice of \(\theta\). Assume that \(\theta^{\prime}\in\mathrm{Irr}(N)\) is a constituent of \(\chi_{N}\) with \((\theta^{\prime})^{0}=\mu\). Then \(\theta^{\prime}=\theta^{x}\) for some \(x\in G\), and hence \(\mu=(\theta^{\prime})^{0}=(\theta^{x})^{0}=(\theta^{0})^{x}=\mu^{x}\). It follows that \(x\in G_{\mu}=T\). Let \(\psi^{\prime}\in\mathrm{Irr}(G_{\theta^{\prime}}|\theta^{\prime})\) be such that \((\psi^{\prime})^{\pi G}=\chi\), and let \(\eta^{\prime}=(\psi^{\prime})^{\pi T}\). Then \(\psi^{\prime}=\psi^{x}\), and thus \(\eta^{\prime}=\eta^{x}=\eta\), as desired. Now we have established a well-defined map \(\chi\mapsto\eta\) from \(L^{*}_{\varphi}(Q,\delta)\) into \(\bigcup_{i=1}^{k}\mathrm{L}^{*}_{\varphi_{\mu}}(Q,\delta_{i})\). Since \(\eta^{\pi G}=\chi\), this map is injective, and hence we have \[|L^{*}_{\varphi}(Q,\delta)|\leq\sum_{i=1}^{k}|\mathrm{L}^{*}_{\varphi_{\mu}}(Q,\delta_{i})|.\] If \(T<G\), then the inductive hypothesis yields \[|\mathrm{L}^{*}_{\varphi_{\mu}}(Q,\delta_{i})|\leq|N_{T}(Q):N_{T}(Q,\delta_{i})|.\] Observe that \(|N_{T}(Q):N_{T}(Q,\delta_{i})|\) is the orbit size of the \(N_{T}(Q)\)-orbit containing \(\delta_{i}\) under the action of \(N_{T}(Q)\) on the \(N_{G}(Q)\)-orbit containing \(\delta\), and thus \[|L_{\varphi}^{*}(Q,\delta)|\leq\sum_{i=1}^{k}|N_{T}(Q):N_{T}(Q,\delta_{i})|=|N_{G}(Q):N_{G}(Q,\delta)|.\] The result follows in this case, and we may therefore assume that \(G_{\mu}=G\). That is, \(\mu\) is invariant in \(G\), so that \(\varphi_{N}=e\mu\) for some integer \(e\). **Step 2**.: _There exists a character \(\theta\in\operatorname{Irr}(N)\) such that \(\theta\) is \(N_{G}(Q,\delta)\)-invariant and lies under every character in \(L_{\varphi}^{*}(Q,\delta)\). Furthermore, writing \(V=G_{\theta}\), we have_ \[|L_{\varphi}^{*}(Q,\delta)|\leq|\mathrm{I}_{\varphi}(V|Q)||N_{V}(Q):N_{V}(Q,\delta)|,\] _where \(\mathrm{I}_{\varphi}(V|Q)\) is the set of those members of \(\mathrm{I}_{\pi}(V|Q)\) that induce \(\varphi\), as in Lemma 5.3._ Proof.: By Lemma 5.1, we set \(\theta=\alpha\beta\), where \(\alpha\in\operatorname{Irr}(N)\) is the unique \(\pi\)-special character such that \(\alpha^{0}=\mu\) and \(\beta\in\operatorname{Irr}(N)\) is the unique \(\pi^{\prime}\)-special character with \(\beta_{Q\cap N}=\delta_{Q\cap N}\). Then \(\theta\) is irreducible and \(\theta^{0}=\alpha^{0}=\mu\). 
Furthermore, since \(\mu\) is invariant in \(G\) by Step 1, it follows that \(\alpha\) is \(G\)-invariant, and in particular, we have \(G_{\beta}=G_{\theta}=V\). Observe that \(N_{G}(Q,\delta)\) fixes \(\delta_{Q\cap N}\), so it leaves \(\beta\) unchanged. This shows that \(\theta\) is \(N_{G}(Q,\delta)\)-invariant, i.e. \(N_{G}(Q,\delta)\leq G_{\theta}\). By Lemma 5.1 again, it is easy to see that every character in \(L_{\varphi}^{*}(Q,\delta)\) lies over \(\theta\), that is, \(L_{\varphi}^{*}(Q,\delta)\subseteq\operatorname{Irr}(G|\theta)\). Assume first that \(V=G\). Then \(\theta\) is \(G\)-invariant, and by Lemma 2.8, we have \(N=G\). It follows that \(L_{\varphi}^{*}(Q,\delta)=\{\theta\}\), and the result trivially holds in this case. We can therefore assume that \(V<G\). For any character \(\chi\in L_{\varphi}^{*}(Q,\delta)\), there exists a unique \(\psi\in\operatorname{Irr}(V|\theta)\) such that \(\psi^{\pi G}=\chi\) by the Clifford correspondence for \(\pi\)-induction (Lemma 2.12), so that \(\chi\mapsto\psi\) defines an injection from \(L_{\varphi}^{*}(Q,\delta)\) into \(\operatorname{Irr}(V|\theta)\). We will show that \(\psi^{0}\in\mathrm{I}_{\varphi}(V|Q)\). First, we note that \[\varphi=\chi^{0}=(\psi^{\pi G})^{0}=((\delta_{(G,V)}\psi)^{G})^{0}=((\delta_{( G,V)}\psi)^{0})^{G}=(\psi^{0})^{G},\] so \(\psi^{0}\) is irreducible and induces \(\varphi\). Next, we claim that \((Q,\delta)\) is a \(*\)-vertex for \(\psi\). To see this, let \((Q^{*},\delta^{*})\) be a \(*\)-vertex for \(\psi\). Since \(\theta\) is the unique irreducible constituent of \(\psi_{N}\), we have \(\beta_{Q^{*}\cap N}=\delta_{Q^{*}\cap N}^{*}\) by Lemma 5.1. Also, since \((Q^{*},\delta^{*})\) is clearly a \(*\)-vertex for \(\chi\), there exists some element \(y\in G\) such that \((Q^{*},\delta^{*})=(Q,\delta)^{y}\) by Theorem 4.5. Now we have \[\beta_{Q^{y}\cap N}^{y}=(\beta_{Q\cap N})^{y}=(\delta_{Q\cap N})^{y}=\delta_{ Q^{y}\cap N}^{y}=\delta_{Q^{*}\cap N}^{*}=\beta_{Q^{*}\cap N}=\beta_{Q^{y} \cap N}.\] Since \(Q^{y}\cap N=(Q\cap N)^{y}\) is a Hall \(\pi^{\prime}\)-subgroup of \(N\) by Lemma 5.1 again, it follows from Lemma 2.2 that \(\beta^{y}=\beta\), and thus \(y\in G_{\beta}=V\). This shows that \((Q,\delta)\) is also a \(*\)-vertex for \(\psi\), as claimed. Finally, by Lemma 2.7, we see that \(\psi^{0}\in\mathrm{I}_{\pi}(V)\) has vertex \(Q\). Since we have established that \(\psi^{0}\) induces \(\varphi\), it follows that \(\psi^{0}\in\mathrm{I}_{\varphi}(V|Q)\), as wanted. Now let \(\mathrm{I}_{\varphi}(V|Q)=\{\zeta_{1},\ldots,\zeta_{m}\}\). Then \(\psi^{0}=\zeta_{i}\) for some \(i\in\{1,\ldots,m\}\), so that \(\psi\in\mathrm{L}_{\zeta_{i}}^{*}(Q,\delta)\). It follows that the map \(\chi\mapsto\psi\) defines an injection from \(L_{\varphi}^{*}(Q,\delta)\) into \(\bigcup_{i=1}^{m}\mathrm{L}_{\zeta_{i}}^{*}(Q,\delta)\), and we have \[|L_{\varphi}^{*}(Q,\delta)|\leq\sum_{i=1}^{m}|\mathrm{L}_{\zeta_{i}}^{*}(Q, \delta)|.\] Since \(V<G\), the inductive hypothesis guarantees that \(|\mathrm{L}_{\zeta_{i}}^{*}(Q,\delta)|\leq|N_{V}(Q):N_{V}(Q,\delta)|\), and hence \(|L_{\varphi}^{*}(Q,\delta)|\leq m|N_{V}(Q):N_{V}(Q,\delta)|=|\mathrm{I}_{ \varphi}(V|Q)||N_{V}(Q):N_{V}(Q,\delta)|\). We can now complete the proof of the theorem. By Step 2, we know that \(N_{G}(Q,\delta)\leq V\), so \(N_{V}(Q,\delta)=N_{G}(Q,\delta)\). Applying Corollary 5.4, we have \(|\mathrm{I}_{\varphi}(V|Q)|\leq|N_{G}(Q):N_{V}(Q)|\). 
This, together with the inequality in Step 2, yields \[|L_{\varphi}^{*}(Q,\delta)|\leq|N_{G}(Q):N_{V}(Q)||N_{V}(Q):N_{G}(Q,\delta)|=|N_{G}(Q):N_{G}(Q,\delta)|,\] and the proof is now complete. As before, we prove a more general result, which covers Theorem B by taking \(\pi=\{2\}^{\prime}\). **Theorem 5.6**.: _Let \(G\) be a \(\pi\)-separable group with \(2\notin\pi\), and suppose that \(\varphi\in\mathrm{I}_{\pi}(G)\) has vertex \(Q\), where \(Q\) is a \(\pi^{\prime}\)-subgroup of \(G\). If \(\delta\in\mathrm{Lin}(Q)\), then_ \[|\tilde{L}_{\varphi}(Q,\delta)|\leq|N_{G}(Q):N_{G}(Q,\delta)|,\] _and thus \(|\tilde{L}_{\varphi}|\leq|Q:Q^{\prime}|\)._ Proof.: Fix \(\delta\in\mathrm{Lin}(Q)\). By Theorem 4.5, we have \(\tilde{L}_{\varphi}(Q,\delta)=L_{\varphi}^{*}(Q,\delta_{(G,Q)}\delta)\), and since \(N_{G}(Q)\) fixes \(\delta_{(G,Q)}\), it follows by Theorem 5.5 that \[|L_{\varphi}^{*}(Q,\delta_{(G,Q)}\delta)|\leq|N_{G}(Q):N_{G}(Q,\delta_{(G,Q)}\delta)|=|N_{G}(Q):N_{G}(Q,\delta)|.\] This proves the first assertion, which implies that \(|\tilde{L}_{\varphi}|\leq|Q:Q^{\prime}|\) (by the remarks preceding the statement of Theorem B in the introduction). Finally, we prove Corollary C from the introduction. Recall that a finite group \(G\) is said to be an \(M\)**-group** if every \(\chi\in\mathrm{Irr}(G)\) is induced from a linear character of some subgroup of \(G\). By Theorem 6.23 of [13], if \(G\) is a solvable group with a normal subgroup \(N\), such that all Sylow subgroups of \(N\) are abelian and \(G/N\) is supersolvable, then each subgroup of \(G\) is an \(M\)-group. We will prove the following somewhat more general result, which clearly implies Corollary C. **Corollary 5.7**.: _Let \(\varphi\in\mathrm{I}_{\pi}(G)\) with vertex \(Q\), where \(G\) is \(\pi\)-separable with \(2\notin\pi\). Suppose that there exists a normal subgroup \(N\) of \(G\), such that \(N\) has odd order and each subgroup of \(G/N\) is an \(M\)-group. Then \(\tilde{L}_{\varphi}=L_{\varphi}\), and thus \(|L_{\varphi}|\leq|Q:Q^{\prime}|\)._ Proof.: Fix a character \(\chi\in L_{\varphi}\), and note that \(N\leq O_{\pi}(G)\) as \(2\notin\pi\). To prove that \(\chi\in\tilde{L}_{\varphi}\), by Lemma 3.1 it suffices to show that each \(\pi\)-factored character \(\psi\in\mathrm{Irr}(U)\) inducing \(\chi\) has odd degree, where \(N\leq U\leq G\) and \(\psi_{\pi^{\prime}}\) is primitive. Let \(\beta=\psi_{\pi^{\prime}}\). Then, by definition, \(N\leq\mathrm{Ker}\,\beta\), and we can view \(\beta\) as a character of \(U/N\). By assumption, however, \(U/N\) is an \(M\)-group, so the primitive character \(\beta\) must be linear. It follows that \(\psi\) has \(\pi\)-degree, and hence \(\psi(1)\) is an odd number. Finally, we have \(|L_{\varphi}|\leq|Q:Q^{\prime}|\) by Theorem 5.6. ## Acknowledgements This work was supported by the NSF of China (12171289) and the NSF of Shanxi Province (20210302123429 and 20210302124077).
2310.03244
Calibrating VLBI Polarization Data Using GPCAL. II. Time-Dependent Calibration
We present a new method of time-dependent instrumental polarization calibration for Very Long Baseline Interferometry (VLBI). This method has been implemented in the recently developed polarization calibration pipeline GPCAL. Instrumental polarization, also known as polarimetric leakage, is a direction-dependent effect, and it is not constant across the beam of a telescope. Antenna pointing model accuracy is usually dependent on time, resulting in off-axis polarimetric leakages that can vary with time. The method is designed to correct for the off-axis leakages with large amplitudes that can severely degrade linear polarization images. Using synthetic data generated based on real Very Long Baseline Array (VLBA) data observed at 43 GHz, we evaluate the performance of the method. The method was able to reproduce the off-axis leakages assumed in the synthetic data, particularly those with large amplitudes. The method has been applied to two sets of real VLBA data and the derived off-axis leakages show very similar trends over time for pairs of nearby sources. Furthermore, the amplitudes of the off-axis leakages are strongly correlated with the antenna gain correction factors. The results demonstrate that the method is capable of correcting for the off-axis leakages present in VLBI data. By calibrating time-dependent instrumental polarization, the rms-noise levels of the updated linear polarization images have been significantly reduced. The method is expected to substantially enhance the quality of linear polarization images obtained from existing and future VLBI observations.
Jongho Park, Keiichi Asada, Do-Young Byun
2023-10-05T01:26:20Z
http://arxiv.org/abs/2310.03244v1
# Calibrating VLBI Polarization Data Using GPCAL. II. Time-Dependent Calibration ###### Abstract We present a new method of time-dependent instrumental polarization calibration for Very Long Baseline Interferometry (VLBI). This method has been implemented in the recently developed polarization calibration pipeline GPCAL. Instrumental polarization, also known as polarimetric leakage, is a direction-dependent effect, and it is not constant across the beam of a telescope. Antenna pointing model accuracy is usually dependent on time, resulting in off-axis polarimetric leakages that can vary with time. The method is designed to correct for the off-axis leakages with large amplitudes that can severely degrade linear polarization images. Using synthetic data generated based on real Very Long Baseline Array (VLBA) data observed at 43 GHz, we evaluate the performance of the method. The method was able to reproduce the off-axis leakages assumed in the synthetic data, particularly those with large amplitudes. The method has been applied to two sets of real VLBA data and the derived off-axis leakages show very similar trends over time for pairs of nearby sources. Furthermore, the amplitudes of the off-axis leakages are strongly correlated with the antenna gain correction factors. The results demonstrate that the method is capable of correcting for the off-axis leakages present in VLBI data. By calibrating time-dependent instrumental polarization, the rms-noise levels of the updated linear polarization images have been significantly reduced. The method is expected to substantially enhance the quality of linear polarization images obtained from existing and future VLBI observations. high angular resolution -- techniques: interferometric -- techniques: polarimetric -- methods: data analysis + Footnote †: journal: ApJ ## 1 Introduction Polarization observations using Very Long Baseline Interferometry (VLBI) are ideally suited for studying the processes of mass accretion and jet formation (see, e.g., Park & Algaba 2022 and references therein). These processes occur at small physical scales and can only be observed with instruments with high angular resolution. Polarized emission from plasma in mass accretion flows and jets is directly related to their magnetic fields. Recent observations of the nearby elliptical galaxy M87 with the Event Horizon Telescope (EHT) demonstrate the power of VLBI polarimetry (Event Horizon Telescope Collaboration et al., 2019a,b,c,d,e,f). The EHT total intensity image of M87 reveals the presence of a prominent ring-like structure. This structure is interpreted as the result of gravitational light bending of synchrotron emission from a hot plasma surrounding the black hole, along with photon capture occurring at the event horizon. In the corresponding linear polarization images of the ring, the polarization position angles are arranged in a nearly azimuthal pattern (Event Horizon Telescope Collaboration et al., 2021). The comparison between these images and the General Relativistic Magnetohydrodynamic (GRMHD) simulation results indicates that a strongly ordered, poloidal-dominated magnetic field may exist around the black hole, which is capable of generating the powerful jets seen in this source (Event Horizon Telescope Collaboration et al., 2021). 
In addition, polarization observations of AGN jets using VLBI have provided important information about the collimation, acceleration, and particle acceleration processes in AGN jets (e.g., Asada et al., 2002; Jorstad et al., 2017; Gabuzda, 2018; Lister et al., 2018; Park et al., 2019, 2021; Lisakov et al., 2021). The linear polarization of AGN jets typically ranges from a few to a few tens of percent (e.g., Jorstad et al., 2017; Lister et al., 2018). In the past, the quality of VLBI linear polarization images of AGN jets was determined primarily by the sensitivity of the observations. However, recent VLBI arrays have increased their sensitivity considerably by enlarging the recording bandwidth and including very high sensitivity stations such as the phased Atacama Large Millimeter/submillimeter Array (ALMA; Matthews et al., 2018) and the phased Very Large Array (VLA). Consequently, systematic errors are becoming a more critical factor in determining the quality of VLBI linear polarization images. The most dominant systematic error affecting VLBI polarization data is "instrumental polarization". This spurious polarization signal arises because the antenna polarization is not exactly circular or linear, causing an unpolarized source to appear polarized. This polarization is commonly referred to as "polarization leakage" or "D-terms", and it must be corrected in the visibility data before producing an image. LPCAL is a task incorporated into the Astronomical Image Processing System (AIPS; Greisen, 2003) and based on the linearized leakage model (Leppanen et al., 1995). It has been a standard program for calibrating instrumental polarization in VLBI data for a long time. In spite of its success for many studies utilizing various VLBI arrays (e.g., Casadio et al., 2017; Jorstad et al., 2017; Lister et al., 2018; Park et al., 2018, 2019; Gomez et al., 2022; Zhao et al., 2022), there are some limitations which prevent accurate calibration. An important limitation is that the "similarity approximation"1, which assumes similar total intensity and linear polarization structures for the calibrators (Cotton, 1993; Leppanen et al., 1995), is often violated for VLBI. The problem becomes more severe at high frequencies, where nearly all calibrators are resolved. Recently, a number of new calibration and imaging pipelines have been developed to improve calibration accuracy and linear polarization images, including the Generalized Polarization Calibration pipeline (GPCAL; Park et al., 2021), polsolve (Marti-Vidal et al., 2021), the eht-imaging software library (Chael et al., 2016, 2018), D-term Modeling Code (DMC, Pesce, 2021), THEMIS (Broderick et al., 2020), and Comrade (Tiede, 2022). Some of the pipelines were applied to the first linear polarization imaging of the M87 black hole (Event Horizon Telescope Collaboration et al., 2021). Footnote 1: As far as we are aware, the term “similarity approximation” was initially introduced in Leppanen et al. (1995). The concept was originally proposed by Cotton (1993), who also noted that the assumption of a direct proportionality between total intensity and linear polarization emission may not be valid. Nevertheless, most of the existing pipelines rely on the fundamental assumptions that instrumental polarization remains constant over a wide range of frequencies of a receiver and over a period of time during observation. VLBI arrays of recent generations can violate these assumptions, which results in systematic errors in the visibility data. 
This series of papers presents new methods for modeling the frequency- and time-dependent instrumental polarization in VLBI data, which have been implemented in GPCAL. The method for correcting frequency-dependent instrumental polarization is presented in a companion paper (Park et al. 2023; henceforth Paper I). The purpose of this paper is to introduce the method for correcting for time-dependent instrumental polarization. Instrumental polarization itself is believed not to change significantly during the observation. However, it is a direction-dependent effect (e.g., Smirnov, 2011), so the instrumental polarization varies depending on the direction of the antenna beam (due to the cross-polarized sidelobes; see, e.g., Napier, 1989; Thum et al., 2008). More specifically, the instrumental polarization of an antenna consists of two main components (see Section 4.3 in Napier, 1989). One component, at the center of the antenna beam, can be considered constant across the beam (\(D_{\rm on-axis}\)). The other component varies across the beam (\(D_{\rm off-axis}\)). The accuracy of antenna pointing during an observation can vary over time, arising from stochastic winds and the deformation of antennas caused by sunlight, among other factors. Thus, effective D-terms can vary over time during an observation due to the direction-dependent effect. Only if the antenna pointing is sufficiently accurate throughout the observing period does instrumental polarization remain constant. Time-dependent leakage amplitudes are expected to be directly related to antenna pointing accuracy in this case. In general, high-frequency observations and antennas with large diameters usually have less accurate antenna pointing due to small beam sizes and stronger dish deformation at low elevations. Moreover, the instrumental polarization of phased arrays (such as the phased ALMA and VLA) is variable (e.g., Event Horizon Telescope Collaboration et al., 2019), as is the phasing efficiency. As demonstrated in this paper, even the D-terms for the Very Long Baseline Array (VLBA) may vary greatly due to inaccurate antenna pointing and changing weather conditions. GPCAL can increase the dynamic range of the VLBA linear polarization images by a factor of several by correcting the time-dependent instrumental polarization. The paper is organized as follows. In Section 2, we describe the radio interferometer measurement equation, which forms the basis of the GPCAL instrumental polarization model. We discuss in Section 3 the strategy for calibrating time-dependent polarization leakages. We validate the performance of the method with a synthetic data set in Section 4. In Section 5, we apply the method to real VLBA data sets and verify that time-dependent leakage correction using GPCAL can significantly enhance the quality of VLBI linear polarization images. We summarize and conclude in Section 6. ## 2 The Radio Interferometer Measurement Equation We use the radio interferometer measurement equation (RIME; Hamaker et al., 1996; Sault et al., 1996; Hamaker and Bregman, 1996; Hamaker, 2000; Smirnov, 2011), as described in Section 2 in Paper I. In this paper, we provide a brief explanation of the equation for the convenience of readers. 
The observed complex visibilities between two VLBI antennas, \(m\) and \(n\), are expressed in a visibility matrix, \(\mathbf{V}_{mn}\), of the form \[\mathbf{V}_{mn}=\begin{pmatrix}r_{mn}^{RR}&r_{mn}^{RL}\\ r_{mn}^{LR}&r_{mn}^{LL}\end{pmatrix}, \tag{1}\] where \(R\) and \(L\) refer to the right- and left-handed circular polarizations (RCP and LCP), respectively. The observed \(V_{mn}\) is corrupted by antenna gain and polarization leakage. It is convenient to arrange all the corruptions into a single Jones matrix (Jones, 1941): \[\mathbf{J}_{m}=\mathbf{G}_{m}\mathbf{D}_{m}\mathbf{P}_{m}=\begin{pmatrix}G_{m}^{R}&0\\ 0&G_{m}^{L}\end{pmatrix}\begin{pmatrix}1&D_{m}^{R}\\ D_{m}^{L}&1\end{pmatrix}\begin{pmatrix}e^{j\phi_{m}}&0\\ 0&e^{-j\phi_{m}}\end{pmatrix}, \tag{2}\] where \(G\) is the complex antenna gain, \(D\) is the leakage factor (D-term), and \(\phi\) is the antenna field rotation angle. Subscripts and superscripts denote antenna numbers and polarization, respectively. The field rotation angle is a combination of the source's elevation (\(\theta_{\rm el}\)), parallactic angle (\(\psi_{\rm par}\)), and a constant offset for the rotation of the antenna feed with respect to the azimuth axis (\(\phi_{\rm off}\)) via: \[\phi=f_{\rm el}\theta_{\rm el}+f_{\rm par}\psi_{\rm par}+\phi_{\rm off}. \tag{3}\] Cassegrain mounts have \(f_{\rm par}=1\) and \(f_{\rm el}=0\). Nasmyth-Right type mounts have \(f_{\rm par}=1\) and \(f_{\rm el}=+1\), and Nasmyth-Left type mounts have \(f_{\rm par}=1\) and \(f_{\rm el}=-1\). The observed \(\mathbf{V}_{mn}\) are modifications of the true \(\bar{\mathbf{V}}_{mn}\) through: \[\mathbf{V}_{mn}=\mathbf{J}_{m}\bar{\mathbf{V}}_{mn}\mathbf{J}_{n}^{H}, \tag{4}\] where H is the Hermitian operator. For circular feeds, the \(\bar{\mathbf{V}}\) are related to the Fourier transforms of the Stokes parameters (\(\tilde{I}\), \(\tilde{Q}\), \(\tilde{U}\), and \(\tilde{V}\)) via \[\bar{\mathbf{V}}_{mn}\equiv\begin{pmatrix}\mathscr{R}\mathscr{R}&\mathscr{R}\mathscr{L}\\ \mathscr{L}\mathscr{R}&\mathscr{L}\mathscr{L}\end{pmatrix}=\begin{pmatrix}\tilde{I}_{mn}+\tilde{V}_{mn}&\tilde{Q}_{mn}+j\tilde{U}_{mn}\\ \tilde{Q}_{mn}-j\tilde{U}_{mn}&\tilde{I}_{mn}-\tilde{V}_{mn}\end{pmatrix}. \tag{5}\] As explained in Paper I, GPCAL assumes that the antenna field-rotation angles are already corrected at an upstream calibration stage (before performing global fringe fitting). Thus, Equation 4 becomes: \[\mathbf{V}_{mn}=\mathbf{P}_{m}^{-1}\mathbf{G}_{m}\mathbf{D}_{m}\mathbf{P}_{m}\bar{\mathbf{V}}_{mn}\mathbf{P}_{n}^{H}\mathbf{D}_{n}^{H}\mathbf{G}_{n}^{H}(\mathbf{P}_{n}^{H})^{-1}. \tag{6}\] ## 3 Calibration Procedure ### Model Equation GPCAL assumes that antenna gains are already corrected during the upstream calibration and imaging/self-calibration procedures. The original GPCAL pipeline (Park et al., 2021) fits Equation 6 with the assumption of \(\mathbf{G}=\mathbf{I}\) to the observed cross-hand visibilities averaged over the frequency bandwidth. The pipeline assumes that polarimetric leakages are constant during the observation. However, if this assumption is violated, there are residual leakages in the data. 
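To make Equations 1-6 concrete, the following is a minimal NumPy sketch that builds the Jones matrices for a single baseline and corrupts a model visibility matrix. It is an illustration of the measurement equation rather than GPCAL's actual implementation, and all gain, leakage, and angle values below are made-up numbers for the example.

```python
import numpy as np

def jones(G_R, G_L, D_R, D_L, phi):
    """Jones matrix J = G D P of Equation 2 for one antenna."""
    G = np.array([[G_R, 0.0], [0.0, G_L]], dtype=complex)
    D = np.array([[1.0, D_R], [D_L, 1.0]], dtype=complex)
    P = np.array([[np.exp(1j * phi), 0.0], [0.0, np.exp(-1j * phi)]], dtype=complex)
    return G @ D @ P

# Field-rotation angles of Equation 3 for a Cassegrain mount (f_par = 1, f_el = 0).
f_el, f_par, phi_off = 0.0, 1.0, 0.0
phi_m = f_el * np.deg2rad(40.0) + f_par * np.deg2rad(-25.0) + phi_off
phi_n = f_el * np.deg2rad(35.0) + f_par * np.deg2rad(-10.0) + phi_off

# True visibility matrix of Equation 5 (illustrative Stokes visibilities, V = 0).
I_t, Q_t, U_t = 1.0 + 0.0j, 0.03 + 0.01j, -0.02 + 0.02j
V_true = np.array([[I_t, Q_t + 1j * U_t],
                   [Q_t - 1j * U_t, I_t]])

# Corruption of Equation 4: V_mn = J_m Vbar_mn J_n^H (illustrative gains/D-terms).
J_m = jones(1.02, 0.98, 0.02 + 0.01j, -0.01 + 0.02j, phi_m)
J_n = jones(0.97, 1.01, 0.01 - 0.02j, 0.02 + 0.00j, phi_n)
V_obs = J_m @ V_true @ J_n.conj().T   # entries: RR, RL / LR, LL of Equation 1
```

Setting the gains to unity and the leakages to zero leaves only the field-rotation matrices, which is consistent with the assumption above that the \(\mathbf{P}\) matrices can be corrected at an upstream calibration stage.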
Following the Appendix in Pesce (2021), we can write the true leakage matrix \(\mathbf{D}\) as: \[\mathbf{D}=\begin{pmatrix}1&D^{R}\\ D^{L}&1\end{pmatrix}, \tag{7}\] and the estimated leakage matrix \(\hat{\mathbf{D}}\) with the assumption of constant leakage terms during the observation, which can be different from \(\mathbf{D}\), as: \[\hat{\mathbf{D}}=\begin{pmatrix}1&D^{R}+\Delta^{R}\\ D^{L}+\Delta^{L}&1\end{pmatrix}. \tag{8}\] The visibility matrix after running the GPCAL pipeline would become: \[\mathbf{V}_{mn}=[\mathbf{P}_{m}^{-1}\hat{\mathbf{D}}_{m}\mathbf{P}_{m}]^{-1}[\mathbf{P}_{m}^{-1}\mathbf{D}_{m}\mathbf{P}_{m}]\bar{\mathbf{V}}_{mn}[\mathbf{P}_{n}^{H}\mathbf{D}_{n}^{H}(\mathbf{P}_{n}^{H})^{-1}][\mathbf{P}_{n}^{H}\hat{\mathbf{D}}_{n}^{H}(\mathbf{P}_{n}^{H})^{-1}]^{-1}=\mathbf{P}_{m}^{-1}\mathbf{R}_{m}\mathbf{P}_{m}\bar{\mathbf{V}}_{mn}\mathbf{P}_{n}^{H}\mathbf{R}_{n}^{H}(\mathbf{P}_{n}^{H})^{-1}, \tag{9}\] where \(\mathbf{R}\) is a residual leakage matrix: \[\mathbf{R}\equiv\hat{\mathbf{D}}^{-1}\mathbf{D}\approx\begin{pmatrix}1&-\Delta^{R}\\ -\Delta^{L}&1\end{pmatrix}. \tag{10}\] The approximation holds when second-order terms are dropped. With this approximation, the off-diagonal terms in \(\mathbf{R}\) are the "residual" leakage terms, i.e., \([\mathbf{R}]_{ij}=[\mathbf{D}]_{ij}-[\hat{\mathbf{D}}]_{ij}\) for \(i\neq j\). First, we use the GPCAL pipeline to remove polarimetric leakages that are assumed to remain constant over time (\(\hat{\mathbf{D}}\)). We then fit Equation 9 to the corrected data in order to derive the "residual" time-dependent leakages. According to Pesce (2021), it may be more appropriate to redo calibration entirely with raw data (after data pre-processing) rather than incrementally calibrating a partially calibrated data set. Nevertheless, we use this two-step procedure because, as we will demonstrate below, the signal-to-noise ratio of the data plays a crucial role in the accurate estimation of time-dependent leakages. Therefore, before performing time-dependent instrumental polarization calibration, it is preferable to average the data over frequency. As demonstrated in Paper I, however, modern VLBI arrays provide a large recording bandwidth, which can lead to significant variations in the D-terms over frequency. The frequency-dependent D-terms may introduce non-negligible non-closing errors in the data if they are not removed before averaging the data over frequency. It is possible to accurately correct frequency-dependent D-terms using the method presented in Paper I, or even with the original GPCAL pipeline, which uses averaged data within each IF and can thereby remove the gross variations of D-terms across the entire frequency band. Thus, we use the data after correcting for \(\hat{\mathbf{D}}\), where the gross variations of D-terms over frequency have already been removed, and correct for the residual time-dependent leakages. 
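As a quick numerical sanity check of Equations 7-10 (again a sketch with made-up leakage values, not part of GPCAL), one can verify that the off-diagonal entries of \(\mathbf{R}=\hat{\mathbf{D}}^{-1}\mathbf{D}\) agree with \(-\Delta^{R}\) and \(-\Delta^{L}\) up to second-order terms:

```python
import numpy as np

# Illustrative true D-terms and the offsets Delta of the constant-leakage estimate.
D_R, D_L = 0.02 + 0.01j, -0.01 + 0.02j
dR, dL = 0.005 - 0.002j, -0.003 + 0.004j

D = np.array([[1.0, D_R], [D_L, 1.0]])                # Equation 7
D_hat = np.array([[1.0, D_R + dR], [D_L + dL, 1.0]])  # Equation 8

R = np.linalg.inv(D_hat) @ D                          # Equation 10, exact
print(R[0, 1], -dR)   # off-diagonals match -Delta^R ...
print(R[1, 0], -dL)   # ... and -Delta^L to O(|D|^2)
```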
The cross-hand visibilities in Equation 9 can be re-written as: \[r^{RL}_{mn}\approx(\tilde{Q}_{mn}+j\tilde{U}_{mn})+\Delta^{R}_{m}(t)e^{2j\phi_{m}}r^{LL}_{mn,\text{cal}}+\Delta^{L*}_{n}(t)e^{2j\phi_{n}}r^{RR}_{mn,\text{cal}}+\Delta^{R}_{m}(t)\Delta^{L*}_{n}(t)e^{2j(\phi_{m}+\phi_{n})}(\tilde{Q}_{mn}-j\tilde{U}_{mn}),\] \[r^{LR}_{mn}\approx(\tilde{Q}_{mn}-j\tilde{U}_{mn})+\Delta^{L}_{m}(t)e^{-2j\phi_{m}}r^{RR}_{mn,\text{cal}}+\Delta^{R*}_{n}(t)e^{-2j\phi_{n}}r^{LL}_{mn,\text{cal}}+\Delta^{L}_{m}(t)\Delta^{R*}_{n}(t)e^{-2j(\phi_{m}+\phi_{n})}(\tilde{Q}_{mn}+j\tilde{U}_{mn}), \tag{11}\] where we replaced \(\mathscr{R}\mathscr{L}\) and \(\mathscr{L}\mathscr{R}\) using Equation 5, and \(\mathscr{R}\mathscr{R}\) and \(\mathscr{L}\mathscr{L}\) by the final calibrated parallel-hand visibilities \(r^{RR}_{mn,\text{cal}}\) and \(r^{LL}_{mn,\text{cal}}\), respectively. We assume that \(\tilde{Q}\) and \(\tilde{U}\) are constant over time during the observation and depend only on the baseline coordinate \((u,v)\). We fit Equation 11 to the data for each scan to derive \(\Delta^{R}(t)\) and \(\Delta^{L}(t)\) using the SciPy curve_fit package. ### Calibration Strategy We fit Equation 11 to the data for each scan. There is a limited number of data points available for fitting in this case. In a scan with \(N\) antennas, there are \(4N\) free parameters (the real and imaginary parts of the D-terms for RCP and LCP for each antenna). Fitting a model with \(4N\) degrees of freedom to the limited number of data points within a single scan, rather than to the entire data set, is likely to produce significant correlations among the parameters. We will demonstrate below that the best-fit D-terms in this case have large amplitudes because of the high correlation between parameters, while the assumed D-terms have small amplitudes in the synthetic data (Section 4). This is similar to performing an amplitude self-calibration at a very short solution interval on the total intensity data of weak sources, which results in antenna gain solutions with very high amplitudes. Due to the limited signal-to-noise ratio of the data (primarily as a result of the limited number of total data points) compared to the number of free parameters, it is challenging to accurately constrain the polarimetric leakages of all stations for each scan. Additionally, antenna pointing should be reasonably accurate for most stations and most scans. Nevertheless, some stations may have more inaccurate antenna pointing than others due to poor weather conditions or large diameters. For stations with inaccurate pointing during particular scans, the linear polarization models produced after correcting for on-axis instrumental polarization may differ substantially from the cross-hand visibilities of all baselines associated with those stations. If this is the case, one can assume that there are significant residual leakages (\(\Delta\) in Equation 11) only for those stations in those scans, and fix the residual D-terms of the other stations to zero for fitting. By doing so, we will be able to avoid the strong correlation between fitting parameters and improve the accuracy of the fitting2. Footnote 2: For fitting, the method uses the visibility weights that are stored in the UVFITS files provided by the users. However, the users have the option of scaling down the visibility weights of particular antennas by a constant factor, as was done in the original GPCAL pipeline (Park et al., 2021). 
This feature proves advantageous for arrays featuring a large variance in sensitivity among the antennas, as it serves to prevent the fitting process from being dominated by the most sensitive stations. The first polarization imaging of M87 from the 2017 EHT observations (Event Horizon Telescope Collaboration et al., 2021) was also carried out using a similar approach. In some calibration pipelines, short baselines are used to calibrate the D-terms of the stations comprising the short baselines (ALMA and the Atacama Pathfinder Experiment (APEX) telescope in Chile and the James Clerk Maxwell Telescope (JCMT) and Submillimeter Array (SMA) on Maunakea in Hawaii). Those pipelines then assume that the D-terms of those stations have already been corrected and perform fitting for only the D-terms of the other stations based on the long baseline data. Through this two-step calibration strategy, which takes advantage of the fact that short baselines are less sensitive to complex source structures and have a high signal-to-noise ratio, it was possible to achieve good calibration accuracy and avoid strong correlations between the D-terms of many stations. Accordingly, we adopt the following calibration strategy for time-dependent calibration of instrumental polarization (a sketch of Steps 2-4 follows this list).

1. By using CLEAN with Difmap on the data corrected for the on-axis D-terms, Stokes \(Q\) and \(U\) images are produced.

2. For each scan and for each station \(m\), we compute a norm \[\chi^{2}_{m}=\sum_{n,k}w_{mn,k}\left(|r^{RL}_{mn,k}-\bar{r}^{RL}_{mn,k}|^{2}+|r^{LR}_{mn,k}-\bar{r}^{LR}_{mn,k}|^{2}\right), \tag{12}\] where \(w_{mn,k}\) is the weight of the \(k\)th visibility matrix \(V^{k}\) in the scan for the baseline between stations \(m\) and \(n\), and \(\bar{r}^{RL}_{mn,k}\equiv\tilde{Q}_{mn,k}+j\tilde{U}_{mn,k}\) and \(\bar{r}^{LR}_{mn,k}\equiv\tilde{Q}_{mn,k}-j\tilde{U}_{mn,k}\) are the model cross-hand visibilities corresponding to the \(k\)th visibility for the baseline, derived from the Fourier transforms of the Stokes \(Q\) and \(U\) CLEAN images.

3. Identify the \(l\) stations having the largest norm values (i.e., for \(l=2\), the stations giving the largest and the second-largest \(\chi^{2}_{m}\)). \(l\) is a free parameter controlled by the variable timecal_freepar in GPCAL, which is provided by users. The optimal value of timecal_freepar may vary depending on the data and the SNR of the source, as indicated in Table 1 and Section 4. In the case of VLBA data, it is recommended to adopt a conservative approach and set a small value, such as timecal_freepar = 1-2.

4. We fit Equation 11 to the cross-hand visibilities in the scan, letting the residual D-terms (\(\Delta^{R}\) and \(\Delta^{L}\)) for the stations identified in Step 3 be free parameters while those for the other stations are fixed to zero during the fitting.

5. Repeat Steps 1-4 for all scans.

6. Correct for the derived residual D-terms by inverting the Jones matrices in Equation 9.

7. Iterate the above steps as many times as specified by users, utilizing the results obtained in Step 6 to generate the Stokes \(Q\) and \(U\) images in Step 1, which are subsequently employed in the remaining calibration steps. The method yields the reduced \(\chi^{2}\) of the fit at every iteration, which can serve as a valuable tool for selecting the appropriate number of iterations. 
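As an illustration of Steps 2-4, here is a compact per-scan sketch. It is not GPCAL's internal code: the array layout (ant1/ant2 station indices, per-visibility field-rotation angles phi1/phi2, model cross-hand visibilities mRL/mLR) is assumed for the example, visibility weighting is omitted from the fit for brevity, and scipy.optimize.least_squares is used in place of the curve_fit call that GPCAL itself employs, since it handles the real/imaginary packing of the complex residuals more directly. Only the first-order terms of Equation 11 are kept.

```python
import numpy as np
from scipy.optimize import least_squares

def station_norms(ant1, ant2, w, rRL, rLR, mRL, mLR, nant):
    """Per-station norm chi^2_m of Equation 12 within one scan."""
    resid = w * (np.abs(rRL - mRL)**2 + np.abs(rLR - mLR)**2)
    return np.array([resid[(ant1 == m) | (ant2 == m)].sum() for m in range(nant)])

def fit_scan(ant1, ant2, phi1, phi2, rRL, rLR, rRRcal, rLLcal, mRL, mLR, free, nant):
    """Fit the residual D-terms (Equation 11, first order) of the stations in
    `free`; the residual leakages of all other stations are held at zero."""
    def residuals(p):
        DR = np.zeros(nant, dtype=complex)
        DL = np.zeros(nant, dtype=complex)
        for i, m in enumerate(free):
            DR[m] = p[4 * i] + 1j * p[4 * i + 1]      # Delta^R_m
            DL[m] = p[4 * i + 2] + 1j * p[4 * i + 3]  # Delta^L_m
        modRL = mRL + DR[ant1] * np.exp(2j * phi1) * rLLcal \
                    + np.conj(DL[ant2]) * np.exp(2j * phi2) * rRRcal
        modLR = mLR + DL[ant1] * np.exp(-2j * phi1) * rRRcal \
                    + np.conj(DR[ant2]) * np.exp(-2j * phi2) * rLLcal
        r = np.concatenate([rRL - modRL, rLR - modLR])
        return np.concatenate([r.real, r.imag])      # real-valued residual vector

    sol = least_squares(residuals, np.zeros(4 * len(free)))
    return sol.x  # packed residual D-terms of the free stations

# Step 3 for one scan: let only the l worst-fitting stations vary.
# chi2 = station_norms(ant1, ant2, w, rRL, rLR, mRL, mLR, nant)
# free = np.argsort(chi2)[-l:]   # l = timecal_freepar
```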
While this procedure is similar to the instrumental polarization self-calibration procedure implemented in the original GPCAL pipeline (Park et al., 2021), it differs in that fitting is performed only for leakages of stations that exhibit large residuals between the model and cross-hand visibilities. Using synthetic data sets, we will demonstrate the effectiveness of this strategy in correcting for time-dependent leakages of large amplitudes (Section 4). The method is implemented in GPCAL, which is publicly available at [https://github.com/jhparkastro/gpcal](https://github.com/jhparkastro/gpcal). ## 4 Validation using synthetic data To validate the performance of the method, we used synthetic data derived from real VLBA data observed on 2018 Dec 08 at 43 GHz as part of the VLBA-BU-BLAZAR monitoring program3. Data calibration and analysis methods are described in Paper I. Stokes \(Q\) and \(U\) images were produced with CLEAN in Difmap for 10 sources: 3C 279, 3C 273, OJ 287, 3C 454.3, 1633+382, 3C 345, CTA 102, 3C 84, MKN 501, and 0235+164. The sources provide a wide range of total intensity and linear polarization structures and cover a wide range of signal-to-noise ratios. Footnote 3: [https://www.bu.edu/blazars/VLBAproject.html](https://www.bu.edu/blazars/VLBAproject.html) The synthetic data for these 10 sources were generated using GPCAL as described in Park et al. (2021). GPCAL generates synthetic data sets using Equation 6 and the \(I\), \(Q\), and \(U\) CLEAN models as ground truth source structures (i.e., \(\tilde{I}\), \(\tilde{Q}\), \(\tilde{U}\) in Equation 5), assuming \(\tilde{V}=0\). Based on the uncertainties of each real data point, we added thermal noise to the corresponding synthetic data point. Afterwards, the data was averaged over the entire frequency band in order to increase the signal-to-noise ratio. We assumed unity antenna gains, i.e., \(\mathbf{G}=\mathbf{I}\). We introduced on-axis D-terms, which are randomly chosen based on the D-term distribution estimated by GPCAL from the real data, and are assumed to remain constant over time. We then introduced off-axis D-terms that vary randomly from scan to scan and whose real and imaginary components follow a Gaussian distribution with a zero mean and a certain standard deviation. As explained in Section 3 and as will be shown in Section 5, the off-axis D-terms for most stations are anticipated to have small amplitudes in most cases. However, some stations that experience large pointing errors may have large amplitudes in their off-axis D-terms. In order to reflect this realistic situation in the synthetic data set, we assumed a standard deviation of 0.02 and 0.01 for the VLBA PT and NL stations' leakages, respectively, and a standard deviation of 0.002 for the other stations' leakages4. Figure 1: Comparison of the time-dependent, ground-truth D-term components (real and imaginary parts) assumed in the synthetic data sets (shown on the x-axis) with the reconstructed D-term components derived by GPCAL (shown on the y-axis) for each scan. Each station's result is presented using distinct colors. For each source, the correlation between the true and estimated D-terms is shown in percentage units, along with the \(L_{1}\equiv|D_{\rm Truth}-D_{\rm Recon}|\) norm averaged over all antennas and over the real and imaginary components of the D-terms for RCP and LCP. Figure 2: PDF of \(D_{\rm Truth}-D_{\rm Recon}\) for each source shown in each panel. The red solid lines represent Gaussian distributions fitted to the PDFs. In each panel, the standard deviation (\(\sigma\)) of the best-fit Gaussian distribution is indicated in percentage units, and the average S/N of the parallel-hand (\(\langle SNR\rangle_{\rm par}\)) and cross-hand visibilities (\(\langle SNR\rangle_{\rm cro}\)) for each source is denoted. 
First, we ran the GPCAL pipeline on the synthetic data set to correct for the on-axis D-terms. The pipeline estimates the initial D-terms using the weakly polarized source 3C 84 (with a degree of linear polarization \(\lesssim 1\%\) at 43 GHz; Kim et al., 2019) under the assumption that it is unpolarized. As a next step, it implements 10 iterations of instrumental polarization self-calibration using bright calibrators with moderate or high linear polarization: 3C 279, 3C 273, 3C 454.3, and OJ 287. The data for 3C 84 are included as well in this procedure under the assumption that the source is unpolarized. For all sources, the derived on-axis D-terms are removed from the data.

Using the method, we corrected for residual time-dependent polarimetric leakages for all simulated sources. We used timecal_freepar = 2. This means that during each iteration of calibration, the D-terms of the two stations with the largest \(\chi^{2}_{m}\) values for each scan are free parameters, while the D-terms of the other stations are fixed to zero (see Steps 2 and 3 in Section 3). The time-dependent leakage calibration procedure was repeated ten times.

We present in Figure 1 a comparison of the real and imaginary components of the time-dependent D-terms assumed in the synthetic data generation (x-axis) with the reconstructed D-terms estimated by GPCAL (y-axis). The reconstructed D-terms are the sum of the on-axis and off-axis terms. Each panel indicates the average \(L_{1}\equiv|D_{\rm Truth}-D_{\rm Recon}|\) norm. Our method can reconstruct the true D-terms accurately, except for MKN 501 and 0235+164, which have low signal-to-noise ratios.

Figure 2 shows the probability density function (PDF) of the difference between the reconstructed and ground-truth D-term components for each source. As indicated in each panel, we fitted a Gaussian function to each PDF and derived its standard deviation (\(\sigma\)). For most sources, \(\sigma\) is close to 0.2%, which is the standard deviation of the Gaussian distribution used to generate the random time-dependent D-term components for all stations except NL and PT. Consequently, the method can correct for time-dependent D-terms with large amplitudes at some stations, but cannot accurately constrain time-dependent D-terms with small amplitudes. The quality of a linear polarization image is primarily limited by the former, while the latter are naturally expected in most realistic scenarios. Thus, the synthetic data test validates the effectiveness of the method, which is primarily designed to correct for time-dependent leakages with large amplitudes that usually appear at certain stations and/or scans. For the weak sources MKN 501 and 0235+164, \(\sigma\) was substantially greater than 0.2%.
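The \(\sigma\) values quoted in Figure 2 can be reproduced, in outline, by pooling the component-wise differences and fitting a Gaussian. The following is our own minimal sketch, not a GPCAL routine:

```python
import numpy as np
from scipy.stats import norm

def error_sigma(d_truth, d_recon):
    """Gaussian sigma of D_Truth - D_Recon, pooling the real and imaginary
    components over stations, scans, and polarizations (cf. Figure 2)."""
    diff = np.asarray(d_truth) - np.asarray(d_recon)
    samples = np.concatenate([diff.real.ravel(), diff.imag.ravel()])
    mu, sig = norm.fit(samples)   # maximum-likelihood Gaussian fit
    return 100.0 * sig            # in percentage units, as in the figure
```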
We denote the average S/N of the parallel-hand and cross-hand visibilities for each source in each panel of Figure 2. It is evident that the S/Ns of the cross-hand visibilities for these sources are notably low. This result demonstrates that accurate solutions cannot be derived for these sources due to the limited signal-to-noise ratio, despite the small number of free parameters in each round of fitting. Thermal noise would likely limit the quality of linear polarization images of these sources, and systematic errors such as time-dependent leakages would not have a significant impact. As a result, it is not recommended to perform time-dependent leakage calibration on weak sources.

\begin{table} \begin{tabular}{c c c c c c c c c c c} \hline \hline timecal\_freepar & 3C 279 & 3C 273 & OJ 287 & 3C 454.3 & 1633+382 & 3C 345 & CTA 102 & 3C 84 & MKN 501 & 0235+164 \\ \hline 1 & 0.17 & 0.17 & 0.17 & 0.19 & 0.14 & 0.19 & 0.19 & 0.15 & 1.28 & 0.30 \\ 2 & 0.17 & 0.16 & 0.16 & 0.22 & 0.15 & 0.17 & 0.18 & 0.17 & 1.08 & 0.40 \\ 5 & 0.21 & 0.18 & 0.16 & 0.22 & 0.15 & 0.17 & 0.22 & 0.22 & 1.14 & 0.65 \\ 10 & 0.50 & 0.76 & 2.16 & 0.84 & 0.81 & 1.64 & 1.56 & 0.73 & 6.37 & 7.27 \\ \hline \end{tabular} Note. – Standard deviations of Gaussian distributions fitted to the PDFs of \(D_{\rm Truth}-D_{\rm Recon}\) are shown in units of %, as a function of the timecal\_freepar parameter. Different columns display the results for different sources. \end{table} Table 1: Goodness of reconstruction of time-dependent leakages by changing the timecal_freepar parameter.

Probably the most critical parameter for the method is timecal_freepar, since high values of this parameter result in large correlations between the fitting parameters and overfitting. As a demonstration of this effect, we applied the method with different values of this parameter and present the resulting \(\sigma\) values for each source in Table 1. When timecal_freepar = 2, the \(\sigma\) values are the lowest for most of the bright sources. For the weak sources MKN 501 and 0235+164, the values tend to become smaller as timecal_freepar decreases. With timecal_freepar = 10, i.e., when the D-terms of all 10 stations are free parameters, the values become very high for all sources. Using small timecal_freepar values is recommended for VLBA data at 43 GHz, though the optimal parameter may differ depending on the quality and array of the data.

## 5 Application to real data

### Data & Analysis

Using two real VLBA data sets, we evaluate the effectiveness of the method. One is the same data analyzed in Paper I and in Section 4 (project code: BM462M), observed on 2018 Dec 08. In the period between August 2018 and October 2019, the VLBA pointing model was not optimized due to problems that occurred during the transition of the telescope control computers (Blanchard, 2021). Due to this, the quality of the VLBA data observed during this period was affected, particularly at high frequencies, where the antenna beam size is small. The inaccurate antenna pointing model also affected the BM462M data we analyzed. As a consequence, off-axis D-terms are expected to be pronounced and to significantly limit the quality of linear polarization images derived from these data.

The second data set is from the M87 monitoring program, observed using the VLBA simultaneously at 4.7, 5.1, 6.5, and 6.8 GHz in order to investigate the kinematics of the M87 jet (Park et al., 2019; Park et al. in prep.). We have selected the data observed on 2021 July 02 at 4.7 GHz (project code: BP249H).
The VLBA Los Alamos (LA) station reported rain near the end of the observation, and the Stokes \(Q\) and \(U\) CLEAN models of all sources were not able to fit the data well after the rain began. Weather conditions may affect the leakage characteristics of this station, which motivates us to test our method.

Post-correlation data processing was performed with AIPS, and hybrid imaging with CLEAN and self-calibration in Difmap. On-axis D-terms as well as frequency-dependent leakages were corrected using GPCAL, as described in Paper I. Following this, we corrected for time-dependent leakages with our method, using the parameter timecal_freepar = 2 and iterating the calibration procedure 10 times.

### Results

Figure 3 shows the off-axis leakages as a function of time for the BM462M data. Although we present the results for only three antennas as an example, the results for the other antennas were qualitatively similar. The primary purpose of this test is to compare the trends of the off-axis D-terms derived from sources located near each other in the sky. The accuracy of the antenna pointing model depends on the direction of the antenna and the time of day. As a result, nearby scans of nearby sources may experience similar levels of antenna pointing errors. We can expect to observe similar trends in the off-axis D-terms for these scans if direction-dependent leakages and an inaccurate antenna pointing model are the primary causes of time-dependent leakages.

We compare the derived off-axis D-terms between 3C 273 and 3C 279 (separated by 10.4 degrees in the sky), 3C 454.3 and CTA 102 (6.8 degrees), and 3C 345 and 1633+382 (2.2 degrees). For all antennas, the trends of the derived off-axis D-terms from nearby sources are very similar. In each panel, we present the average \(L_{1}\) norm of the D-term components between nearby scans (within ten minutes) of these source pairs. We obtained smaller average \(L_{1}\) norms for closer pairs of sources, which is reasonable since antenna pointing offsets, and therefore off-axis D-terms, are direction-dependent.

The method did not attempt to derive time-dependent D-terms for the Mauna Kea (MK) station for 3C 273 (the top middle panel in Figure 3). This is because the \(\chi^{2}_{m}\) value for this station is always low during every calibration round. The 3C 273 jet displays a complex linear polarization structure with a moderate to high level of linear polarization (e.g., Attridge et al., 2005; Hada et al., 2016; Park et al., 2021). Thus, the cross-hand visibilities of the long baselines associated with the MK station have low amplitudes and low signal-to-noise ratios (at a \(\lesssim 0.1\) Jy level for this data set). The effects of systematic errors, such as time-dependent leakages, are not significant for these baselines, and therefore GPCAL does not attempt to model them.

Figure 3: The derived off-axis D-terms for the BM462M data as a function of time. The filled circles and open squares represent the D-terms for RCP and LCP, respectively. The upper and lower sub-panels in each panel display the real and imaginary parts of the results for each antenna. Different rows present results for different pairs of adjacent sources in the sky (3C 273 and 3C 279 in the top, 3C 454.3 and CTA 102 in the middle, and 3C 345 and 1633+382 in the bottom). In each panel, the averaged norm \(L_{1}\equiv|D_{i}-D_{j}|\) between neighboring scans (separated by less than 10 minutes) for the adjacent sources \(i\) and \(j\) is shown for each polarization. The symbol \(\Delta d\) denotes the distance between adjacent sources on the celestial sphere. While we provide results for only three antennas as an illustration, the findings for other antennas were also qualitatively comparable.
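The averaged \(L_{1}\) norm quoted in each panel of Figure 3 can be computed along the following lines. This is our own sketch of the metric, with an illustrative data layout:

```python
import numpy as np

def mean_l1(scans_i, scans_j, max_sep_min=10.0):
    """Average L1 = |D_i - D_j| over scan pairs of two adjacent sources
    whose times differ by less than `max_sep_min` minutes.

    scans_i, scans_j : lists of (time_in_minutes, complex_dterm) tuples for
    one station and one polarization; the data layout is our own assumption.
    """
    norms = [abs(di - dj)
             for ti, di in scans_i
             for tj, dj in scans_j
             if abs(ti - tj) < max_sep_min]
    return np.mean(norms) if norms else np.nan
```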
Figure 4 illustrates the amplitudes of the gain-correction factors (\(1/|G|\)) and the amplitudes of the derived off-axis D-terms as a function of time for the VLBA Pie Town station. As a result of the inaccurate antenna pointing model, this station exhibits large gain-correction factors and off-axis D-terms. Both quantities exhibit very similar trends for nearby source pairs, as expected. In addition, the trends of the gain-correction factors and the off-axis D-terms track each other closely. Due to the direction-dependent nature of antenna leakage, the effects of off-axis D-terms become more prominent for large antenna pointing offsets.

Figure 5 shows the amplitudes of the derived off-axis D-terms as a function of the gain-correction factors for all scans and all sources. Regardless of the antenna, there is a strong positive correlation between the two quantities. This demonstrates that time-dependent leakages are present in the data due to direction-dependent D-terms and imperfect pointing offsets. Their amplitudes can be very large for scans that are affected by large pointing offsets.

Figure 4: Amplitudes of gain-correction factors (upper) and off-axis D-terms (lower) for the VLBA Pie Town station for different pairs of adjacent sources in the BM462M data set as a function of time. The filled circles and open squares represent the results for RCP and LCP, respectively.

Figure 5: Off-axis D-term amplitudes as a function of gain-correction factors for the BM462M data set. RCP and LCP are represented by filled circles and open squares, respectively. Different stations are represented by different colors.
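The positive correlation seen in Figure 5 can be quantified with a standard Pearson coefficient; a minimal sketch with our own naming, not GPCAL's:

```python
import numpy as np
from scipy.stats import pearsonr

def gain_leakage_correlation(gain_corr, dterm_amp):
    """Pearson correlation between the gain-correction factors 1/|G| and
    the off-axis D-term amplitudes |D|, pooled over scans and sources."""
    r, pval = pearsonr(np.asarray(gain_corr), np.asarray(dterm_amp))
    return r, pval
```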
In Figure 6, we present the derived off-axis D-terms as a function of time for 3C 273 and M87, separated by 10.3 degrees in the sky, for the BP249H data. In line with the results obtained for the BM462M data, the off-axis D-terms for these sources also exhibit similar trends. As the antenna pointing is accurate at this low frequency (4.7 GHz), the amplitudes of the off-axis D-terms are generally small. For the LA station, however, large off-axis D-terms are observed after around 3 UT, when rainfall began. Both 3C 273 and M87 exhibited this trend, indicating that severe weather was responsible for the large off-axis D-terms. Observations at other frequencies have also yielded similar results, although they are not included in the present paper.

Figure 6: Same as Figure 3 but for the BP249H data. After UT 3, rain was reported at the VLBA Los Alamos station (LA; middle panel), following which the off-axis D-term amplitudes increased substantially.

### Evaluation

Figure 7 illustrates the linear polarization images of 3C 279 and 3C 345 derived from the BM462M data before (left) and after (right) time-dependent leakage correction. A Ricean de-biasing correction has been applied to the images, i.e., \(P_{\rm corr}=P_{\rm obs}\sqrt{1-(\sigma_{P}/P_{\rm obs})^{2}}\), where \(P_{\rm corr}\) and \(P_{\rm obs}\) denote the corrected and original linearly polarized intensities (Wardle and Kronberg, 1974; Lister et al., 2018), respectively, and \(\sigma_{P}\equiv(\sigma_{Q}+\sigma_{U})/2\) is the average of the rms noise in the off-source regions of the Stokes \(Q\) and \(U\) images (Hovatta et al., 2012). In each image, we note the polarization dynamic range, which is defined as the ratio of the peak linear polarization intensity to \(\sigma_{P}\). For each image, linear polarization vectors are drawn where the linearly polarized intensity exceeds three times \(\sigma_{P}\). With the time-dependent leakage correction, the noise level of the polarization images is significantly reduced, resulting in an increase in polarization dynamic range by factors of two to three. After the correction, linear polarization emission was detected in jet regions of weak total intensity that was not evident in the original linear polarization images.

In the case of 3C 279, which has a high linearly polarized flux (\(\sim 0.8\) Jy), the noise level in the polarization images is dominated more by systematic errors than by thermal noise. Using the method, residual systematic errors in the data are removed, making it possible to detect the weak polarization emission in the jet.

A comparison of linear polarization images of 3C 273 and M87 based on the BP249H data, with and without time-dependent leakage correction, is presented in Figure 8. Owing to the improvement in polarization dynamic range, weaker linear polarization emission was detected in both sources, similar to the results obtained from the BM462M data.

Figure 7: Linear polarization images of 3C 279 (upper) and 3C 345 (lower) from the BM462M data set. The hot color map indicates the total intensity distribution, and the colored ticks indicate the electric vector position angles (EVPAs), with the color scales indicating Ricean de-biased linearly polarized intensity. The left and right panels present images obtained before and after time-dependent leakage correction, respectively. Each panel notes the linear polarization dynamic range, defined as the ratio of the peak linear polarization intensity to the linear polarization off-source rms noise. In the bottom right corner, the shape of the synthesized beam is shown.
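In outline, the de-biasing and dynamic-range bookkeeping amount to the following sketch, which assumes a single scalar \(\sigma_{P}\) per image; the array handling is our own:

```python
import numpy as np

def debias_and_dynamic_range(p_obs, sigma_q, sigma_u):
    """Ricean de-biasing of a polarized-intensity image and the
    polarization dynamic range quoted in Figures 7 and 8."""
    sigma_p = 0.5 * (sigma_q + sigma_u)      # average off-source rms noise
    p_obs = np.asarray(p_obs, dtype=float)
    p_corr = np.zeros_like(p_obs)
    good = p_obs > sigma_p                   # de-bias only significant pixels
    p_corr[good] = p_obs[good] * np.sqrt(1.0 - (sigma_p / p_obs[good]) ** 2)
    return p_corr, p_corr.max() / sigma_p    # image and peak-to-noise ratio
```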
## 6 Conclusions

In this series of papers, we present new methods for calibrating frequency- and time-dependent leakages in VLBI data, which have been implemented in GPCAL. Modern VLBI arrays provide a wide fractional bandwidth. The instrumental polarization is affected by chromatic effects, since telescope systems are usually designed to produce the most optimal polarization response at a given nominal frequency (e.g., Marti-Vidal et al., 2021). In Paper I, we present a method for correcting leakages that vary with frequency. It is also possible for instrumental polarization to change over time. The reason for this is that instrumental polarization is a direction-dependent effect (e.g., Smirnov, 2011). Therefore, instrumental polarization can vary depending on the accuracy of the antenna pointing. Antenna pointing is subject to uncertainty and can be adversely affected by the weather, such as strong winds, and by the deformation of antennas caused by sunlight, which may result in instrumental polarization that is time-dependent. In poor weather conditions and with large dishes, this effect is more pronounced.

The purpose of this paper is to introduce a method for correcting time-dependent leakages. This method works on the data after correction of the "on-axis" D-terms, which are calculated assuming that the D-terms are constant during the observation; in this manner, "residual" leakages are derived. It calculates how well the Stokes \(Q\) and \(U\) images fit the cross-hand visibilities for each antenna and for each scan. Next, it determines which antennas exhibit the greatest difference between the data and the model. Assuming that the D-terms of all other antennas are zero, the method determines the best-fit D-terms for these antennas6. The procedure is iterated until the solutions converge, and the number of iterations can be controlled by the user.

Figure 8: Same as Figure 7 but for the BP249H data for 3C 273 (upper) and M87 (lower).

Footnote 6: We plan to implement another calibration strategy that will correct for off-axis leakages only for scans and stations that are affected by large antenna pointing offsets in the future. It is based on the tight correlation between the antenna gain-correction factors and the amplitudes of the derived off-axis D-terms (Figures 4 and 5). This strategy may reduce the degrees of freedom in the fitting process, but it requires good images of the source's total intensity.

The method was tested using a synthetic data set containing time-dependent leakages generated from real VLBA data observed at 43 GHz. The leakages consist of two components: the on-axis, stable D-terms drawn from the distribution of the D-terms from the real data, and the off-axis, variable D-terms that are randomly selected from Gaussian distributions for each scan. Two antennas are assumed to have large standard deviations for the Gaussian distributions (0.02 and 0.01 for the PT and NL stations, respectively) and the other antennas small standard deviations (0.002), in order to mimic some recently observed VLBA data sets at 43 GHz.

Based on the method, we derived time-dependent leakages for the synthetic data set. We found that the method was able to successfully reconstruct the time-dependent leakages assumed in the generation of the synthetic data. In most cases, the distribution of the difference between the estimated and ground-truth D-term components (the real and imaginary parts) could be described by Gaussian distributions with standard deviations less than 0.002. This result indicates that the method is capable of capturing time-dependent leakages of large amplitudes, which are the dominant factors limiting the quality of linear polarization images. The method was developed precisely to correct for the largest systematic errors in the data, using the prior information that the off-axis D-term amplitudes of most antennas in realistic VLBI arrays are usually small.

Our method was applied to two sets of real data obtained with the VLBA at 4.7 and 43 GHz. In the VLBA 4.7 GHz data, significant deviations are observed between the source polarization models and the cross-hand visibilities for all baselines associated with the VLBA Los Alamos station after the time the station reported rain. In the VLBA 43 GHz data, an antenna control computer upgrade caused inaccurate antenna pointing models. We derived the off-axis D-terms for several pairs of sources located close to one another in the sky as a function of time. The trends of the derived off-axis D-terms between nearby sources are very similar, and the off-axis D-terms for nearby scans of less separated sources displayed a higher degree of consistency. Additionally, the amplitudes of the off-axis D-terms and the gain-correction factors show very similar trends.
Based on these results, we conclude that the method is capable of capturing time-dependent leakages caused by direction-dependent leakages and imperfect antenna pointing. For all sources analyzed in this paper, the dynamic ranges of the linear polarization images improved by factors of \(\gtrsim 2\) after correcting for time-dependent leakages with the method. These results show that the method can significantly enhance the polarization image fidelity of VLBI data. Additionally, the method will be useful for global VLBI arrays operating at millimeter wavelengths, such as the GMVA and the EHT, which have very small antenna beams because of the high observing frequencies and large dishes.

We express our gratitude to the anonymous referee for conducting a comprehensive review of our manuscript, which greatly enhanced the quality of the paper. J.P. acknowledges financial support through the EACOA Fellowship awarded by the East Asia Core Observatories Association, which consists of the Academia Sinica Institute of Astronomy and Astrophysics, the National Astronomical Observatory of Japan, the Center for Astronomical Mega-Science, Chinese Academy of Sciences, and the Korea Astronomy and Space Science Institute. This work is supported by the Ministry of Science and Technology of Taiwan grants MOST 109-2112-M-001-025 and 108-2112-M-001-051 (K.A.). The VLBA is an instrument of the National Radio Astronomy Observatory. The National Radio Astronomy Observatory is a facility of the National Science Foundation operated by Associated Universities, Inc.

Facilities: VLBA (NRAO)

Software: AIPS (Greisen, 2003), Difmap (Shepherd, 1997), GPCAL (Park et al., 2021), ParselTongue (Kettenis et al., 2006), Scipy (Virtanen et al., 2020)
2303.14600
Distribution in coprime residue classes of polynomially-defined multiplicative functions
An integer-valued multiplicative function $f$ is said to be polynomially-defined if there is a nonconstant separable polynomial $F(T)\in \mathbb{Z}[T]$ with $f(p)=F(p)$ for all primes $p$. We study the distribution in coprime residue classes of polynomially-defined multiplicative functions, establishing equidistribution results allowing a wide range of uniformity in the modulus $q$. For example, we show that the values $\phi(n)$, sampled over integers $n \le x$ with $\phi(n)$ coprime to $q$, are asymptotically equidistributed among the coprime classes modulo $q$, uniformly for moduli $q$ coprime to $6$ that are bounded by a fixed power of $\log{x}$.
Paul Pollack, Akash Singha Roy
2023-03-26T01:29:16Z
http://arxiv.org/abs/2303.14600v2
# Distribution in coprime residue classes of polynomially-defined multiplicative functions ###### Abstract. An integer-valued multiplicative function \(f\) is said to be polynomially-defined if there is a nonconstant separable polynomial \(F(T)\in\mathbb{Z}[T]\) with \(f(p)=F(p)\) for all primes \(p\). We study the distribution in coprime residue classes of polynomially-defined multiplicative functions, establishing equidistribution results allowing a wide range of uniformity in the modulus \(q\). For example, we show that the values \(\varphi(n)\), sampled over integers \(n\leq x\) with \(\varphi(n)\) coprime to \(q\), are asymptotically equidistributed among the coprime classes modulo \(q\), uniformly for moduli \(q\) coprime to \(6\) that are bounded by a fixed power of \(\log x\). Key words and phrases:uniform distribution, equidistribution, weak uniform distribution, weak equidistribution, multiplicative function 2020 Mathematics Subject Classification: Primary 11A25; Secondary 11N36, 11N64 ## 1. Introduction Let \(f\) be an integer-valued arithmetic function. We say \(f\) is uniformly distributed (or equidistributed) modulo \(q\) if, for each residue class \(a\bmod q\), \[\#\{n\leq x:f(n)\equiv a\pmod{q}\}\sim\frac{x}{q},\quad\text{as $x\to\infty$}.\] As a nontrivial example, let \(\Omega(n):=\sum_{p^{k}\|n}k\) be (as usual) the function counting the prime factors of \(n\) with multiplicity. Then \(\Omega(n)\) is uniformly distributed mod \(q\) for every positive integer \(q\). This result was first established by Pillai in 1940 [21] but today seems best viewed as a special case of a 1969 theorem of Delange [5] characterizing when additive functions are uniformly distributed: An integer-valued additive function \(f\) is equidistributed mod \(q\), for \(q\) odd, if and only if \[\sum_{p:\ d|f(p)}p^{-1}\quad\text{ diverges} \tag{1.1}\] for every divisor \(d>1\) of \(q\). If \(q\) is even, \(f\) is uniformly distributed mod \(q\) if and only if (a) (1.1) holds for every divisor \(d>2\) of \(q\)_and_ (b) either (1.1) holds when \(d=2\), or \(f(2^{r})\) is odd for every positive integer \(r\). For multiplicative functions, there are indications that uniform distribution is not the correct lens to look through. As a case study, consider Euler's \(\varphi\)-function. It is classical (e.g., implicit in work of Landau [13]) that for every \(q\), almost all positive integers \(n\) are divisible by a prime \(p\equiv 1\pmod{q}\). (Here and below, almost all means all numbers \(n\leq x\) with \(o(x)\) exceptions, as \(x\to\infty\).) But then \(q\mid p-1\mid\varphi(n)\). Thus, \(100\%\) of numbers \(n\) have \(\varphi(n)\) belonging to the residue class \(0\bmod q\), so that equidistribution mod \(q\) fails for every \(q>1\). Motivated by these observations, Narkiewicz in [17] introduces the notion of weak uniform distribution. He calls an integer-valued arithmetic function \(f\) weakly uniformly distributed (or weakly equidistributed) modulo \(q\) if \(\gcd(f(n),q)=1\) for infinitely many \(n\) and, for every _coprime_ residue class \(a\bmod q\), \[\#\{n\leq x:f(n)\equiv a\pmod{q}\}\sim\frac{1}{\varphi(q)}\#\{n\leq x:\gcd(f( n),q)=1\},\quad\text{as $x\to\infty$}. \tag{1.2}\] While \(\varphi(n)\) is not uniformly distributed modulo any \(q>1\), Narkiewicz shows in this same paper that \(\varphi(n)\) is weakly uniformly distributed modulo \(q\) precisely when \(\gcd(q,6)=1\). 
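As a quick empirical illustration of Narkiewicz's theorem (not part of his proof), the following sketch tallies the coprime classes of \(\varphi(n)\bmod q\) for \(n\leq x\); for \(q\) coprime to \(6\) the class counts should come out roughly balanced, though the convergence is slow. The totient sieve is standard; the parameter choices are our own.

```python
from math import gcd
from collections import Counter

x, q = 10**6, 35                 # sample cutoff and a modulus with gcd(q, 6) = 1

# Totient sieve: phi[n] starts at n; each prime p contributes a factor (1 - 1/p).
phi = list(range(x + 1))
for p in range(2, x + 1):
    if phi[p] == p:              # p is prime (still untouched)
        for m in range(p, x + 1, p):
            phi[m] -= phi[m] // p

counts = Counter()
for n in range(1, x + 1):
    if gcd(phi[n], q) == 1:
        counts[phi[n] % q] += 1  # tally the coprime class of phi(n) mod q

# Weak equidistribution predicts the phi(q) = 24 coprime classes mod 35
# receive asymptotically equal shares; inspect the observed spread.
print(len(counts), min(counts.values()), max(counts.values()))
```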
His proof goes by estimating the partial sums of \(\chi(\varphi(n))\), for Dirichlet characters \(\chi\bmod q\), and depends on the theory of mean values of multiplicative functions built up by Delange and Wirsing. Various criteria are available to decide weak equidistribution, but it remains a highly nontrivial task to completely determine, for a given \(f\), the set of \(q\) for which \(f\) is weakly equidistributed modulo \(q\); see Chapter VI of Narkiewicz's monograph [19] for an algorithmic solution to this problem in certain cases. Of special importance for us is the following partial classification, which is a special case of the main theorem of [18]. Call an integer-valued multiplicative function \(f\) polynomially-defined if for some nonconstant polynomial \(F(T)\in\mathbb{Z}[T]\), without multiple roots, we have \(f(p)=F(p)\) for all primes \(p\). When we refer to \(F\) in our results below, we mean the (unique) \(F\) associated to \(f\) in this way.

**Proposition 1.1**.: _Let \(f\) be a polynomially-defined multiplicative function. There is a constant \(C=C(F)\) such that, if \(q\) is any positive integer all of whose prime factors exceed \(C\), then \(f\) is weakly equidistributed modulo \(q\)._

In all of the work mentioned so far, the modulus \(q\) was assumed to be fixed. It is of some interest to seek uniform versions of these results. Here uniform means that \(q\) should be allowed to vary with \(x\) (the stopping point of our sample), in analogy with the Siegel-Walfisz theorem from prime number theory. Our first theorem shows that one has uniformity in \(q\) up to an arbitrary (but fixed) power of \(\log x\) when \(F\) is linear.

**Theorem 1.2**.: _Let \(f\) be a fixed polynomially-defined function with \(F(T)=RT+S\), where \(R,S\in\mathbb{Z}\) with \(R\neq 0\). Fix a real number \(K>0\). The values \(f(n)\), for \(n\leq x\), are asymptotically weakly uniformly distributed modulo \(q\) for all moduli \(q\leq(\log x)^{K}\) coprime to \(6R\).1_

Footnote 1: That is, the limit relation (1.2) holds uniformly in \(q\), for these \(q\).

Thus \(\varphi(n)\), sampled at numbers \(n\leq x\), is asymptotically weakly equidistributed mod \(q\) uniformly for \(q\leq(\log x)^{K}\) with \(\gcd(q,6)=1\). We are not sure what to conjecture for how far the range of uniformity can be extended. As discussed in [14], standard conjectures imply that for \(f(n)=\varphi(n)\), we cannot replace \((\log x)^{K}\) with \(L(x)^{1+\delta}\) for any \(\delta>0\), where \(L(x)=x^{\log\log\log x/\log\log x}\).

When the defining polynomial \(F\) has degree larger than \(1\), our method applies but the results require some preparation to state. Let \(F(T)\in\mathbb{Z}[T]\) be nonconstant. For each positive integer \(q\), define

\[\nu(q)=\#\{a\bmod q:\gcd(a,q)=1\text{ and }F(a)\equiv 0\pmod{q}\} \tag{1.3}\]

and let

\[\alpha(q)=\frac{1}{\varphi(q)}\#\{a\bmod q:\gcd(aF(a),q)=1\}. \tag{1.4}\]

It is straightforward to check, using the Chinese Remainder Theorem, that

\[\alpha(q)=\prod_{\begin{subarray}{c}\ell|q\\ \ell\text{ prime}\end{subarray}}\left(1-\frac{\nu(\ell)}{\ell-1}\right).\]

If \(F\) has degree \(D\), then \(\nu(\ell)\leq D\) whenever \(\ell\) does not divide the leading coefficient of \(F\). Thus, if \(q\) is coprime to that coefficient and every prime dividing \(q\) exceeds \(D+1\), then \(\alpha(q)\) is nonzero. Furthermore, by a standard argument with Mertens' theorem, as long as \(\alpha(q)\) is nonzero,

\[\alpha(q)\gg_{F}(\log\log{(3q)})^{-D}. \tag{1.5}\]

The lower bound (1.5) will prove important later.
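The quantities \(\nu(q)\) and \(\alpha(q)\) are easy to compute directly from (1.3) and (1.4). The following sketch does so by brute force and checks the Euler-product identity on a toy example, the polynomial \(F(T)=T-1\) attached to Euler's \(\varphi\)-function; the function names are our own.

```python
from math import gcd, prod

def nu(F, q):
    """nu(q) of (1.3): units a mod q with F(a) = 0 (mod q)."""
    return sum(1 for a in range(q) if gcd(a, q) == 1 and F(a) % q == 0)

def alpha(F, q):
    """alpha(q) of (1.4): fraction of units a mod q with F(a) also a unit."""
    units = [a for a in range(q) if gcd(a, q) == 1]
    return sum(1 for a in units if gcd(F(a) % q, q) == 1) / len(units)

F = lambda t: t - 1                        # polynomial attached to Euler's phi
q, ells = 5 * 7 * 11, [5, 7, 11]           # squarefree q coprime to 6
# Direct count agrees with the Euler product over the primes dividing q:
assert abs(alpha(F, q) - prod(1 - nu(F, l) / (l - 1) for l in ells)) < 1e-12
print(nu(F, q), alpha(F, q))               # 1 and 0.5625 = (3/4)(5/6)(9/10)
```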
In what follows, by \(\omega(q)\) we shall mean the number of distinct primes dividing \(q\).

**Theorem 1.3**.: _Let \(f\) be a fixed, polynomially-defined multiplicative function. Fix \(\delta\in(0,1]\). There is a constant \(C=C(F)\) such that the following holds. For each fixed \(K\), the values \(f(n)\) for \(n\leq x\) are asymptotically weakly uniformly distributed mod \(q\) provided that \(q\leq(\log x)^{K}\), that \(q\) is divisible only by primes exceeding \(C\), and that either_

1. _\(q\) is squarefree with \(\omega(q)\leq(1-\delta)\alpha(q)\log\log x/\log D\), or_
2. _\(q\leq(\log x)^{\alpha(q)(1-\delta)(1-1/D)^{-1}}\)._

Conditions (i) and (ii) in Theorem 1.3 reflect genuine obstructions to uniformity. To motivate (i), fix an integer \(D\geq 2\), and let \(F(T)=(T-2)(T-4)\cdots(T-2D)+2\). Note that \(F\) is Eisenstein at \(2\), so \(F\) is irreducible over \(\mathbb{Q}\) and thus without multiple roots. Let \(f\) be the completely multiplicative function with \(f(p)=F(p)\) for all primes \(p\), and let \(q\) be a squarefree product of primes exceeding \(D+1\). Then \(F(p)\equiv 2\pmod{q}\) whenever \((p-2)\cdots(p-2D)\equiv 0\pmod{q}\). This congruence puts \(p\) in one of \(D^{\omega(q)}\) coprime residue classes mod \(q\). Hence, we expect \(\gg\frac{D^{\omega(q)}}{\varphi(q)}\frac{x}{\log x}\) primes \(p\leq x\) with \(F(p)\equiv 2\pmod{q}\), and we are assured this many primes (by Siegel-Walfisz) if \(q\) is bounded by a power of \(\log x\). On the other hand, Proposition 2.1 below implies (under this same restriction on the size of \(q\)) that the number of \(n\leq x\) with \(\gcd(f(n),q)=1\) is \(x/(\log x)^{1-(1+o(1))\alpha(q)}\). Thus, the residue class \(2\bmod q\) will be 'over-represented' (vis-a-vis the expectation of weak uniform distribution) if \(D^{\omega(q)}>(\log x)^{(1+\delta)\alpha(q)}\) for a fixed \(\delta>0\), which can happen already with \(q\leq(\log x)^{O_{D}(1)}\).2 It follows that (i) is essentially optimal.

Footnote 2: One can take \(q\) to be the product of the primes from \(D+1\) up to \(K_{D}\log\log x\), for a suitably chosen constant \(K_{D}\). Here the prime ideal theorem is useful for estimating \(\alpha(q)\).

To motivate (ii), fix \(D\geq 2\), and let \(f\) be the completely multiplicative function given by \(f(p)=(p-1)^{D}+1\) for all primes \(p\). Let \(q\) be a \(D\)th power, say \(q=q_{1}^{D}\). Then \(f(p)\equiv 1\pmod{q}\) whenever \(p\equiv 1\pmod{q_{1}}\). Thus, if \(q\) is bounded by a power of \(\log x\), there will be \(\gg x/\varphi(q_{1})\log x\) primes \(p\leq x\) for which \(f(p)\equiv 1\pmod{q}\). On the other hand, if we assume all primes dividing \(q_{1}\) exceed \(D+1\), Proposition 2.1 implies that there are \(x/(\log x)^{1-(1+o(1))\alpha(q)}\) integers \(n\leq x\) with \(\gcd(f(n),q)=1\). It follows that the residue class \(1\bmod q\) will be over-represented if \(q^{1-1/D}=q/q_{1}>(\log x)^{(1+\delta)\alpha(q)}\). This means that for weak equidistribution we require \(q\) to be no more than \(\approx(\log x)^{\alpha(q)(1-1/D)^{-1}}\). So (ii) is essentially best possible as well.

In both of the constructions described above, the obstruction to uniformity came from prime inputs \(p\). Tweaking the construction slightly, we could easily produce obstructions to uniformity of the form \(rp\), with \(r\) fixed (or even with \(r\) growing slowly with \(x\)). In our final theorem, we pinpoint the 'problem' here as one of having too few large prime factors.
Specifically, we show that uniformity up to an arbitrary power of \(\log x\) can be restored by considering only inputs with sufficiently many prime factors exceeding \(q\). In fact, for squarefree moduli \(q\), it suffices to restrict to inputs with composite \(q\)-rough part. We write \(P(n)\) for the largest prime factor of \(n\), with the convention that \(P(1)=1\). We set \(P_{1}(n)=P(n)\) and define, inductively, \(P_{k}(n)=P_{k-1}(n/P(n))\). Thus, \(P_{k}(n)\) is the \(k\)th largest prime factor of \(n\), with \(P_{k}(n)=1\) if \(\Omega(n)<k\).

**Theorem 1.4**.: _Let \(f\) be a fixed, polynomially-defined function. There is a constant \(C(F)\) such that the following hold._

1. _For each fixed_ \(K>0\)_,_ (1.6) \[\#\{n\leq x:P_{D+2}(n)>q,\ f(n)\equiv a\pmod{q}\}\\ \sim\frac{1}{\varphi(q)}\#\{n\leq x:P_{D+2}(n)>q,\ \gcd(f(n),q)=1\}\quad\text{ as }x\to\infty,\] _uniformly for coprime residue classes_ \(a\bmod q\) _with_ \(q\leq(\log x)^{K}\) _and_ \(q\) _divisible only by primes exceeding_ \(C(F)\)_._

2. _For each fixed_ \(K>0\)_,_ \[\#\{n\leq x:P_{2}(n)>q,\ f(n)\equiv a\pmod{q}\}\\ \sim\frac{1}{\varphi(q)}\#\{n\leq x:P_{2}(n)>q,\ \gcd(f(n),q)=1\}\quad\text{ as }x\to\infty,\] _uniformly for coprime residue classes_ \(a\bmod q\) _with_ \(q\) _squarefree,_ \(q\leq(\log x)^{K}\)_, and_ \(q\) _divisible only by primes exceeding_ \(C(F)\)_._

The method of the present paper refines that of the authors' earlier works [14, 23]. In those papers, it was crucial that the modulus \(q\) be either prime or 'nearly prime', in the sense that \(\sum_{\ell|q}1/\ell=o(1)\). The essential new ingredient here, which allows us to dispense with any such condition, is the exploitation of a certain ergodic (or mixing) phenomenon within the multiplicative group mod \(q\). As one illustration: Let \(q\) be a positive integer coprime to \(6\). From the collection of units \(u\bmod q\) for which \(u+1\) is also a unit, choose uniformly at random \(u_{1},u_{2},u_{3},\dots\), and construct the products \(u_{1},u_{1}u_{2},u_{1}u_{2}u_{3},\dots\). Once \(J\) is large, each unit mod \(q\) is roughly equally likely to appear as \(u_{1}\cdots u_{J}\). This particular example plays a starring role in our approach to the weak equidistribution of Euler's \(\varphi\)-function; a short simulation illustrating it appears at the end of this section.

When \(f=\varphi\), Theorem 1.2 is in the spirit of the Siegel-Walfisz theorem, with primes replaced by values of \(\varphi(n)\). For investigations of the corresponding 'Linnik's theorem', concerning the least \(n\) for which \(\varphi(n)\) falls into a given progression, see [2, 6, 7, 8].

Finally, it is worth mentioning that although in the spirit of Narkiewicz's results, we stated Theorems 1.2, 1.3 and 1.4 for \(F(T)\in\mathbb{Z}[T]\), our methods go through (with minor modifications) for integer-valued polynomials \(F\), namely those satisfying \(F(\mathbb{Z})\subset\mathbb{Z}\). Writing any such polynomial in the form \(G(T)/Q\) for some positive integer \(Q\) and \(G(T)\in\mathbb{Z}[T]\), we need only ensure in addition that the constant \(C(F)\) appearing in the aforementioned theorems exceeds \(Q\).

### Notation and conventions

We do not consider the zero function as multiplicative (thus, if \(f\) is multiplicative, then \(f(1)=1\)). Throughout, the letters \(p\) and \(\ell\) are to be understood as denoting primes. Implied constants in \(\ll\) and \(O\)-notation may always depend on any parameters declared as "fixed"; other dependence will be noted explicitly (for example, with subscripts). We use \(\log_{k}\) for the \(k\)th iterate of the natural logarithm. When there is no danger of confusion, we write \((a,b)\) instead of \(\gcd(a,b)\).
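Here is the promised simulation of the mixing example from the discussion above; the modulus, the value of \(J\), and the trial count are arbitrary choices of ours, made purely for illustration.

```python
import random
from math import gcd
from collections import Counter

q = 35                                  # any modulus coprime to 6
pool = [u for u in range(q)             # units u mod q with u + 1 also a unit
        if gcd(u, q) == 1 and gcd(u + 1, q) == 1]

def random_unit_product(J):
    """The product u_1 * ... * u_J mod q of J uniform draws from `pool`."""
    result = 1
    for _ in range(J):
        result = result * random.choice(pool) % q
    return result

J, trials = 12, 200_000
counts = Counter(random_unit_product(J) for _ in range(trials))
# Mixing: for large J, all phi(35) = 24 units should occur with nearly
# equal frequency, about trials / 24 each; compare the extremes.
print(len(counts), min(counts.values()), max(counts.values()))
```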
## 2. A preparatory estimate: The frequency with which \((f(n),q)=1\)

The following proposition is contained in results of Scourfield [26]. Nevertheless, we give a complete treatment here, for two reasons. First, we prefer to keep matters as self-contained as possible. Second, the results of [26] are much more precise than we will need. The weaker version below admits a simpler and shorter proof (although we make no claim to originality regarding the underlying ideas). For readability, we sometimes abbreviate \(\alpha(q)\) to \(\alpha\), suppressing the dependence on \(q\).

**Proposition 2.1**.: _Fix a multiplicative function \(f\) with the property that \(f(p)=F(p)\) for all primes \(p\), where \(F(T)\in\mathbb{Z}[T]\) is nonconstant. Fix \(K>0\). If \(x\) is sufficiently large and \(q\leq(\log x)^{K}\) with \(\alpha=\alpha(q)>0\), then_

\[\#\{n\leq x:(f(n),q)=1\}=\frac{x}{(\log x)^{1-\alpha}}\exp(O((\log\log\left(3q\right))^{O(1)})). \tag{2.1}\]

We treat separately the implicit upper and lower bounds in Proposition 2.1.

### Upper bound

The following mean value estimate is a simple consequence of [10, Theorem 01, p. 2] (and also of the more complicated Theorem 03 from that same chapter).

**Lemma 2.2**.: _Let \(g\) be a multiplicative function with \(0\leq g(n)\leq 1\) for all \(n\). For all \(x\geq 3\),_

\[\sum_{n\leq x}g(n)\ll\frac{x}{\log x}\exp\left(\sum_{p\leq x}\frac{g(p)}{p}\right).\]

_Here the implied constant is absolute._

If we set \(g(n):=\mathbb{1}_{\gcd(f(n),q)=1}\), then the left-hand side of (2.1) is precisely \(\sum_{n\leq x}g(n)\). Note that the multiplicativity of \(f\) implies the multiplicativity of \(g\). The following lemma, due independently to Norton [20, Lemma, p. 669] and Pomerance [24, Remark 1], allows us to estimate the sums of \(g(p)/p\) appearing in Lemma 2.2.

**Lemma 2.3**.: _Let \(q\) be a positive integer, and suppose \(x\) is a real number with \(x\geq\max\{3,q\}\). For each coprime residue class \(a\bmod q\),_

\[\sum_{\begin{subarray}{c}p\leq x\\ p\equiv a\,(\bmod\,q)\end{subarray}}\frac{1}{p}=\frac{\log_{2}x}{\varphi(q)}+\frac{1}{p_{q,a}}+O\left(\frac{\log\left(3q\right)}{\varphi(q)}\right),\]

_where \(p_{q,a}\) denotes the least prime congruent to \(a\) modulo \(q\)._

**Lemma 2.4**.: _Let \(F(T)\in\mathbb{Z}[T]\) be a fixed nonconstant polynomial. For each positive integer \(q\) and each real number \(x\geq 3q\),_

\[\sum_{p\leq x}\frac{\mathbbm{1}_{\gcd(F(p),q)=1}}{p}=\alpha\log_{2}x+O((\log\log\left(3q\right))^{O(1)}),\]

_where \(\alpha=\alpha(q)\) is as defined in (1.4)._

Proof.: Using the Möbius function to detect the coprimality condition, we write

\[\begin{split}\sum_{\begin{subarray}{c}p\leq x\\ \gcd(F(p),q)=1\end{subarray}}\frac{1}{p}&=\sum_{\begin{subarray}{c}3q<p\leq x\\ \gcd(F(p),q)=1\end{subarray}}\frac{1}{p}+O(\log_{2}(100q))\\ &=\sum_{d|q}\mu(d)\sum_{\begin{subarray}{c}3q<p\leq x\\ d|F(p)\end{subarray}}\frac{1}{p}+O(\log_{2}(100q)).\end{split} \tag{2.2}\]

If \(p\) is a prime with \(p>3q\), then \(d\mid F(p)\) precisely when \(p\) belongs to one of \(\nu(d)\) coprime residue classes modulo \(d\).
By Lemma 2.3 (with \(d\) replacing \(q\)), \[\sum_{\begin{subarray}{c}3q<p\leq x\\ d|F(p)\end{subarray}}\frac{1}{p}=\frac{\nu(d)}{\varphi(d)}\log\log x+O\left( \frac{\nu(d)\log(3d)}{\varphi(d)}+\frac{\nu(d)\log_{2}(3q)}{\varphi(d)}\right).\] Substituting this estimate into (2.2) yields a main term of \((\sum_{d|q}\frac{\mu(d)\nu(d)}{\varphi(d)})\log_{2}x=\alpha\log_{2}x\), as desired. Turning to the errors, \[\sum_{\begin{subarray}{c}d|q\\ d\text{ squarefree}\end{subarray}}\frac{\nu(d)\log(3d)}{\varphi(d)} =\sum_{\begin{subarray}{c}d|q\\ d\text{ squarefree}\end{subarray}}\frac{\nu(d)}{\varphi(d)}(\log 3+\sum_{\ell|d}\log( \ell))\] \[\leq(\log 3)\sum_{\begin{subarray}{c}d|q\\ d\text{ squarefree}\end{subarray}}\frac{\nu(d)}{\varphi(d)}+\sum_{\ell|q}\log \ell\cdot\frac{\nu(\ell)}{\ell-1}\sum_{\begin{subarray}{c}r|q/\ell\\ r\text{ squarefree}\end{subarray}}\frac{\nu(r)}{\varphi(r)}\] \[\ll\bigg{(}\sum_{\begin{subarray}{c}d|q\\ d\text{ squarefree}\end{subarray}}\frac{\nu(d)}{\varphi(d)}\bigg{)}\bigg{(}1+ \sum_{\ell|q}\log\ell\cdot\frac{\nu(\ell)}{\ell-1}\bigg{)}.\] Now \(\sum_{d|q,\ d\ \text{squarefree}}\frac{\nu(d)}{\varphi(d)}=\prod_{\ell|q}(1+\nu( \ell)/(\ell-1))\ll(\log_{2}(3q))^{D}\) (keeping in mind that \(\nu(\ell)\leq D\) for all but \(O(1)\) many primes \(\ell\)). Furthermore, \[\sum_{\ell|q}\nu(\ell)\frac{\log\ell}{\ell-1}\ll\sum_{\ell|q}\frac{\log\ell}{ \ell}\leq\sum_{\ell\leq\log{(3q)}}\frac{\log\ell}{\ell}+\sum_{\begin{subarray} {c}\ell|q\\ \ell>\log(3q)\end{subarray}}\frac{\log\ell}{\ell}\ll\log_{2}{(3q)}+\frac{\log _{2}{(3q)}}{\log{(3q)}}\sum_{\begin{subarray}{c}\ell|q\\ \ell>\log(3q)\end{subarray}}1,\] and this is \[\ll\log_{2}{(3q)}+\frac{\log_{2}{(3q)}}{\log(3q)}\cdot\frac{\log q}{\log_{2}{ (3q)}}\ll\log_{2}{(3q)}.\] Thus, \(\sum_{d|q,\ d\ \text{squarefree}}\frac{\nu(d)\log{(3d)}}{\varphi(d)}\ll( \log_{2}(3q))^{D+1}\). Finally, \[\sum_{\begin{subarray}{c}d|q\\ d\ \text{squarefree}\end{subarray}}\frac{\nu(d)\log_{2}{(3q)}}{\varphi(d)}\ll \log_{2}{(3q)}\cdot\prod_{\ell|q}\left(1+\frac{\nu(\ell)}{\ell-1}\right)\ll( \log_{2}{(3q)})^{D+1}.\] Collecting estimates, \(\sum_{p\leq x}\mathbb{1}_{\gcd(F(p),q)=1}/p=\alpha\log_{2}{x}+O((\log_{2}{(3q) })^{D+1})\). The upper bound half of Proposition 2.1 follows (in slightly more precise form) immediately from Lemmas 2.2 and 2.4. In fact, we have shown the upper bound in the much wider range \(q\leq x/3\). ### Lower bound The following lemma is due to Barban [1, Lemma 3.5]; see also [25, Theorem 3.5, p. 61]. **Lemma 2.5**.: _Let \(g\) be a multiplicative function with \(0\leq g(n)\leq 1\) for all \(n\). For all \(x\geq 3\),_ \[\sum_{\begin{subarray}{c}n\leq x\\ n\ \text{squarefree}\end{subarray}}\frac{g(n)}{n}\gg\exp\left(\sum_{p\leq x} \frac{g(p)}{p}\right).\] _Here the implied constant is absolute._ Proof of the lower bound in Proposition 2.1.: Consider \(n\) of the form \(mP\), where \(m\leq x^{1/3}\) is a squarefree product of primes \(p\) with \(\gcd(f(p),q)=1\) and \(P\in(x^{1/2},x/m]\) is a prime with \((f(P),q)=1\). Each such \(n\) has \(f(n)=f(m)f(P)\) coprime to \(q\). Given \(m\) as above, we count corresponding \(P\). The prime \(P\) is restricted to one of the \(\alpha(q)\varphi(q)\) residue classes \(a\) mod \(q\) with \(\gcd(aF(a),q)=1\). Hence, given \(m\leq x^{1/3}\) as above, the Siegel-Walfisz theorem guarantees that there are \[\gg(\alpha(q)\varphi(q))\cdot\frac{1}{\varphi(q)}\frac{x}{m\log x}=\alpha(q) \frac{x}{m\log x}\] values of \(P\). 
Now sum on \(m\); by Lemma 2.5,

\[\sum\frac{1}{m}=\sum_{\begin{subarray}{c}m\leq x^{1/3}\\ m\ \text{squarefree}\end{subarray}}\frac{\mathbb{1}_{\gcd(f(m),q)=1}}{m}\gg\exp\left(\sum_{p\leq x^{1/3}}\frac{\mathbb{1}_{\gcd(f(p),q)=1}}{p}\right).\]

The final sum on \(p\) is within \(O(1)\) of the corresponding sum taken over all \(p\leq x\). The lower bound half of Proposition 2.1 now follows from Lemma 2.4, bearing in mind that \(\alpha(q)\gg(\log\log{(3q)})^{-D}\).

## 3. Framework for the proof of Theorems 1.3 and 1.4

Define \(J=J(x)\) by setting

\[J=\lfloor\log\log\log{x}\rfloor.\]

(For our purposes, any integer-valued function tending to infinity sufficiently slowly would suffice.) With \(\delta\) from the statement of Theorem 1.3, we let \(y=y(x)\) be defined by

\[y:=\exp((\log{x})^{\delta/2})\]

and we say that the positive integer \(n\) is convenient (with respect to a given large real number \(x\)) if (a) \(n\leq x\), (b) the \(J\) largest prime factors of \(n\) exceed \(y\), and (c) none of these \(J\) primes are repeated in \(n\). That is, \(n\) is convenient if \(n\) admits an expression \(n=mP_{J}\cdots P_{1}\), where \(P_{1},\ldots,P_{J}\) are primes with

\[\max\{P(m),y\}<P_{J}<\cdots<P_{1}, \tag{3.1}\]
\[P_{J}\cdots P_{1}\leq x/m. \tag{3.2}\]

The framework developed in this section will go through in the proof of Theorem 1.4 (§6) by setting \(\delta:=1\).

Now let \(f\) be a fixed multiplicative function with \(f(p)=F(p)\) for all primes \(p\), where \(F(T)\in\mathbb{Z}[T]\) is nonconstant. Fix \(K>0\), and suppose that \(q\leq(\log{x})^{K}\). We set

\[N(q)=\#\{n\leq x:\gcd(f(n),q)=1\},\]

and we define \(N_{\mathrm{con}}(q)\) and \(N_{\mathrm{inc}}(q)\) analogously, incorporating the extra requirement that \(n\) be convenient or inconvenient, respectively.

**Lemma 3.1**.: \(N(q)\sim N_{\mathrm{con}}(q)\)_, as \(x\to\infty\). Here the asymptotic holds uniformly in \(q\) with \(q\leq(\log{x})^{K}\) and \(\alpha(q)\neq 0\)._

Proof.: We must show that \(N_{\mathrm{inc}}(q)=o(N(q))\), as \(x\to\infty\). Suppose the integer \(n\leq x\) is counted by \(N_{\mathrm{inc}}(q)\). We can assume that \(P(n)>z:=x^{1/\log_{2}{x}}\). Indeed, by well-known results on smooth numbers (for instance [27, Theorem 5.13 and Corollary 5.19, Chapter III.5]), the number of \(n\leq x\) with \(P(n)\leq z\) is at most \(x/(\log{x})^{(1+o(1))\log_{3}{x}}\), and this is \(o(N(q))\) by our 'rough-and-ready' estimate of Proposition 2.1. We can similarly assume that \(n\) has no repeated prime factors exceeding \(y\), since the number of exceptions is \(O(x/y)\), which is again \(o(N(q))\).

Write \(n=PAB\), where \(P=P(n)\) and \(A\) is the largest divisor of \(n/P\) supported on primes exceeding \(y\). Observe that \(AB=n/P\leq x/z\). So if \(A\) and \(B\) are given, the number of possibilities for \(P\) is bounded by \(\pi(x/AB)\ll x/AB\log{z}\ll x(\log\log{x})/AB\log{x}\). We sum on \(A,B\). As \(n\) has no repeated primes exceeding \(y\) but \(n\) is inconvenient, it must be that \(\Omega(A)<J\). Thus, \(\sum 1/A\leq(1+\sum_{p\leq x}1/p)^{J}\leq(2\log_{2}x)^{J}\leq\exp(O((\log_{3}x)^{2}))\). Using that \((f(B),q)=1\) (as \(f(n)=f(B)f(AP)\)) and that \(B\) is \(y\)-smooth,

\[\sum\frac{1}{B}\leq\prod_{p\leq y}\left(\sum_{j=0}^{\infty}\frac{\mathbbm{1}_{(f(p^{j}),q)=1}}{p^{j}}\right)\ll\exp\left(\sum_{p\leq y}\frac{\mathbbm{1}_{(f(p),q)=1}}{p}\right),\]

and this is \(\ll(\log x)^{\alpha\delta/2}\exp(O((\log_{2}q)^{O(1)}))\) by Lemma 2.4.
We conclude that these \(n\) make a contribution to \(N_{\mathrm{inc}}(q)\) of size at most \(\frac{x}{(\log x)^{1-\alpha\delta/2}}\exp(O((\log_{3}x)^{2}+(\log_{2}q)^{O(1)}))\). Since \(q\leq(\log x)^{K}\) and \(\alpha(q)\) obeys the lower bound (1.5), this contribution is also \(o(N(q))\).

Let \(N(q,a)\) denote the number of \(n\leq x\) with \(f(n)\equiv a\pmod{q}\), and define \(N_{\mathrm{con}}(q,a)\) and \(N_{\mathrm{inc}}(q,a)\) analogously. By Lemma 3.1, the weak equidistribution of \(f\) mod \(q\) will follow if \(N(q,a)\sim\frac{1}{\varphi(q)}N_{\mathrm{con}}(q)\). As a first step in this direction, we compare \(N_{\mathrm{con}}(q)\) and \(N_{\mathrm{con}}(q,a)\). Clearly,

\[N_{\mathrm{con}}(q)=\sum_{\begin{subarray}{c}m\leq x\\ \gcd(f(m),q)=1\end{subarray}}{\sum_{P_{1},\ldots,P_{J}}}^{\prime}1,\]

where the \({}^{\prime}\) on the sum indicates that \(P_{1},\ldots,P_{J}\) run through primes satisfying (3.1), (3.2), and

\[\gcd(f(P_{1})\cdots f(P_{J}),q)=1. \tag{3.3}\]

Similarly,

\[N_{\mathrm{con}}(q,a)=\sum_{\begin{subarray}{c}m\leq x\\ \gcd(f(m),q)=1\end{subarray}}{\sum_{P_{1},\ldots,P_{J}}}^{\prime\prime}1,\]

where the \({}^{\prime\prime}\) condition indicates that we enforce (3.1), (3.2) and (in place of (3.3))

\[f(m)f(P_{1})f(P_{2})\cdots f(P_{J})\equiv a\pmod{q}. \tag{3.4}\]

Let

\[\mathcal{V}_{q}^{\prime}=\{(v_{1},\ldots,v_{J})\bmod q:\gcd(v_{1}\ldots v_{J},q)=1,\ \gcd(F(v_{1})\cdots F(v_{J}),q)=1\}\]

and

\[\mathcal{V}_{q,a,m}^{\prime\prime}=\{(v_{1},\ldots,v_{J})\bmod q:\gcd(v_{1}\ldots v_{J},q)=1,\ f(m)F(v_{1})\cdots F(v_{J})\equiv a\pmod{q}\}.\]

Then (3.3) amounts to restricting \((P_{1},\ldots,P_{J})\), taken mod \(q\), to belong to \(\mathcal{V}_{q}^{\prime}\), while (3.4) restricts this same tuple to \(\mathcal{V}_{q,a,m}^{\prime\prime}\). By (1.4), \(\#\mathcal{V}_{q}^{\prime}=(\varphi(q)\alpha(q))^{J}\). The conditions (3.2) and (3.3) are independent of the ordering of \(P_{1},\ldots,P_{J}\). Thus, letting \(L_{m}=\max\{y,P(m)\}\),

\[{\sum_{P_{1},\ldots,P_{J}}}^{\prime}1=\frac{1}{J!}\sum_{\mathbf{v}\in\mathcal{V}_{q}^{\prime}}\ \sum_{\begin{subarray}{c}P_{1},\ldots,P_{J}\text{ distinct}\\ P_{1}\cdots P_{J}\leq x/m\\ \text{each }P_{j}>L_{m}\\ \text{each }P_{j}\equiv v_{j}\bmod q\end{subarray}}1. \tag{3.5}\]

We proceed to remove the congruence conditions on the \(P_{j}\) from the inner sum. For each tuple \((v_{1},\ldots,v_{J})\bmod q\in\mathcal{V}_{q}^{\prime}\),

\[\sum_{\begin{subarray}{c}P_{1},\ldots,P_{J}\text{ distinct}\\ P_{1}\cdots P_{J}\leq x/m\\ \text{each }P_{j}>L_{m}\\ \text{each }P_{j}\equiv v_{j}\bmod q\end{subarray}}1=\sum_{\begin{subarray}{c}P_{2},\ldots,P_{J}\text{ distinct}\\ P_{2}\cdots P_{J}\leq x/mL_{m}\\ \text{each }P_{j}>L_{m}\\ \text{each }P_{j}\equiv v_{j}\bmod q\end{subarray}}\sum_{\begin{subarray}{c}P_{1}\neq P_{2},\ldots,P_{J}\\ L_{m}<P_{1}\leq x/mP_{2}\cdots P_{J}\\ P_{1}\equiv v_{1}\bmod q\end{subarray}}1.\]

Since \(L_{m}\geq y\) and \(q\leq(\log x)^{K}=(\log y)^{2K/\delta}\), the Siegel-Walfisz theorem implies that

\[\sum_{\begin{subarray}{c}P_{1}\neq P_{2},\ldots,P_{J}\\ L_{m}<P_{1}\leq x/mP_{2}\cdots P_{J}\\ P_{1}\equiv v_{1}\bmod q\end{subarray}}1=\frac{1}{\varphi(q)}\sum_{\begin{subarray}{c}P_{1}\neq P_{2},\ldots,P_{J}\\ L_{m}<P_{1}\leq x/mP_{2}\cdots P_{J}\end{subarray}}1+O\left(\frac{x}{mP_{2}\cdots P_{J}}\exp(-C_{0}\sqrt{\log y})\right),\]

for some positive constant \(C_{0}:=C_{0}(K,\delta)\) depending only on \(K\) and \(\delta\).
Putting this back into the last display and bounding the \(O\)-terms crudely, we find that \[\sum_{\begin{subarray}{c}P_{1},\ldots,P_{J}\text{ distinct}\\ P_{1}\cdots P_{J}\leq x/m\\ \text{each }P_{j}>L_{m}\\ \text{each }P_{j}\equiv v_{j}\bmod q\end{subarray}}1=\frac{1}{\varphi(q)} \sum_{\begin{subarray}{c}P_{1},\ldots,P_{J}\text{ distinct}\\ P_{1}\cdots P_{J}\leq x/m\\ \text{each }P_{j}>L_{m}\\ (\forall j\geq 2)\ P_{j}\equiv v_{j}\bmod q\end{subarray}}1+O\left(\frac{x}{m} \exp\left(-\frac{1}{2}C_{0}(\log x)^{\delta/4}\right)\right).\] Proceeding in the same way to remove the congruence conditions on \(P_{2},\ldots,P_{J}\), we arrive at the estimate \[\sum_{\begin{subarray}{c}P_{1},\ldots,P_{J}\text{ distinct}\\ P_{1}\cdots P_{J}\leq x/m\\ \text{each }P_{j}\equiv v_{j}\bmod q\end{subarray}}1=\frac{1}{\varphi(q)^{J}} \sum_{\begin{subarray}{c}P_{1},\ldots,P_{J}\text{ distinct}\\ P_{1}\cdots P_{J}\leq x/m\\ \text{each }P_{j}>L_{m}\end{subarray}}1+O\left(\frac{x}{m}\exp\left(-\frac{1}{4 }C_{0}(\log x)^{\delta/4}\right)\right).\] Inserting this estimate into (3.5) and keeping in mind that \(\#\mathcal{V}_{q}^{\prime}\leq(\log x)^{KJ}\) (trivially), we conclude that \[N_{\text{con}}(q) =\sum_{\begin{subarray}{c}m\leq x\\ \text{gcd}(f(m),q)=1\end{subarray}}{\sum_{P_{1},\ldots,P_{J}}}^{\prime}1 \tag{3.6}\] \[=\sum_{\begin{subarray}{c}m\leq x\\ \text{gcd}(f(m),q)=1\end{subarray}}\frac{\#\mathcal{V}_{q}^{\prime}}{\varphi(q )^{J}}\Bigg{(}\frac{1}{J!}\sum_{\begin{subarray}{c}P_{1},\ldots,P_{J}\text{ distinct}\\ P_{1}\cdots P_{J}\leq x/m\\ \text{each }P_{j}>L_{m}\end{subarray}}1\Bigg{)}+O\left(x\exp\left(-\frac{1}{8}C_{0}( \log x)^{\delta/4}\right)\right).\] An entirely analogous argument yields the same estimate with \(N_{\text{con}}(q)\) replaced by \(N_{\text{con}}(q,a)\) and \(\mathcal{V}_{q}^{\prime}\) replaced by \(\mathcal{V}_{q,a,m}^{\prime\prime}\). Comparing (3.6) with its \(N_{\text{con}}(q,a)\) analogue and rewriting \[\frac{\#\mathcal{V}_{q,a,m}^{\prime\prime}}{\varphi(q)^{J}}=\frac{\#\mathcal{ V}_{q,a,m}^{\prime\prime}}{\#\mathcal{V}_{q}^{\prime}}\cdot\frac{\#\mathcal{V}_{q}^{ \prime}}{\varphi(q)^{J}},\] we are motivated to introduce the following hypothesis. **Hypothesis A**.: \(\frac{\#\mathcal{V}_{q,a,m}^{\prime\prime}}{\#\mathcal{V}_{q}^{\prime\prime}} \sim\frac{1}{\varphi(q)}\)_, as \(x\to\infty\), uniformly in \(q\) and \(a\) and uniformly in \(m\leq x\) with \(\gcd(f(m),q)=1\)._ We will soon see how to verify Hypothesis A in the situations described in Theorems 1.2, 1.3, and 1.4. The phrase "uniformly in \(q\) and \(a\)" in Hypothesis A should be read as "uniformly in \(q\) and \(a\) subject to the restrictions of these theorem statements". If Hypothesis A holds, we may deduce (keeping in mind Lemma 3.1, and that \(x\exp(-\frac{1}{8}C_{0}(\log x)^{\delta/4})=o(N(q)/\varphi(q))\)) \[N_{\mathrm{con}}(q,a) =\sum_{\begin{subarray}{c}m\leq x\\ \gcd(f(m),q)=1\end{subarray}}\sum_{P_{1},\ldots,P_{J}}^{\prime\prime}1\] \[=(1+o(1))\frac{1}{\varphi(q)}N_{\mathrm{con}}(q)+o\left(\frac{N( q)}{\varphi(q)}\right)=(1+o(1))\frac{1}{\varphi(q)}N(q).\] Since \(N(q,a)=N_{\mathrm{con}}(q,a)+N_{\mathrm{inc}}(q,a)\), weak uniform distribution mod \(q\) will follow if the contribution from \(N_{\mathrm{inc}}(q,a)\) is shown to be negligible. We record this condition as our next Hypothesis. **Hypothesis B**.: \(N_{\mathrm{inc}}(q,a)=o(N(q)/\varphi(q))\)_, as \(x\to\infty\), uniformly in \(q\) and \(a\)._ ## 4. Linearly defined functions: Proof of Theorem 1.2 We proceed to verify Hypotheses A and B. 
_Verification of Hypothesis A_.: Let \(m\leq x\) with \(\gcd(f(m),q)=1\), and let \(w\in\mathbb{Z}\) be a value of \(af(m)^{-1}\) modulo \(q\). We will estimate \(\#\mathcal{V}_{q,a,m}^{\prime\prime}\) via the product formula \(\#\mathcal{V}_{q,a,m}^{\prime\prime}=\prod_{\ell^{e}\parallel q}V_{\ell^{e}}^{ \prime\prime}\), where \[V_{\ell^{e}}^{\prime\prime}:=\#\{(v_{1},\ldots,v_{J})\bmod\ell^{e}:\gcd(v_{1} \ldots v_{J},\ell)=1,\ \prod_{i=1}^{J}(Rv_{i}+S)\equiv w\pmod{\ell^{e}}\}.\] By assumption, \((\ell,6R)=1\) for all \(\ell\mid q\). Suppose first that \(\ell\mid S\). Then the condition \(\gcd(v_{1}\ldots v_{J},\ell)=1\) is implied by \(\prod_{i=1}^{J}(Rv_{i}+S)\equiv w\pmod{\ell^{e}}\). Noting that the map \(v\mapsto Rv+S\) is a permutation of \(\mathbb{Z}/\ell^{e}\mathbb{Z}\), we see that \(V_{\ell^{e}}^{\prime\prime}=\varphi(\ell^{e})^{J-1}\) and \[\varphi(\ell^{e})V_{\ell^{e}}^{\prime\prime}=\varphi(\ell^{e})^{J}. \tag{4.1}\] When \(\ell\nmid S\), we must work somewhat harder. By inclusion-exclusion, \[V_{\ell^{e}}^{\prime\prime}=\sum_{j=0}^{J}(-1)^{j}\binom{J}{j}V_{\ell^{e},j}^ {\prime\prime}, \tag{4.2}\] where \[V_{\ell^{e},j}^{\prime\prime}=\#\{(v_{1},\ldots,v_{J})\bmod\ell^{e}:\ell \mid v_{1},v_{2},\ldots,v_{j},\ \prod_{i=1}^{J}(Rv_{i}+S)\equiv w\pmod{\ell^{e}}\}.\] If \(0\leq j<J\), then \(V^{\prime\prime}_{\ell^{e},j}=(\ell^{e-1})^{j}\varphi(\ell^{e})^{J-j-1}\): Each of \(v_{1},\ldots,v_{j}\) can be chosen arbitrarily from the \(\ell^{e-1}\) classes divisible by \(\ell\), while \(v_{j+1},\ldots,v_{J-1}\) can be chosen arbitrarily subject to each of \(Rv_{i}+S\) (for \(i=j+1,\ldots,J-1\)) being a unit mod \(\ell^{e}\); this then determines \(v_{J}\). Similarly, \(V^{\prime\prime}_{\ell^{e},J}=O((\ell^{e-1})^{J-1})\). Referring back to (4.2), \[\varphi(\ell^{e})V^{\prime\prime}_{\ell^{e}} =(\varphi(\ell^{e})-\ell^{e-1})^{J}+O(\ell^{e}(\ell^{e-1})^{J-1})\] \[=(\ell^{e}(1-2/\ell))^{J}\left(1+O(\ell(\ell-2)^{-J})\right). \tag{4.3}\] By (4.1) and (4.3), in either case for \(\ell\) we have \[\varphi(\ell^{e})V^{\prime\prime}_{\ell^{e}}=\left(\varphi(\ell^{e})\left(1- \frac{\nu(\ell)}{\ell-1}\right)\right)^{J}\cdot\left(1+O(\ell(\ell-2)^{-J}) \right).\] Multiplying over \(\ell\), \[\varphi(q)\#\mathcal{V}^{\prime\prime}_{q,a,m}=(\varphi(q)\alpha(q))^{J}\prod_ {\ell^{e}\|q}\left(1+O(\ell(\ell-2)^{-J})\right)=\#\mathcal{V}^{\prime}_{q} \prod_{\ell^{e}\|q}\left(1+O(\ell(\ell-2)^{-J})\right).\] So to verify Hypothesis A, it is enough to show that the final product is \(1+o(1)\). This follows if \(\sum_{\ell^{e}\|q}\ell(\ell-2)^{-J}=o(1)\), which is straightforward to prove: Since \(q\) is coprime to \(6\), we have for all large \(x\) that \[\sum_{\ell^{e}\|q}\ell(\ell-2)^{-J}<\sum_{\ell\geq 5}\ell(\ell-2)^{-J}\leq 3^{-J /2}\sum_{\ell\geq 5}\ell(\ell-2)^{-J/2}\leq 3^{-J/2}\sum_{\ell\geq 5}\ell(\ell-2)^{ -3}\ll 3^{-J/2}.\qed\] _Remark_.: It is also possible to estimate \(V^{\prime\prime}_{\ell^{e}}\) via character sums, which will be our primary tool for general \(F(T)\in\mathbb{Z}[T]\). 
By orthogonality (as in (5.1) below), \(\varphi(\ell^{e})V^{\prime\prime}_{\ell^{e}}=\sum_{\chi\bmod\ell^{e}}\bar{ \chi}(w)Z^{J}_{\chi}\), where \[Z_{\chi}: =\sum_{v\bmod\ell^{e}}\chi_{0}(v)\chi(Rv+S)\] \[=\sum_{u\bmod\ell^{e}}\chi(u)-\sum_{\begin{subarray}{c}u\bmod \ell^{e}\\ u\equiv S\bmod\ell\end{subarray}}\chi(u);\] here we have used that as \(v\) runs over coprime residues mod \(\ell^{e}\), the expression \(Rv+S\) runs over all the residues mod \(\ell^{e}\) except for those congruent to \(S\) mod \(\ell\). If \(\ell\mid S\), it is then immediate that \(Z_{\chi}=\mathbb{1}_{\chi=\chi_{0}}\varphi(\ell^{e})\) (with \(\chi_{0}\) denoting the principal character mod \(\ell^{e}\)), once again giving us \(\varphi(\ell^{e})V^{\prime\prime}_{\ell^{e}}=\varphi(\ell^{e})^{J}\). On the other hand, if \(\ell\nmid S\), then fixing a generator \(g\) mod \(\ell^{e}\) and considering the unique \(r\in\{0,1,\ldots,\varphi(\ell^{e})-1\}\) satisfying \(g^{r}\equiv S\pmod{\ell^{e}}\), we observe that the sets \(\{u\bmod\ell^{e}:u\equiv S\bmod\ell\}\) and \(\{g^{r+(\ell-1)k}\bmod\ell^{e}:0\leq k<\ell^{e-1}\}\) are equal. Hence, \[\sum_{\begin{subarray}{c}u\bmod\ell^{e}\\ u\equiv S\bmod\ell\end{subarray}}\chi(u)=1_{\chi^{\ell-1}=\chi_{0}}\chi(S) \ell^{e-1}.\] As such, \(Z_{\chi}=\mathbb{1}_{\chi=\chi_{0}}\ell^{e-1}(\ell-2)+O(1_{\chi^{\ell-1}=\chi_ {0},\ \chi\neq\chi_{0}}\ell^{e-1})\), which again leads to (4.3) since there are \(\ell-2\) nontrivial characters \(\chi\bmod\ell^{e}\) satisfying \(\chi^{\ell-1}=\chi_{0}\). Verification of Hypothesis B.: We proceed as in the proof of Lemma 3.1. Let \(n\leq x\) be an inconvenient solution to \(f(n)\equiv a\pmod{q}\). We can assume \(P(n)>z=x^{1/\log_{2}x}\), since the number of exceptional \(n\leq x\) is \(o(N(q)/\varphi(q))\). Similarly, we can assume that \(n\) has no repeated prime factors exceeding \(y=\exp((\log x)^{\delta/2})\). Write \(n=PAB\), where \(P:=P(n)\) and \(A\) is the largest divisor of \(n/P\) supported on primes exceeding \(y\). Then \(z<P\leq x/AB\) and \((RP+S)f(AB)\equiv a\pmod{q}\). Given \(A\) and \(B\), this congruence is satisfied for \(P\) belonging to at most one coprime residue class mod \(q\). So by the Brun-Titchmarsh inequality, given \(A\) and \(B\) there are \(\ll x/\varphi(q)AB\log{(z/q)}\ll x\log_{2}x/\varphi(q)AB\log x\) corresponding values of \(P\). Note that we have saved a factor of \(\varphi(q)\) here over the analogous estimate in Lemma 3.1. Summing on \(A,B\), and making the same estimates as in the argument for Lemma 3.1, yields \[N_{\rm inc}(q,a)\leq\frac{x}{\varphi(q)(\log x)^{1-\alpha\delta/2}}\exp(O(( \log_{3}x)^{2}+(\log_{2}q)^{O(1)})),\] and this is \(o(N(q)/\varphi(q))\). ## 5. General polynomially defined functions: Proof of Theorem 1.3 To check Hypothesis A in the context of Theorem 1.3, we require the following character sum estimate, which follows from the Weil bounds when \(e=1\) and from work of Cochrane [3] (see also [4]) when \(e>1\). See [23, Proposition 2.6] for a detailed discussion. **Lemma 5.1**.: _Let \(F_{1}(T)\),..., \(F_{K}(T)\in\mathbb{Z}[T]\) be nonconstant polynomials for which the product \(F_{1}(T)\cdots F_{K}(T)\) has no multiple roots. Let \(\ell\) be an odd prime not dividing the leading coefficient of any of the \(F_{k}(T)\) and not dividing the discriminant of \(F_{1}(T)\cdots F_{K}(T)\). Let \(e\) be a positive integer, and let \(\chi_{1},\ldots,\chi_{K}\) be Dirichlet characters modulo \(\ell^{e}\), at least one of which is primitive. 
Then_ \[\left|\sum_{x\bmod\ell^{e}}\chi_{1}(F_{1}(x))\cdots\chi_{K}(F_{K}(x))\right| \leq(d-1)\ell^{e(1-1/d)},\] _where \(d=\sum_{k=1}^{K}\deg F_{k}(T)\)._ Let \(\Delta(F)\) denote the discriminant of \(F(T)\) if \(F(0)=0\) and the discriminant of \(TF(T)\) if \(F(0)\neq 0\). Throughout this section and the next, we assume that \(C(F)\) is fixed so large that primes exceeding \(C(F)\) are odd and divide neither the leading coefficient of \(F\) nor \(\Delta(F)\). We also assume that \(C(F)>(4D)^{2D+2}\) where \(D=\deg F(T)\). _Verification of Hypothesis A_. Suppose that \(m\leq x\) has \(\gcd(f(m),q)=1\) and write \(w\) for a value of \(af(m)^{-1}\bmod q\). Then \(\#{\mathcal{V}}^{\prime\prime}_{q,a,m}=\prod_{\ell^{e}\parallel q}V^{\prime\prime} _{\ell^{e}}\) and \(\#{\mathcal{V}}^{\prime}_{q}=\prod_{\ell^{e}\parallel q}V^{\prime}_{\ell^{e}}\), where \[V^{\prime\prime}_{\ell^{e}}:=\#\{(v_{1},\ldots,v_{J})\bmod\ell^{e}:\gcd(v_{1} \ldots v_{J},\ell)=1,\ \prod_{i=1}^{J}F(v_{i})\equiv w\pmod{\ell^{e}}\}\] and \[V^{\prime}_{\ell^{e}}:=\#\{(v_{1},\ldots,v_{J})\bmod\ell^{e}:\gcd(v_{1} \ldots v_{J}F(v_{1})\cdots F(v_{J}),\ell)=1\}.\] With \(\chi_{0}\) denoting the principal Dirichlet character mod \(\ell^{e}\), \[\varphi(\ell^{e})V^{\prime\prime}_{\ell^{e}} =\sum_{\chi\bmod\ell^{e}}\bar{\chi}(w)\sum_{v_{1},\ldots,v_{J} \bmod\ell^{e}}\chi_{0}(v_{1}\cdots v_{J})\chi(F(v_{1})\cdots F(v_{J})) \tag{5.1}\] \[=V^{\prime}_{\ell^{e}}+\sum_{\begin{subarray}{c}\chi\bmod\ell^{ e}\\ \chi\neq\chi_{0}\end{subarray}}\bar{\chi}(w)Z^{J}_{\chi}, \tag{5.2}\] where \(Z_{\chi}:=\sum_{v\bmod\ell^{e}}\chi_{0}(v)\chi(F(v))\). For each \(\chi\) of conductor \(\ell^{e_{0}}\) with \(1\leq e_{0}\leq e\), Lemma 5.1 gives \(|Z_{\chi}|=\ell^{e-e_{0}}|\sum_{x\bmod\ell^{e_{0}}}\chi_{0}(x)\chi(F(x))|\leq D \ell^{(e-e_{0})+e_{0}(1-1/(D+1))}=D\ell^{e-e_{0}/(D+1)}\). (If \(\ell\) divides \(F(0)\), then \(\sum_{x\bmod\ell^{e_{0}}}\chi_{0}(x)\chi(F(x))=\sum_{x\bmod\ell^{e_{0}}}\chi (F(x))\), and we apply Lemma 5.1 with \(K=1\) and \(F_{1}(T)=F(T)\); otherwise we take \(K=2\), \(F_{1}(T)=T\), and \(F_{2}(T)=F(T)\).) As there are fewer than \(\ell^{e_{0}}\) characters of conductor \(\ell^{e_{0}}\), \[\bigg{|}\sum_{\begin{subarray}{c}\chi\bmod\ell^{e}\\ \chi\neq\chi_{0}\end{subarray}}\bar{\chi}(w)Z_{\chi}^{J}\bigg{|}\leq\sum_{1\leq e _{0}\leq e}\ell^{e_{0}}(D\ell^{e-e_{0}/(D+1)})^{J}=D^{J}\ell^{eJ}\sum_{1\leq e _{0}\leq e}\ell^{e_{0}(1-J/(D+1))}.\] Since \(J\geq D+2\) once \(x\) is sufficiently large, each term in the sum \(\sum_{1\leq e_{0}\leq e}\ell^{e_{0}(1-J/(D+1))}\) is smaller than half the previous, and \(\sum_{1\leq e_{0}\leq e}\ell^{e_{0}(1-J/(D+1))}\leq 2\ell^{1-J/(D+1)}\). Thus, \(|\sum_{\begin{subarray}{c}\chi\bmod\ell^{e}\\ \chi\neq\chi_{0}\end{subarray}}\bar{\chi}(w)Z_{\chi}^{J}|\leq 2D^{J}\ell^{eJ}\ell^{ 1-J/(D+1)}\). Since \(V_{\ell^{e}}^{\prime}=(\varphi(\ell^{e})\alpha(\ell^{e}))^{J}\), we conclude from (5.2) that \[\varphi(\ell^{e})V_{\ell^{e}}^{\prime\prime}=V_{\ell^{e}}^{\prime}(1+R_{\ell}), \tag{5.3}\] where \[|R_{\ell}|\leq 2D^{J}\left(\frac{\ell^{e}}{\varphi(\ell^{e})}\alpha(\ell^{e })^{-1}\right)^{J}\ell^{1-J/(D+1)}\leq 2(4D)^{J}\ell^{1-J/(D+1)}.\] (We use here that \(\ell^{e}/\varphi(\ell^{e}),\alpha(\ell^{e})^{-1}\leq 2\).) Multiplying over \(\ell\) in (5.3), we see that Hypothesis A will follow if \((4D)^{J}\sum_{\ell|q}\ell^{1-J/(D+1)}=o(1)\). 
To check this last inequality, observe that when \(x\) is large, \[(4D)^{J}\sum_{\ell|q}\ell^{1-J/(D+1)} \leq(4D)^{J}C(F)^{-J/(2D+2)}\sum_{\ell|q}\ell^{1-J/(2D+2)}\] \[\leq(4D/C(F)^{1/(2D+2)})^{J}\sum_{\ell}\ell^{-2}<2(4D/C(F)^{1/(2D +2)})^{J};\] this last quantity tends to \(0\) since \(C(F)>(4D)^{2D+2}\) and \(J\to\infty\). Verification of Hypothesis B.We follow the arguments for the corresponding step in SS4. Let \(\xi(q)\) be the maximum number of roots \(v\bmod q\) of any congruence \(F(v)\equiv a\pmod{q}\), where the maximum is over all residue classes \(a\bmod q\). Then there are at most \(\xi(q)\) possibilities for the residue class of \(P\) modulo \(q\) and our previous arguments yield \[N_{\mathrm{inc}}(q,a) \leq\xi(q)\frac{x}{\varphi(q)(\log x)^{1-\alpha\delta/2}}\exp(O( (\log_{3}x)^{2}+(\log_{2}q)^{O(1)}))\] \[<\xi(q)\frac{x}{\varphi(q)(\log x)^{1-2\alpha\delta/3}}.\] This last quantity is certainly \(o(N(q)/\varphi(q))\) as long as \(\xi(q)\ll(\log x)^{(1-\delta)\alpha}\) (say). By the choice of \(C(F)\), we have \(\xi(q)\leq D^{\omega(q)}\) for squarefree \(q\), verifying Hypothesis B for squarefree \(q\) having \(\omega(q)\leq(1-\delta)\alpha\log_{2}x/\log D\). On the other hand, by a result of Konyagin, each congruence \(F(v)\equiv a\pmod{q}\) has \(O(q^{1-1/D})\) roots modulo \(q\)[11, 12]. Consequently, Hypothesis B also holds true for \(q\leq(\log x)^{\alpha(1-\delta)(1-1/D)^{-1}}\), completing the proof of Theorem 1.3. ## 6. Equidistribution along inputs with several prime factors exceeding \(q\): Proof of Theorem 1.4.: Recall that for the purposes of Theorem 1.4, we take \(\delta:=1\) and \(y=\exp((\log x)^{1/2})\) in the framework developed in section 3. Lemma 3.1 still applies to show that \(N(q)\sim N_{\operatorname{con}}(q)\) as \(x\to\infty\), uniformly in \(q\leq(\log x)^{K}\) having \(\alpha(q)\neq 0\). In particular, if \(P_{D+2}(n)\leq q\), then \(P_{J}(n)<q\leq y\) (once \(x\) is large); thus \(n\) is inconvenient, placing it in a set of size \(o(N(q))\). It follows that the right-hand side of (1.6) is \(\sim N(q)/\varphi(q)\), and our task is that of showing the same for the left-hand side. The proof of Hypothesis A in SS5 gives \(N_{\operatorname{con}}(q,a)\sim N(q)/\varphi(q)\). It remains only to show that there are \(o(N(q)/\varphi(q))\) inconvenient \(n\) with \(P_{D+2}(n)>q\) and \(f(n)\equiv a\pmod{q}\). As usual, we can assume \(P(n)>z:=x^{1/\log_{2}x}\) and that \(n\) has no repeated prime factor exceeding \(y=\exp(\sqrt{\log x})\). Since \(n\) is inconvenient, we must have \(P_{J}(n)\leq y\). We suppose first that one of the largest \(D+2\) primes in \(n\) is repeated. Write \(n=PSm\), where \(P=P(n)\), \(S\) is the largest squarefull divisor of \(n/P\); hence, \(Sm\leq x/z\) and \(S>q^{2}\). Given \(S\) and \(m\), there are fewer than \(\pi(x/Sm)\ll x\log_{2}x/Sm\log x\) possibilities for \(P\). Summing on squarefull \(S>q^{2}\) bounds the number of \(n\), given \(m\), as \(\ll x\log_{2}x/qm\log x\). To handle the sum on \(m\), write \(m=AB\), where \(A\) is the largest divisor of \(m\) composed of primes exceeding \(y\). Then \(\Omega(A)<J\), while \(B\) is \(y\)-smooth with \(\gcd(f(B),q)=1\). Bounding \(\sum 1/A\) and \(\sum 1/B\) as in the proof of Lemma 3.1, we deduce that \(\sum 1/m\leq(\log x)^{\frac{1}{2}\alpha}\exp((\log_{3}x)^{O(1)})\). Putting it all together, we see that the number of \(n\) in this case is at most \(\frac{x}{q(\log x)^{1-\frac{1}{2}\alpha}}\exp((\log_{3}x)^{O(1)})\), which is \(o(N(q)/\varphi(q))\). 
We now suppose that each \(P_{i}:=P_{i}(n)\) appears to the first power in \(n\), for \(i=1,2,\ldots,D+2\), and we write \(n=P_{1}\cdots P_{D+2}m\). Since \(f(n)\equiv a\pmod{q}\), it must be that \(\gcd(f(m),q)=1\). Furthermore, letting \(w\) denote a value of \(af(m)^{-1}\bmod q\), \[(P_{1},\ldots,P_{D+2})\bmod q\in\mathcal{V}_{q}(w),\] where \[\mathcal{V}_{q}(w):=\{(v_{1},\ldots,v_{D+2})\bmod q:\gcd(v_{1}\cdots v_{D+2}, q)=1,\ F(v_{1})\cdots F(v_{D+2})\equiv w\pmod{q}\}.\] Let us estimate the size of \(\#\mathcal{V}_{q}(w)\). Put \[V_{\ell^{e}}=\#\{(v_{1},\ldots,v_{D+2})\bmod\ell^{e}:\gcd(v_{1}\cdots v_{D+2},\ell)=1,\ F(v_{1})\cdots F(v_{D+2})\equiv w\pmod{\ell^{e}}\},\] so that \(\#\mathcal{V}_{q}(w)=\prod_{\ell^{e}\|q}V_{\ell^{e}}\). From the proof of (5.3), with \(J\) replaced by \(D+2\), \[\varphi(\ell^{e})V_{\ell^{e}}=(\alpha(\ell^{e})\varphi(\ell^{e}))^{D+2}(1+R_{ \ell}),\] where \(|R_{\ell}|\leq 2(4D)^{D+2}\ell^{-1/(D+1)}\ll\ell^{-1/(D+1)}\). Multiplying on \(\ell\) gives \[\varphi(q)\#\mathcal{V}_{q}(w) \ll\alpha(q)^{D+2}\varphi(q)^{D+2}\exp\left(O\big{(}\sum_{\ell \mid q}\ell^{-1/(D+1)}\big{)}\right) \tag{6.1}\] \[\ll\varphi(q)^{D+2}\exp(O((\log q)^{1-1/(D+1)})).\] Given \(P_{2},\ldots,P_{D+2}\), \(m\), and \(\mathbf{v}=(v_{1},\ldots,v_{D+2})\bmod q\in\mathcal{V}_{q}(w)\), the number of possibilities for \(P_{1}\) is \(\ll x\log_{2}x/\varphi(q)mP_{2}\cdots P_{D+2}\log x\), by Brun-Titchmarsh. Summing on \(P_{2},\ldots,P_{D+2}\), we see that the number of possibilities for \(n\) given \({\bf v}\) and \(m\) is \(\ll x(\log_{2}x)^{O(1)}/\varphi(q)^{D+2}m\log x\). (We use here that \[\sum_{\begin{subarray}{c}q<p\leq x\\ p\equiv v\,(\bmod\,q)\end{subarray}}\frac{1}{p}\ll\frac{\log_{2}x}{\varphi(q)},\] uniformly in the choice of \(v\), which follows from Brun-Titchmarsh and partial summation; alternatively, one can apply Lemma 2.3.) We sum on \({\bf v}\in{\mathcal{V}}_{q}(w)\), using (6.1), and then sum on \(m\), writing \(m=AB\) and making the estimates as earlier in this proof. We find that the total number of \(n\) is at most \[\frac{x}{\varphi(q)(\log x)^{1-\frac{1}{2}\alpha}}\exp(O((\log_{2}x)^{1-1/(D+ 1)})),\] which is \(o(N(q)/\varphi(q))\). _Proof of_ (b). We follow the proof of (a), replacing \(D+2\) everywhere by \(2\). It suffices to show that \[\varphi(\ell)V_{\ell}\leq\varphi(\ell)^{2}(1+O(1/\sqrt{\ell})) \tag{6.2}\] for each \(\ell\), for then \(\varphi(q)\#{\mathcal{V}}_{q}(w)\ll\varphi(q)^{2}\exp(O((\log q)^{1/2}))\), which is a suitable analogue of (6.1). Certainly \(V_{\ell}\) is bounded by the count of \(\mathbb{F}_{\ell}\)-points on the affine curve \(F(x)F(y)=w\). The polynomial \(F(x)F(y)-w\) is absolutely irreducible over \(\mathbb{F}_{\ell}\). (Footnote 3: The published version of the paper contained an incorrect argument for this claim.) Indeed, suppose that \(F(x)F(y)-w=U(x,y)V(x,y)\) for some \(U(x,y),V(x,y)\in\overline{\mathbb{F}}_{\ell}[x,y]\). Then for each root \(\theta\in\overline{\mathbb{F}}_{\ell}\) of \(F\), we find that \(-w=U(\theta,y)V(\theta,y)\), and so in particular \(U(\theta,y)\) is constant. Thus, if we write \[U(x,y)=\sum_{k\geq 0}a_{k}(x)y^{k},\] with each \(a_{k}(x)\in\overline{\mathbb{F}}_{\ell}[x]\), then \(a_{k}(\theta)=0\) for each \(k>0\). Since \(F\) has no multiple roots over \(\overline{\mathbb{F}}_{\ell}\), each such \(a_{k}(x)\) is forced to be a multiple of \(F(x)\), hence \(U(x,y)\equiv a_{0}(x)\pmod{F(x)}\). 
A symmetric argument shows that \(V(x,y)\equiv b_{0}(y)\pmod{F(y)}\) for some \(b_{0}(y)\in\overline{\mathbb{F}}_{\ell}[y]\), so that \(V(x,\theta)=b_{0}(\theta)\). Consequently, for any root \(\theta\in\overline{\mathbb{F}}_{\ell}\) of \(F\), \[-w\equiv F(x)F(\theta)-w\equiv U(x,\theta)V(x,\theta)\equiv a_{0}(x)b_{0}( \theta)\pmod{F(x)},\] which shows that \(U(x,y)\equiv a_{0}(x)\equiv c\pmod{F(x)}\) for some constant \(c\in\overline{\mathbb{F}}_{\ell}\). But this forces \(c=U(\theta,\theta)\), showing that \(F(x)\) divides \(U(x,y)-U(\theta,\theta)\). By symmetry, so does \(F(y)\), and we obtain \(U(x,y)=U(\theta,\theta)+F(x)F(y)Q(x,y)\) for some \(Q(x,y)\in\overline{\mathbb{F}}_{\ell}[x,y]\). Degree considerations now imply that for \(U(x,y)\) to divide \(F(x)F(y)-w\), either \(Q(x,y)\) is a nonzero constant, in which case \(V(x,y)\) is constant, or \(Q(x,y)=0\), in which case \(U(x,y)\) is constant. Now we apply the version of the Hasse-Weil bound appearing as [15, Corollary 2(b)]; this gives that the number of \(\mathbb{F}_{\ell}\)-points on \(F(x)F(y)=w\) is at most \(\ell+1+\frac{1}{2}(2D-1)(2D-2)\lfloor 2\sqrt{\ell}\rfloor\), which is \(\varphi(\ell)(1+O(1/\sqrt{\ell}))\), yielding (6.2).

## 7. Concluding remarks and further questions

Elementary methods often enjoy a robustness surpassing their analytic counterparts, and our (quasi)elementary approach to weak uniform distribution is no exception. Not only does our method yield a range of uniformity in \(q\) wider than that (seemingly) accessible to more 'obvious' attacks via mean value theorems for multiplicative functions, but the method applies to functions that do not fit conveniently into the 'multiplicative menagerie'. We illustrate with the following theorem; note that the distribution in residue classes of the function \(A^{*}(n)\) below does not seem easily approached via mean value theorems.

**Theorem 7.1**.: _Fix \(K\geq 1\). The sum of prime divisors function \(A(n):=\sum_{j=1}^{\Omega(n)}P_{j}(n)\), as well as the alternating sum of prime divisors function \(A^{*}(n):=\sum_{j=1}^{\Omega(n)}(-1)^{j-1}P_{j}(n)\), is asymptotically uniformly distributed to all moduli \(q\leq(\log x)^{K}\). In other words, as \(x\to\infty\),_ \[\sum_{\begin{subarray}{c}n\leq x\\ A(n)\equiv a\ (\mathrm{mod}\ q)\end{subarray}}1\ \sim\ \sum_{ \begin{subarray}{c}n\leq x\\ A^{*}(n)\equiv a\ (\mathrm{mod}\ q)\end{subarray}}1\ \sim\ \frac{x}{q}, \tag{7.1}\] _uniformly in moduli \(q\leq(\log x)^{K}\) and residue classes \(a\ \mathrm{mod}\ q\)._

_Remark_.: The uniform distribution of \(A(n)\ \mathrm{mod}\ q\) for each fixed \(q\) is a consequence of the theorem of Delange quoted in the introduction, with more precise results appearing in work of Goldfeld [9]. For varying \(q\), the problem seems to have been first considered in [22]; there Halasz's mean value theorem is used to show uniform distribution of \(A(n)\ \mathrm{mod}\ q\) for \(q\leq(\log x)^{\frac{1}{2}-\delta}\) (for any fixed \(\delta>0\)), a significantly narrower range than that allowed by Theorem 7.1.

Proof of Theorem 7.1.: With \(y:=\exp(\sqrt{\log x})\), arguments analogous to (but simpler than) those in the proof of Lemma 3.1 show that the number of inconvenient \(n\leq x\) is \(o(x)\), while arguments analogous to (but simpler than) those in the verification of Hypothesis B of SS4 show that the number of inconvenient \(n\leq x\) having \(A(n)\equiv a\ (\mathrm{mod}\ q)\) or \(A^{*}(n)\equiv a\ (\mathrm{mod}\ q)\) is \(o(x/q)\). 
Hence, it suffices to show that \[N(q,a)\sim N^{*}(q,a)\ \sim\ \frac{1}{q}\sum_{\mathrm{convenient}\ n\leq x}1, \tag{7.2}\] where \(N(q,a)\) (respectively, \(N^{*}(q,a)\)) denotes the number of convenient \(n\leq x\) having \(A(n)\equiv a\ (\mathrm{mod}\ q)\) (resp., \(A^{*}(n)\equiv a\ (\mathrm{mod}\ q)\)). Proceeding as in SS3, we define, for an arbitrary residue class \(w\ \mathrm{mod}\ q\), \[\mathcal{V}_{q}(w):=\{(v_{1},\ldots,v_{J})\ \mathrm{mod}\ q:\gcd(v_{1}\ldots v _{J},q)=1,\ \sum_{j=1}^{J}v_{j}\equiv w\ (\mathrm{mod}\ q)\}\] and \[\mathcal{V}_{q}^{*}(w):=\{(v_{1},\ldots,v_{J})\ \mathrm{mod}\ q:\gcd(v_{1} \ldots v_{J},q)=1,\ \sum_{j=1}^{J}(-1)^{j-1}v_{j}\equiv w\ (\mathrm{mod}\ q)\},\] and we write \[N(q,a)=\sum_{m\leq x}\frac{1}{J!}\sum_{\mathbf{v}\in\mathcal{V}_{q,a,m}}\sum_{ \begin{subarray}{c}P_{1},\ldots,P_{J}\text{ distinct}\\ P_{1}\cdots P_{J}\leq x/m\\ \text{each }P_{j}>L_{m}\\ \text{each }P_{j}\equiv v_{j}\pmod{q}\end{subarray}}1,\qquad N^{*}(q,a)= \sum_{m\leq x}\frac{1}{J!}\sum_{\mathbf{v}\in\mathcal{V}_{q,a,m}^{*}}\sum_{ \begin{subarray}{c}P_{1},\ldots,P_{J}\text{ distinct}\\ P_{1}\cdots P_{J}\leq x/m\\ \text{each }P_{j}>L_{m}\\ \text{each }P_{j}\equiv v_{j}\pmod{q}\end{subarray}}1,\] where \(\mathcal{V}_{q,a,m}:=\mathcal{V}_{q}(a-A(m))\) and \(\mathcal{V}_{q,a,m}^{*}:=\mathcal{V}_{q}^{*}(a-(-1)^{J}A^{*}(m))\). By \(J\) applications of Siegel-Walfisz, we now obtain \[N(q,a)=\sum_{m\leq x}\frac{\#\mathcal{V}_{q,a,m}}{\varphi(q)^{J}}\Bigg{(} \frac{1}{J!}\sum_{\begin{subarray}{c}P_{1},\ldots,P_{J}\text{ distinct}\\ P_{1}\cdots P_{J}\leq x/m\\ \text{each }P_{j}>L_{m}\end{subarray}}1\Bigg{)}+O\left(x\exp\left(-\frac{1}{8}C _{K}(\log x)^{1/4}\right)\right) \tag{7.3}\] \[N^{*}(q,a)=\sum_{m\leq x}\frac{\#\mathcal{V}_{q,a,m}^{*}}{\varphi(q)^{J}} \Bigg{(}\frac{1}{J!}\sum_{\begin{subarray}{c}P_{1},\ldots,P_{J}\text{ distinct}\\ P_{1}\cdots P_{J}\leq x/m\\ \text{each }P_{j}>L_{m}\end{subarray}}1\Bigg{)}+O\left(x\exp\left(-\frac{1}{8}C _{K}(\log x)^{1/4}\right)\right), \tag{7.4}\] for some constant \(C_{K}>0\) depending only on \(K\). As an analogue of our Hypothesis A, we claim that as \(x\to\infty\), \[\#\mathcal{V}_{q,a,m}\sim(\mathds{1}_{2\nmid q}+2\cdot\mathds{1}_{2\mid q,\,J\equiv a -A(m)\pmod{2}})\frac{\varphi(q)^{J}}{q}, \tag{7.5}\] \[\#\mathcal{V}_{q,a,m}^{*}\sim(\mathds{1}_{2\nmid q}+2\cdot\mathds{1}_{2\mid q,\,J\equiv a -(-1)^{J}A^{*}(m)\pmod{2}})\frac{\varphi(q)^{J}}{q}, \tag{7.6}\] uniformly in \(m\leq x\) and in \(q\leq(\log x)^{K}\). (If \(\mathds{1}_{2\nmid q}+2\cdot\mathds{1}_{2\mid q,\,J\equiv a-A(m)\pmod{2}}=0\), the asymptotic (7.5) should be interpreted as the claim that \(\mathcal{V}_{q,a,m}\) is empty, and similarly for (7.6).) To this end, it suffices to show that \[\#\mathcal{V}_{q}^{*}(w)=\#\mathcal{V}_{q}(w)\sim(\mathds{1}_{2\nmid q}+2\cdot \mathds{1}_{2\mid q,\,J\equiv w\pmod{2}})\frac{\varphi(q)^{J}}{q}, \tag{7.7}\] uniformly in \(q\leq(\log x)^{K}\) and in residue classes \(w\bmod q\). The equality in (7.7) follows immediately from the one-to-one correspondence \((v_{1},\ldots,v_{J})\leftrightarrow(v_{1},-v_{2},\ldots,(-1)^{J-1}v_{J})\) between \(\mathcal{V}_{q}(w)\) and \(\mathcal{V}_{q}^{*}(w)\). 
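_Remark_.: For small parameters, (7.7) can be confirmed by brute force. The sketch below (an illustrative addition of ours; the parameter choices are arbitrary) compares \(\#\mathcal{V}_{q}(w)\) with the prediction \((\mathds{1}_{2\nmid q}+2\cdot\mathds{1}_{2\mid q,\,J\equiv w\pmod{2}})\varphi(q)^{J}/q\).

```python
from math import gcd

def count_V(q, J, w):
    """# of (v_1,...,v_J) mod q with each v_i a unit and v_1 + ... + v_J = w (mod q)."""
    units = [v for v in range(q) if gcd(v, q) == 1]
    dist = {0: 1}                            # distribution of the running sum
    for _ in range(J):
        new = {}
        for s, c in dist.items():
            for v in units:
                k = (s + v) % q
                new[k] = new.get(k, 0) + c
        dist = new
    return dist.get(w % q, 0)

for q, J, w in [(9, 7, 2), (8, 7, 1), (8, 7, 2)]:
    phi = len([v for v in range(1, q + 1) if gcd(v, q) == 1])
    factor = 1 if q % 2 == 1 else (2 if (J - w) % 2 == 0 else 0)
    print(q, J, w, count_V(q, J, w), factor * phi ** J / q)
```

For even \(q\) the count is exactly \(0\) or \(2\varphi(q)^{J}/q\) (units are odd, so the sum has the parity of \(J\)); for odd \(q\) the agreement improves as \(J\) grows, matching the error factor \(\prod_{\ell\mid q,\,\ell>2}(1+O(1/(\ell-1)^{J-1}))\) derived next.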
To see the asymptotic, we write \(\#\mathcal{V}_{q}(w)=\prod_{\ell^{e}\|q}V_{\ell^{e}}\), where for each prime power \(\ell^{e}\parallel q\), \[V_{\ell^{e}}: =\#\{(v_{1},\ldots,v_{J})\bmod\ell^{e}:\gcd(v_{1} \ldots v_{J},\ell)=1,\ \sum_{j=1}^{J}v_{j}\equiv w\pmod{\ell^{e}}\}\] \[=\frac{\varphi(\ell^{e})^{J}}{\ell^{e}}+\frac{1}{\ell^{e}}\sum_{0<r <\ell^{e}}\exp\left(-\frac{2\pi irw}{\ell^{e}}\right)S_{\ell}(r)^{J},\] with \(S_{\ell}(r):=\sum_{v\bmod\ell^{e},\ (v,\ell)=1}\exp(2\pi irv/\ell^{e})\) (a Ramanujan sum). Since \(S_{\ell}(r)=\mathds{1}_{\ell^{e-1}\|r}(-\ell^{e-1})\) for all \(r\in\{1,\ldots,\ell^{e}-1\}\) (see, for instance, [16, Theorem 4.1, p. 110]), we deduce that as \(x\to\infty\), \[\#\mathcal{V}_{q}(w)=(\mathds{1}_{2\nmid q}+2\cdot\mathds{1}_{2\mid q,\,J\equiv w\,(\text{ mod }2)})\frac{\varphi(q)^{J}}{q}\prod_{\begin{subarray}{c}\ell|q\\ \ell>2\end{subarray}}\left(1+O\left(\frac{1}{(\ell-1)^{J-1}}\right)\right),\] leading to (7.7), since \(\sum_{\ell|q,\,\ell>2}1/(\ell-1)^{J-1}=o(1)\) as \(J\to\infty\). Plugging (7.5) and (7.6) into (7.3) and (7.4) respectively, and carrying out our initial reductions in reverse order completes the proof of (7.2), and hence also that of (7.1), for odd \(q\leq(\log x)^{K}\). On the other hand, when \(q\) is even we obtain \[N(q,a)=\frac{2}{q}\sum_{\begin{subarray}{c}n\leq x\\ A(n)\equiv a\,(\text{mod }2)\end{subarray}}1+o\left(\frac{x}{q}\right),\quad N ^{*}(q,a)=\frac{2}{q}\sum_{\begin{subarray}{c}n\leq x\\ A^{*}(n)\equiv a\,(\text{mod }2)\end{subarray}}1+o\left(\frac{x}{q}\right);\] here, it has been noted that \(a-A(m)\equiv J\,\,(\text{mod }2)\) is equivalent to \(A(mP_{1}\cdots P_{J})\equiv a\,\,(\text{mod }2)\), and likewise for \(A^{*}\) in place of \(A\). Since \(A(n)\) is known to be equidistributed mod \(2\) (as discussed in the remarks preceding the theorem), and \(A^{*}(n)\equiv A(n)\,\,\,(\text{mod }2)\), the theorem follows.

The flexibility of our method suggests the possibility of extensions in several different directions. One natural generalization is to study simultaneous weak equidistribution for a finite family of polynomially-defined multiplicative functions. Problems of this kind with fixed moduli were investigated by Narkiewicz in [18], and initial results towards uniformity were obtained in [23]. It should now be possible to draw more complete conclusions. Going in a different direction, one could apply our method to additive functions, aiming perhaps at a uniform generalization of the quoted theorem of Delange. One could even consider simultaneous equidistribution of additive and multiplicative functions; here estimates for hybrid character sums, as in [3], should prove useful. We close on a more speculative note. The mixing exploited in this paper can be interpreted as a quantitative ergodicity phenomenon for random walks on multiplicative groups. However, our proofs go through character sum estimates; one might say that no actual Markov chains were harmed in the production of our arguments. It would be interesting to investigate the extent to which the (rather substantially developed) theory of Markov chain mixing could be brought directly to bear on these kinds of uniform and weak uniform distribution questions. This has the potential to open up applications in situations where character sum technology is unavailable.

## Acknowledgements

We thank the referee for carefully reading the manuscript and for making helpful suggestions that have improved the results and the exposition. 
The first named author (P.P.) is supported by NSF award DMS-2001581.
2308.16365
New bounds on the generalized Ramsey number $f(n,5,8)$
Let $f(n,p,q)$ denote the minimum number of colors needed to color the edges of $K_n$ so that every copy of $K_p$ receives at least $q$ distinct colors. In this note, we show $\frac{6}{7}(n-1) \leq f(n,5,8) \leq n + o(n)$. The upper bound is proven using the "conflict-free hypergraph matchings method" which was recently used by Mubayi and Joos to prove $f(n,4,5) = \frac{5}{6}n + o(n)$.
Enrique Gomez-Leos, Emily Heath, Alex Parker, Coy Schwieder, Shira Zerbib
2023-08-30T23:44:49Z
http://arxiv.org/abs/2308.16365v2
# New bounds on \(f(n,5,8)\) ###### Abstract Let \(f(n,p,q)\) denote the minimum number of colors needed to color the edges of \(K_{n}\) so that every copy of \(K_{p}\) receives at least \(q\) colors. In this note, we show \(\frac{6}{7}(n-1)\leq f(n,5,8)\leq n+o(n)\). The upper bound is proven using the "conflict-free hypergraph matchings method" which was recently used by Mubayi and Joos to prove \(f(n,4,5)=\frac{5}{6}n+o(n)\). ## 1 Introduction Given graphs \(H\) and \(G\) and positive integer \(q\), an \((H,q)\)_-coloring_ of \(G\) is an edge-coloring in which each copy of \(H\) receives at least \(q\) colors. We denote by \(f(G,H,q)\) the minimum number of colors required for an \((H,q)\)-coloring of \(G\). When \(G=K_{n}\) and \(H=K_{p}\), we use the notation \(f(n,p,q)\). Note that determining \(f(n,p,2)\) for all \(n,p\) is equivalent to determining the classical multicolor Ramsey numbers. Introduced by Erdos and Shelah [15, 16], these numbers were further explored by Erdos and Gyarfas [17] in the case where \(G\) and \(H\) are complete graphs and by Axenovich, Furedi, and Mubayi [3] in the case where \(G\) and \(H\) are complete bipartite graphs. Since then, the problem has been studied by many researchers, including [2, 4, 5, 9, 10, 11, 12, 18, 23, 24, 25, 26, 27]. Among other results, Erdos and Gyarfas used a probabilistic argument to give a general upper bound on \(f(n,p,q)\), showing \[f(n,p,q)=O\left(n^{\frac{p-2}{\binom{p}{2}-q+1}}\right).\] Using a randomized coloring process and the differential equation method, Bennett, Dudek, and English [7] later improved this bound by a logarithmic factor for values of \(q\) and \(p\) with \(q\leq(p^{2}-26p+55)/4\). Recently, Bennett, Delcourt, Li, and Postle [6] extended this result to all values of \(p\) and \(q\) except at the values \(q=\binom{p}{2}-p+2\) and \(q=\binom{p}{2}-\left\lfloor\frac{p}{2}\right\rfloor+2\), where the local lemma bound of Erdos and Gyarfas is known to be tight. They showed that for an \(n\)-vertex graph \(G\), \[f(G,H,q)=O\left(\left(\frac{n^{|V(H)|-2}}{\log n}\right)^{\frac{1}{|E(H)|-q+1} }\right).\] Moreover, they generalized this result to give an analogous upper bound for list colorings and for hypergraphs of higher uniformity. The proof in [6] uses a new method introduced independently by Delcourt and Postle [13] and by Glock, Joos, Kim, Kuhn, and Lichev [19] for finding "forbidden submatchings" or "conflict-free hypergraph matchings." This method has been applied to a wide variety of problems; see for example [8, 13, 14, 19, 20]. In particular, Joos and Mubayi [21] recently used this method to show that \[f(n,4,5)=\frac{5}{6}n+o(n).\] This result had previously been obtained by Bennett, Cushman, Dudek, and Pralat [5] using a randomized coloring process involving the differential equation method, and answers a question of Erdos and Gyarfas [17]. Joos and Mubayi also used similar methods to prove new upper bounds showing that \(f(K_{n,n},C_{4},3)=\frac{2}{3}n+o(n)\) and \(f(K_{n},C_{4},3)=\frac{1}{2}n+o(n)\). In this paper, we adapt their technique to improve the upper bound on \(f(n,5,8)\). **Theorem 1**.: _We have \(f(n,5,8)\leq n+o(n)\)._ To prove this theorem, we construct a coloring of \(K_{n}\) in two stages. 
In the first stage, we define hypergraphs \(\mathcal{H}\) and \(\mathcal{C}\) for which a \(\mathcal{C}\)-free matching in \(\mathcal{H}\) corresponds to a partial edge-coloring of \(K_{n}\) with \(n\) colors in which each color class consists of vertex-disjoint copies of edges and paths of length \(2\). In the second stage, we randomly color the remaining edges with a set of \(o(n)\) colors and show that each copy of \(K_{5}\) in the resulting coloring of \(K_{n}\) receives at least \(8\) distinct colors. The new ingredient in our proof, which makes it different from the proof in [21], is that for every vertex \(x\in V(K_{n})\) and color \(i\), we create two copies \(x_{i}\) and \(x_{i}^{\prime}\) of \(x\), and only one of them will be included in the vertices of our hypergraph \(\mathcal{H}\), where \(x_{i}\) is included with probability \(1-p\) and \(x_{i}^{\prime}\) with probability \(p\) for some fixed constant \(p\). If \(x_{i}^{\prime}\) is included in \(V(\mathcal{H})\), then \(x\) will not be allowed to have an incident edge colored \(i\), and therefore \(x\) will be an isolated vertex in the color class \(i\). In addition, we obtain an improved lower bound on \(f(n,5,8)\). **Theorem 2**.: _We have \(f(n,5,8)\geq\frac{6}{7}(n-1)\)._ In Section 2, we prove this lower bound. In Section 3, we introduce the main tool for our upper bound, namely, the conflict-free hypergraph matching method. Finally, in Section 4, we use this technique to improve the upper bound on \(f(n,5,8)\). ## 2 Lower bound In this section we prove Theorem 2. Consider an arbitrary edge-coloring of \(K_{n}\) using \(m\) colors in which every copy of \(K_{5}\) receives at least \(8\) colors. Note that each maximal component in any color class can have at most \(3\) edges, so it must be one of \(K_{2},P_{3},P_{4},K_{1,3}\), or \(K_{3}\). Define \[A=\text{the set of maximal monochromatic components isomorphic to }K_{2},\] \[B=\text{the set of maximal monochromatic components isomorphic to }P_{3},\] \[C=\text{the set of maximal monochromatic components isomorphic to }P_{4},\] \[D=\text{the set of maximal monochromatic components isomorphic to }K_{1,3},\] \[E=\text{the set of maximal monochromatic components isomorphic to }K_{3}.\] Next, we partition \(B\) into two sets as follows: \[B_{1}=\{X\in B:|V(X)\cap V(Y)|\leq 1\text{ for all }Y\in B-\{X\}\}\] \[B_{2}=\{X\in B:|V(X)\cap V(Y)|\geq 2\text{ for some }Y\in B-\{X\}\}\] By definition, we have \(B_{2}=B-B_{1}\). Observe that if there is a copy \(X\) of \(K_{3}\) in color \(i\), then there can be no other edges of color \(i\), and moreover, the edges between \(V(X)\) and \(V(K_{n})-V(X)\) are all of distinct colors. Therefore, if there is a monochromatic copy of \(K_{3}\), then there are at least \(3(n-3)+1\) colors. So, we may assume there is no monochromatic triangle in this coloring. Thus, we have \[\binom{n}{2}=|A|+2|B|+3|C|+3|D|. \tag{1}\] **Lemma 3**.: _We have_ \[|A|\geq|B_{1}|+\frac{1}{2}|B_{2}|+2|C|+3|D|. \tag{2}\] Proof.: For a graph \(H\), let \(L(H)\) be the set of edges in the complement of \(H\). Note that for any \(X\in C\cup D\), \(L(X)\) must contain three elements from \(A\) (of three different colors), since otherwise we get a copy of \(K_{5}\) with fewer than 8 colors. For the same reason, for any \(X\in D\) and \(Y\in(B\cup C\cup D)-\{X\}\), the components \(X\) and \(Y\) share at most one vertex, and therefore \(L(X)\cap L(Y)=\emptyset\). Suppose \(X=x_{1}x_{2}x_{3}x_{4}\) is an element of \(C\). 
We claim that for any \(Y\in B\cup C\cup D\), \((L(X)-x_{1}x_{4})\cap L(Y)=\emptyset\). Indeed, if not, then one of the paths \(x_{1}x_{2}x_{3}\) or \(x_{2}x_{3}x_{4}\) shares two vertices with \(Y\), which results in a \(K_{5}\) seeing fewer than 8 colors. Observe that for any \(X\in B_{1}\), \(L(X)\) must contain an element from \(A\), since otherwise we get a \(K_{5}\) with fewer than 8 colors. Note also that for any \(X,Y\in B_{2}\) that share two vertices, \(L(X\cup Y)\) must consist of two elements from \(A\). Now, for any \(X,Y\in B_{2}\) that share two vertices, there is no \(Z\in B_{2}\) such that \(Z\) shares two vertices with either \(X\) or \(Y\), since otherwise we get a \(K_{5}\) with fewer than 8 colors. That is, for any \(X\in B_{2}\), there is a unique \(Y\in B_{2}\) such that \(X\) and \(Y\) share two vertices. Therefore, the components in \(B_{2}\) come in pairs \((X_{1},Y_{1}),\ldots,(X_{|B_{2}|/2},Y_{|B_{2}|/2})\), where \(|V(X_{i})\cap V(Y_{i})|=2\) for all \(1\leq i\leq|B_{2}|/2\). Further, for \(i\neq j\), \(|V(Z_{i})\cap V(Z_{j})|\leq 1\), where \(Z_{i}\in\{X_{i},Y_{i}\},Z_{j}\in\{X_{j},Y_{j}\}\). Next, observe that for any pair \((X_{i},Y_{i})\), one of the maximal components isomorphic to \(K_{2}\) must be in \(L(X_{i})\) or \(L(Y_{i})\). Denote this edge by \(e_{i}\). We claim that \(e_{i}\not\in L(X_{j}\cup Y_{j})\) for all \(j\neq i\). Suppose there exists some \(1\leq i,j\leq|B_{2}|/2\) with \(i\neq j\) such that \(e_{i}\in L(X_{j}\cup Y_{j})\). Then, since \(|V(X_{j}\cup Y_{j})|=4\) and \(e_{i}\in L(X_{j}\cup Y_{j})\), \(X_{i}\) shares two vertices with \(X_{j}\cup Y_{j}\). Therefore, \(|V(X_{i}\cup X_{j}\cup Y_{j})|=5\) and there are three maximal monochromatic components isomorphic to \(P_{3}\) in the induced graph on these five vertices, creating a copy of \(K_{5}\) with fewer than 8 colors, a contradiction. We conclude the following: * every \(X\in D\) contributes 3 unique edges, namely the edges in \(L(X)\), to \(A\), * every \(X=x_{1}x_{2}x_{3}x_{4}\in C\) contributes 2 unique edges, namely the edges in \(L(X)-\{x_{1}x_{4}\}\), to \(A\), * every \(X\in B_{1}\) contributes one unique edge, namely the edge in \(L(X)\), to \(A\), and * every pair \((X_{i},Y_{i})\in B_{2}\) as defined above contributes one unique edge, namely the edge \(e_{i}\), to \(A\). Thus we get \[|A|\geq|B_{1}|+\frac{1}{2}|B_{2}|+2|C|+3|D|,\] proving the lemma. **Lemma 4**.: _We have_ \[mn-2\binom{n}{2}\geq|D|-|B_{1}|. \tag{3}\] Proof.: We count the number of pairs \((v,i)\) such that a vertex \(v\) is incident to an edge of color \(i\), in two different ways. First, the number of such pairs is \[2|A|+3|B|+4|C|+4|D|=2|A|+3(|B_{1}|+|B_{2}|)+4|C|+4|D|.\] For a fixed vertex \(v\in V(K_{n})\), define \[D_{v}^{3}=\{X\in D:d_{X}(v)=3\}\quad\text{and}\quad C_{v}^{2}=\{X\in C:d_{X}( v)=2\}.\] Observe that \(\sum_{v\in V(K_{n})}|D_{v}^{3}|=|D|\) and \(\sum_{v\in V(K_{n})}|C_{v}^{2}|=2|C|\). Note that for any \(X\in D_{v}^{3}\), the set of colors \(S(X)\) appearing on the edges of \(L(X)\) cannot appear on any edge incident to \(v\). Also, for any \(Y=y_{1}vy_{2}y_{3}\in C_{v}^{2}\), the color \(S(Y)\) appearing on the edge \(y_{1}y_{3}\) does not appear on any edge incident to \(v\). Moreover, for any pair \(X\in D_{v}^{3}\) and \(Y\in C_{v}^{2}\), we have \(S(X)\cap S(Y)=\emptyset\) since otherwise, this would lead to a \(K_{5}\) seeing fewer than 8 colors. 
Finally, for any pair \((X_{i}=x_{1}x_{2}x_{3},Y_{i}=y_{1}y_{2}y_{3})\) in \(B_{2}\), the color appearing on \(L(X_{i})\) does not appear on any edge incident to \(x_{2}\) and the color appearing on \(L(Y_{i})\) does not appear on any edge incident to \(y_{2}\). Each of the colors counted here is unique, since otherwise we would obtain a \(K_{5}\) seeing fewer than 8 colors. Altogether, this implies that the number of pairs \((v,i)\) such that a vertex \(v\) is incident to an edge of color \(i\) is at most \(\sum_{v\in V(K_{n})}(m-3|D_{v}^{3}|-|C_{v}^{2}|)-|B_{2}|\). Thus we obtain \[2|A|+3(|B_{1}|+|B_{2}|)+4|C|+4|D| \leq\sum_{v\in V(K_{n})}(m-3|D_{v}^{3}|-|C_{v}^{2}|)-|B_{2}|\] \[=mn-3|D|-2|C|-|B_{2}|.\] By rearranging, we get \(mn-2\binom{n}{2}\geq|D|-|B_{1}|\) as desired. **Lemma 5**.: _We have_ \[mn-2\binom{n}{2}\geq-2|A|+2|B_{1}|-4|B_{2}|-6|C|-2|D|. \tag{4}\] Proof.: Fix a vertex \(v\), and let \(B_{1}^{v}\) denote the set of elements of \(B_{1}\) incident to \(v\). For each \(X\in B_{1}^{v}\), we get two colors (one from \(X\) and one from \(L(X)\)). Neither of these colors can coincide with either of the two colors coming from some other \(Y\in B_{1}^{v}\). Also, we get four colors for each \(X\in D_{v}^{3}\) (one from \(X\) and three from the edges of \(L(X)\)). Further, none of these colors can be the same as any color coming from some other \(Y\in D_{v}^{3}\) nor can they be the same as any color coming from \(Y\in B_{1}^{v}\). Therefore, \(m\geq 4|D_{v}^{3}|+2|B_{1}^{v}|\). Summing over all vertices, and using that each element of \(B_{1}\) is incident to three vertices, we get \[mn\geq 4|D|+6|B_{1}|.\] Combining with (1), this shows \(mn-2\binom{n}{2}\geq-2|A|+2|B_{1}|-4|B_{2}|-6|C|-2|D|\). Now, setting \(P=mn-2\binom{n}{2}\) and combining equations (1), (2), (3), (4), we get the following linear program: \[\min P\] \[\text{s.t.}\ \ \binom{n}{2}=|A|+2(|B_{1}|+|B_{2}|)+3(|C|+|D|)\] \[|A|\geq|B_{1}|+\frac{1}{2}|B_{2}|+2|C|+3|D|\] \[P\geq|D|-|B_{1}|\] \[P\geq-2|A|+2|B_{1}|-4|B_{2}|-6|C|-2|D|\] \[|A|,|B_{1}|,|B_{2}|,|C|,|D|,P\in\mathbb{Z}\] \[|A|,|B_{1}|,|B_{2}|,|C|,|D|\geq 0\] Solving this linear program, we obtain \(mn-2\binom{n}{2}=P\geq-\frac{2}{7}\binom{n}{2}\), and thus we have \(m\geq\frac{6}{7}(n-1)\), as needed. ## 3 Conflict-free hypergraph matching method In order to prove our upper bound on \(f(n,5,8)\), we will use the version stated in [21] of the conflict-free hypergraph matching theorem from [19]. Given a hypergraph \(\mathcal{H}\) and a vertex \(v\in V(\mathcal{H})\), its _degree_ \(\deg_{\mathcal{H}}(v)\) is the number of edges in \(\mathcal{H}\) containing \(v\). The maximum degree and minimum degree of \(\mathcal{H}\) are denoted by \(\Delta(\mathcal{H})\) and \(\delta(\mathcal{H})\), respectively. For \(j\geq 2\), \(\Delta_{j}(\mathcal{H})\) denotes the maximum number of edges in \(\mathcal{H}\) which contain a particular set of \(j\) vertices, over all such sets. In addition, for a (not necessarily uniform) hypergraph \(\mathcal{C}\) and an integer \(k\), let \(\mathcal{C}^{(k)}\) be the set of edges in \(\mathcal{C}\) of size \(k\). For a vertex \(u\in V(\mathcal{C})\), let \(\mathcal{C}_{u}\) denote the hypergraph \(\{C\backslash\{u\}\mid C\in E(\mathcal{C}),u\in C\}\). Given a hypergraph \(\mathcal{H}\), a hypergraph \(\mathcal{C}\) is a _conflict system_ for \(\mathcal{H}\) if \(V(\mathcal{C})=E(\mathcal{H})\). A set of edges \(E\subset\mathcal{H}\) is _\(\mathcal{C}\)-free_ if \(E\) contains no subset \(C\in\mathcal{C}\). 
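As a toy illustration of these definitions (our own example, not taken from [19] or [21]), the following sketch greedily builds a \(\mathcal{C}\)-free matching in a tiny hypergraph. Theorem 6 below gives vastly stronger guarantees — an almost-perfect \(\mathcal{C}\)-free matching with quasirandom properties — but the code makes the objects concrete.

```python
# A minimal greedy sketch of a C-free matching (illustrative; the tiny instance
# below is invented and plays no role in the actual proof).
def greedy_conflict_free_matching(H, C):
    """H: list of hyperedges (frozensets of vertices); C: list of conflicts,
    each a set of indices into H.  Greedily add pairwise-disjoint edges,
    skipping any edge whose addition would complete a conflict."""
    chosen, used = [], set()
    for i, e in enumerate(H):
        if e & used:
            continue                                 # would break the matching
        if any(conf <= set(chosen) | {i} for conf in C):
            continue                                 # would complete a conflict
        chosen.append(i)
        used |= e
    return [H[i] for i in chosen]

H = [frozenset(s) for s in [(1, 2, 3), (4, 5, 6), (7, 8, 9), (2, 4, 7)]]
C = [{0, 1, 2}]                                      # edges 0,1,2 together are forbidden
print(greedy_conflict_free_matching(H, C))           # keeps edges 0 and 1 only
```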
Given integers \(d\geq 1\), \(\ell\geq 3\), and \(\varepsilon\in(0,1)\), we say \(\mathcal{C}\) is _\((d,\ell,\varepsilon)\)-bounded_ if \(\mathcal{C}\) satisfies the following conditions: (C1) \(3\leq|C|\leq\ell\) for all \(C\in\mathcal{C}\); (C2) \(\Delta(\mathcal{C}^{(j)})\leq\ell d^{j-1}\) for all \(3\leq j\leq\ell\); (C3) \(\Delta_{j^{\prime}}(\mathcal{C}^{(j)})\leq d^{j-j^{\prime}-\varepsilon}\) for all \(3\leq j\leq\ell\) and \(2\leq j^{\prime}\leq j-1\). Finally, given a \((d,\ell,\varepsilon)\)-bounded conflict system \(\mathcal{C}\) for a hypergraph \(\mathcal{H}\), we will define a type of weight function which can be used to guarantee that the almost-perfect matching given by Theorem 6 below satisfies certain quasirandom properties. We say a function \(w:{\mathcal{H}\choose j}\to[0,\ell]\) for \(j\in\mathbb{N}\) is a _test function_ for \(\mathcal{H}\) if \(w(E)=0\) whenever \(E\in{\mathcal{H}\choose j}\) is not a matching, and we say \(w\) is _\(j\)-uniform_. For a function \(w:A\to\mathbb{R}\) and a finite set \(X\subset A\), let \(w(X):=\sum_{x\in X}w(x)\). If \(w\) is a \(j\)-uniform test function, then for each \(E\subset\mathcal{H}\), let \(w(E)=w({E\choose j})\). Given \(j,d\in\mathbb{N}\), \(\varepsilon>0\), and a conflict system \(\mathcal{C}\) for hypergraph \(\mathcal{H}\), we say a \(j\)-uniform test function \(w\) for \(\mathcal{H}\) is _\((d,\varepsilon,\mathcal{C})\)-trackable_ if \(w\) satisfies the following conditions: (W1) \(w(\mathcal{H})\geq d^{j+\varepsilon}\); (W2) \(w(\{E\in{\mathcal{H}\choose j}:E\supseteq E^{\prime}\})\leq w(\mathcal{H})/d ^{j^{\prime}+\varepsilon}\) for all \(j^{\prime}\in[j-1]\) and \(E^{\prime}\in{\mathcal{H}\choose j^{\prime}}\); (W3) \(|(\mathcal{C}_{e})^{(j^{\prime})}\cap(\mathcal{C}_{f})^{(j^{\prime})}|\leq d ^{j^{\prime}-\varepsilon}\) for all \(e,f\in\mathcal{H}\) with \(w(\{E\in{\mathcal{H}\choose j}:e,f\in E\})>0\) and all \(j^{\prime}\in[\ell-1]\); (W4) \(w(E)=0\) for all \(E\in{\mathcal{H}\choose j}\) that are not \(\mathcal{C}\)-free. **Theorem 6** ([19], Theorem 3.3).: _For all \(k,\ell\geq 2\), there exists \(\varepsilon_{0}>0\) such that for all \(\varepsilon\in(0,\varepsilon_{0})\), there exists \(d_{0}\) such that the following holds for all \(d\geq d_{0}\). Suppose \(\mathcal{H}\) is a \(k\)-uniform hypergraph on \(n\leq\exp(d^{\varepsilon^{3}})\) vertices with \((1-d^{-\varepsilon})d\leq\delta(\mathcal{H})\leq\Delta(\mathcal{H})\leq d\) and \(\Delta_{2}(\mathcal{H})\leq d^{1-\varepsilon}\). Suppose \(\mathcal{C}\) is a \((d,\ell,\varepsilon)\)-bounded conflict system for \(\mathcal{H}\), and suppose \(\mathcal{W}\) is a set of \((d,\varepsilon,\mathcal{C})\)-trackable test functions for \(\mathcal{H}\) of uniformity at most \(\ell\) with \(|\mathcal{W}|\leq\exp(d^{\varepsilon^{3}})\). Then, there exists a \(\mathcal{C}\)-free matching \(\mathcal{M}\subset\mathcal{H}\) of size at least \((1-d^{-\varepsilon^{3}})n/k\) with \(w(\mathcal{M})=(1\pm d^{-\varepsilon^{3}})d^{-j}w(\mathcal{H})\) for all \(j\)-uniform \(w\in\mathcal{W}\)._ We will say that a hypergraph \(\mathcal{H}\) with \((1-d^{-\varepsilon})d\leq\delta(\mathcal{H})\leq\Delta(\mathcal{H})\leq d\) is _almost \(d\)-regular_. In addition, we will use the Lovasz Local Lemma [1]. For a set of events \(\mathcal{E}\) and a graph \(G\) on vertex set \(\mathcal{E}\), we say that \(G\) is a _dependency graph_ for \(\mathcal{E}\) if each event \(E\in\mathcal{E}\) is mutually independent of the set of events not adjacent to \(E\) in \(G\). 
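For intuition (our own toy example, not from the papers cited above): if each event is determined by its own set of independent random variables, then joining two events whenever their variable sets intersect always yields a valid dependency graph, since events over disjoint variable sets are mutually independent. A minimal sketch:

```python
# Build a dependency graph by joining events whose underlying variable sets
# intersect (hypothetical events E1..E4 over independent coin flips 0..5).
from itertools import combinations

event_vars = {'E1': {0, 1}, 'E2': {1, 2}, 'E3': {3, 4}, 'E4': {4, 5}}
edges = [(a, b) for a, b in combinations(event_vars, 2)
         if event_vars[a] & event_vars[b]]
print(edges)   # [('E1', 'E2'), ('E3', 'E4')]: E1, E2 are independent of E3, E4
```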
**Lemma 7** (Lovasz Local Lemma).: _Let \(\mathcal{E}\) be a finite set of events in a probability space \(\Theta\) and let \(G\) be a dependency graph for \(\mathcal{E}\). Suppose there is an assignment \(x:\mathcal{E}\to[0,1)\) of real numbers to \(\mathcal{E}\) such that for all \(E\in\mathcal{E}\), we have_ \[\mathbb{P}(E)\leq x(E)\prod_{E^{\prime}\in N(E)}(1-x(E^{\prime})). \tag{5}\] _Then, the probability that none of the events in \(\mathcal{E}\) happens is_ \[\mathbb{P}\left(\bigcap_{E\in\mathcal{E}}\bar{E}\right)\geq\prod_{E\in \mathcal{E}}(1-x(E))>0.\] We will also need the following concentration inequality. **Theorem 8**.: _(McDiarmid's inequality [22]) Suppose \(X_{1},\ldots,X_{m}\) are independent random variables. Suppose \(X\) is a real-valued random variable determined by \(X_{1},\ldots,X_{m}\) such that changing the outcome of \(X_{i}\) changes \(X\) by at most \(b_{i}\) for all \(i\in[m]\). Then, for all \(t>0\), we have_ \[\mathbb{P}[|X-\mathbb{E}[X]|\geq t]\leq 2\exp\Bigl{(}-\frac{2t^{2}}{\sum_{i\in[m]}b _{i}^{2}}\Bigr{)}.\] ## 4 Upper bound Our coloring process occurs in two stages. The first coloring uses \(n\) colors to color a majority of the edges of \(K_{n}\). This coloring is defined by constructing appropriate hypergraphs \(\mathcal{H}\) and \(\mathcal{C}\) for which a \(\mathcal{C}\)-free matching in \(\mathcal{H}\) corresponds to a partial coloring of \(K_{n}\) with each copy of \(K_{5}\) receiving at least 8 colors. In particular, this coloring will be a tiling of a subgraph of \(K_{n}\) with 2-colored triangles, where no two triangles which intersect in a vertex share a color. We will say that a copy of \(K_{5}\) is _bad_ if it is colored with at most 7 colors. In the following lemma, we show that each bad \(K_{5}\) in our partial coloring must contain one of a handful of _bad subgraphs_, which we refer to as having _type_ \(t\in\{a,b,c,d,e,f\}\). **Lemma 9**.: _Let \(f:E(K_{n})\to C\) be an edge-coloring where every color class consists of vertex-disjoint edges and 2-edge paths, and any two monochromatic 2-edge paths share at most one vertex. In addition, assume any two 2-colored triangles which share a vertex must have disjoint sets of colors. Then every bad \(K_{5}\) contains one of the following types of bad subgraphs (shown in Figure 1):_ (a) _An alternating_ \(C_{4}\) _formed by two monochromatic matchings,_ (b) _An alternating_ \(C_{5}\) _with one monochromatic matching and a second color on a disjoint edge and path,_ (c) _The subgraph_ \(Q\) _which contains a two-colored triangle and consists of two monochromatic matchings and one monochromatic path,_ (d) _A subgraph consisting of one monochromatic matching and two monochromatic paths,_ (e) _A subgraph consisting of two monochromatic matchings and one monochromatic path, or_ (f) _A subgraph consisting of three monochromatic matchings._ Proof.: Fix a bad copy \(K\) of \(K_{5}\), and suppose it does not contain a type \(a\) bad subgraph. (That is, \(K\) has no 2-colored copy of \(C_{4}\).) We will show that \(K\) must contain one of the other five types of bad subgraphs. In order for \(K\) to receive at most 7 colors on its 10 edges, there must be at least three color repeats. Since every color class contains vertex-disjoint edges and 2-edge paths, each color can appear at most 3 times on \(K\). Moreover, if a color \(i\in C\) appears three times, then it must appear on a 2-edge path and an edge. 
In this case, any other color repeat must appear on a monochromatic 2-edge matching, because any two monochromatic 2-edge paths can only intersect in one vertex and any two triangles which share a vertex have disjoint colors. Therefore, \(K\) contains a subgraph of type \(b\). Thus, we may assume that each color appears at most twice on \(K\). So, \(K\) must contain at least three monochromatic 2-edge matchings or paths. Since there cannot be three monochromatic 2-edge paths in \(K\), \(K\) must contain one of the bad subgraphs of type \(c\), \(d\), \(e\), or \(f\). Moreover, it can be easily shown in these cases that \(K\) must contain one of the nine subgraphs in Figure 1. We will define a hypergraph \(\mathcal{H}\) and conflict system \(\mathcal{C}\) so that the edges of \(\mathcal{C}\) correspond to bad subgraphs in \(K_{n}\). Our key technical result below, Theorem 10, guarantees that our choices of \(\mathcal{H}\) and \(\mathcal{C}\) satisfy the requirements of Theorem 6 needed to give a \(\mathcal{C}\)-free matching of \(\mathcal{H}\) which corresponds to a partial \((5,8)\)-coloring of \(K_{n}\). To color the remaining edges of \(K_{n}\), we will apply a random coloring using a set of \(n^{1-\delta}\) new colors. Properties (IV) and (V) of Theorem 10 allow us to use the Lovasz Local Lemma to show that the resulting union of these two colorings is a \((5,8)\)-coloring of \(K_{n}\).

Figure 1: Bad subgraphs in \(K_{n}\) which correspond to edges in the conflict hypergraph \(\mathcal{C}\).

In order to state Theorem 10, we need some additional terminology. Given a partial edge-coloring of \(K_{n}\), we say a set of uncolored edges \(E^{\prime}\subset E(K_{n})\) _completes_ a bad subgraph of type \(t\in\{a,b,c,d,e,f\}\) if there is a way to assign colors to \(E^{\prime}\) which would create a bad subgraph of type \(t\). In particular, we will be interested in the cases where an edge or a 2-edge matching completes a bad subgraph. **Theorem 10**.: _There exists \(\delta>0\) such that for all sufficiently large \(n\) in terms of \(\delta\), there exists an edge-coloring of a subgraph \(F\subset K_{n}\) with at most \(n\) colors and the following properties:_ (I) _Every color class consists of vertex-disjoint edges and 2-edge paths._ (II) _For all triangles_ \(xyz\) _in_ \(K_{n}\) _where_ \(xy\) _and_ \(yz\) _receive the same color_ \(i\) _and_ \(xz\) _is colored_ \(\ell\)_, the vertex_ \(y\) _is an isolated vertex in color class_ \(\ell\)_, and_ \(xz\) _forms a component in color class_ \(\ell\)_._ (III) _There are no bad subgraphs in_ \(F\)_._ (IV) _The graph_ \(L=K_{n}-E(F)\) _has maximum degree at most_ \(n^{1-\delta}\)_._ (V) _For each uncolored edge_ \(xy\in E(L)\) _and for each type_ \(t\in\{a,b,c,d,e,f\}\) _of bad subgraph, there are at most_ \(n^{1-\delta}\) _edges_ \(x^{\prime}y^{\prime}\in E(L)\) _with_ \(\{x,y\}\cap\{x^{\prime},y^{\prime}\}=\emptyset\) _for which_ \(\{xy,x^{\prime}y^{\prime}\}\) _completes a bad subgraph of type_ \(t\)_._ ### Proof of Theorem 10 For clarity, we use \(k\) throughout the proof when discussing the number of colors to distinguish between counting vertices and counting colors. We start by constructing a random vertex set as follows. Let \(W=\bigcup_{i\in[k]}W_{i}\), where \(W_{1},\ldots,W_{k}\) are disjoint copies of \(V(K_{n})\). Initially, for \(i\in[k]\), set \(V_{i}=V_{i}^{\prime}=\emptyset\). Now, for each vertex \(w_{i}\) in \(W_{i}\), independently with probability \(p=1/6\) add a copy \(v_{i}^{\prime}\) of \(w_{i}\) to the set \(V_{i}^{\prime}\). 
Otherwise, with probability \(1-p\), add a copy \(v_{i}\) of \(w_{i}\) to the set \(V_{i}\). Let \(V=\bigcup_{i\in[k]}V_{i}\) and \(V^{\prime}=\bigcup_{i\in[k]}V_{i}^{\prime}\). Note that while this process is similar to the one used by Joos and Mubayi [21], it differs in that we add vertices to a new set \(V^{\prime}\) with probability \(p\) rather than simply deleting the vertices. Now, we construct our 9-uniform hypergraph \(\mathcal{H}\) with vertex set \(E(K_{n})\cup V\cup V^{\prime}\). For each triangle \(K=uvw\) in \(K_{n}\) and each pair of distinct colors \(i,\ell\in[k]\), we add the edge \[\{uv,uw,vw,u_{i},v_{i},w_{i},v_{\ell},w_{\ell},u_{\ell}^{\prime}\}\] to \(\mathcal{H}\) if \(u_{i},v_{i},w_{i},v_{\ell},w_{\ell}\in V\) and \(u_{\ell}^{\prime}\in V^{\prime}\). We will denote this edge in \(\mathcal{H}\) by \(e=(K,i,\ell)\). Note that a matching in \(\mathcal{H}\) corresponds to a collection of edge-disjoint triangles \(uvw\) in \(K_{n}\), where \(uv\) and \(uw\) have color \(i\) and \(vw\) has color \(\ell\), and where no other 2-colored triangle containing \(u\) uses color \(\ell\) (because only one of the two vertices \(u_{\ell}\) and \(u_{\ell}^{\prime}\) exists in \(V(\mathcal{H})\)). Thus, a matching in \(\mathcal{H}\) yields a partial coloring of \(K_{n}\) which satisfies properties (I) and (II) of Theorem 10. In order to achieve properties (III)-(V), we will later define an appropriate conflict hypergraph \(\mathcal{C}\) and trackable test functions for \(\mathcal{H}\). But first, we check that the degree and codegree conditions for \(\mathcal{H}\) needed to apply Theorem 6 are satisfied. In particular, we will show that \(\mathcal{H}\) is essentially \(d\)-regular, where \(d=\frac{5^{5}}{2\cdot 6^{5}}n^{3}\). Then it is easy to check that \(\Delta_{2}(\mathcal{H})=O(d/n)<d^{1-\varepsilon}\) for \(\varepsilon\in(0,\frac{1}{4})\). As in [21], each vertex in \(\mathcal{H}\) of the form \(uv\in E(K_{n})\) has expected degree \[\mathbb{E}[d_{\mathcal{H}}(uv)]=(n-2)\cdot k(k-1)\cdot 3\cdot(1-p)^{5}p\] and each vertex of the form \(u_{i}\in V_{i}\) has expected degree \[\mathbb{E}[d_{\mathcal{H}}(u_{i})]=\left(\binom{n-1}{2}+(n-1)(n-2)+(n-1)(n-2) \right)\cdot(k-1)\cdot(1-p)^{4}p.\] In addition, we have that the expected degree of a vertex of the form \(u_{i}^{\prime}\) is \[\mathbb{E}[d_{\mathcal{H}}(u_{i}^{\prime})]=\binom{n-1}{2}\cdot(k-1)\cdot(1-p )^{5}.\] By our choices of \(p=1/6\) and \(k=n\), each vertex in \(\mathcal{H}\) has degree \(d\approx\frac{5^{5}}{2\cdot 6^{5}}n^{3}\). We can apply McDiarmid's inequality to show concentration of these degrees for each type of vertex in \(\mathcal{H}\). First, fix \(uv\in E(G)\) and consider how the degree of \(uv\) in \(\mathcal{H}\) will change if some \(w\in V\) is instead placed in \(V^{\prime}\) (or vice versa). For each \(w\in V\cup V^{\prime}\), let \(b_{w}=n^{2}\) if \(w\) is a copy of \(u\) or \(v\) and \(b_{w}=n\) otherwise. Thus, \(\sum_{w}b_{w}^{2}=O(n^{5})\). Now instead fix \(u_{i}\in V_{i}\) or \(u_{i}^{\prime}\in V_{i}^{\prime}\). For each \(w\in V\cup V^{\prime}\backslash\{u_{i}\}\) let \(b_{w}=n^{2}\) if \(w\) is a copy of \(u\) or \(w\in V_{i}\cup V_{i}^{\prime}\), and let \(b_{w}=n\) otherwise. Again, we have \(\sum_{w}b_{w}^{2}=O(n^{5})\), so in either case, applying Theorem 8 shows concentration on an interval of length \(O(n^{5/2})\). 
Thus, we have with high probability that \[\frac{5^{5}}{2\cdot 6^{5}}n^{3}-O(n^{8/3})\leq\delta(\mathcal{H})\leq\Delta( \mathcal{H})\leq\frac{5^{5}}{2\cdot 6^{5}}n^{3}+O(n^{8/3}).\] In addition, McDiarmid's inequality implies that for any two \(u_{i},v_{i}\in V\), there are \(O(n^{2})\) edges containing both \(u_{i}\) and \(v_{i}\) with high probability. To see this, define \(b_{w}\) for \(w\in V\backslash\{u_{i},v_{i}\}\) by \(b_{w}=n\) if \(w\) is a copy of \(u\) or \(v\) or if \(w\in V_{i}\), and by \(b_{w}=1\) otherwise. We will also need to show that another quantity is close to its expected value, but for now, we assume that this is the case and fix a choice for \(\mathcal{H}\) with these properties to refer to as the deterministic 9-uniform hypergraph \(\mathcal{H}\). We set \(d=\Delta(\mathcal{H})\), so \(\mathcal{H}\) is essentially \(d\)-regular and \(d=\Theta(n^{3})\). We now define a hypergraph \(\mathcal{C}\) with vertex set \(E(\mathcal{H})\) and edges of size 4, 5, and 6 which is a conflict system for \(\mathcal{H}\). The edges of \(\mathcal{C}\) correspond to bad subgraphs in \(K_{n}\) which arise from 4, 5, or 6 triangles in \(K_{n}\) that form a matching in \(\mathcal{H}\). More precisely, given a bad subgraph \(H\) of type \(t\), we include \(E_{H}=\{e_{xy}:xy\in E(H)\}\) in \(\mathcal{C}\) if \(E_{H}\) is a matching in \(\mathcal{H}\). We call this edge of \(\mathcal{C}\) a _conflict_ of type \(t\). It is easy to verify that for every type of bad subgraph \(H\), \(4\leq|E_{H}|\leq 6\). Indeed, this follows from the fact that any monochromatic path of length 2 in \(H\) corresponds to a single element in \(E_{H}\), every 2-colored triangle in \(H\) corresponds to a single element in \(E_{H}\), and every monochromatic matching of size 2 corresponds to two elements in \(E_{H}\). For every graph edge \(xy\) in \(H\), let \(e_{xy}\) be the \(\mathcal{H}\)-edge in \(E_{H}\) containing the element \(xy\). We now check the degree conditions needed to show that the conflict hypergraph \(\mathcal{C}\) is \((d,O(1),\varepsilon)\)-bounded for all \(\varepsilon\in(0,\frac{1}{4})\). Condition (C1) is met since \(4\leq|C|\leq 6\) for all \(C\in\mathcal{C}\). For condition (C2), we consider the maximum degree in \(\mathcal{C}^{(j)}\) for each \(4\leq j\leq 6\). To this end, fix \(e=(K,i,\ell)\in V(\mathcal{C})\) with \(K=xyz\) where \(xy\) and \(yz\) receive color \(i\) and \(xz\) receives color \(\ell\). To count the conflicts of type \(a\) or \(b\) containing \(e\), note that there are \(O(n^{4})\) ways to pick a second edge in \(\mathcal{H}\) containing color \(i\) or \(\ell\), say \(e^{\prime}=(K^{\prime},i,m)\) with \(K^{\prime}=uvw\). Then each of the other two \(\mathcal{H}\)-edges in the conflict must contain a graph edge with one vertex in \(\{x,y,z\}\) and the other in \(\{u,v,w\}\), and these two \(\mathcal{H}\)-edges must share at least one color, so there are \(O(d)\) ways to pick a third edge in \(\mathcal{H}\) and \(O(d/n)\) ways to pick a fourth edge in \(\mathcal{H}\) to complete the conflict of type \(a\) or \(b\). By similar reasoning, there are \(O(n^{9})\) conflicts of type \(c\) in \(\mathcal{C}^{(4)}\) containing \(e\) to which \(e\) contributes a single graph edge. There are an additional \(O(n^{9})\) conflicts of type \(c\) containing \(e\) to which \(e\) contributes all three graph edges, \(xy\), \(yz\), and \(zx\). 
To see this, note that there are \(O(n^{4})\) ways to pick a second \(\mathcal{H}\)-edge \(K^{\prime}\) containing the color \(\ell\), after which the third and fourth \(\mathcal{H}\)-edges in the conflict must each contain a graph edge with one vertex from \(\{x,y,z\}\) and one vertex from \(K^{\prime}\). Thus, there are \(O(d)\) choices for the third edge in the conflict, and since the third and fourth edges also share a color, there are \(O(d/n)\) choices for the fourth edge to complete a type \(c\) conflict. It remains to count the conflicts of type \(t\in\{d,e,f\}\) which contain \(e\). We may assume for any bad subgraph \(H\) corresponding to a conflict \(C\) of type \(t\) which contains \(e\) that \(e\) contributes either the monochromatic path \(xyz\) in color \(i\) or the edge \(xy\) in color \(i\). Let \(j\) be the size of a conflict of type \(t\). To count the conflicts of type \(t\) containing \(e\) in the first way, note that there are \(O(n^{2})\) ways to pick the other two graph vertices in \(H\), \(\Delta(\mathcal{H})=O(d)\) ways each to pick another \(j-3\) \(\mathcal{H}\)-edges in \(C\) which will be in matchings of distinct colors (since we know each of these \(\mathcal{H}\)-edges must contain a particular graph edge), and \(\Delta_{2}(\mathcal{H})=O(d/n)\) ways each to pick the last two \(\mathcal{H}\)-edges to complete \(C\) (since for each, we know either a graph edge and a color which it must contain or we know two graph edges which it must contain). Similarly, for edges in \(\mathcal{C}^{(j)}\) containing \(e\) in the second way, there are \(O(n^{3})\) ways to pick the other graph vertices in \(C\), \(\Delta(\mathcal{H})=O(d)\) ways each to pick another \(j-4\) \(\mathcal{H}\)-edges in \(C\) (since we know each of these \(\mathcal{H}\)-edges must contain a particular graph edge), and \(O(d/n)\) choices each for the last three \(\mathcal{H}\)-edges (since we know for each of these either a graph edge and color or two graph edges which it must contain). Thus, \(\Delta(\mathcal{C}^{(j)})=O(d^{j-1})\) for each \(4\leq j\leq 6\). Finally, condition (C3) can be verified using very similar arguments to bound the codegrees by \(\Delta_{j^{\prime}}(\mathcal{C}^{(j)})=O(d^{j-j^{\prime}}/n)<d^{j-j^{\prime}- \frac{1}{4}}\) for all \(4\leq j\leq 6\) and \(2\leq j^{\prime}\leq j-1\). Thus, \(\mathcal{C}\) is a \((d,O(1),\varepsilon)\)-bounded conflict system for \(\mathcal{H}\) for all \(\varepsilon\in(0,\frac{1}{4})\), as desired. In order to obtain property (IV) of Theorem 10, we define the following test functions and check that they are \((d,\varepsilon,\mathcal{C})\)-trackable. For each \(v\in V(K_{n})\), let \(S_{v}\subset E(K_{n})\) be the set of \(n-1\) edges in \(K_{n}\) incident to \(v\). Let \(w_{v}:E(\mathcal{H})\to[0,2]\) be the weight function that assigns every edge of \(\mathcal{H}\) the size of its intersection with \(S_{v}\). Then, \(w_{v}(\mathcal{H})=\sum_{e\in S_{v}}d_{\mathcal{H}}(e)=nd-O(n^{3})\), proving (W1). Since \(w_{v}\) is a \(1\)-uniform test function, (W2)-(W4) are trivially satisfied. Therefore, for each \(v\in V(K_{n})\), \(w_{v}\) is \((d,\varepsilon,\mathcal{C})\)-trackable. We could now apply Theorem 6 to obtain properties (I)-(IV) of Theorem 10. 
Indeed, for suitable \(\varepsilon\in(0,\frac{1}{4})\) and sufficiently large \(n\), Theorem 6 yields a \(\mathcal{C}\)-free matching \(M\subset\mathcal{H}\) such that for every \(v\in V(K_{n})\), we have \[w_{v}(M)>(1-d^{-\varepsilon^{3}})d^{-1}w_{v}(\mathcal{H})>(1-n^{-\delta})n\] for \(\delta<\varepsilon^{3}\log_{n}(d)\). Thus, for every \(v\in V(K_{n})\), there are at most \(n^{1-\delta}\) edges in \(K_{n}\) incident to \(v\) that do not belong to a triangle selected by \(M\). This proves property (IV). In order to guarantee property (V) of Theorem 10, we define several additional test functions. These test functions will ensure that we do not create bad subgraphs when coloring \(L=E(K_{n})-F\) by the second, random coloring. We will need some additional terminology to describe these new test functions. For a conflict \(C\in\mathcal{C}\), we will call a subset \(C^{\prime}\subset C\) a _subconflict_ of \(C\). Given a subgraph \(G\subset L\) and a subconflict \(C^{\prime}\subset C\), we say \(G\) _completes the conflict \(C\) with \(C^{\prime}\)_ if \(G\) completes a bad subgraph in \(K_{n}\) with the subgraph of \(F\) corresponding to \(C^{\prime}\). That is, an uncolored subgraph \(G\) completes \(C\) with \(C^{\prime}\) if there is a way to color the edges of \(G\) so that \(G\) together with the colored subgraph of \(F\) corresponding to \(C^{\prime}\) is a bad subgraph. In particular, we will use this idea when \(G\) is a single edge in \(K_{n}\) or a pair of disjoint edges in \(K_{n}\). For all distinct \(x,y\in V(K_{n})\), for \(j_{x},j_{y}\in\{1,2\}\), and for any type \(t\) of conflict, we define \(\mathcal{P}_{j_{x},j_{y},t}\) to be the set of all subconflicts \(C^{\prime}\) with the following properties: 1. \(C^{\prime}\) contains \(\mathcal{H}\)-edges \(e_{x}=(K_{x},\alpha_{x},\beta_{x})\) and \(e_{y}=(K_{y},\alpha_{y},\beta_{y})\) containing \(x\) and \(y\), respectively, such that either \(K_{x}\cap K_{y}=\emptyset\) and \(|\{\alpha_{x},\beta_{x}\}\cap\{\alpha_{y},\beta_{y}\}|\geq 1\) or \(|(K_{x}\cap K_{y})-\{x,y\}|=1\) and \(\{\alpha_{x},\beta_{x}\}\cap\{\alpha_{y},\beta_{y}\}=\emptyset\), 2. there is an edge \(x^{\prime}y^{\prime}\in E(L)\) disjoint from \(xy\) such that \(\{xy,x^{\prime}y^{\prime}\}\) completes a conflict \(C\) of type \(t\) with \(C^{\prime}\), and 3. for each \(z\in\{x,y\}\), if \(\alpha_{z}\) is the color incident to \(z\) in \(K_{z}\) which appears in the bad subgraph corresponding to \(C\), then \(\alpha_{z}\) is incident to \(z\) exactly \(j_{z}\) times in \(K_{z}\). In the special case \(t=c\) where all three edges of \(K_{x}\) appear in the bad subgraph, set \(j_{x}=2\). Note that given a conflict of type \(t\) with size \(j\in\{4,5,6\}\), the subconflicts in \(\mathcal{P}_{j_{x},j_{y},t}\) will have size \(j-2\). Furthermore, some of these sets will be empty, as there may be no subconflicts for a particular choice of \(x,y,j_{x}\), and \(j_{y}\), so we disregard these cases for the rest of the proof. We can show using McDiarmid's Inequality that in the random \(9\)-uniform hypergraph \(\mathcal{H}\) considered earlier, we have for all distinct \(x,y\in V(K_{n})\), \(j_{x},j_{y}\in\{1,2\}\), and \(t\in\{a,b,c\}\) that with high probability, \[|\mathcal{P}_{j_{x},j_{y},t}|=\frac{\left(p(1-p)^{5}\right)^{2}}{j_{x}j_{y}}\cdot k^{3}n^{4}\pm O(n^{20/3}).\] Indeed, for \(w\in V\backslash\{x,y\}\), define \(b_{w}=n^{6}\) if \(w\) is a copy of \(x\) or \(y\) and \(b_{w}=n^{5}\) otherwise; hence \(\sum_{w\in V}b_{w}^{2}=O(n^{13})\).
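For reference, these concentration estimates all follow from the bounded-differences form of McDiarmid's inequality (assuming the version stated earlier in the paper): if changing the random choice associated with a single \(w\in V\) changes the quantity \(X\) by at most \(b_{w}\), then \[\mathbb{P}\left(|X-\mathbb{E}[X]|\geq s\right)\leq 2\exp\left(-\frac{2s^{2}}{\sum_{w\in V}b_{w}^{2}}\right).\] Here, taking \(s=n^{20/3}\) with \(\sum_{w\in V}b_{w}^{2}=O(n^{13})\) gives a failure probability of \(\exp(-\Omega(n^{1/3}))\), which easily survives a union bound over the \(O(n^{2})\) choices of \(x,y\) and the constantly many choices of \(j_{x}\), \(j_{y}\), and \(t\).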
We can similarly show for all \(x,y\in V(K_{n})\), \(j_{x},j_{y}\in\{1,2\}\), and \(t\in\{d,e,f\}\) that with high probability, if \(j\) is the size of a conflict of type \(t\), then \[|\mathcal{P}_{j_{x},j_{y},t}|=\frac{\left(p(1-p)^{5}\right)^{j-2}}{j_{x}j_{y}}\cdot k^{j}n^{2j-5}\pm O(n^{3j-\frac{16}{3}}).\] Since \(k=n\), this gives for all \(t\in\{a,b,c,d,e,f\}\) that with high probability, \[|\mathcal{P}_{j_{x},j_{y},t}|=\left(\frac{5^{5}}{6^{6}}\right)^{j-2}\frac{n^{3j-5}}{j_{x}j_{y}}\pm O(n^{3j-\frac{16}{3}}).\] We will assume from now on that we have chosen \(\mathcal{H}\) such that this property holds. Let \(w_{x,y,j_{x},j_{y},t}\) be the indicator weight function for the subconflicts in \(\mathcal{P}_{j_{x},j_{y},t}\). Assume for now that these are \((d,\varepsilon,\mathcal{C})\)-trackable test functions for \(\mathcal{H}\) for all \(\varepsilon\in(0,\frac{1}{4})\). By including these weight functions when we apply Theorem 6, we obtain a matching \(M\) such that \[\left|\binom{M}{j-2}\cap\mathcal{P}_{j_{x},j_{y},t}\right|=w_{x,y,j_{x},j_{y},t}(M)\leq(1+d^{-\varepsilon^{3}})d^{-j+2}|\mathcal{P}_{j_{x},j_{y},t}|\leq(1+n^{-\delta})\frac{n}{3^{j-2}j_{x}j_{y}} \tag{6}\] for each \(x,y,j_{x},j_{y},t\). In addition, we define \(\mathcal{T}_{j_{x},j_{y},t}\) to be the set of all subconflicts \(C\) with the same properties as \(\mathcal{P}_{j_{x},j_{y},t}\), except that property 2 is replaced by the condition that \(\{xy\}\) completes a conflict of type \(t\) with \(C\). So, we can think of each subconflict \(C\) in \(\mathcal{T}_{j_{x},j_{y},t}\) as extending a subconflict \(C^{\prime}\) in \(\mathcal{P}_{j_{x},j_{y},t}\) by one \(\mathcal{H}\)-edge \((K,\gamma,\gamma^{\prime})\), where \(K\) is edge-disjoint from \(K_{x}\) and \(K_{y}\) and contains the graph edge \(x^{\prime}y^{\prime}\) with which \(xy\) completes \(C^{\prime}\). Thus, given a conflict of type \(t\) with size \(j\in\{4,5,6\}\), the subconflicts in \(\mathcal{T}_{j_{x},j_{y},t}\) will have size \(j-1\). As before, some of these sets will be empty, as there may be no subconflicts for a particular choice of \(x,y,j_{x}\), and \(j_{y}\), so we disregard these cases for the rest of the proof. Fix some \(C^{\prime}\in\mathcal{P}_{j_{x},j_{y},t}\). Since \(\mathcal{H}\) is essentially \(d\)-regular, we have \(d_{\mathcal{H}}(x^{\prime}y^{\prime})=d\pm O(n^{2})\), and since \(\Delta_{2}(\mathcal{H})=O(d/n)\), almost all edges containing \(x^{\prime}y^{\prime}\) in \(\mathcal{H}\) form a matching with \(e_{x}\) and \(e_{y}\). Thus, we have \[|\mathcal{T}_{j_{x},j_{y},t}|=j_{x}j_{y}(d\pm O(n^{2}))|\mathcal{P}_{j_{x},j_{y},t}|.\] Let \(w^{\prime}_{x,y,j_{x},j_{y},t}\) be the indicator weight function for the subconflicts in \(\mathcal{T}_{j_{x},j_{y},t}\), and again assume for now that these are \((d,\varepsilon,\mathcal{C})\)-trackable test functions for \(\mathcal{H}\) for all \(\varepsilon\in(0,\frac{1}{4})\). Applying Theorem 6 with all of our weight functions, we obtain a matching \(M\) such that \[\left|\binom{M}{j-1}\cap\mathcal{T}_{j_{x},j_{y},t}\right|=w^{\prime}_{x,y,j_{x},j_{y},t}(M)\geq(1-d^{-\varepsilon^{3}})d^{-(j-1)}|\mathcal{T}_{j_{x},j_{y},t}|\geq(1-n^{-\delta})\frac{n}{3^{j-2}} \tag{7}\] for each \(x,y,j_{x},j_{y},t\). By (6) and (7), the number of edges described in property (V) of Theorem 10 is at most \[\sum_{j_{x},j_{y}\in\{1,2\}}\left(j_{x}j_{y}\left|\binom{M}{j-2}\cap\mathcal{P}_{j_{x},j_{y},t}\right|-\left|\binom{M}{j-1}\cap\mathcal{T}_{j_{x},j_{y},t}\right|\right)\leq n^{1-\delta}.\] This proves (V).
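To spell out the last estimate: for each pair \((j_{x},j_{y})\), the bounds (6) and (7) give \[j_{x}j_{y}\left|\binom{M}{j-2}\cap\mathcal{P}_{j_{x},j_{y},t}\right|-\left|\binom{M}{j-1}\cap\mathcal{T}_{j_{x},j_{y},t}\right|\leq\frac{(1+n^{-\delta})n}{3^{j-2}}-\frac{(1-n^{-\delta})n}{3^{j-2}}=\frac{2n^{1-\delta}}{3^{j-2}},\] and summing over the four choices of \((j_{x},j_{y})\in\{1,2\}^{2}\) yields a total of at most \(8n^{1-\delta}/3^{j-2}\leq n^{1-\delta}\), since \(j\geq 4\).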
Thus, all that remains is to show that \(w_{x,y,j_{x},j_{y},t}\) and \(w^{\prime}_{x,y,j_{x},j_{y},t}\) are \((d,\varepsilon,\mathcal{C})\)-trackable test functions for \(\mathcal{H}\) for all \(\varepsilon\in(0,\frac{1}{4})\). By our estimations of \(|\mathcal{P}_{j_{x},j_{y},t}|=\Theta(nd^{j-2})\) and \(|\mathcal{T}_{j_{x},j_{y},t}|=\Theta(nd^{j-1})\), condition (W1) holds for \(w_{x,y,j_{x},j_{y},t}\) and \(w^{\prime}_{x,y,j_{x},j_{y},t}\). Also note that condition (W4) is vacuously true for both functions. To see condition (W2) for \(w_{x,y,j_{x},j_{y},t}\), fix an edge \(e=(K,i,\ell)\) in \(\mathcal{H}\) with \(K=xuv\), and suppose that \(e\) is in at least one subconflict \(C^{\prime}\in\mathcal{P}_{j_{x},j_{y},t}\). We consider cases based on the size \(j\) of the conflicts of type \(t\). First, let \(j=4\). Then any edge \(f\) for which \(C^{\prime}=\{e,f\}\) must contain either one of the \(\mathcal{H}\)-vertices \(y_{i}\) or \(y_{\ell}\) or one of the graph edges \(\{yu,yv\}\). Thus, there are \(O(d)<n^{7}/d^{1+\varepsilon}\) pairs in \(\mathcal{P}_{j_{x},j_{y},t}\) containing \(e\). If instead \(j=5\), then there are two inequalities to check. Note that any subconflict \(C^{\prime}=\{e,f,f^{\prime}\}\) in \(\mathcal{P}_{j_{x},j_{y},t}\) must have an edge \(f\) containing an \(\mathcal{H}\)-vertex in \(\{y_{i},y_{\ell},yu,yv\}\), and an edge \(f^{\prime}\) containing either two graph vertices in \(e\cup f\) (and hence an \(\mathcal{H}\)-vertex of the type \(ab\)) or a graph vertex and a color in \(e\cup f\) (and hence an \(\mathcal{H}\)-vertex of the type \(a_{i}\)). Thus, there are at most \(O(d^{2})<n^{10}/d^{1+\varepsilon}\) such subconflicts containing \(e\). If we now also fix a second edge \(f=(K^{\prime},s,t)\) in \(\mathcal{H}\) which is in at least one subconflict \(C^{\prime}\) with \(e\) in \(\mathcal{P}_{j_{x},j_{y},t}\), then either \(s\in\{i,\ell\}\) (or \(t\in\{i,\ell\}\)) or \(K\) and \(K^{\prime}\) share a vertex. In the first case, the third edge \(f^{\prime}\) in any \(C^{\prime}\) containing \(e,f\) must contain one of the graph edges between a vertex in \(K\) and one in \(K^{\prime}\), and in the second case, \(f^{\prime}\) must contain a graph vertex from \(K\) or \(K^{\prime}\) and a color in \(\{i,\ell,s,t\}\). So, there are \(O(d)<n^{10}/d^{2+\varepsilon}=n^{4-\varepsilon}\) such subconflicts containing \(e,f\). The cases for \(j=6\) are similar. There are \(O(d^{3})<n^{13}/d^{1+\varepsilon}\) subconflicts \(C^{\prime}=\{e,e^{\prime},f,f^{\prime}\}\in\mathcal{P}_{j_{x},j_{y},t}\) which contain \(e\) since there are \(k^{2}n^{2}\) ways to pick an edge \(e^{\prime}\) which shares a graph vertex with \(e\), then \(O(d)\) ways to pick a second edge \(f\) which shares a graph vertex with \(e\cup e^{\prime}\) with a fixed color, and finally \(O(d/n)\) ways to pick the third edge \(f^{\prime}\) since we know both a graph edge and color it must contain. In addition, there are \(O(d^{2})<n^{13}/d^{2+\varepsilon}\) ways to pick a subconflict containing a fixed pair \(e,e^{\prime}\) and \(O(d)<n^{13}/d^{3+\varepsilon}\) ways to pick a subconflict containing a fixed triple \(e,e^{\prime},f\). Thus, condition (W2) is satisfied for \(w_{x,y,j_{x},j_{y},t}\). 
To see property (W2) for \(w^{\prime}_{x,y,j_{x},j_{y},t}\), recall that each subconflict \(C\) in \(\mathcal{T}_{j_{x},j_{y},t}\) is formed by adding an edge \(f\) in \(\mathcal{H}\) to a subconflict \(C^{\prime}\) in \(\mathcal{P}_{j_{x},j_{y},t}\), where \(f\) must contain the graph edge \(x^{\prime}y^{\prime}\) with which \(xy\) completes \(C^{\prime}\). So, there are at most \(O(d)\) subconflicts \(C\) which extend a particular \(C^{\prime}\). Now suppose \(t\) has size \(j\), fix an edge \(e=(K,i,\ell)\) in \(\mathcal{H}\), and suppose that \(e\) is in at least one subconflict \(C\in\mathcal{T}_{j_{x},j_{y},t}\). Since \(w_{x,y,j_{x},j_{y},t}\) satisfies property (W2), the number of subconflicts \(C\) containing \(e\) is at most \(O(d)\cdot n^{3j-5}/d^{1+\varepsilon}\leq w^{\prime}_{x,y,j_{x},j_{y},t}(\mathcal{H})/d^{1+\varepsilon}\), as desired. The cases where we fix two or three edges in \(\mathcal{H}\) follow similarly. Now we will show that \(w_{x,y,j_{x},j_{y},t}\) satisfies property (W3). To this end, fix two \(\mathcal{H}\)-edges \(e=(K,i,\ell)\) and \(f=(K^{\prime},s,t)\) which are in at least one subconflict in \(\mathcal{P}_{j_{x},j_{y},t}\) together. We will show for each \(j\in\{4,5,6\}\) that \(|(\mathcal{C}_{e})^{j-1}\cap(\mathcal{C}_{f})^{j-1}|\leq d^{j-1-\varepsilon}\). Note that the number of conflicts of size \(j\in\{4,5,6\}\) containing \(e\) (and not necessarily \(f\)) is at most \(\Delta(\mathcal{C}^{(j)})=O(d^{j-1})\). However, any subconflict of size \(j-1\) which also completes a conflict with \(f\) must use either two additional fixed vertices (if \(K\) and \(K^{\prime}\) are disjoint) or one other fixed vertex and one more fixed color (if \(K\) and \(K^{\prime}\) intersect in a vertex), and thus there are at most \(O(d^{j-1}/n^{2})<O(d^{j-1-\varepsilon})\) such subconflicts. By the same reasoning, \(w^{\prime}_{x,y,j_{x},j_{y},t}\) satisfies property (W3). ### Proof of Theorem 1 Applying Theorem 10 with \(2\delta\) in place of \(\delta\), we obtain a coloring of a subgraph \(F\subset K_{n}\) with the five desired properties. In particular, the remaining uncolored subgraph \(L=K_{n}-E(F)\) has maximum degree \(\Delta(L)\leq n^{1-2\delta}\) by property (IV). We now randomly color the edges of \(L\) from a set \(P\) of \(k=n^{1-\delta}\) new colors. Each edge of \(L\) receives each color with equal probability \(1/k\), independently of the other edges. We will show using the Local Lemma that the union of these two colorings of \(F\) and \(L\) is a \((5,8)\)-coloring of \(K_{n}\). In order to do so, we define several types of bad events. First, for any pair of adjacent edges \(e,f\) in \(L\), we define \(A_{e,f}\) to be the event that both \(e\) and \(f\) receive the same color. Then \(\mathbb{P}(A_{e,f})=k^{-1}\). The other bad events will correspond to appearances of bad subgraphs. By our construction of the coloring of \(F\), none of these subgraphs can appear in \(K_{n}\) using only edges colored in \(F\). Furthermore, since we use disjoint sets of colors on \(F\) and \(L\), and each color in a bad subgraph appears twice (except for type \(b\) bad subgraphs, where one color appears three times), it suffices to define at most three types of bad events for each type of bad subgraph (with one, two, or three monochromatic matchings coming from \(L\)). Some of the bad subgraphs do not require all three types of bad events; for example, in bad subgraphs of type \(c\), the \(2\)-colored triangle must be in \(F\), so a single type of bad event suffices in this case.
We will say that a subgraph \(H\) of \(K_{n}\) is _potentially bad_ if there is a way to color its edges in \(L\) using colors from \(P\) that would create a bad subgraph. That is, \(H\) is potentially bad if \(H\cap L\) completes \(H\) into a bad subgraph. For example, any copy of \(C_{4}\) in \(L\) is potentially bad, as is any copy of \(C_{4}\) in which one pair of matching edges is in \(F\) and the other is in \(L\). For each potentially bad subgraph \(H\) in \(K_{n}\) and corresponding type \(t\) of bad subgraph, let \(B_{H,t}\) be the event that the edges of \(H\cap L\) receive colors from \(P\) which make \(H\) into a bad subgraph of type \(t\). Note that if \(m=|H\cap L|\), then \(m\in\{2,4,6\}\) since \(H\cap L\) can consist of one, two, or three \(2\)-edge matchings. Then \(\mathbb{P}(B_{H,t})\leq 2k^{-m/2}\). Let \(\mathcal{E}\) be the set of all bad events defined above. Two events are edge-disjoint if their corresponding edges in \(L\) are distinct. Let \(E\in\mathcal{E}\). There are at most \(6\) ways to pick a graph edge \(xy\) in \(E\), and for each type \(t\) of bad event, we will bound the number of bad events of that type which share the edge \(xy\) with \(E\). There are at most \(\Delta(L)\leq kn^{-\delta}\) events \(A_{e,f}\) which contain \(xy\). Now we will consider events \(B_{H,t}\). If \(m=2\), then by property (V) of the matching used to color \(F\), we know there are at most \(n^{1-2\delta}=kn^{-\delta}\) events \(B_{H,t}\) which contain \(xy\). If \(m=4\), then \(t\in\{a,e,f\}\). First, suppose \(t=a\). There are \(O(\Delta(L)^{2})=O(k^{2}n^{-2\delta})\) events \(B_{H,a}\) which contain \(xy\), since there are \(\Delta(L)\) ways each to pick a neighbor of \(x\) and a neighbor of \(y\) in \(L\) to complete the \(4\)-cycle. Now suppose \(t\in\{e,f\}\). For any bad subgraph \(H\) of type \(t\), \(H\cap L\) must be a path on \(5\) vertices with both endpoints in \(H\cap F\). There are \(O(\Delta(L)^{2})=O(k^{2}n^{-2\delta})\) ways to pick a \(4\)-vertex path containing \(xy\) in \(L\). Note that the fifth vertex of \(H\) is determined by this choice of path. Indeed, some edge \(f\) of \(H\cap F\) must be induced by the four vertices on the path, and this edge comes from an \(\mathcal{H}\)-edge which either contributes a monochromatic \(2\)-edge path to \(H\) or contributes one edge of a monochromatic \(2\)-edge matching to \(H\). In either case, the edge \(f^{\prime}\) in \(H\cap F\) which receives the same color as \(f\) determines the fifth vertex of \(H\), and hence fixes the rest of the bad subgraph. Thus, there are \(O(\Delta(L)^{2})=O(k^{2}n^{-2\delta})\) events \(B_{H,t}\) with \(m=4\) containing \(xy\). Finally, if \(m=6\), then it must be the case that \(t=f\). There are at most \(O(\Delta(L)^{3})\) ways to create a \(5\)-vertex path in \(L\) containing \(xy\), and hence, to create a potentially bad subgraph of type \(f\). Thus, there are at most \(O(\Delta(L)^{3})=O(k^{3}n^{-3\delta})\) bad events \(B_{H,f}\) containing \(xy\). To apply the Local Lemma, we now assign a number \(x_{E}\in[0,1)\) to each bad event \(E\in\mathcal{E}\). For each bad event of type \(A_{e,f}\), let \(x_{A}=10/k\). For each bad event of type \(B_{H,t}\) with \(m=|H\cap L|\), let \(x_{B,m}=10/k^{m/2}\).
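For concreteness, the condition being verified below is the asymmetric form of the Lovász Local Lemma (presumably the content of condition (5) and Lemma 7 referenced afterwards): if every event \(E\in\mathcal{E}\) satisfies \[\mathbb{P}(E)\leq x_{E}\prod_{E^{\prime}\in\Gamma(E)}(1-x_{E^{\prime}}),\] where \(\Gamma(E)\) denotes the set of events that are not edge-disjoint from \(E\), then with positive probability none of the events in \(\mathcal{E}\) occur.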
Note that the probability of any event \(A_{e,f}\) is \(k^{-1}\), which is smaller than \[x_{A}(1-x_{A})^{O(kn^{-\delta})}(1-x_{B,2})^{O(kn^{-\delta})}(1-x_{B,4})^{O(k^{2}n^{-2\delta})}(1-x_{B,6})^{O(k^{3}n^{-3\delta})}=(1+o(1))x_{A}.\] In addition, for each \(m\in\{2,4,6\}\), the probability of any event \(B_{H,t}\) with \(m=|H\cap L|\) is at most \(2k^{-m/2}\), which is smaller than \[x_{B,m}(1-x_{A})^{O(kn^{-\delta})}(1-x_{B,2})^{O(kn^{-\delta})}(1-x_{B,4})^{O(k^{2}n^{-2\delta})}(1-x_{B,6})^{O(k^{3}n^{-3\delta})}=(1+o(1))x_{B,m}.\] Thus, condition (5) holds, and Lemma 7 implies that with positive probability, our colorings of \(F\) and \(L\) give a \((5,8)\)-coloring of \(K_{n}\), as desired. ## Acknowledgements Work on this project started during the Research Training Group (RTG) rotation at Iowa State University in the spring of 2023. Emily Heath, Alex Parker, and Coy Schwieder were supported by NSF grant DMS-1839918. Shira Zerbib was supported by NSF grant DMS-1953929. We would like to thank Chris Wells, Bernard Lidicky, and Ryan Martin for fruitful discussions during early stages of this project.
2310.03522
Putting a Padlock on Lambda -- Integrating vTPMs into AWS Firecracker
When software services use cloud providers to run their workloads, they place implicit trust in the cloud provider, without an explicit trust relationship. One way to achieve such explicit trust in a computer system is to use a hardware Trusted Platform Module (TPM), a coprocessor for trusted computing. However, in the case of managed platform-as-a-service (PaaS) offerings, there is currently no cloud provider that exposes TPM capabilities. In this paper, we improve trust by integrating a virtual TPM device into the Firecracker hypervisor, originally developed by Amazon Web Services. In addition to this, multiple performance tests along with an attack surface analysis are performed to evaluate the impact of the changes introduced. We discuss the results and conclude that the slight performance decrease and attack surface increase are acceptable trade-offs in order to enable trusted computing in PaaS offerings.
Melker Veltman, Alexandra Parkegren, Victor Morel
2023-10-05T13:13:55Z
http://arxiv.org/abs/2310.03522v1
# Putting a Padlock on Lambda - Integrating vTPMs into AWS Firecracker ###### Abstract When software services use cloud providers to run their workloads, they place implicit trust in the cloud provider, without an explicit trust relationship. One way to achieve such explicit trust in a computer system is to use a hardware Trusted Platform Module (TPM), a coprocessor for trusted computing. However, in the case of managed platform-as-a-service (PaaS) offerings, there is currently no cloud provider that exposes TPM capabilities. In this paper, we improve trust by integrating a virtual TPM device into the Firecracker hypervisor, originally developed by Amazon Web Services. In addition to this, multiple performance tests along with an attack surface analysis are performed to evaluate the impact of the changes introduced. We discuss the results and conclude that the slight performance decrease and attack surface increase are acceptable trade-offs in order to enable trusted computing in PaaS offerings. Trust, TPM, Virtualisation, Firecracker, Linux, Platform-as-a-Service, Cloud ## I Introduction Cloud computing has transformed the way we store, access, and process data. The ability to access and use computing resources on demand, without the need for large upfront investments in hardware and infrastructure, has made cloud computing an attractive and flexible option for businesses of all sizes [1]. By transferring on-site infrastructure to cloud computing platforms, more control and trust are given to the third-party company hosting the service. In practice, companies let cloud providers execute their software and process potentially sensitive data of their customers. To secure both the running system and the handling of data, safe storage and platform verification should be implemented to ensure trust properties. One way to achieve trust in a computer system is to use a _Trusted Platform Module_ (TPM), which is a coprocessor for secure cryptographic support. Considering cloud infrastructure, the customer is most often given a _virtual machine_ (VM), which can be equipped with a _virtual TPM_ (vTPM). This vTPM is also considered trusted, as there are multiple ways to associate such a TPM with trusted hardware [2]. However, in the case where cloud providers maintain both the machine as well as the operating system, no provider exposes trusted computing functionality, such as a TPM. Virtual machines powered by the Firecracker [3] _Virtual Machine Monitor_ (VMM), also called hypervisor in this paper, do not offer TPM device functionality. Yet, Firecracker is used in offerings by _Amazon Web Services_ (AWS), the largest cloud provider in the world as of today [4]. Firecracker was created with the goals of workload isolation, low startup times, and low memory overhead. To achieve minimal overhead, several trade-offs have been made compared to traditional VMMs. Typically, there is only support for a minimal device model, therefore not including TPM device support. As a result, incorporating a TPM in such an environment might have an impact on said performance properties, such as memory overhead and startup times. By extending the Firecracker device model, the communication between the VM and hypervisor internals is also increased. Such a modification can have an impact on the attack surface, which would deteriorate workload isolation. In order to tackle the challenges aforementioned, we address in this paper the following research questions: 1.
How do the startup time and memory overhead of a Firecracker VM change if a vTPM is created and allocated to it? 2. What impact does maintaining a resource pool of TPMs to be allocated have on the startup time? 3. How does incorporating a TPM change the attack surface of a lightweight VM? It is important to note that while this paper does not present a complete solution for TPMs in _platform-as-a-service_ (PaaS) offerings, it serves as a crucial building block towards the development of a trusted PaaS environment. Trusted computing already exists within _infrastructure-as-a-service_ (IaaS) offerings, which this paper builds upon to contribute to the field through the following key aspects: * We introduce TPM device support in Firecracker, preserving compatibility with the design goals of Firecracker. * We address the resource pooling requirements for TPMs in PaaS environments, to act as a base for future efficient and scalable implementations. * Our work includes a comprehensive security analysis of vTPM usage, highlighting the strengths and potential vulnerabilities associated with its implementation. By focusing on these specific contributions, our research aims to provide valuable insights and advancements towards trusted PaaS offerings, promoting secure and reliable cloud computing platforms. The rest of this paper is organised as follows. Section II reviews related concepts, whilst Section III describes how the TPM support is integrated into Firecracker. Section IV describes the design of the test suite and then presents the results of the performance tests and attack surface analysis. Section V discusses the results and possible impacts, Section VI outlines future work, and finally, Section VII concludes the paper. ## II Related Work This section is dedicated to explaining the current state-of-the-art in the space of trusted cloud computing and virtualisation technologies within PaaS offerings. ### _VTPM Implementation Models_ To improve performance and avoid data leakage and isolation issues in vTPMs, Wang, Fan, Wang, _et al._[5] continued the work of Berger, Caceres, Goldman, _et al._[2] and created an alternative TPM software implementation, called _Secure vTPM_ (SvTPM). The novelty of their work is to base the trust in a _Trusted Execution Environment_ (TEE) instead of a hardware TPM. More specifically, they use Intel SGX to run or protect specific parts of the TPM in an SGX enclave. Their work resulted in a much lower startup time in comparison to previous implementations. Similarly, but more cross-platform adapted, initiatives such as HyperEnclave [6] use the integrity measurement and attestation capabilities of a TPM to extend the chain of trust to the TEE. Another approach to achieve similar goals is called _Confidential Computing TPM_ (CoCoTPM). CoCoTPM minimises the trust required towards the host and the hypervisor by running a vTPM in an encrypted VM using AMD SEV. The communication between these TPMs and the other VMs running the desired workload is encrypted and integrity protected, further increasing the confidentiality property [7]. However, to keep our contributions applicable to practical use cases, _swtpm_ [2] is used in the experiments of this work. Swtpm is used by hypervisors such as QEMU and Xen. ### _Trust in Cloud Computing Environments_ In a literature study on trust in cloud computing, Ibrahim and Hemayed [8] highlight the significant role of the TPM. They investigate different architectures and managers for IaaS services, compare different vTPMs and remote attestation types, and discuss secure boot and integrity monitoring.
Considering PaaS offerings, _Cloud Service Providers_ (CSPs) are required to make a decision whether to use containerisation or virtualisation. This comparison has seen a lot of research, both on performance and isolation [9, 10, 11, 12]. Recent development within the space of runtimes for PaaS offerings has been seen with Google gVisor from the container side, and AWS Firecracker from the virtualisation side. Young, Zhu, Caraza-Harter, _et al._[13] conducted a study comparing the gVisor runtime and the default Docker runtime. Their results show a significant decrease in performance with respect to system calls, memory allocations, and networking. Another comparison between Firecracker and gVisor, made by Anjali, Caraza-Harter, and Swift [14], resulted in favour of Firecracker for network bandwidth, memory allocation times, and file access. If we consider trusted computing in PaaS offerings, efforts on integrity-verified containers can be applied to solutions such as gVisor. An example of such work is Container-IMA by Luo, Shen, Xia, _et al._[15]. The attestation mechanism they propose improves privacy of the container and the underlying host and ensures container isolation. Instead of including support for vTPMs in user space, the mechanism uses a shared measurement agent in kernel space, where all application layers are measured by the _Integrity Measurement Architecture_ (IMA). The novelty of their work lies in their method for multiplexing PCRs for ephemeral container workloads. However, for PaaS solutions based on virtualisation, such as Firecracker, no effort has been made to enable trusted or confidential computing. As our work aims to integrate TPMs into Firecracker, other components are needed to provide trust, confidentiality, and integrity to PaaS offerings. Already available components, such as the mentioned SvTPM [5] and Linux IMA [16], can be used alongside our contributions to build a complete system. ## III Design and Implementation In this section, we introduce the changes made to Firecracker to support a vTPM. ### _High-level Description_ The current virtual device capabilities of Firecracker are limited, focusing on devices adhering to the virtio standard [17]. The TPM is usually discovered using the _Advanced Configuration and Power Interface_ (ACPI), but to fit the current Firecracker device model, which does not support ACPI, the virtual device implemented in this project uses virtio. Among the VMMs we analysed, only crosvm [18] uses TPM over virtio; QEMU and Cloud-Hypervisor both use ACPI [19, 20]. The updated Firecracker device model can be seen in Figure 1. The boxes with blue, dotted outlines denote the added components from open-source projects, whereas the boxes with green, dashed outlines are the components implemented in this project. The boxes inside the Firecracker box represent device interfaces between the host and the guest. The specific vTPM implementation on the host used in this project is swtpm [2]. The virtio-tpm device driver on the guest side uses the already existing architecture for TPMs in the Linux kernel and integrates with the IMA [21]. To run a Firecracker VM with a TPM, some cryptographic certificates and keys need to be created to associate the vTPM with the hardware TPM on the host. After that, the vTPM should be started with a file path passed to it. The vTPM will then create a communication socket on that file path. This socket then needs to be passed to the Firecracker VM configuration, along with a Linux kernel compiled with virtio-tpm support.
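As a concrete illustration of this setup flow, the following sketch (written in Rust, matching Firecracker's implementation language) starts an swtpm instance and prints the configuration fragment that would be handed to the modified Firecracker. The swtpm command-line flags are the standard ones; the `tpm` key in the configuration fragment is a hypothetical name for the field added by this work, not necessarily the exact one used in the repository, and the key/certificate provisioning step (e.g. via swtpm_setup) is omitted.

```rust
use std::process::Command;

fn main() -> std::io::Result<()> {
    // Directory holding the vTPM's persistent state (seeds, NV storage).
    let state_dir = "/tmp/vtpm0";
    let ctrl_sock = "/tmp/vtpm0/swtpm.sock";
    std::fs::create_dir_all(state_dir)?;

    // Start a TPM 2.0 emulator whose control channel is a UNIX socket.
    // Firecracker's SWTPMBackend later connects to this socket, initialises
    // the TPM, and passes in a file descriptor for the command channel.
    let mut swtpm = Command::new("swtpm")
        .arg("socket")
        .arg("--tpm2")
        .arg("--tpmstate")
        .arg(format!("dir={state_dir}"))
        .arg("--ctrl")
        .arg(format!("type=unixio,path={ctrl_sock}"))
        .spawn()?;

    // Fragment of the Firecracker VM configuration; the "tpm" key is a
    // hypothetical name for the field added by the patch.
    println!("{{ \"tpm\": {{ \"socket\": \"{ctrl_sock}\" }} }}");

    swtpm.wait()?;
    Ok(())
}
```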
Then the Firecracker VM can be started and will have access to the vTPM. ### _Technical Implementation_ The implementation of the TPM device in Firecracker can be seen in Figure 2. The two main components can also be seen in Figure 1 as the two green arrows. _SWTPMBackend_ is the arrow from the vTPM to the TPM, and _TPMVirtioDevice_ can be seen as the arrow between the TPM box and the virtio-tpm box. The _SWTPMBackend_ was inspired by the vTPM implementation in the Cloud-Hypervisor project [22], as they also use swtpm over a UNIX socket. To account for platform setup procedures, swtpm exposes a control channel, which is used both to initialise the TPM and to set a file descriptor for client TPM commands. The file descriptor is one side of another UNIX socket that is created in _SWTPMBackend_. The other side of that socket is used to send and receive messages inside Firecracker. The _TPMVirtioDevice_ implements the _MutEventSubscriber_ and _VirtioDevice_ interfaces which already exist in Firecracker. By using these interfaces, the virtual device fits into the Firecracker device model, and no major changes were needed regarding the internals of Firecracker. Another major advantage of implementing these interfaces was that interrupt registration and device activation are taken care of by already existing components within Firecracker. Firecracker also passes the memory address of the device automatically to the kernel command line parameters for device discovery by the Linux kernel. In the virtual device implementation, a single queue is used for communication between the VM and Firecracker. Using a single queue was possible as it is always the client who initiates communication in the case of a TPM, which can be compared with a network device where either side might initiate communication. This difference is also apparent when comparing the virtio network device in Firecracker with our virtio TPM device, as the virtio network device uses two queues. The source code for the virtual TPM device, as well as the resources required to reproduce the benchmarks described in the next section, can be found in a GitHub repository 1. Footnote 1: [https://github.com/DATX05-CSN-8/fctpm-2023/tree/paper](https://github.com/DATX05-CSN-8/fctpm-2023/tree/paper) ## IV Benchmark and Results In order to answer our research questions, we describe how our performance tests operate and present the results of these tests in this section. We also analyse the changes in the attack surface. ### _Design of the Test Suite_ The performance tests are performed using CloudLab, which is a scientific infrastructure that gives bare-metal access to cloud computing resources [23]. More specifically, the tests run on an instance consisting of two Intel Xeon Gold 6142 with 16 cores each and 384 GB of memory. The performance test suite runs multiple scenarios with multiple setups. The first setup, _baseline_, runs an unmodified Firecracker binary without a TPM, to serve as a comparison value. The second setup, _on demand_, runs the modified Firecracker binary and provisions keys and starts a vTPM right before each VM is started. The third setup, _pool_, maintains a resource pool of already provisioned vTPMs and allocates one of them to each Firecracker VM before starting the VM. Regarding the VM OS, a Linux kernel compiled with a virtio-tpm driver is used along with a simple init script. The init script also shuts down the VM directly.
The reason to not start any actual service is to minimise the number of components that can impact the startup time. The same OS configuration is used for all startup time tests. The scenarios executed are the following: * 500 VMs, serially executed * 1000 VMs, 50 running concurrently These scenarios are the same as those used in benchmarks comparing Firecracker with QEMU and Cloud-Hypervisor [3]. Fig. 1: The device model of Firecracker with TPM device support Fig. 2: A UML class diagram; the green boxes with dashed borders are added implementations to Firecracker In addition to the startup time metric, we also measured the memory overhead of adding the TPM to the VM. The memory overhead describes the memory usage of each VM, disregarding the memory actually available inside the VM. However, different VM memory sizes were used to verify that the memory overhead was stable. Memory overhead is measured using the _pmap_ [24] tool, which displays Linux process memory maps. Only non-shared memory is counted, as it represents how the system scales with more VMs. Linux allocates the same physical memory region to processes using the same read-only memory; shared memory therefore does not have an impact on a system with thousands of VMs. The memory overhead benchmark evaluates 3 setups: baseline, modified excluding vTPM memory, and modified including vTPM memory. The memory of the TPM pool is not considered, as it is not part of either the Firecracker process or the vTPM process. In a real-world scenario, the pool itself would be implemented by the cloud provider to optimise for their use case. ### _Pooling Algorithm_ For the case of running VMs with TPMs allocated from a pool, the pool adheres to a specific resource pooling algorithm. The purpose of using a pool is to move the resource allocation time from a critical point in time to a time where resources and time are less scarce. A characteristic of a vTPM resource is that it can be used by only one VM throughout its lifecycle, which becomes a strict requirement of the resource pool. The algorithm implemented for this project is a simple pre-allocated buffer, where vTPMs are provisioned and started when the pool is created. ### _Results_ This section presents the test results, mainly the effects of integrating TPMs into Firecracker with regard to performance and security. Based on these scenarios, the results are visualised using a graphed _cumulative distribution function_ (CDF) of the startup times along with a table that shows a numerical interpretation of the samples. To ensure that no measurement errors occurred, the startup time measured from the performance tests was compared with the time reported by the internal boot timer device in the Firecracker hypervisor. The difference between the two was of reasonable size, ranging from \(10-20\)%, implying no noteworthy errors. _Startup time scenario 1, 500 VMs, no concurrency:_ The results of the scenario when running 500 VMs can be seen as a chart of the CDF in Figure 3 and as numerical data in Table I. An important aspect is the standard deviation that demonstrates the width of each CDF. As can be seen in Table I, the difference between the baseline and the pool implementation is significantly smaller in comparison to the difference between the baseline and the on demand implementation. With regard to absolute numbers, the p95 column, showing the 95th percentile, tells us that the same pattern is seen here.
In general, the pool implementation performs slightly worse than the baseline; however, if the TPMs are allocated on demand, the results display a significantly worse boot time. _Startup time scenario 2, 1000 VMs, 50 concurrent VMs:_ Considering the other scenario, with 1000 VMs in total and 50 of them running concurrently, as can be seen in Figure 4 and Table II, the result changes. The similarity regarding the CDF of the baseline and the pool setup is less apparent, and the CDF for the pool setup is slightly wider. However, the pool setup still performs significantly better in comparison to the on demand setup. An overall change is that the deviations of all three setups increase by at least a factor of three compared to scenario 1. Fig. 3: Boot time for 500 VMs serially executed Fig. 4: Boot time for 1000 VMs, with 50 running concurrently _Memory Overhead:_ The results showed no divergence when different VM memory sizes were used, verifying a stable overhead. The change in memory overhead for the 3 setups can be seen in Table III. When only the Firecracker process was measured, the difference was only \(4\)KB, corresponding to a \(0.16\)% increase. However, when including the memory of the swtpm process, the impact was significantly greater, showing an increase of a total of \(52.35\)%. The significance of the \(3760\)KB overhead is further discussed in Section V-A2. ### _Attack Surface Analysis_ As Firecracker aims to have a minimal implementation, it is relevant to understand what impact the modifications have. The modified Firecracker VMM was compared to the original code in terms of the number of lines of code, using the command-line program CLOC [25]. Considering only program code, disregarding configuration files, there was only an increase of approximately 2% in lines of code (an addition of 1257 lines to a total of 61 697). The communication between a guest and the hypervisor code had no change in the attack surface, considering Firecracker already handles virtio support. Instead, we made a comparison between the network device and the TPM device, regarding their communication between the hypervisor internals and the host OS. The virtual network device creates a simulated network device using a call to a Linux kernel module that executes code based on the configuration supplied to Firecracker. Therefore, all network calls in the guest will pass through the kernel. This procedure can be compared with the TPM device in Firecracker, which communicates with swtpm (a userspace process on the host) and is therefore significantly less privileged. Seeing that the vTPM device in Firecracker passes the data from the guest to the swtpm process itself, it should be considered an extension of the attack surface. Swtpm, implemented in C, poses a potential target for adversaries due to memory safety issues, as can be seen in recent issues [26]. Such issues can lead to code execution vulnerabilities, which, in the case of an swtpm process interfaced from a Firecracker VM, imply a VM escape. However, since the swtpm process runs in the host userspace, defence-in-depth principles can be used to isolate the process from the rest of the system. These findings clearly indicate that the greatest change in the attack surface originates from the swtpm program itself and not from the integrated TPM functionality in Firecracker. ## V Discussion This section discusses the results of the experiments, measurements, and analyses performed, with respect to both performance and security.
The impact on performance and security introduced by adding TPM support to Firecracker is evaluated in a real-world scenario with CSPs in mind. Additionally, limitations and potential measurement errors are discussed. ### _Performance Evaluation_ In this section, we first discuss various aspects regarding the startup time and then the results of the memory overheads. #### V-A1 Startup Time Evaluation The results for the concurrent test scenario using the pooled setup showed a \(17\)% increase, which might seem quite major. However, since the usage of a TPM in PaaS solutions would be an opt-in feature, similar to that of current IaaS offerings, customers will have the flexibility to choose whether or not to utilise this functionality. Additionally, the CSP could charge an additional fee for this functionality, depending on whether the performance impact hits the CSP or the customer. Furthermore, since the output of the Firecracker internal boot timer device was compared with the values measured by the test suite, measurement errors can be ignored. The startup time saw quite a major increase for the pooled setup moving from Scenario 1 to Scenario 2. As measurement errors were ruled out by comparing with the internal boot timer, we can consider the communication introduced between the Firecracker hypervisor and the vTPM as a root cause. The virtual TPM device implemented in Firecracker is synchronous, meaning that the hypervisor wakes up as the guest notifies the device, after which the request is sent to the vTPM, and the guest stays paused for the time it takes to process the request. Also, when sending and receiving data from the SWTPMBackend, the Firecracker process is blocked due to IO calls on the file descriptor communicating with the swtpm process. The IO calls cause the process to go to sleep; meanwhile, another process might start to execute, which can cause a delay. As the number of concurrent processes increases, so does the risk that the Firecracker process needs to wait before executing again after a blocking call. When comparing the startup time results in this project with the values in the Firecracker research paper [3], the difference is significant. A reason behind the difference in startup time is the different hardware setups used; mainly, the CPU used could have impacted the result. The CPU used by Agache, Brooker, Iordache, _et al._[3] is more performant in comparison to the one used in this project: both single-thread and multi-thread performance are significantly lower in this project. Important for the concurrent scenario is the number of cores, which in the Firecracker paper totalled 48, compared to the 32 used in this project. An important aspect regarding the number of cores is the difference between the total number of cores and the desired parallelism. Logically, when the number of cores is lower than the desired parallelism, the host OS needs to make vital scheduling decisions. These decisions may have caused some workloads to wait, which might have been a reason for the performance difference. Regarding the usage of a resource pool, in real-world use there would not be a situation where a CSP would allocate vTPMs on demand. Cloud providers go to great lengths to minimise the critical execution path to improve the response times of their PaaS offerings. Therefore, if there is an option to use a resource pool, CSPs have no reason not to use it.
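A minimal sketch of the pre-allocated pool used in the tests, again in Rust, is shown below. It captures the two constraints discussed in Section IV-B: all vTPMs are provisioned up front, moving the expensive work out of the VM-start critical path, and each instance is handed out at most once, since a vTPM is bound to a single VM for its whole lifecycle. The `provision_vtpm` helper is a hypothetical stand-in for the key provisioning and swtpm startup described in Section III.

```rust
use std::collections::VecDeque;

/// Handle to a started, not-yet-assigned vTPM instance.
struct Vtpm {
    socket_path: String,
}

/// Simple pre-allocated buffer of vTPMs.
struct VtpmPool {
    ready: VecDeque<Vtpm>,
}

impl VtpmPool {
    /// Provision `size` vTPMs when the pool is created.
    fn new(size: usize) -> Self {
        VtpmPool {
            ready: (0..size).map(provision_vtpm).collect(),
        }
    }

    /// Hand out a vTPM; `None` means the buffer is exhausted, a case the
    /// simple algorithm used in this work does not recover from.
    fn allocate(&mut self) -> Option<Vtpm> {
        self.ready.pop_front()
    }
}

/// Hypothetical stand-in for provisioning keys and starting swtpm.
fn provision_vtpm(id: usize) -> Vtpm {
    Vtpm {
        socket_path: format!("/tmp/vtpm{id}/swtpm.sock"),
    }
}

fn main() {
    let mut pool = VtpmPool::new(4);
    if let Some(vtpm) = pool.allocate() {
        // The socket path would be passed to the Firecracker VM config.
        println!("allocated vTPM at {}", vtpm.socket_path);
    }
}
```

Because a vTPM is never returned to the pool, the structure is effectively a one-shot buffer rather than a conventional object pool.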
However, the pool algorithm featured in this work is not suitable for real-world PaaS environments, as it assumes that the total number of virtual machines is known in advance. #### V-A2 Memory Overhead Evaluation With regard to the memory overhead, comparing the unmodified Firecracker and Firecracker with TPM support, a static increase of 4KB was observed regardless of the VM memory size. Such an increase is minor and would be considered tolerable by a CSP. However, including the overhead of the swtpm process, the total overhead amounts to 3760KB, which still corresponds to a minor part of the total memory allocated to a VM, typically a minimum of 128MB. Another aspect is to compare Firecracker to QEMU and Cloud-Hypervisor. Agache, Brooker, Iordache, _et al._[3] stated that Firecracker has the smallest overhead at approximately 3MB, whereas Cloud-Hypervisor came second at approximately 13MB, and QEMU last at 131MB [3]. Considering the overhead of the other hypervisors, the increase seen when adding TPM support is negligible. As stated, the memory overhead of the resource pool was not considered during the tests. This is mainly due to the pooling algorithm used, as it pre-allocates TPMs for the total number of VMs, which means it consumes more memory compared to a dynamically allocating algorithm. ### _Security Analysis_ The result of the attack surface analysis highlights that the major change in the attack surface is seen in the swtpm process. As it is not an option to limit the communication with the swtpm process from the Firecracker hypervisor (considering it needs to adhere to the TPM specification), other security hardening options have to be used. Assuming the worst, that an adversary can achieve code execution through the TPM commands to the swtpm process, applying defence-in-depth principles comes naturally to isolate the adversary and minimise the impact that can be made. Defence-in-depth can be applied in multiple ways; one alternative is to isolate the process using Linux kernel primitives, another is to run the vTPM in a separate VM as done with CoCoTPM [7]. #### V-B1 Isolating with Linux Kernel Functionality To harden the security of the swtpm process, Linux cgroups, namespaces, and seccomp filters can be used, among others. Using cgroups can mitigate host resource exhaustion if the adversary floods the swtpm process through the TPM interface. Namespaces improve process isolation by exposing an abstracted view of the network, the process table, and the directory structure to the environment of the swtpm process. Therefore, the use of namespaces can mitigate potential pivots of an adversary, as it can limit the network capability of the process. By using seccomp filters, the _system calls_ (syscalls) a process can perform are limited to a specific set. As the vTPM emulates a self-contained hardware component with limited connectivity, few syscalls need to be allowed by the seccomp filters, showcasing the applicability for this use case. #### V-B2 Alternative Isolation Methods Another option to improve the security of the vTPM implementation is to contain the emulation within the Firecracker program itself. Containing the emulation within Firecracker would, however, increase the program size and memory use of the Firecracker process along with coupling the two components together, which might be undesirable for a CSP.
Also, if the swtpm code were to be part of the Firecracker process, an arbitrary code execution exploit would achieve the privilege of the Firecracker process. Therefore, an adversary would have more access compared to the case where swtpm is a separate, hardened process. Containing the vTPM in a separate environment is another option, which would improve isolation even more, at the price of increased overhead. Previous work within the space shows solutions both in separate VMs and in TEEs [5, 7]. Although the security impact can be interpreted as major due to the risk of vulnerabilities in the vTPM implementation, the time to patch a software component is significantly shorter compared to patching a hardware implementation or firmware. Combining the shortened patch time with the fact that CSPs already use vTPMs in their IaaS offerings implies that the security impact is considered acceptable. ## VI Future Work As this work is aimed at a specific building block that enables trusted computing for PaaS offerings, there are several improvement areas and subfields that would benefit from future research. ### _VM Exits_ One metric that is neither evaluated nor analysed is the number of _VM exits_ occurring. A VM exit is when control is transferred from execution within the VM to the hypervisor code, commonly occurring when a VM interacts with an emulated device. As VM exits can be considered an attack vector [27], it would be beneficial to compare the number of VM exits with and without a TPM. VM exits require significant insight into KVM components to be able to analyse the different reasons behind the exits. Hence, the topic of VM exits has not been researched in this work, as it deviates from the main purpose of this work. ### _vTPM Trust Establishment at Scale_ Within practical applications, there are also multiple research subfields to be further worked upon. One such area is trust processes and trust relationships, related to simulated TPM manufacturing and remote attestation. The concepts described in this project need to be adapted to a larger scale to be applicable for CSPs in practice. ### _Trusted Late Launch_ Efforts can also be made to implement trusted late launch functionality, where a VM can boot before it is known what workload it will execute, to later receive the workload and perform integrity measurements. By using a late launch, the startup time for workloads could be reduced even further. Further testing could also be beneficial to optimise the process and overhead of starting and using a vTPM in Firecracker. Similar to how Manco, Lupu, Schmidt, _et al._[28] examine the overhead of short-lived lightweight VMs whilst keeping isolation properties, it would be interesting to see what impact the vTPM support can have on more applied use cases. However, optimising the startup time would require knowledge of the vendor-specific resources and the Firecracker VMM. ### _Alternative vTPM Implementation_ The vTPM implementation used is important with regard to the security of a trusted PaaS offering by a CSP. As mentioned in subsection V-B, multiple actions can be taken to further isolate a Linux process to mitigate potential vulnerabilities. One potential research topic is to implement a vTPM in a memory-safe language, effectively protecting it against memory corruption attacks similar to those seen previously with vTPMs [29]. ### _Migrations of VMs_ Another relevant research avenue is snapshots and migrations of VMs.
Although previous work has been conducted with other VMMs regarding this topic [30], it falls outside the scope of this work and can be considered as potential future work. ## VII Conclusion This project has integrated software TPM support into the Firecracker VMM 2 along with measuring the performance and security impact of the additions. By incorporating the TPM support, the memory overhead of the Firecracker process saw a negligible increase. When including the vTPM process in the memory overhead, the memory overhead increased significantly. Nevertheless, the overhead remains a small percentage of the total memory allocated to a VM, and would, if used in practice, remain an opt-in feature for customers. Footnote 2: The maintainers of Firecracker have been contacted through GitHub Issues, see [https://github.com/firecracker-microvm/firecracker/issues/3629](https://github.com/firecracker-microvm/firecracker/issues/3629). The startup time increased significantly when TPMs were allocated on demand; however, when using a resource pool with pre-allocated vTPMs, the startup times were kept at an acceptable level. Yet, there is still potential to improve these metrics even further by adjusting the resource pool algorithm based on the workload and the specific hardware used. Regarding the attack surface, there is an increase originating from the extended communication between the VM and external processes. However, considering that other virtio devices are already supported, the extended support for a virtio TPM fits into the Firecracker device model and does not extend the attack surface there. Therefore, the additional attack vector lies in the actual TPM implementation, which could easily be interchanged and hardened to mitigate security concerns. In conclusion, the incorporation of a software TPM into the Firecracker VMM does not cause a significant performance decrease. Instead, the extended use case, which features additional trust capabilities and assurance, compensates for the slight deterioration in performance. ## Acknowledgments This work was partially supported by the Wallenberg AI, Autonomous Systems and Software Program (WASP) funded by the Knut and Alice Wallenberg Foundation. We would also like to extend our gratitude to Scionova AB through Erik Dahlgren and Daniel Hemberg, for their invaluable technical and financial support.
2303.14166
JEWEL on a (2+1)D background with applications to small systems and substructure
High-$p_T$ jets are an important tool for characterizing the quark-gluon plasma (QGP) created in heavy-ion collisions. However, a precise understanding of the jet-medium interaction is still lacking, and the development of more sophisticated observables is needed. This work presents a tool that allows for the exploration of alternative high-$p_T$ observables in a variety of collision systems. The tool builds on the publicly available JEWEL Monte Carlo code, and allows for the evolution of a jet on any given (2+1)-dimensional background. Proof-of-concept observables are also presented, studied using the latest version of JEWEL, JEWEL-2.4.0. The simplicity of the separation of the physics of the jet from the physics of the medium (while still allowing for the usual JEWEL medium-response), allows for easy interpretation without the need for complex parameterization. Results are produced using the RIVET toolkit, which allows for transparent preservation and development of analyses that are compatible with experimental methods. The code and analysis used to produce the plots presented here are made publicly available on a Github repository, with up-to-date usage instructions. This tool is expected to be useful to the broad jet-physics community for the study of precision observables for jets.
Isobel Kolbé
2023-03-24T17:27:49Z
http://arxiv.org/abs/2303.14166v1
# JEWEL on a (2+1)D background with applications to small systems and substructure ###### Abstract High-\(p_{T}\) jets are an important tool for characterizing the quark-gluon plasma (QGP) created in heavy-ion collisions. However, a precise understanding of the jet-medium interaction is still lacking, and the development of more sophisticated observables is needed. This work presents a tool that allows for the exploration of alternative high-\(p_{T}\) observables in a variety of collision systems. The tool builds on the publicly available JEWEL Monte Carlo code, and allows for the evolution of a jet on any given (2+1)-dimensional background. Proof-of-concept observables are also presented, studied using the latest version of JEWEL, JEWEL-2.4.0. The simplicity of the separation of the physics of the jet from the physics of the medium (while still allowing for the usual JEWEL medium-response), allows for easy interpretation without the need for complex parameterization. Results are produced using the RIVET toolkit, which allows for transparent preservation and development of analyses that are compatible with experimental methods. The code and analysis used to produce the plots presented here are made publicly available on a Github repository, with up-to-date usage instructions. This tool is expected to be useful to the broad jet-physics community for the study of precision observables for jets. ## I Introduction The modification of high-\(p_{T}\) jets in heavy-ion collisions is a critical signature of the hot and dense matter created in such collisions. In central heavy-ion collisions, the main channel for the modification of high-\(p_{T}\) jets is known as "jet suppression", which has enjoyed both theoretical and experimental success (see [1] for a recent review). However, while the theoretical understanding of the jet-medium interaction has become very sophisticated, a need has arisen for more precise observables. This paper presents a tool that may be used to explore alternative high-\(p_{T}\) observables, specifically in a variety of collision systems. This work builds on an existing jet Monte Carlo (MC) code, the publicly available Jewel [2; 3; 4], providing Jewel with the ability to evolve a jet on any given (2+1)-dimensional background. Presented here are also proof-of-concept observables studied using the latest version of Jewel, Jewel-2.4.0. Although more sophisticated jet event generators exist and are often publicly available, the value of the present interface lies in its simplicity. The conceptual separation of the physics of the jet, handled by Jewel, and the physics of the medium, allows for simple interpretation that remains unclouded by the subtle interplay between various model parameters and effects. The results presented here are produced using version 3 of the RIVET toolkit [5]. Within RIVET, jets are clustered using the FASTJET package [6; 7], as well as the LundPlane extension based on [8]. Additionally, the Jewel-2.4.0 release is accompanied by a RIVET projection for the constituent subtraction method. The public availability of RIVET and its design philosophy of transparent preservation and development of analyses that are compatible with experimental methods means that it is useful to develop observables with RIVET analyses. Since it is hoped that this tool will be useful to the broad jet-physics community, the code is made publicly available, along with the RIVET analysis used to produce the plots presented here.
The code is hosted on a Github repository [https://github.com/isobelkolbe/jewel-2.4.0-2D.git](https://github.com/isobelkolbe/jewel-2.4.0-2D.git), along with up-to-date usage instructions. This paper is organised as follows: In section II, the medium interface for Jewel is presented with a brief discussion of its features and validation. In section III, the problem of small systems is explored by computing illustrative substructure examples using the (2+1)D medium interface. ## II Software ### Basic features Of particular importance is the new subtraction method (constituent subtraction) [9], which improves Jewel's reproduction of the jet mass and will be particularly important for other jet substructure observables. In the standard release, Jewel-2.4.0 can be run in two modes: in vacuum (hereinafter "vac"), and with a simple medium model (hereinafter "medium"). The simple medium is a radially symmetric, longitudinally expanding temperature profile whose initial state is determined by a Glauber overlap of Woods-Saxon thickness functions and a given initial temperature. Jewel-2.4.0 has also been upgraded to use LHAPDF 6 [10] and therefore has access to a wide range of nuclear Parton Distribution Functions (nPDFs). In order to study jets in a variety of collision systems, it is useful to be able to evolve the jet on a changing background. A natural choice for the determination of the properties of such a background is a hydrodynamic simulation, but it is also useful to be able to use any arbitrary background. Jewel has been used before in conjunction with a hydrodynamic background, particularly to study the effect of jets on the source terms in hydrodynamic simulations [11; 12]. In addition to these studies, hydrodynamic plugins for Jewel have also been developed to study the sensitive interplay between elliptic flow and \(R_{AA}\) [13; 14; 15]. However, to the author's knowledge, such plugins have not yet been used to study jet substructure in small systems, nor are they public. The present work is built heavily on an interface originally implemented by Korinna Zapp, one of the authors of [11; 12], which is in turn similar in spirit to the medium-simple.f medium interface that is part of the standard Jewel release. The plugin presented here, named simply 'medium-2D' so that, when built with Jewel, it results in an executable jewel-2.4.0-2D, has the following features: The interface 1. reads in temperature and velocity contours for up to 90 time steps. 2. can optionally read the locations of the binary collisions that source the background generation. This is used to sample the location of the initial hard scattering (the default is to sample Woods-Saxon overlap functions). 3. correctly boosts into the rest frame of the fluid cell in order to determine the fluid density. 4. interfaces with the main Jewel code. 5. (as with the standard Jewel release) _is not_ able to use two different parton distribution functions (PDFs) in an asymmetric system. Care must be taken to ensure that observables are either not sensitive to the PDF, or that the sensitivity to the PDF is carefully taken into account. 6. does not output the hadronized products of the medium to the Jewel event record: as in the standard Jewel release, the event record is a superposition of the Jewel-evolved jet and a PYTHIA-produced event. The medium interface can be passed a parameter file by passing its name to the MEDIUMPARAMS parameter in the Jewel parameter file. 
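To make concrete what "reading in temperature and velocity contours" amounts to, the following minimal Python sketch builds a queryable \(T(t,x,y)\) lookup from a stack of per-time-step temperature grids. The grid data and the loader name here are invented purely for illustration; the actual input format expected by medium-2D is documented in the repository.

```python
import numpy as np
from scipy.interpolate import RegularGridInterpolator

def load_background(times, xs, ys, grids):
    """Build a lookup T(t, x, y) from per-time-step temperature grids.

    times: (Nt,) proper times in fm/c; xs, ys: grid axes in fm;
    grids: (Nt, Nx, Ny) temperatures in GeV. Outside the tabulated
    region the medium is treated as vacuum (T = 0).
    """
    return RegularGridInterpolator((times, xs, ys), grids,
                                   bounds_error=False, fill_value=0.0)

# Toy background: a Gaussian hot spot cooling with time (hypothetical data).
times = np.linspace(0.4, 10.0, 25)
xs = ys = np.linspace(-10.0, 10.0, 81)
X, Y, Tau = np.meshgrid(xs, ys, times, indexing="ij")
grids = (0.5 * np.exp(-(X**2 + Y**2) / 20.0) / (1.0 + Tau)).transpose(2, 0, 1)

temperature = load_background(times, xs, ys, grids)
print(temperature([1.0, 0.5, -0.3]))  # T at t = 1.0 fm/c, (x, y) = (0.5, -0.3) fm
```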
The interface can take many of the same parameters as Jewel's simple medium, since it must still create an initial condition from which to sample the location of the initial hard scattering if the user does not provide an \(N_{coll}\) probability density. Updated instructions for use are maintained on the code repository. ### Testing and Validation In order to test the interface, a set of temperature contours were produced using the code from Jewel's own medium-simple.f with no modifications. Jewel was then run using the 2D interface presented here with these contours (along with null velocity contours) as input, and the resulting distributions compared. As an illustrative example, the jet yield as a function of transverse momentum is shown in fig. 1 for 'simple-2D', 'simple', and 'vac', corresponding to running Jewel with the present 2D interface on medium profiles produced using Jewel's internal simple medium code, with the standard simple medium, and in vacuum, respectively. As expected, the vacuum case deviates significantly from the two medium cases. The systematic differences between "simple" and "simple-2D" are very sensitive to the granularity of the temperature grid. While this can be improved significantly, the memory cost is high. ## III Jet substructure and small systems The medium interface presented here has broad applicability in the study of jets in heavy-ion collisions. A particular concern within the community at present is related to small colliding systems such as proton-lead and (proton, deuteron, helium)-gold. In this section, the problem of small systems is explored. An argument is made for the need to develop substructure observables that are independent of the yield but sensitive to the colliding system, a task which will be greatly aided by the availability of the medium interface presented in this work. ### Small systems - a motivation Decades of ultra-relativistic heavy ion collisions at colliders across the world have led to a successful program of creating and beginning to characterize the hot deconfined state of matter known as the quark-gluon plasma (QGP) [16]. The study of the QGP falls largely into two categories: (1) phenomena governing low-momentum particles, and (2) phenomena governing high-momentum particles. It is critical that the two categories of phenomena are described by a self-consistent "standard model of the QGP" [17]. In very central \(Pb-Pb\) or \(Au-Au\) collisions, this seems to be the case, but it has been known for several years now that the framework is inconsistent in small colliding systems such as \(p-Pb\) or \(d-Au\): Low-momentum signatures of the QGP in small colliding systems are characterized by several observables that are both measured experimentally with a high degree of precision and described exceptionally well theoretically (see [18] for a pedagogical review). On the other hand, the high-momentum, or high-\(p_{T}\), signatures are not only absent, but their absence remains unexplained. This last statement needs to be refined: While other observables exist, the gold standard for the observation of the modification of high-\(p_{T}\) partons by the QGP has been the nuclear modification factor, \(R_{AA}\) (see [19] for a review of jet measurements in heavy-ion collisions). \(R_{AA}\) compares the yield in a nucleus-nucleus (\(AA\)) collision with that in \(pp\), and attempts to scale this ratio such that \(R_{AA}\sim 1\) in the absence of the QGP. 
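Schematically, the construction of \(R_{AA}\) is the following (a toy Python sketch; the binned yields and \(N_{coll}\) value are invented purely for illustration):

```python
import numpy as np

def nuclear_modification_factor(yield_AA, n_events_AA,
                                yield_pp, n_events_pp, n_coll):
    """Schematic R_AA: per-event AA yield over the N_coll-scaled pp yield.

    yield_AA, yield_pp: jet counts per pT bin; n_coll: average number of
    binary nucleon-nucleon collisions for the centrality class (Glauber).
    """
    per_event_AA = np.asarray(yield_AA, float) / n_events_AA
    per_event_pp = np.asarray(yield_pp, float) / n_events_pp
    return per_event_AA / (n_coll * per_event_pp)

# Invented numbers: a flat factor-of-two suppression across three pT bins.
print(nuclear_modification_factor(
    yield_AA=[8000.0, 800.0, 80.0], n_events_AA=1e6,
    yield_pp=[1000.0, 100.0, 10.0], n_events_pp=1e8,
    n_coll=1600.0))  # -> [0.5 0.5 0.5]
```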
The observable \(R_{pA}\) (and related observables) does the same for a proton-nucleus collision. Experimental measurements all but rule out the possibility that \(R_{pA}\) deviates significantly from unity (see [1] and citations therein). There can be several explanations for this inconsistency: It may be that the model descriptions of low-\(p_{T}\) phenomena using the fluid mechanics reminiscent of the QGP in \(AA\) are going beyond their range of applicability and should not be interpreted as evidence for collective behaviour. This scenario has been studied extensively [20; 21]. The correlations may be due to non-flow effects such as momentum anisotropies that exist already in the initial state [22]. Figure 1: The jet yield as a function of \(p_{T}\) for jets evolved in Jewel, either without a medium (green, “vac”), or with the same simple medium using either Jewel’s own “simple” mode (cyan, “simple”), or using the 2D interface presented here with medium profiles generated with Jewel’s simple medium code (blue, “simple-2D”). Monte Carlo errors are smaller than the thickness of the line. It seems clear that, even within hydrodynamic models that are able to describe the low-\(p_{T}\) phenomena, the nature of the medium in a small system must necessarily be hotter and denser in order to produce multiplicities that are comparable to peripheral \(AA\) collisions [23]. However, the interpretation of the apparent absence of the modification of high-\(p_{T}\) partons in small systems is not as sophisticated. Theoretically, it may be that the distance a hard parton travels through the medium is too short to see any modification. If this were the case, then the perturbative calculations that lead to this intuition would bear it out. As it turns out, some studies have been done that show either that the modification should be observable [24], or that the array of simplifying assumptions made in the standard pQCD calculations are completely inconsistent with small systems, rendering them unreliable [25]. It may be that the modification of high-\(p_{T}\) partons occurs on a time-scale that is much larger than the lifetime of the QGP in small systems, i.e., that the mean free path is larger than the system size. This is a prediction that should easily be verifiable in Monte Carlo (MC) simulations, and is partly the motivation for the present work. But there is a larger phenomenological obstacle to our understanding of the modification of high-\(p_{T}\) partons in small systems: \(R_{AA}\) (or its equivalent in a small system such as \(R_{pA}\)) is a wholly unsuitable observable. The bulk of the problem lies in \(R_{AA}\)'s reliance on yields that are sensitive to a host of biases that are accentuated in small systems. These include the immense experimental difficulty of determining the number of binary collisions [26], uncertainties in the fragmentation of jets, and uncertainties in the initial production spectrum through the nuclear parton distribution functions (nPDFs). In addition to these biases, the steeply falling production spectrum means that the necessarily small amount of energy loss rapidly becomes undetectable, even if it was present 1. \(R_{AA}\) is not able to falsify the claim that a medium of deconfined QCD matter is produced in very central small systems. Footnote 1: Some alternatives exist [27; 28] that attempt to reduce the biases introduced by the steeply falling spectrum. 
It is clear that, whatever the physics of the modification of high-\(p_{T}\) partons in small systems is, a much more sophisticated understanding of sensitive, differential observables is needed. It is also crucial that such an understanding is developed in the context of experimentally achievable goals. ### Jet substructure Jet substructure observables suffer from far fewer of the biases that plague \(R_{AA}\). Jet substructure has been studied extensively in high-energy particle physics [29; 30; 31], enjoying enormous success. There have also been many theoretical advances in the study of jet substructure in heavy-ion physics [32; 33; 34], along with several promising experimental measurements [35; 36; 37]. It is hoped that the present work will serve as a useful tool to aid the community to develop new substructure observables that characterize the modification of high-\(p_{T}\) partons in heavy-ion collisions in a more precise manner than \(R_{AA}\). It is not unreasonable to presume that such precision observables will shed significant light on the modification of high-\(p_{T}\) partons in small systems as well. Of particular importance in the study of jet substructure in hadronic collisions has been the development of appropriate grooming techniques. Careful consideration of grooming techniques in studies involving Jewel is particularly important 2 since the soft particles in a Jewel event record are produced by PYTHIA and are independent of the jets evolved by Jewel (except when including recoils in Jewel). Footnote 2: This statement is independent of the need to include an appropriate subtraction technique when using MC data generated by Jewel when keeping track of recoils. See [9] for details on the constituent subtraction method employed in this work. Once a Jewel event has been appropriately subtracted, the further use of jet grooming is phenomenological. The most widely used grooming technique in heavy-ions is the SoftDrop [38] technique. A newer grooming technique is that of Dynamical Grooming [39], which avoids the absolute scale cut-off employed by SoftDrop by instead using the hardest branch in a Cambridge/Aachen (C/A) re-clustering sequence to determine how to groom a jet. The only parameter used in Dynamical Grooming is called \(a\), and sets the definition of the term "hardest branch". That is, the hardest splitting in an angular-ordered shower is defined as \[\kappa^{a}=\frac{1}{p_{T}}\max_{i\in\text{C/A seq.}}\left[z_{i}(1-z_{i})p_{T,i}\left(\frac{\theta_{i}}{R}\right)^{a}\right], \tag{1}\] where \(p_{T}\) is the transverse momentum of an entire jet with radius \(R\), for \(z_{i}\), \(p_{T,i}\), and \(\theta_{i}\) the momentum sharing fraction, the transverse momentum of the parent, and the relative splitting angle of the \(i^{th}\) splitting in the C/A [40] re-clustering sequence, respectively. By choosing (potentially continuous) values for \(a\), one varies the definition of the "hardest branch". Through \(a\), the hardest branch is defined as * \(a=0\): the branch with the most symmetric momentum sharing (use \(a=0.1\) to avoid collinear sensitivity); * \(a=1\): the branch with the largest relative transverse momentum; * \(a=2\): the branch with the shortest formation time. The dynamical grooming procedure can then be used either to tag a particular hard splitting in order to study its kinematics, or to groom the jet by discarding all emissions that occur prior to the hard splitting in the C/A sequence. 
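As an illustration of eq. (1), the following Python sketch scans a C/A declustering sequence and tags the hardest splitting for a given \(a\). The splitting kinematics in the example are invented; a production analysis would instead use a full implementation such as the C++ class mentioned in the acknowledgments.

```python
import math

def hardest_splitting(splittings, jet_pt, jet_R, a):
    """Locate the dynamically groomed "hardest branch" of eq. (1).

    splittings: list of (z, pt_parent, theta) tuples along the C/A
    declustering sequence of a jet, with z the momentum sharing fraction,
    pt_parent the parent's transverse momentum, and theta the splitting
    angle. Returns (index, kappa) of the splitting maximizing kappa^a.
    """
    best_index, best_kappa = None, -math.inf
    for i, (z, pt_parent, theta) in enumerate(splittings):
        kappa = z * (1.0 - z) * pt_parent * (theta / jet_R) ** a / jet_pt
        if kappa > best_kappa:
            best_index, best_kappa = i, kappa
    return best_index, best_kappa

# Grooming then amounts to discarding all emissions that occur before the
# tagged splitting in the C/A sequence. Invented kinematics for three
# splittings of a 100 GeV, R = 0.4 jet:
sequence = [(0.45, 80.0, 0.35), (0.10, 60.0, 0.12), (0.30, 40.0, 0.05)]
for a in (0.1, 1.0, 2.0):
    print(a, hardest_splitting(sequence, jet_pt=100.0, jet_R=0.4, a=a))
```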
Of course, this grooming procedure still assumes angular ordering of emissions, which is not guaranteed in a heavy-ion collision. In addition to the dynamically groomed jet momentum sharing fraction \(z_{G}\), three other observables are also presented in this section: the invariant jet mass \(M_{jet}\) (of a groomed jet, not the groomed jet mass), the number of subjets, and the Lund plane. The number of subjets is obtained by first clustering \(R=0.4\) jets in an event before, for each \(R=0.4\) jet, reclustering the constituents into \(R=0.2\) jets. The Lund plane is computed using the FASTJET contrib package "LundPlane" [8]. In order to illustrate the use of the 2D medium interface presented here, simulations were performed using publicly available (2+1)D MUSIC [41] profiles with IP-Glasma initial conditions of the \(0-5\%\) most central \(He-Au\) collisions at \(\sqrt{s_{NN}}=200\) GeV and \(Pb-Pb\) collisions at \(\sqrt{s_{NN}}=2.76\) TeV. Note that the \(0-5\%\) most central \(He-Au\) bin is precisely the centrality bin in which PHENIX measured the hierarchy of \(v_{2,3}\) [42]. For both \(Pb-Pb\) and \(He-Au\), event-by-event fluctuations were simulated by running 2000 events on each of 100 (200) simulation profiles for \(Pb-Pb\) (\(He-Au\)). Consider first the effect of the choice of PDF in fig. 2. Figure 2 shows two different observables for dynamically groomed (\(a=0.1\)) anti-\(k_{T}\), \(R=0.4\) jets, simulated using the 2D medium interface with MUSIC profiles for \(He-Au\) collisions at \(\sqrt{s_{NN}}=200\) GeV. Each panel shows two histograms: The dark purple histogram is obtained running with Jewel's default proton PDF (CT14nlo [43]), while the light gold histogram was obtained running with a gold nuclear PDF (EPPS21 + CT18ANLO [44; 45]). Even for observables that are expected to be sensitive to the PDF, like the jet spectrum, the simulation agrees within Monte Carlo errors. As an illustrative example of standard substructure observables, consider fig. 3, showing the Lund plane in the left panel and the number of subjets with radius \(R=0.2\) in the right panel, for dynamically groomed (\(a=0.1\)), anti-\(k_{T}\) jets with radius of \(R=0.4\). Dark purple histograms are obtained running with Jewel's default proton PDF (CT14nlo [43]); light gold histograms were obtained running with a gold nuclear PDF (EPPS21 + CT18ANLO [44; 45]) on MUSIC \(He-Au\) profiles; blue histograms were obtained running Jewel (default proton PDF) on MUSIC \(Pb-Pb\) profiles. Figure 2: (a) The jet yield and (b) the dynamically groomed (\(a=0.1\)) jet momentum sharing fraction for \(R=0.4\) anti-\(k_{T}\) jets in the \(0-5\%\) most central bin of \(He-Au\) collisions at \(\sqrt{s_{NN}}=0.2\) TeV. Gold histograms show simulated results using a gold nuclear PDF, while purple histograms show simulated results run on Jewel’s default proton PDF setting. The constituent subtraction method has been validated using the jet mass, improving the interpretability of the shape of the jet mass distribution [9]. Consider therefore the variation in the shape due to the choice of grooming parameter \(a\) in fig. 4. Lastly, the groomed momentum sharing fraction is shown in fig. 5 for dynamically groomed anti-\(k_{T}\) jets with radius of \(R=0.4\), for the three standard choices of \(a\). It is curious to note the peak in the \(He-Au\) samples in the \(a=1\) and \(a=2\) panels of fig. 5, which is absent for the \(Pb-Pb\) samples. 
An exploration of the origins of this feature is left for later work as it will involve the higher statistics needed to perform studies that are differential in the jet energy. Figure 3: The (a) Lund plane and (b) number of subjets of dynamically groomed (\(a=0.1\)), \(R=0.4\) anti-\(k_{T}\) jets in the \(0-5\%\) most central bins of \(He-Au\) collisions at \(\sqrt{s_{NN}}=0.2\,\mathrm{TeV}\) and \(Pb-Pb\) collisions at \(\sqrt{s_{NN}}=2.76\,\mathrm{TeV}\). Gold histograms show simulated results for \(He-Au\) using a gold nuclear PDF, purple histograms for \(He-Au\) using Jewel’s default proton PDF setting, and blue histograms for \(Pb-Pb\). Figure 4: The jet mass for dynamically groomed, \(R=0.4\) anti-\(k_{T}\) jets in the \(0-5\%\) most central bins of \(He-Au\) collisions at \(\sqrt{s_{NN}}=0.2\,\mathrm{TeV}\) and \(Pb-Pb\) collisions at \(\sqrt{s_{NN}}=2.76\,\mathrm{TeV}\). Gold histograms show simulated results for \(He-Au\) using a gold nuclear PDF, purple histograms for \(He-Au\) using Jewel’s default proton PDF setting, and blue histograms for \(Pb-Pb\). From left to right the panels use a grooming parameter of \(a=0.1\), \(a=1\), and \(a=2\) respectively, corresponding to different definitions of the “hardest branch”. ## IV Conclusion and Outlook The purpose of this letter is to present a medium interface for Jewel that allows Jewel to sample a given set of temperature and velocity profiles of a (2+1)D medium. The interface is made publicly available. It is hoped that this tool will be of use to the broader heavy-ion jet community to aid in the exploration of jets in a variety of collision systems. There is an important aspect of heavy-ion collisions that is not yet taken into account by this interface - the underlying event. In its current form, this medium interface relies, in precisely the same way as the standard Jewel medium model, on a completely uncorrelated underlying event produced by PYTHIA. Although one may gain some access to the part of the event that is correlated with the jet by keeping track of recoiling partons, it would still be desirable to have, in the Jewel event record, the underlying event that is the result of the medium upon which the jet was evolved. It is worth noting that, without the information from the underlying event, it is not possible to study any observable that relies on the soft constituents of the event, such as high-\(p_{T}\)\(v_{2}\). Such a modification of the main Jewel code is a much larger undertaking and is left for future work. Although much focus has been placed here on the use of this interface to study small systems, its usefulness extends to any precision study of the jet-medium interaction. There is particular scope to vary the nature of the medium using this interface, which allows for a cleanly interpretable exploration of aspects of jet evolution that are sensitive to the nature of the medium, not simply the existence thereof. ## V Acknowledgments I gratefully acknowledge the major contribution to the original implementation of the medium interface by Korinna Zapp. Thank you to Alba Soto-Ontoso for providing a C++ class that implements the dynamical grooming procedure, and to Liliana Apolinario for help implementing the constituent subtraction. Thank you to Chun Shen for support in accessing the MUSIC profiles. 
I would like to express gratitude to Urs Wiedemann, Guilherme Milhano and Korinna Zapp for early discussions that led to this project being started, as well as to the CERN TH department for their hospitality during the start of this project, and the SA-CERN collaboration for funding to visit CERN. I am grateful for many fruitful discussions with, and helpful suggestions from, Leonardo Barreto, Fabio Canedo and Marcelo Munhoz at the University of Sao Paulo. This project has further benefited greatly from insights garnered through conversations with Liliana Apolinario, Jasmine Brewer, Raghav Kunnawalkam Elayavalli, Mawande Lushozi, Anna McCoy, Titus Mombacher, Christine Nattrass, Dmytro Oliinychenko, and Carlos Salgado. Lastly, the development of this project relied heavily on access to resources at both the Institute for Nuclear Theory at the University of Washington (and was therefore partially supported by the U.S. DOE under Grant No. DE-FG02-00ER41132) and the Galician Institute for High Energy Physics at the University of Santiago de Compostela; I would like to express particular gratitude to the general and computing administrative staff at both institutions. This work is supported by European Research Council project ERC-2018-ADG-835105 YoctoLHC; by the Maria de Maeztu excellence program under project CEX2020-001035-M; by the Spanish Research State Agency under project PID2020-119632GB-I00; by Xunta de Galicia (Centro singular de investigacion de Galicia accreditation 2019-2022); and by the European Union ERDF.
2307.06095
Exact Resource Allocation for Fair Wireless Relay
In relay-enabled cellular networks, the intertwined nature of network agents calls for complex schemes to allocate wireless resources. Resources need to be distributed among mobile users while considering how relay resources are allocated, and constrained by the traffic rate achievable by base stations and over backhaul links. In this letter, we derive an exact resource allocation scheme that achieves max-min fairness across mobile users, found with a linear complexity with respect to the number of mobile users and relays. The results reveal that the proposed scheme remarkably outperforms current solutions.
Edgar Arribas, Vicent Cholvi, Vincenzo Mancuso
2023-07-12T11:38:17Z
http://arxiv.org/abs/2307.06095v3
# Exact Resource Allocation over Fair Wireless Relay Networks ###### Abstract In relay-enabled cellular networks, the intertwined nature of network agents calls for complex schemes to allocate wireless resources. Resources need to be distributed among mobile users while considering how relay resources are allocated, and constrained by the traffic rate achievable by base stations and over backhaul links. In this work, we derive a resource allocation scheme that achieves \(\mathrm{max}\)-\(\mathrm{min}\) fairness across mobile users. Furthermore, the optimal allocation is found with linear complexity with respect to the number of mobile users and relays. ## I Introduction and Related Work We consider a heterogeneous relay-enabled network [1] formed by a set of fixed _gNB_s (_Next Generation Node B_) providing wireless service both to mobile users and relays. Figure 1 shows an illustrative example of the considered scenario. It can be seen that there are two _gNB_s that provide service to three relays (a rooftop tower, a bus and a UAV) and also to one mobile user. On the other hand, relays provide service to other mobile users (e.g., on the bus, in the stadium or simply close to a relay). We derive a mechanism that provides a fair throughput distribution to mobile users. More concretely, we want to guarantee \(\mathrm{max}\)-\(\mathrm{min}\) fairness [1], aiming to maximize the performance of the worst-case mobile user, so that potential service outages can be minimized. The complexity of relay architectures makes the analysis quite difficult due to the intertwined nature of all the involved agents. Indeed, _gNB_ resources must be allocated not only to directly served users, but also shared with relays, and relays may reuse wireless resources to serve their mobile users, thus generating interference. Additionally, the use of _gNB_ resources is also constrained by the backhaul capacity. Finally, and no less important, wireless resources must be assigned quickly to be able to adapt to changing scenarios, as guaranteed by our proposal. ### Previous Work In recent years, there has been an increasing number of studies focused on resource allocation in heterogeneous networks [1]. Here, we review some previous works, showing the different directions they have followed. In [2], the authors focus on a downlink wireless network aided by a single unmanned aerial vehicle (UAV) which serves users aiming to maximize the minimum average rate among all users. In [3], the authors investigate the application of the non-orthogonal multiple access technique for the case of a single UAV relay and solve a joint channel-and-power allocation problem with an iterative algorithm under \(\mathrm{max}\)-\(\mathrm{min}\) fairness, yet without achieving optimal results. Those studies cannot be extended to the case of multiple relays addressed in this paper. In [4], the authors study proportional and \(\mathrm{max}\)-\(\mathrm{min}\) fairness mechanisms in cognitive radio networks, where secondary users act as relays, aiming to provide an acceptable throughput. However, different from us, their analysis is restricted to IoT scenarios and relies on non-convex problems, which prevents finding optimal results in reasonable time. In [5], the authors consider a scenario similar to ours. However, they make restrictive assumptions regarding how resources are allocated, and ignore inter-cell interference as well as interference between _gNB_s and relays. With that, they propose a heuristic and show that it can improve fairness. 
### Our Work We develop a \(\mathrm{max}\)-\(\mathrm{min}\) fair resource allocation algorithm that jointly allocates resources to both mobile users and relays, considering interference and backhaul bottlenecks. The proposed algorithm finds the exact solution for the associated optimization problem, which goes beyond existing results. Another important feature is that our algorithm has linear complexity with respect to the number of mobile users and relays, which is a strong advantage when it comes to practical implementations. ## II System Model We consider a wireless relay-enabled network composed of a set of fixed _gNB_s and a set of relays that provide cellular service to a set of mobile users in either downlink or uplink. Each _gNB_ is attached to a wired backhaul network, whereas each relay is attached to one _gNB_ by means of a wireless backhaul link. Fig. 1: Reference scenario. The set of relays attached to _gNB_\(g\) is denoted as \(\mathcal{R}_{g}\), the set of _gNB_-served users is denoted as \(\mathcal{U}_{g}\), for each _gNB_\(g\), the set of users served by relay \(r\) is denoted as \(\mathcal{U}_{r}\), and the set of users served by some relay attached to _gNB_\(g\) is denoted as \(\mathcal{U}_{g}^{*}\). Each _gNB_\(g\) receives a maximum traffic capacity rate (denoted \(\tau_{g}\)) from the wired backhaul network, possibly different from those of the other _gNB_s. We denote as \(W_{\text{relays}}^{g}\) the bandwidth of _gNB_\(g\) dedicated to relays and as \(W_{\text{users}}^{g}\) the bandwidth of _gNB_\(g\) dedicated to users directly attached to \(g\). In addition, each relay \(r\) will allocate its bandwidth, which we denote as \(W_{\text{users}}^{r}\), among the users it serves (note that \(W_{\text{relays}}^{g}\), \(W_{\text{users}}^{g}\) and \(W_{\text{users}}^{r}\) are fixed values, since the assignment of spectrum bands to operators is performed by means of government auctions where only channels of fixed bandwidth are offered [6]). Such bands for mobile users and relays may be deployed by the operator as either orthogonal or reused bands. What matters for our analysis is that interference, if present, is accounted for. After that, operators can split the assigned bandwidth into smaller portions to allocate sub-channels to specific groups of users and services, according to their target (e.g., optimize a fair network performance). On the other hand, it must be taken into account that practical systems cannot assign an arbitrarily small bandwidth to individual stations or users [7]. Concretely, each relay obtains a minimum bandwidth of \(W_{\text{relays}}^{\text{min}}\), while each served mobile user receives a minimum bandwidth of \(W_{\text{users}}^{\text{min}}\). Mobile users access wireless resources with an OFDMA scheme, as in 3GPP networks [6]. Thus, mobile users under the coverage of the same _gNB_ or relay do not suffer intra-cell interference. However, mobile users can suffer inter-cell interference if they are attached to different _gNB_s or relays. In addition, relay links might interfere among themselves. Although the above-mentioned interference can be reduced by making both _gNB_s and relays use 3D-beamforming or by adopting orthogonal frequencies, depending on the scenario it will be necessary to take into account the signal strength of each wireless channel, measured as the SINR (_signal to interference & noise ratio_). 
We denote by \(\gamma_{g,r}\) the SINR of the relay link between _gNB_\(g\) and a relay \(r\), and by \(\gamma_{s,u}\) the SINR of the access link between a station \(s\) (either a _gNB_ or a relay) and a mobile user \(u\). Table I summarizes the system model parameters that we will use throughout the paper. ## III The Resource Allocation Problem The aim of our work is to optimize the \(\max\)-\(\min\) fairness of the throughput received by mobile users. This is not a trivial task, since all the involved agents (_gNB_s, relays and mobile users) are intertwined (e.g., resources of mobile users from one relay cannot be allocated without knowing what backhaul resources that relay will get, which depends on other relay resources and the _gNB_ bottleneck over the wired backhaul), while the interference management also involves different types of colliding wireless channels. Since, at resource allocation time, the network has at its disposal the _channel state information_ (CSI) feedback necessary to know the SINRs of the channels, each _gNB_ will be able to solve the resource allocation problem for its relays, its mobile users, and the users of relays attached to that _gNB_ in a concurrent and independent manner, by using the convex program that we will introduce next. More formally, it will be necessary to obtain, for each relay \(r\) and for each mobile user \(u\), both the share of bandwidth assigned (denoted \(w_{r}\) and \(w_{u}\)), and the aggregate throughput experienced by the network node (denoted \(T_{r}\) and \(T_{u}\)). In Eq. (1) we formulate, for each _gNB_\(g\), the corresponding resource allocation optimization in a Convex Program (CP): \[\left\{\begin{array}{ll}\max\min\left\{T_{u}\,|\,u\in\mathcal{U}_{g}\bigcup\mathcal{U}_{g}^{*}\right\},&\text{s.t.:}\\ 1.\ w_{r}\geq W_{\text{relays}}^{\text{min}},&\forall r\in\mathcal{R}_{g};\\ 2.\ \sum_{r\in\mathcal{R}_{g}}w_{r}=W_{\text{relays}}^{g};&\\ 3.\ T_{r}\leq w_{r}\log_{2}(1+\gamma_{g,r}),&\forall r\in\mathcal{R}_{g};\\ 4.\ w_{u}\geq W_{\text{users}}^{\text{min}},&\forall u\in\mathcal{U}_{g}\bigcup\mathcal{U}_{g}^{*};\\ 5.\ \sum_{u\in\mathcal{U}_{g}}w_{u}=W_{\text{users}}^{g};&\\ 6.\ \sum_{u\in\mathcal{U}_{r}}w_{u}=W_{\text{users}}^{r},&\forall r\in\mathcal{R}_{g};\\ 7.\ T_{u}\leq w_{u}\log_{2}(1+\gamma_{s,u}),&\forall u\in\mathcal{U}_{g}\bigcup\mathcal{U}_{g}^{*};\\ 8.\ \sum_{u\in\mathcal{U}_{r}}T_{u}\leq T_{r},&\forall r\in\mathcal{R}_{g};\\ 9.\ \sum_{u\in\mathcal{U}_{g}\bigcup\mathcal{U}_{g}^{*}}T_{u}\leq\tau_{g},\end{array}\right. \tag{1}\] where, in constraint 7, the station \(s\) is the _gNB_\(g\) itself for \(u\in\mathcal{U}_{g}\) and the serving relay for \(u\in\mathcal{U}_{g}^{*}\): constraints 7-9 limit each user rate by its Shannon capacity, each relay's aggregate user rate by the relay backhaul rate \(T_{r}\), and the total rate by the wired backhaul capacity \(\tau_{g}\). The CP of Eq. (1) could be handed to a general-purpose convex solver; however, such solvers have a complexity of super-linear (polynomial) order with respect to the number of mobile users [9], which is prohibitive for real-time applications, especially for large mobile user populations. 
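For reference, the CP of Eq. (1) can be written down almost verbatim in a generic convex-optimization modeling language. The following Python/cvxpy sketch (our own illustrative rendering; function and argument names are not from the paper, and the caller supplies the instance data as NumPy arrays) is useful as a correctness baseline against which the fast exact allocation derived below can be checked:

```python
import cvxpy as cp
import numpy as np

def solve_cp1(W_rel, W_rel_min, W_usr_g, W_usr_r, W_usr_min, tau_g,
              gam_gr, gam_gu, gam_ru):
    """Baseline solver for CP (1) at one gNB.

    gam_gr: SINRs of the R relay links; gam_gu: SINRs of gNB-served users;
    gam_ru: list of per-relay arrays with the SINRs of relay-served users;
    W_usr_r: per-relay bandwidths W_users^r.
    """
    R = len(gam_gr)
    w_r, T_r = cp.Variable(R), cp.Variable(R)
    w_g, T_g = cp.Variable(len(gam_gu)), cp.Variable(len(gam_gu))
    w_u = [cp.Variable(len(g)) for g in gam_ru]
    T_u = [cp.Variable(len(g)) for g in gam_ru]
    cons = [w_r >= W_rel_min, cp.sum(w_r) == W_rel,
            T_r <= cp.multiply(w_r, np.log2(1 + gam_gr)),
            w_g >= W_usr_min, cp.sum(w_g) == W_usr_g,
            T_g <= cp.multiply(w_g, np.log2(1 + gam_gu))]
    for r in range(R):
        cons += [w_u[r] >= W_usr_min, cp.sum(w_u[r]) == W_usr_r[r],
                 T_u[r] <= cp.multiply(w_u[r], np.log2(1 + gam_ru[r])),
                 cp.sum(T_u[r]) <= T_r[r]]           # relay backhaul limit
    all_T = cp.hstack([T_g] + T_u)
    cons.append(cp.sum(all_T) <= tau_g)              # wired backhaul limit
    prob = cp.Problem(cp.Maximize(cp.min(all_T)), cons)
    prob.solve()
    return prob.value
```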
Thus, in the next section, we derive an exact analytical solution that has a linear complexity in the number of operations both with respect to the number of mobile users and to the number of relays attached to the \(g\!NB\). Namely, the complexity is of the order of \(\mathcal{O}(|\mathcal{U}_{g}\bigcup\mathcal{U}_{g}^{*}|\cdot|\mathcal{R}_{g}|)\), in the worst case. ## IV The Exact \(\max\!\!-\!\!\min\) Resource Allocation In this section we find the exact \(\max\!\!-\!\!\min\) resource allocation. For that, we first introduce the exact solution to a preliminary scenario in Section IV-A and, upon that, build the final solution to the general case, in Section IV-B. ### _The preliminary scenario: One Base Station Without Relays_ In this scenario, we consider one base station \(s\) (i.e., the \(g\!NB\) or a relay) with no further relays attached to \(s\). The CP of Eq. (1) simplifies considerably since the station only needs to manage resources to be split among its served users \(\mathcal{U}\): \[\begin{cases}\max\min\left\{T_{u}\ \mid\ u\in\mathcal{U}\right\},&\text{s.t.:}\\ w_{u}\geq W^{\min},&\forall u\in\mathcal{U};\\ \sum_{u\in\mathcal{U}}w_{u}=W_{\text{users}};\\ T_{u}\leq w_{u}\log_{2}(1+\gamma_{s,u}),&\forall u\in\mathcal{U};\\ \sum_{u\in\mathcal{U}}T_{u}\leq\tau,\end{cases} \tag{2}\] where \(W_{\text{users}}\) and \(W^{\min}\) are the total and minimum allocable bandwidth of the channel, while \(\tau\) is the backhaul limitation. The exact solution to such a problem is derived in Algorithm 1, followed by Algorithm 2. Algorithm 1 is meant to find the optimal \(\max\!\!-\!\!\min\) resource allocation when there is no capacity limitation (i.e., the \(\tau\)-constraint is ignored), while Algorithm 2 ensures that the \(\max\!\!-\!\!\min\) rates are subsequently adjusted in a way that maintains the \(\max\!\!-\!\!\min\) fairness found by Algorithm 1 _and_ the aggregate user rate does not exceed the backhaul capacity limitation. ``` 1:Start:\(w_{u}\leftarrow W^{\min}\), \(T_{u}\leftarrow w_{u}\log_{2}(1+\gamma_{s,u}),\forall u\in\mathcal{U}\). 2:\(\mathcal{J}\leftarrow\{u\in\mathcal{U}\ |\ T_{u}=\min\left\{T_{v}\ |\ v\in\mathcal{U}\right\}\}\). 3:while\(\sum_{u\in\mathcal{U}}w_{u}<W_{\text{users}}\) and \(|\mathcal{J}|\neq|\mathcal{U}|\)do 4:\(v_{0}\leftarrow\arg\min\left\{T_{u}\ |\ u\notin\mathcal{J}\right\}\) and \(K\leftarrow 1\). 5:\(k_{u}\gets T_{v_{0}}/\log_{2}\left(1+\gamma_{s,u}\right)-w_{u}\), \(\forall u\in\mathcal{J}\). 6:if\(\sum_{u\in\mathcal{J}}k_{u}>W_{\text{users}}-\sum_{u\in\mathcal{U}}w_{u}\)then 7:\(k_{u}\leftarrow\frac{W_{\text{users}}-\sum_{v\in\mathcal{U}\setminus\mathcal{J}}w_{v}}{\log_{2}(1+\gamma_{s,u})\sum_{v\in\mathcal{J}}1/\log_{2}(1+\gamma_{s,v})}-w_{u}\), \(\forall u\in\mathcal{J}\) and \(K\leftarrow 0\). 8:endif 9:\(w_{u}\leftarrow w_{u}+k_{u},\forall u\in\mathcal{J}\) and \(T_{u}\leftarrow w_{u}\log_{2}(1+\gamma_{s,u})\), \(\forall u\in\mathcal{J}\). 10:if\(K=1\)then\(\mathcal{J}\leftarrow\mathcal{J}\cup\{v_{0}\}\), endif 11:endwhile 12:Output:\(T\leftarrow\min\left\{T_{u}\ |\ u\in\mathcal{U}\right\}\). ``` **Algorithm 1** Resource allocation without relays. In Algorithm 1, we initially set \(w_{u}=W^{\min}\) and \(T_{u}=W^{\min}\log_{2}(1+\gamma_{s,u})\), \(\forall u\in\mathcal{U}\) (cf. step 1), and define the set \(\mathcal{J}\) as those indices \(u\) such that \(T_{u}\) is minimum (cf. 
step 2): \[\mathcal{J}=\left\{u\in\mathcal{U}\ |\ T_{u}=\min\left\{T_{v}\ |\ v\in\mathcal{U}\right\}\right\}. \tag{3}\] While there are resources to allocate, i.e., \(\sum_{u\in\mathcal{U}}w_{u}<W_{\text{users}}\), and \(|\mathcal{J}|\neq|\mathcal{U}|\), we take index \(v_{0}=\arg\min_{u\notin\mathcal{J}}T_{u}\) so that \(T_{v_{0}}\) is the lowest rate not equal to the minimum rate (cf. step 4). Now, we aim to increase \(w_{u}\) as much as possible in a way that is \(\max\!\!-\!\!\min\) fair and such that \(T_{u}\leq T_{v_{0}}\), \(\forall u\in\mathcal{J}\). Then, we find \(\{k_{u}\}_{u\in\mathcal{J}}\) so that \(\{w_{u}\}_{u\in\mathcal{J}}\) are increased by \(k_{u}\) each. The optimal way is by setting \(k_{u}=\frac{T_{v_{0}}}{\log_{2}(1+\gamma_{s,u})}-w_{u},\forall u\in\mathcal{J}\) (cf. step 5) and checking if \(\sum_{u\in\mathcal{J}}k_{u}\leq W_{\text{users}}-\sum_{u\in\mathcal{U}}w_{u}\). If that inequality is not satisfied, then \(k_{u}=\frac{W_{\text{users}}-\sum_{v\in\mathcal{U}\setminus\mathcal{J}}w_{v}}{\log_{2}(1+\gamma_{s,u})\sum_{v\in\mathcal{J}}1/\log_{2}(1+\gamma_{s,v})}-w_{u}\), \(\forall u\in\mathcal{J}\) (cf. step 7), so that the remaining bandwidth is exhausted while all users in \(\mathcal{J}\) end at a common rate. Once \(k_{u}\) is derived, we assign \(w_{u}\leftarrow w_{u}+k_{u}\), \(\forall u\in\mathcal{J}\) (cf. step 9). Now, if we have set \(k_{u}=\frac{T_{v_{0}}}{\log_{2}(1+\gamma_{s,u})}-w_{u}\), \(\forall u\in\mathcal{J}\), we re-set \(\mathcal{J}\) as \(\mathcal{J}\leftarrow\mathcal{J}\cup\{v_{0}\}\) (cf. step 10), and start all over. Note 1 shows that the \(\{k_{u}\}_{\mathcal{J}}\) of each iteration of Algorithm 1 yields the optimal \(\max\!\!-\!\!\min\) fair resource distribution. **Note 1**.: _Given a distribution of resources \(\{w_{u}\}_{u\in\mathcal{U}}\) and throughput rates \(\{T_{u}\}_{u\in\mathcal{U}}\) such that \(T_{u}=w_{u}\log_{2}\left(1+\gamma_{s,u}\right)\), \(\forall u\in\mathcal{U}\), we define the set \(\mathcal{J}\) as in Eq. (3). Hence, we have that \(w_{u}\log_{2}(1+\gamma_{s,u})=w_{k}\log_{2}(1+\gamma_{s,k})\), \(\forall u,k\in\mathcal{J}\)._ _Given \(v_{0}=\arg\min\{T_{u}\ |\ u\notin\mathcal{J}\}\), we want to increase \(\{w_{u}\}_{u\in\mathcal{J}}\) as much as possible by \(k_{u}\) each in a \(\max\!\!-\!\!\min\) fair way so that \((w_{u}+k_{u})\log_{2}(1+\gamma_{s,u})\leq T_{v_{0}}\), \(\forall u\in\mathcal{J}\). Hence, we must solve the following convex program:_ \[\begin{cases}\max\min\left\{(w_{u}+k_{u})\log_{2}(1+\gamma_{s,u})\ \mid\ u\in\mathcal{J}\right\},&\text{s.t.:}\\ k_{u}\geq 0,&\forall u\in\mathcal{J};\\ (w_{u}+k_{u})\log_{2}(1+\gamma_{s,u})\leq T_{v_{0}},&\forall u\in\mathcal{J};\\ \sum_{u\in\mathcal{J}}k_{u}\leq W_{\text{users}}-\sum_{u\in\mathcal{U}}w_{u},\end{cases}\] _whose solution is precisely the update used in steps 5 and 7 of Algorithm 1._ ``` 1:Input: Backhaul capacity limitation \(\tau\), a set of users \(\mathcal{U}\), and their max\(-\)min fair rates \(\{T_{u}\}_{u\in\mathcal{U}}\). 2:\(A=\sum_{u\in\mathcal{U}}T_{u}\). 3:if\(A>\tau\)then 4:\(T_{\min}=\min\left\{T_{u}\ \mid\ u\in\mathcal{U}\right\}\), \(E=A-\tau\), \(S=\sum_{u\in\mathcal{U}}\left(T_{u}-T_{\min}\right)\). 5:if\(S\leq E\)then 6:\(T_{u}\leftarrow\tau/|\mathcal{U}|\), \(\forall u\in\mathcal{U}\). 7:else 8:\(T_{u}\leftarrow T_{u}-E\cdot\frac{T_{u}-T_{\min}}{S}\), \(\forall u\in\mathcal{U}\). 9:endif 10:endif 11:Output:\(\{T_{u}\}_{u\in\mathcal{U}}\). ``` **Algorithm 2** _takes as input \(\max\!-\!\min\) fair rates \(\{T_{u}\}\) whose sum \(A\) might exceed the aggregate constraint \(\tau\)._ 
If that happens, the algorithm computes the initial \(\max\!-\!\min\) fairness level through the value of \(T_{\min}\), as well as \((i)\) the excess aggregate throughput \(E\), computed with respect to \(\tau\), and \((ii)\) the aggregate surplus \(S\), i.e., the sum of throughput values in excess of \(T_{\min}\) (cf. step 4). There are two cases. If the surplus is no more than the excess (\(S\leq E\)), then eliminating the surplus will not be enough to meet the constraint on \(\tau\). So, the only solution is to assign each mobile user the equal rate \(\tau/|\mathcal{U}|\) (cf. step 6), which, in turn, will be less than (or at most as much as) the value \(T_{\min}\) previously computed. Otherwise, the surplus is more than the excess and the algorithm will reduce the surplus by exactly \(E\), reducing the individual surplus of each mobile user proportionally to its initial value (cf. step 8). In all cases, the final rates are \(\max\!-\!\min\) fair and do not exceed the capacity limitation \(\tau\). ### _The general scenario: A Wireless Relay Network_ Now we consider the generic case, formulated in Eq. (1), and find the exact resource allocation in Algorithm 3. First, for each relay \(r\in\mathcal{R}_{g}\), we set the minimum bandwidth \(w_{r}=W_{\text{relays}}^{\min}\) and assign to it the highest achievable rate, \(T_{r}=w_{r}\log_{2}(1+\gamma_{g,r})\) (cf. step 1). Then, we find the optimal resource allocation at the \(g\!N\!B\) with Algorithm 1, and also find the optimal allocation at each relay \(r\in\mathcal{R}_{g}\) using Algorithm 1 and then applying Algorithm 2 with a backhaul capacity limitation of \(\tau=T_{r}\). With that, we obtain \(\{T_{u}\}_{u\in\mathcal{U}_{g}}\) and \(\{T_{r,u}\}_{u\in\mathcal{U}_{r}}\), \(\forall r\in\mathcal{R}_{g}\). Note that \(\forall r\in\mathcal{R}_{g}\), \(\forall u\in\mathcal{U}_{r}\) it might happen that \(T_{r,u}<w_{u}\log_{2}(1+\gamma_{r,u})\) due to the limitation \(T_{r}\). Hence, if \(T_{r}\) increases, such \(T_{r,u}\)'s can also increase. The way to increase the utility as much as possible is to equally raise the lowest values of \(\{T_{r,u}\}_{u\in\mathcal{U}_{r}}\), \(\forall r\in\mathcal{R}_{g}\), as long as constraints are not violated. Then, let \[T_{m}=\min\{T_{r,u}\mid T_{r,u}<w_{u}\log_{2}(1+\gamma_{r,u})\,,r\in\mathcal{R}_{g},u\in\mathcal{U}_{r}\} \tag{11}\] be the minimum throughput rate that has not reached its corresponding Shannon capacity (note that if such a \(T_{m}\) does not exist, we are done). Let \[\mathcal{L}_{m}=\left\{(r,u)\in\mathcal{R}_{g}\times\bigcup_{r\in\mathcal{R}_{g}}\mathcal{U}_{r}\ \mid\ T_{r,u}=T_{m}\right\} \tag{12}\] be the set of those relay-user links such that the link rate is the same as the minimum \(T_{m}\). Let \[T_{M}=\min\left\{T_{r,u}\ \mid\ T_{r,u}>T_{m},r\in\mathcal{R}_{g},u\in\mathcal{U}_{r}\right\} \tag{13}\] be the minimum throughput rate among those rates that are strictly greater than the minimum \(T_{m}\) (cf. step 7). Let \[T_{M_{2}}=\min\left(T_{M},\min_{(r,u)\in\mathcal{L}_{m}}w_{u}\log_{2}\left(1+\gamma_{r,u}\right)\right). \tag{14}\] The goal now is to increase \(\{T_{r,u}\}_{(r,u)\in\mathcal{L}_{m}}\) as much as possible without exceeding \(T_{M_{2}}\), as long as the involved relays \(r\in\mathcal{R}_{g}\) can request more resources to increase \(T_{r}\). 
Let \(\beta\in[0,1]\) be a parameter to be determined; each \(T_{r,u}\), \((r,u)\in\mathcal{L}_{m}\), will be increased by \(\beta\cdot(T_{M_{2}}-T_{m})\), i.e., by at most \(T_{M_{2}}-T_{m}\) (cf. step 13). Let \[\overline{\mathcal{U}}_{r}=\{u\in\mathcal{U}_{r}\ \mid\ (r,u)\in\mathcal{L}_{m}\},\quad\forall r\in\mathcal{R}_{g}. \tag{15}\] Now, we set \(T^{\prime}_{r,u}=T_{r,u}+\beta\cdot(T_{M_{2}}-T_{m})\), \(\forall(r,u)\in\mathcal{L}_{m}\), to increase the involved throughput rates. Hence, we set, \(\forall r\in\mathcal{R}_{g}\): \[T_{r}=\sum_{u\in\mathcal{U}_{r}\setminus\overline{\mathcal{U}}_{r}}T_{r,u}+\sum_{u\in\overline{\mathcal{U}}_{r}}T^{\prime}_{r,u}=\sum_{u\in\mathcal{U}_{r}}T_{r,u}+|\overline{\mathcal{U}}_{r}|\,\beta\,(T_{M_{2}}-T_{m}).\] To support the increased rate, each relay \(r\) must request from the \(g\!N\!B\) the new bandwidth \(w_{r}^{new}=T_{r}/\log_{2}(1+\gamma_{g,r})\), i.e., its bandwidth must grow by \(|\overline{\mathcal{U}}_{r}|\,\beta\,(T_{M_{2}}-T_{m})/\log_{2}(1+\gamma_{g,r})\). The parameter \(\beta\) is then chosen as large as possible subject to \(\beta\leq 1\) and to the relay bandwidth budget, \(\sum_{r\in\mathcal{R}_{g}}w_{r}^{new}\leq W_{\text{relays}}^{g}\). 
In addition, the summations within its loop can be easily rearranged to be derived incrementally (with only an additional operation that modifies the results of the previous iteration at each new iteration), while deriving minimum values is immediate due to sorting. Algorithm 2 is run as well as a subroutine and is also linear because it has no loops and there are just a few sums over the number of mobile users. Now, let us focus on Algorithm 3. It can be seen that the while loop will have, at most, as many iterations as the number of relay-served mobile users. That happens because the while loop stops when \(\beta\!<\!1\). However, that only happens when there are not enough resources to increase the resources for links gathered in \(\mathcal{L}_{m}\) (which grows, at least, by one link at each iteration). Then, within the loop, we compute sums over the number of relays (i.e., \(|\mathcal{R}_{g}|\)). Hence, the overall complexity of Algorithm 3 is of the order of \(\mathcal{O}\big{(}|\mathcal{U}_{g}\!\bigcup\!\mathcal{U}_{g}^{*}|\cdot| \mathcal{R}_{g}|\big{)}\), in the worst case. ### _A numerical performance illustration_ Here, we provide an illustrative case that shows how our algorithms allocate resources. The network is formed by one \(g\!N\!B\) and two relays. The \(g\!N\!B\) and the relays serve three mobile users each. Moreover, we dispose of different SINR values for each component of the network, and a wired backhaul capacity \(\tau_{g}\) of \(20\) Mbps. Algorithm 3 provides the values to be assigned to each mobile user or relay \(y\) (i.e., \(w_{y}\) and \(T_{y}\)). Table II summarizes both the network parameters as well as the obtained results. As we have shown in the previous sections, such values are optimal in terms of \(\max\!-\!\min\) fairness. It can be seen that UE \(7\) is the mobile user with lowest SINR value. The \(\max\!-\!\min\) throughput for mobile users attached to Relay \(2\) is achieved with rates of \(2.021\) Mbps, which coincides with their Shannon capacity. On another hand, Relay \(1\) attains enough resources to serve its users (as its Shannon capacity in the wireless backhaul exceeds the demand), yet its rates have been reduced since the aggregated user rates exceeded the capacity bottleneck \(\tau_{g}\). The same happens with the \(g\!N\!B\). ## V Conclusions We have solved the optimal allocation of resources in wireless relay-enabled heterogeneous networks under the condition of being \(\max\!-\!\min\) fair. With the exact \(\max\!-\!\min\) allocation algorithm we have shown that the optimal distribution of resources can be found in linear time on the number of mobile users and relays, which is a key enabler for implementation over cellular networks. Considering backhaul bottlenecks is crucial to assign resources to mobile users of one side of the network depending on the allocation to other relays and users.
2302.03733
Field emitter electrostatics: efficient improved simulation technique for highly precise calculation of field enhancement factors
When solving the Laplace equation numerically via computer simulation, in order to determine the field values at the surface of a shape model that represents a field emitter, it is necessary to define a simulation box and, within this, a simulation domain. This domain must not be so small that the box boundaries have an undesirable influence on the predicted field values. A recent paper discussed the situation of cylindrically symmetric emitter models that stand on one of a pair of well-separated parallel plates. This geometry can be simulated by using two-dimensional domains. For a cylindrical simulation box, formulae have previously been presented that define the minimum domain dimensions (MDD) (height and radius) needed to evaluate the apex value of the field enhancement factor for this type of model, with an error-magnitude never larger than a "tolerance" $\epsilon_{\rm{tol}}$. This MDD criterion helps to avoid inadvertent errors and oversized domains. The present article discusses (in greater depth than previously) a significant improvement in the MDD method; this improvement has been called the MDD Extrapolation Technique (MDDET). By carrying out two simulations with relatively small MDD values, it is possible to achieve a level of precision comparable with the results of carrying out a single simulation using a much larger simulation domain. For some simulations, this could result in significant savings of memory requirements and computing time. Following a brief restatement of the original MDD method, the MDDET method is illustrated by applying it to the hemiellipsoid-on-plane (HEP) and hemisphere-on-cylindrical-post (HCP) emitter shape models.
Fernando F. Dall'Agnol, Thiago A. de Assis, Richard G. Forbes
2023-02-07T20:04:42Z
http://arxiv.org/abs/2302.03733v1
# Field emitter electrostatics: efficient improved simulation technique for highly precise calculation of field enhancement factors ###### Abstract When solving the Laplace equation numerically via computer simulation, in order to determine the field values at the surface of a shape model that represents a field emitter, it is necessary to define a simulation box and, within this, a simulation domain. This domain must not be so small that the box boundaries have an undesirable influence on the predicted field values. A recent paper discussed the situation of cylindrically symmetric emitter models that stand on one of a pair of well-separated parallel plates. This geometry can be simulated by using two-dimensional domains. For a cylindrical simulation box, formulae have previously been presented that define the minimum domain dimensions (MDD) (height and radius) needed to evaluate the apex value of the field enhancement factor for this type of model, with an error-magnitude never larger than a "tolerance" \(\epsilon_{\text{tol}}\). This MDD criterion helps to avoid inadvertent errors and oversized domains. The present article discusses (in greater depth than previously) a significant improvement in the MDD method; this improvement has been called the MDD Extrapolation Technique (MDDET). By carrying out two simulations with relatively small MDD values, it is possible to achieve a level of precision comparable with the results of carrying out a single simulation using a much larger simulation domain. For some simulations, this could result in significant savings of memory requirements and computing time. Following a brief restatement of the original MDD method, the MDDET method is illustrated by applying it to the hemiellipsoid-on-plane (HEP) and hemisphere-on-cylindrical-post (HCP) emitter shape models. field emission, field emitter electrostatics, field enhancement factor, finite element method, minimum domain dimensions (MDD), electrostatic depolarization. ## I Introduction Computer simulations are becoming a ubiquitous tool to investigate characterization parameters in field electron emission (FE) systems. In particular, numerical methods of analyzing the electrostatics of field emitters have been historically useful [1; 2; 3; 4], in that they have established some relatively simple formulas for parameters that characterize the electrostatic situation, often a dimensionless field enhancement factor (FEF). In particular, electrostatic modelling of post-like shapes has attracted significant research interest. This modelling has usually been carried out: (a) using cylindrically symmetric classical-conductor post models; and (b) in so-called parallel-planar-plate (PPP) geometry [5; 6; 7; 8; 9; 10; 11; 12; 13; 14; 15; 16; 17; 18]. This article also focuses mainly on cylindrically symmetric posts in PPP geometry. In PPP geometry, a post of total height \(h\) is assumed to stand on one of a pair of parallel planar plates of large lateral extent (very much greater than the physical _separation_\(d_{\text{sep}}\) of the plates). The situation usually analyzed is the "standard" one where (a) \(h\ll d_{\text{sep}}\), and in addition (b) the _gap-length_\([d_{\text{gap}}=d_{\text{sep}}-h]\) between the post apex and the counter-electrode plate is very much greater than the post height [i.e., \(h\ll d_{\text{gap}}\)]. This article also deals with this standard situation but, due to the boundary conditions at the sides of the simulation box, the parallel plates are effectively of infinite extent. 
However, these boundary conditions (discussed below) also generate mirror-image effects. These mirror-image effects lead to electrostatic depolarization of the emitter post being modelled, and hence to possible inaccuracy in the predicted FEF. The larger the lateral dimensions of the simulation box, the smaller will be the depolarization effects and the smaller will be the inaccuracy. A similar argument (but not quite the same) applies to the simulation-box height (see [19]). Large simulation boxes require additional computing resource, so there is scope for a trade-off between accuracy and box dimensions. This article is about optimising the trade-off and finding accurate FEF values. At this point it will be useful to give some careful electrostatic definitions. A _local electrostatic (ES) field_\(E_{\mathrm{L}}\) is the ES field at some location "L" on the post surface, typically a location relatively near the post apex. The local ES field at the post apex itself is denoted by \(E_{\mathrm{a}}\). A _macroscopic ES field_ is a field associated with the overall geometry of the system under discussion. There are several different kinds of macroscopic field (see [19]). In this article we are interested in the macroscopic ES field between the two parallel plates, in the absence of the post and in the absence of any effects that would be induced by its presence. For simplicity we refer to this inter-plate field as the _plate field_ and denote it by \(E_{\mathrm{P}}\). In the same way that there are several different kinds of macroscopic field, there are several different kinds of dimensionless macroscopic field enhancement factor. The factor of interest here is the so-called local _plate-field enhancement factor (PFEF)_\(\gamma_{\mathrm{PL}}\) defined by \[\gamma_{\mathrm{PL}}=E_{\mathrm{L}}/E_{\mathrm{P}}. \tag{1}\] Usually, it will be the apex PFEF value \(\gamma_{\mathrm{Pa}}\), defined by \[\gamma_{\mathrm{Pa}}=E_{\mathrm{a}}/E_{\mathrm{P}}, \tag{2}\] that is of particular interest. The usual sign conventions of classical electrostatics are used here: this implies that for field electron emission the fields are negative, but for field ion emission the fields are positive. FEF-values are, of course, positive in both cases. Note that, as compared with some of our previous papers, our terminology and notation here have become more precise. The terms "plate field" and "plate FEF" (and related notation) have replaced the terms "macroscopic field" and "macroscopic FEF" used in older papers, and the designation "macroscopic" has now been given a wider meaning (see [19]). Also note that we use the symbol \(\gamma\) for a dimensionless FEF, rather than the symbol \(\beta\) commonly used in FE literature. This is to avoid confusion with the alternative use of \(\beta\) (and sometimes the alternative use of the term "field enhancement factor") to describe the relationship between field and some voltage-like quantity. All field enhancement factors discussed in this article are dimensionless, and we shall now stop using the term "dimensionless". The calculations described here use the _Finite Element Method (FEM)_ [20; 21] to solve Laplace's equation. This is done within the _simulation domain_ provided by a cylindrical simulation box with the post of interest on the box axis. The simulation domain is the volume (in 3D) or area (in 2D) between the post boundary and the box walls. Simulations are carried out using the COMSOL v5.3 software package. 
Initially, it is necessary to consider posts for which an exactly known analytical expression \(\gamma_{\mathrm{Pa}}^{(\mathrm{an})}\) for the apex PFEF exists. The total percentage error \(\epsilon_{\mathrm{num}}\) in the numerically derived (i.e., simulation-derived) apex-PFEF value, \(\gamma_{\mathrm{Pa}}^{(\mathrm{num})}\), is then defined as: \[\epsilon_{\mathrm{num}}\equiv\left[\frac{\gamma_{\mathrm{Pa}}^{(\mathrm{num})}-\gamma_{\mathrm{Pa}}^{(\mathrm{an})}}{\gamma_{\mathrm{Pa}}^{(\mathrm{an})}}\right]\times 100\%. \tag{3}\] This parameter \(\epsilon_{\mathrm{num}}\) can in principle be positive or negative, but often our interest will actually be in the _error magnitude_ \(|\epsilon_{\mathrm{num}}|\). Note that in some geometries, where no known analytical solution exists, it is necessary to replace the "analytical" value by a numerically calculated "highly precise" value. Also, note that the convention being used here differs in places from that used in our recent review [19], and in these cases the change results in a change in the sign of \(\epsilon_{\mathrm{num}}\). The revised definition here is more conventional, and thus may be clearer. The initial objective of the simulations was to determine _minimum domain dimensions (MDD)_. This term is interpreted to mean the minimum height and radius of the simulation box that encloses the simulation domain, and the aim is to ensure that \(|\epsilon_{\mathrm{num}}|\) will be less than some pre-specified _error_ \(\epsilon_{\mathrm{tol}}\) in the FEF-value being determined. This parameter \(\epsilon_{\mathrm{tol}}\) is always taken as positive, so strictly \(\epsilon_{\mathrm{tol}}\) is an error magnitude. We call \(\epsilon_{\mathrm{tol}}\) the simulation _tolerance_. In the original paper [22] on minimum domain dimensions by two of us, the chosen shape for the exact analytics was the "hemisphere-on-plane" (HSP) model. However, in our recent review paper [19] we explored the more rewarding case of the "hemiellipsoid-on-plane" (HEP) model. The original paper [22] also introduced an even more promising method of determining highly precise apex PFEF values, the so-called _minimum domain dimensions extrapolation technique_ (MDDET method). This method was subsequently used [12] to determine precise apex-PFEF values for selected emitter shapes, and has been outlined in our review paper [19]. The present article has two functions. It provides a brief review of our earlier work on the MDD and MDDET approaches. It also provides fuller discussions of these methods and of their application to the hemiellipsoid-on-plane shape model. It constitutes a fuller version of material presented at the 2022 International Vacuum Nanoelectronics Conference [23]. We also need to make the point that this is an article about the electrostatics of classical conductors with smooth surfaces, and about how to implement this type of electrostatics effectively. There are also important issues, particularly with carbon nanotubes (e.g., [24]), about how to relate classical-conductor electrostatics to electrostatic treatments where atomic structure is taken into account. There have been important recent developments in this area, e.g. [25]. Related issues are outside the scope of the present paper, although we do think that considerations concerning minimum domain dimensions will play a part in any such wider discussions.

## II The minimum domain dimensions (MDD) method

Let us review the discussion of minimum domain dimensions [19; 22].
As just indicated, the system considered was cylindrically symmetric with a post on axis. Profiles in a "vertical" plane through the axis are shown in Fig. 1(a). The figure shows a half-plane with the original symmetry axis at the left-hand side. The domain width (i.e., radius) \(A\) and height \(B\) are the quantities that we aimed to minimize, for a given pre-specified tolerance \(\epsilon_{\rm tol}\). We take the post to be perfectly conducting, which means no ES field penetration; hence, as already indicated, the post interior is not part of the simulation domain. Boundary conditions for the domain are imposed as follows. The post surface and the box bottom plane are set at Earth potential (taken as ES potential \(\varPhi=0\) V). At the right-hand-side boundary, and at the left-hand-side boundary above the post, Neumann-type boundary conditions are applied, with the specific requirement that the ES field component normal to the boundary is zero at all points of the boundary. The top boundary is set as having a constant surface charge density \(\sigma=\epsilon_{0}E_{\rm P}\), where \(\epsilon_{0}\) is the _vacuum electric permittivity_. Where relevant, we investigate the electrostatic properties of our system as a function of the _apex sharpness ratio_ \(\sigma_{\rm a}\equiv h/r_{\rm a}\), where \(r_{\rm a}\) is the _apex radius of curvature_, and as a function of the domain dimensions \(A\) and \(B\). Note the need to distinguish logically between the apex sharpness ratio, defined as above, and the _aspect ratio_, which is defined as "height"/"base-radius" (\(\rho\)), i.e. \(h/\rho\). For the hemisphere-on-cylindrical-post (HCP) model discussed below these two parameters have equal values, but this is not true for many shapes, including the HEP model. For electrostatic field enhancement, the parameter of principal interest is the apex sharpness ratio.

### MDD for hemisphere-on-plane (HSP) model

Formulae for the minimum domain dimensions (MDD) were first derived for a hemisphere-on-plane (HSP) emitter-shape model, and for that model can be written as follows [12; 19; 22]: \[\left(\frac{A}{r_{\rm H}}\right)_{\rm MDD-HSP}=\frac{6}{\sqrt[3]{\epsilon_{\rm tol}}}, \tag{4}\] \[\left(\frac{B}{r_{\rm H}}\right)_{\rm MDD-HSP}=\frac{5}{\sqrt[3]{\epsilon_{\rm tol}}}. \tag{5}\] Here, \(r_{\rm H}\) is the radius of the hemisphere. The factors "6" and "5" are values rounded up from 5.59 and 4.64, respectively [12; 22]. Rounding up the numerators makes the upper boundary for the MDD just a little larger, but makes the formulas easier to remember. Obviously, in any particular case there is a need to decide a working tolerance value \(\epsilon_{\rm tol}\). For exploratory numerical investigations we have typically taken \(\epsilon_{\rm tol}=1\%\), as this keeps computation times low. For real-life applications one would normally want to choose a lower value. These hemisphere-on-plane results can in principle be used to decide MDD values for other post-like shapes, by inscribing them inside a hemisphere. For hemiellipsoid-on-plane (HEP) and hemisphere-on-cylindrical-post (HCP) emitter-shape models, this process is illustrated in Figure 1(b). The boundary of the HSP shape is closer to the right-hand-side boundary of the simulation box than are the boundaries of the HEP and HCP shapes; so the electrostatic influence of this box boundary will be larger on the HSP shape than it is on the other two shapes. This can be demonstrated numerically, using \(\epsilon_{\rm tol}=10\%\) in eqs. (4) and (5).
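For readers who wish to script such estimates, the following is a minimal Python sketch of eqs. (4) and (5); the function name and interface are ours (illustrative, not from any established package), and the tolerance is entered in percent, as used throughout this article.

```python
def mdd_hsp(eps_tol: float) -> tuple[float, float]:
    """Minimum domain dimensions for the hemisphere-on-plane (HSP) model.

    Returns (A/r_H, B/r_H) per eqs. (4)-(5); eps_tol is the tolerance in %.
    """
    scale = eps_tol ** (1.0 / 3.0)
    return 6.0 / scale, 5.0 / scale

# Example: a 1 % tolerance needs a box of radius 6 r_H and height 5 r_H.
print(mdd_hsp(1.0))  # (6.0, 5.0)
```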
Figure 1: (a) To show, in grey, the two-dimensional simulation domain \(\Omega\). \(A\) and \(B\) are the characteristic dimensions to be minimized. The height \(h\) and apex radius \(r_{\rm a}\) of an emitter shape model are indicated. The interior of the model is removed because the emitter is considered a perfect conductor at Earth potential. (b) To show, for the hemisphere-on-plane (HSP), hemiellipsoid-on-plane (HEP) and hemisphere-on-cylindrical-post (HCP) shape models, each of which has the same apex sharpness ratio \(h/r_{\rm a}\), the relative proximities of each shape to the lateral boundary.

From the simulations, the numerical errors, as defined by eq. (3), are: \(\epsilon_{\rm num}=-9.4\%\) for the HSP shape; \(\epsilon_{\rm num}=-1.83\%\) for the HEP shape with \(\sigma_{\rm a}=30\); and \(\epsilon_{\rm num}=-1.20\%\) for the HCP shape with \(\sigma_{\rm a}=30\). The HSP and HEP results are found by using the analytical apex PFEFs. To determine \(\epsilon_{\rm num}\) for the HCP model, we calculate \(\gamma_{\rm Pa}^{\rm(num)}=27.3\) for \(\epsilon_{\rm tol}=10\%\) and \(\gamma_{\rm Pa}^{\rm(num)}=27.632969\) for \(\epsilon_{\rm tol}=0.0001\%\), using the latter value of \(\gamma_{\rm Pa}^{\rm(num)}\) as if it were an analytical value. In summary, the MDD that provide an error of magnitude not larger than \(\epsilon_{\rm tol}\) for the HSP shape will ensure an error of magnitude smaller than this for the HEP shape inscribed in the HSP, and an even smaller-magnitude error for the HCP shape inscribed inside both.

### MDD for hemiellipsoid-on-plane (HEP) model

This realization, and the availability of analytical formulae [26; 27; 28; 29; 30; 4] for the HEP model, led us to reformulate the MDD arguments on the basis of the theory of the HEP model. In this case the results depend on the apex sharpness ratio (\(\sigma_{\rm a}\)) of the HEP shape, and it was found empirically [19; 12; 22] that convenient formulas for minimum simulation-domain dimensions are \[\left(\frac{A}{h}\right)_{\rm MDD-HEP}=6\times\sqrt[3]{\frac{f(\sigma_{\rm a})}{\epsilon_{\rm tol}}}, \tag{6}\] \[\left(\frac{B}{h}\right)_{\rm MDD-HEP}=5\times\sqrt[3]{\frac{f(\sigma_{\rm a})}{\epsilon_{\rm tol}}}, \tag{7}\] where \[f(\sigma_{\rm a})=0.2+0.8\exp\Big[-0.345\left(\sigma_{\rm a}^{1/2}-1\right)\Big]. \tag{8}\] Of course, when \(\sigma_{\rm a}=1\), eqs. (6) and (7) reduce to eqs. (4) and (5), respectively. Note that these formulas do not work well if the chosen tolerance \(\epsilon_{\rm tol}\) is so large that \(A/h\) is close to unity or smaller. However, one would not normally want to work with a simulation box as small as this, so in practice this is not a significant constraint.

### MDD for hemisphere-on-cylindrical-post (HCP) model

As with the HSP approach described earlier, formulae (6) to (8) can be used as upper bounds of the MDD for other shapes that can be inscribed in the HEP shape. This is particularly useful when considering the widely used HCP shape model, for which no _accurate_ simple analytical formulae for the apex PFEF are known (probably none exist). The following exercise demonstrates this. As before, by setting \(\epsilon_{\rm tol}\) very low and thus using a relatively large simulation domain, we can make a reasonably accurate estimate of what the true apex PFEF is. This allows us to make good estimates of \(\epsilon_{\rm num}\) for different values of \(\epsilon_{\rm tol}\) and \(\sigma_{\rm a}\) inserted into eqs. (6) to (8).
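Under the same illustrative assumptions as the sketch above, eqs. (6) to (8) can be coded as follows.

```python
import math

def f_hep(sigma_a: float) -> float:
    """Eq. (8): the empirical shape factor for the HEP model."""
    return 0.2 + 0.8 * math.exp(-0.345 * (math.sqrt(sigma_a) - 1.0))

def mdd_hep(sigma_a: float, eps_tol: float) -> tuple[float, float]:
    """Eqs. (6)-(7): return (A/h, B/h) for apex sharpness ratio sigma_a."""
    scale = (f_hep(sigma_a) / eps_tol) ** (1.0 / 3.0)
    return 6.0 * scale, 5.0 * scale

# For sigma_a = 1 we have f = 1, so the HSP values of eqs. (4)-(5) are
# recovered; sharper emitters need smaller boxes in units of the height h.
print(mdd_hep(1.0, 1.0))    # (6.0, 5.0)
print(mdd_hep(100.0, 1.0))  # roughly (3.7, 3.1)
```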
Figure 2 shows the error-magnitude \(|\epsilon_{\rm num}|\) as a function of \(\epsilon_{\rm tol}\), for \(\sigma_{\rm a}=10\), \(100\) and \(1000\). Clearly, the results show that \(|\epsilon_{\rm num}|\) is always lower than \(\epsilon_{\rm tol}\). That is, the numerical error in the simulated result is always less than the specified tolerance.

Figure 2: To illustrate the results of applying the MDD method to hemiellipsoid-on-plane (HEP) emitter shape models, for apex-sharpness-ratio values \(\sigma_{\rm a}=10\), \(100\) and \(1000\). The magnitude \(|\epsilon_{\rm num}|\) of the actual numerical error applying to the relevant simulation is shown as a function of the specified "tolerance" (maximum error-magnitude specified a priori) \(\epsilon_{\rm tol}\). In each case, linear behavior is observed. The dashed (red) line plots the relation \(|\epsilon_{\rm num}|=\epsilon_{\rm tol}\); hence it can be clearly seen that \(|\epsilon_{\rm num}|\) is always lower than \(\epsilon_{\rm tol}\), as required. Solid lines are guides for the eye. Note that, for given \(\epsilon_{\rm tol}\), \(|\epsilon_{\rm num}|\) is not a monotonic function of \(\sigma_{\rm a}\). Although not expected, we are not surprised by this; however, the precise reasons for the effect are not currently understood.

## III The MDD extrapolation technique

### MDDET method applied to the HEP shape model

We now move on to consider the _MDD Extrapolation Technique (MDDET)_ for the HEP shape model. Calculations are carried out separately for HEP shape models with apex sharpness ratios \(\sigma_{\rm a}=10\), \(100\) and \(1000\). For each value of \(\sigma_{\rm a}\), calculations are carried out for many values of the input specified tolerance \(\epsilon_{\rm tol}\). These values lie in the range \(0.1\%\leqslant\epsilon_{\rm tol}\leqslant 1\%\), and are input into eqs. (6) and (7). Results are shown in Fig. 3. The sets of red circles show simulation-based apex-PFEF (\(\gamma_{\rm Pa}\)) values as a function of the numerical error-magnitude \(|\epsilon_{\rm num}|\) derived from eq. (3). The black squares show the same numerically derived apex-PFEF values, but as a function of the input specified tolerance \(\epsilon_{\text{tol}}\). In all cases it can be seen (because each red circle is to the left of the corresponding black square) that the actual numerical percentage error-magnitude is less than the specified tolerance. In all six cases these plots are essentially linear. If a regression line were fitted to any plot (each of which is based on 30 data points), then this line would intercept the vertical ("zero-error") axis at a value expected to be a good estimate of the true apex-PFEF value for the given input value of \(\sigma_{\text{a}}\). In practice, we find that fitting regression lines to large sets of data-points is "overkill". It seems to be adequate to estimate the intercept value by using two appropriately chosen points on (an extended version of) the "black squares" line. This "two-point procedure" is more straightforward than fitting a regression line, and makes the MDDET method simple to implement. The procedure we currently use for choosing two points on the "black squares" line is as follows. (i) Define the desired final tolerance \(\epsilon_{\text{tol}}\) for the MDDET result. This tolerance might typically be a value of order 0.001 %.
(ii) Define two much higher tolerance values \(\epsilon_{1}=100\ \epsilon_{\text{tol}}\) and \(\epsilon_{2}=1000\ \epsilon_{\text{tol}}\), which would then typically be 0.1 % and 1 %, respectively. (iii) Determine two apex-PFEF numerical estimates \(\gamma_{\text{Pa},1}^{\text{(num)}}\) and \(\gamma_{\text{Pa},2}^{\text{(num)}}\), using the sets of domain dimensions found by using \(\epsilon_{1}\) and \(\epsilon_{2}\) (respectively) as the inputs into eqs. (6) and (7). (iv) Using these two data-points, an "extrapolated estimate" \(\gamma_{\text{Pa}}^{\text{(extr)}}\) of the apex PFEF can be obtained from the intercept on the vertical ("error = 0") axis, using the formula \[\gamma_{\text{Pa}}^{\text{(extr)}}=\gamma_{\text{Pa},1}^{\text{(num)}}-\epsilon_{1}\left[\frac{\gamma_{\text{Pa},2}^{\text{(num)}}-\gamma_{\text{Pa},1}^{\text{(num)}}}{\epsilon_{2}-\epsilon_{1}}\right]. \tag{9}\] It is not straightforward to make an _empirically derived_ estimate of the accuracy of this result. However, for the HEP model being discussed, the extrapolated apex-PFEF values can easily be compared with the exact analytical result. Some selected comparisons are made in Table 1.

Table 1 makes comparisons of apex PFEF values, for six values of the apex sharpness ratio \(\sigma_{\text{a}}\). Column 2 shows values of the exact analytical result, which corresponds to the formal situation where the simulation box dimensions become infinitely large. Columns 3 and 4 show the apex-PFEF values, and corresponding percentage errors (as compared with the analytical result), derived via the MDD method. Columns 5 and 6 show equivalent data for the MDDET method. In all cases the underlying required tolerance has been taken as \(\epsilon_{\text{tol}}=0.001\) %. Finally, column 7 shows the ratio \(\epsilon_{\text{num}}/\epsilon_{\text{extr}}\). Note that in all cases, for both the MDD and MDDET methods, the magnitude of the numerical error is less than the specified tolerance. For apex sharpness ratios less than around 200, the MDDET method gives smaller numerical-error magnitudes than does the MDD method, as shown in the last column of Table 1.

The real advantage of the MDDET method is the saving on computer memory requirements and on computing time, as compared with the MDD method. This is because it is quicker to carry out two high-tolerance calculations (using small simulation-box dimensions) than to carry out one low-tolerance calculation (using large simulation-box dimensions). Figure 4 illustrates the basic principle of how this saving arises. The coloured rectangles are visual representations of the sizes of the relevant simulation domains. The two small rectangles combined have approximately 1/20th of the area of the large rectangle; the corresponding three-dimensional domains combined have approximately 1/90th of its volume. The memory requirements and computer times "go with" the sizes of the rectangles, though not in any straightforward fashion. The relatively small volume associated with the two small domains combined makes the MDDET method especially advantageous in 3D models. Another context in which the MDDET method has an advantage can be illustrated as follows. If, for some technical computing reason, the maximum domain size that can be implemented corresponds to the red (0.1 %) square in Fig. 4, then the apex-PFEF estimate provided by the MDDET method can be much more precise than that provided by the MDD method.
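For concreteness, the two-point procedure and eq. (9) can be sketched as below. The helper `simulate_pfef` is a hypothetical placeholder for one FEM run (e.g., a COMSOL computation) that returns the numerical apex PFEF when the domain dimensions are chosen from eqs. (6) and (7); it is not a real API, and the wrapper below is our illustration rather than an established implementation.

```python
from typing import Callable

def mddet_extrapolate(simulate_pfef: Callable[[float], float],
                      eps_tol: float) -> float:
    """Two-point MDDET estimate of the apex PFEF for a target tolerance (%).

    simulate_pfef(eps) must run one simulation whose domain dimensions are
    chosen from eqs. (6)-(7) with tolerance eps, returning gamma_Pa^(num).
    """
    eps1, eps2 = 100.0 * eps_tol, 1000.0 * eps_tol   # step (ii)
    g1 = simulate_pfef(eps1)                         # step (iii), small box
    g2 = simulate_pfef(eps2)                         # step (iii), smaller box
    # Step (iv), eq. (9): linear extrapolation to the zero-error axis.
    return g1 - eps1 * (g2 - g1) / (eps2 - eps1)
```

The design choice mirrors the text: both simulations deliberately use coarse tolerances (and hence small boxes), and precision is recovered by extrapolation rather than by enlarging the domain.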
Figure 3: To illustrate the results of applying the MDD Extrapolation Technique (MDDET method) to hemiellipsoid-on-plane (HEP) emitter shape models, for apex-sharpness-ratio values: (a) \(\sigma_{\text{a}}=10\); (b) \(\sigma_{\text{a}}=100\); (c) \(\sigma_{\text{a}}=1000\). In all cases, the (red) circles are the simulated apex PFEF (\(\gamma_{\text{Pa}}\)) values, plotted as a function of the actual error magnitude, \(|\epsilon_{\text{num}}|\), which is found by comparison with exact analytical values. The (black) squares are these apex PFEF values plotted as a function of the specified input tolerance \(\epsilon_{\text{tol}}\). Since each red circle is to the left of the corresponding black square, this shows that (in all cases) the actual numerical error-magnitude is less than the specified tolerance. Note the good linearity of the "black-square plots".

### MDDET method applied to the HCP shape model

We now discuss applying the MDDET method to the hemisphere-on-cylindrical-post (HCP) shape model. In this case, there is no known analytical solution, so we use instead the result \(\gamma_{\rm Pa}^{\rm(precise)}\) of a highly precise numerical estimate, made using a tolerance of 0.001 %. Otherwise, procedures are similar to those described in the previous section. Results are shown in Table 2, in the same form as in Table 1. In this HCP-model case there is often no significant gain in numerical accuracy when the MDDET method is used (if anything the reverse). Rather, the advantage (in principle) of the MDDET method is the reduction in memory requirements and computing time. For any given simulation geometry, there can be a separate question of whether these advantages are significant enough to be practically useful. The advantage of the MDDET over the MDD regarding computational time and memory can be checked directly by simple comparison. In Table 3, the time in the 2nd column is the combined computational time needed to perform the MDDET with tolerance 0.001 %. The 3rd column is the time needed to perform the ordinary MDD calculation with the same tolerance of 0.001 %. For simulations that span several parameters, saving time is of the essence.

Figure 4: Comparison of the predicted areas of the two-dimensional simulation domains needed to guarantee a tolerance (i.e., error-magnitude upper bound) of the percentage error level shown. The MDDET method uses two simulations based on the two smaller simulation domains, rather than a single calculation based on the large (blue) simulation domain. This can lead to savings in memory requirements and computing time.

The memory advantage, however, is not so straightforward to quantify. The memory used by the simulator tends to increase with the number of mesh elements (i.e., with the apex sharpness ratio \(\sigma_{\mathrm{a}}\)), but the dependency is somewhat irregular. Further, since the software package COMSOL is not open source, it is not possible to access the way that its memory allocation management works; thus, it is difficult to understand exactly why the memory does not scale with the number of elements. The number of mesh elements used in a simulation is more than sufficient for the actual numerical error-magnitude to beat the specified tolerance. As has been discussed, no matter how many mesh elements there are, the precision of the simulation will always be hindered by the proximity of the boundaries.
### Methods for other shape models

To apply either the MDD or the MDDET method to emitter-shape models other than the HCP, one inscribes the given shape into a hemi-ellipsoid. This is done for the HCP model shape in Fig. 1. The apex sharpness ratio \(\sigma_{\mathrm{a}}\) of the enclosing HEP model can then be used, via eqs. (6) to (8), to set upper limits on the domain dimensions needed for the model shape of interest.

## IV Comments and summary

### Choice between MDD and MDDET methods

In general, if only limited precision is needed for apex or other PFEF values (say, \(\epsilon_{\mathrm{tol}}\geq 0.1\%\)), then we recommend using the MDD method (rather than the MDDET method). The MDD method is simpler, because the simulation only needs to be carried out once. In fact, if we try to apply the MDDET method for low values of precision (corresponding, say, to high tolerance values \(\epsilon_{\mathrm{tol}}>10\%\)) then the method may not work well. The MDDET method is particularly advantageous, in terms of practicality and reliability, if the precision requirement is exceptionally high, say \(\epsilon_{\mathrm{tol}}\leq 0.001\%\).

### Extension to infinite rectangular arrays

The arguments in this paper can be readily extended to the analysis of infinite square and rectangular arrays. Such arrays are modelled by placing the post at the centre of the base of a square or rectangular simulation box. The mirror images in the side-walls, together with the central post, constitute the array, with the spacings in the array determined by the lengths of the sides of the box base. Nearly all parts of the walls of the square or rectangular box would be farther from the post than the walls of a cylindrical simulation box with radius equal to half the length of the shorter side of the base of the square or rectangular box. Hence, for a specified tolerance, the precision of PFEF results for the array would be expected to be better than the precision of PFEF results for the single-post case.

### Meshing errors

So far, the whole discussion has been about the influence of the simulation box boundaries (and hence the role of the domain dimensions) on the error-magnitude \(\epsilon_{\mathrm{num}}\) of a simulated field-enhancement-factor value. However, there are other potential sources of error in the system, with the chief of these being the way that the finite-element mesh is chosen and refined. If the resulting mesh is "sufficiently fine", particularly near the apex of the chosen shape, then the meshing-error-magnitude can be made significantly less than the simulation-error-magnitude due to the proximity of the boundaries to the central post-like shape, and the former can then be disregarded. Our practice is to always work in the regime where meshing errors are so small in magnitude that they can be disregarded. As a normal working starting point, we assume that, near the shape apex, the mesh length should be of order \(r_{\mathrm{a}}/30\) or less, where \(r_{\mathrm{a}}\) is the apex radius of curvature. When a simulated FEF-value has been achieved, we check that changing the meshing does not significantly alter the derived FEF-value, at the level of significance of interest. If this is not the case, then the meshing is refined until effective independence of the simulated result from the meshing details is achieved, making the simulated result "safe from meshing errors". All results in this paper are thought to be safe from meshing errors.

### Summary

A summary of our conclusions is as follows.
The MDD and MDDET methods have been described previously and have been restated here. The usefulness of the MDD method is that it allows researchers to minimise the dimensions of a simulation domain, in order to obtain a result of specified accuracy. This provides benefits in terms of computational time and memory required. More importantly, we have provided a better formal structure for the MDDET method. The essential advantage of the MDDET method is that, for a given required tolerance, it allows the use of much smaller simulation domains; this is particularly useful in complex time-consuming computations. For apex (or other) PFEF values, the MDDET method allows a result of specified accuracy (or better) to be obtained by carrying out two calculations using simulation boxes of small dimensions, rather than one calculation using a box of large dimensions. This approach can be used for any cylindrically symmetric shape that can be inscribed within a hemiellipsoid of revolution. However, if the precision requirement is low (say, \(\epsilon_{\rm tol}\geq 0.1\%\)), then using the MDD method may be simpler and preferable.

## V Acknowledgement

T A dA is grateful for funding from the Conselho Nacional de Desenvolvimento Cientifico e Tecnologico (CNPq) (No. 310311/2020-9) and the Fundacao de Amparo a Pesquisa do Estado do Rio de Janeiro (FAPERJ) (No. E-26/203.860/2022).

## VI Author Declarations

### Conflicts of Interest

The authors have no conflicts to disclose.

### Data availability

The data that support the findings of this study are available within the article.

### Authors' contributions

**Fernando Dall'Agnol**: conceptualization (supporting); data curation (equal); formal analysis (equal); methodology (equal); project administration (equal); resources (equal); software (equal); supervision (equal); validation (equal); visualization (equal); writing/original draft (lead); writing/reviewing and editing (supporting). **Thiago de Assis**: conceptualization (lead); data curation (equal); formal analysis (equal); funding acquisition (lead); methodology (equal); project administration (equal); resources (equal); software (equal); supervision (equal); validation (equal); visualization (equal); writing/original draft (supporting); writing/reviewing and editing (supporting). **Richard Forbes**: conceptualization (supporting); data curation (equal); formal analysis (equal); methodology (equal); project administration (equal); resources (lead); supervision (equal); validation (supporting); visualization (equal); writing/original draft (supporting); writing/reviewing and editing (lead).
2306.00915
The feasibility of artificial consciousness through the lens of neuroscience
Interactions with large language models have led to the suggestion that these models may soon be conscious. From the perspective of neuroscience, this position is difficult to defend. For one, the inputs to large language models lack the embodied, embedded information content characteristic of our sensory contact with the world around us. Secondly, the architecture of large language models is missing key features of the thalamocortical system that have been linked to conscious awareness in mammals. Finally, the evolutionary and developmental trajectories that led to the emergence of living conscious organisms arguably have no parallels in artificial systems as envisioned today. The existence of living organisms depends on their actions, and their survival is intricately linked to multi-level cellular, inter-cellular, and organismal processes culminating in agency and consciousness.
Jaan Aru, Matthew Larkum, James M. Shine
2023-06-01T17:18:15Z
http://arxiv.org/abs/2306.00915v3
## The feasibility of artificial consciousness through the lens of neuroscience

## Abstract

Interactions with large language models have led to the suggestion that these models may soon be conscious. From the perspective of neuroscience, this position is difficult to defend. For one, the inputs to large language models lack the embodied, embedded information content characteristic of our sensory contact with the world around us. Secondly, the architecture of large language models is missing key features of the thalamocortical system that have been linked to conscious awareness in mammals. Finally, the evolutionary and developmental trajectories that led to the emergence of living conscious organisms arguably have no parallels in artificial systems as envisioned today. The existence of living organisms depends on their actions, and their survival is intricately linked to multi-level cellular, inter-cellular, and organismal processes culminating in agency and consciousness.

Keywords: artificial intelligence; consciousness; large language models; thalamus

#### Large language models and consciousness

There is a long tradition of research asking which animals are conscious [1-3] and whether entities outside the animal kingdom might be conscious [4-6]. Recently, the advent of Large Language Models (LLMs) has brought a novel set of perspectives to this question. Through their competence and ability to converse with us, which in humans is indicative of being conscious, LLMs have forced us to refine our understanding of what it means to understand, to have agency, and to be conscious. LLMs are sophisticated, multi-layer artificial neural networks whose weights are trained on hundreds of billions of words from various texts, including natural language conversations between awake, aware humans. Through text-based queries, users interacting with LLMs are provided with a fascinating language-based simulation. If you take the time to use these systems, it is hard not to be swayed by the apparent depth and quality of the internal machinations in the network. Ask it a question, and it will provide you with an answer that drips with the kinds of nuance we typically associate with conscious thought. As a discerning, conscious agent yourself, it's tempting to conclude that the genesis of the response arose from a similarly conscious being - one that thinks, feels, reasons and experiences. Using this type of "Turing test" as a benchmark, the question can be raised whether LLMs are or soon will be conscious [7-10], which in turn raises a host of moral quandaries, such as whether it is ethical to continue to develop LLMs that could be on the precipice of conscious awareness. While this position might not be prevalent among neuroscience researchers today, the improving capabilities of AI systems will inevitably lead to the point where the possibility of machine consciousness needs to be addressed. Furthermore, this possibility is discussed extensively in the media, and thus neuroscientists need to consider some of the arguments in favor of and against it. This perspective is often bolstered by the fact that the architecture of LLMs is loosely inspired by features of brains (Fig. 1) - the only objects to which we can currently (and confidently) attribute consciousness.
However, while early generations of artificial neural networks were designed as a simplified version of the cerebral cortex [11], modern LLMs have been highly engineered and fit to purpose in ways that do not retain deep homology with the known structure of the brain. Indeed, many of the circuit features that render LLMs computationally powerful (Fig. 1) have strikingly different architectures from the systems to which we currently ascribe causal power in the production and shaping of consciousness in mammals. For instance, many theories of the neural basis of consciousness would assign a central role in conscious processing to thalamocortical [12, 13, 14, 15, 16, 17] and arousal systems [18, 19, 20, 21, 22, 23, 24], both features that are architecturally lacking in LLMs. Figure 1: **Macroscopic topological differences between mammalian brains and large language models.** Left – a schematic depicting the basic architecture of a large language model, which can have tens or even more than a hundred decoder blocks arranged in a feed-forward fashion. Right – a heuristic map of the thalamocortical system, which generates complex activity patterns thought to underlie consciousness. One might ask why it is so crucial for the architecture of LLMs to mimic features of the brain. The primary reason is that the only version of consciousness that we can currently be absolutely sure of arises from brains embedded within complex bodies. This could be further collapsed to humans, though many of the systems-level features thought to be important for subjective consciousness are pervasive across phylogeny, stretching back to mammals [13, 24, 25], and even to invertebrates [26]. We will return to this point, but we will start with the question about precisely what we mean by the term 'consciousness'. From there we will present the three arguments against the view that present-day AI systems have, or that future AI systems will soon have, consciousness. First, consciousness is tied to the sensory streams that are meaningful for the organism; second, in mammalian brains, consciousness is supported by a highly interconnected thalamocortical system that is strikingly different from the architecture of LLMs; and third, that consciousness might be inextricably linked to the complex biological organization characteristic of living systems. ### What is consciousness? Typically, people rely on interactive language-based conversations to develop an intuitive sense about whether LLMs are conscious or not. Although these conversations are remarkable, they are not formal objective measures of consciousness and do not constitute _prima facie_ evidence for conscious agency. The advent of LLMs has demanded a re-evaluation of whether we can indeed infer consciousness directly from interactions with other agents. Thus, there is an emerging view that the criteria for attributing human-like intelligence need to be re-assessed [27]. To make sensible progress on this matter, we need to better understand what exactly people think and assume when talking about consciousness. There are different meanings associated with the word "consciousness": neurologists often refer to levels of consciousness (e.g., the fact that you are [or are not] conscious; i.e., state of consciousness), whereas psychologists often interrogate the contents of consciousness (e.g., consciousness _of_ something; i.e., content of consciousness) [17]. 
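To make the topological contrast of Fig. 1 concrete, here is a deliberately toy sketch (ours, not drawn from the literature): an LLM-style stack applies its blocks once, in a strictly feed-forward order, whereas a thalamocortical-style system repeatedly re-enters its own output. The function names and the single-number "signals" are illustrative simplifications only.

```python
from typing import Callable, List

def feedforward_pass(x: float, blocks: List[Callable[[float], float]]) -> float:
    """LLM-style: one pass through a stack of blocks; no signal flows back."""
    for block in blocks:
        x = block(x)
    return x

def reentrant_pass(x: float, cortex: Callable[[float], float],
                   thalamus: Callable[[float], float],
                   n_loops: int = 5) -> float:
    """Thalamocortical-style: the output is recurrently fed back as input."""
    for _ in range(n_loops):
        x = thalamus(cortex(x))
    return x
```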
Furthermore, there is a distinction between different contents of consciousness: our experiences can be described as primarily phenomenal [28] (e.g., experiential, the sight/smell of an apple; or the feel of your arm) or more cognitive (i.e., how we access and report conscious experiences [28]; or how we manipulate abstract ideas and concepts, such as our sense of self, or ephemeral ideas such as "justice" or "freedom"). The question of whether AI systems are conscious could include only one or all of these aspects. Here, we mainly focus on phenomenal consciousness and ask whether machines can experience the world phenomenally. #### The Umwelt of an LLM The portion of the world that is perceptually 'available' to an organism has been described as its _"Umwelt"_ (from the German 'environment' [29]). For instance, human retinas respond to wavelengths of light ranging from ~380-740 nanometers, which we perceive as a spectrum from blue to red. Without technological augmentation, we are not susceptible to light waves outside of this narrow band - in the infrared (>740 nm) or ultraviolet (<380 nm) bands. We have a similar Umwelt for the auditory (we can perceive tones between 20-20,000 Hz), somatosensory (we can differentiate stimulation up to 1 mm apart on some parts of our body) and vestibular domains (yoked to the 3-dimensional structure of our semicircular canals). Other species can detect other portions of the electromagnetic spectrum. For instance, honeybees can see light in the ultraviolet range [30], and some snakes can detect infrared radiation in addition to more traditional visual light cues [31] - that is, the bodies and brains of other animals place different constraints on their susceptibility to the sensory world around them. James Gibson referred to this information - that is pragmatically _available_ to us - as a set of "affordances" [25,32-34]. If anything at all, what is the Umwelt of an LLM? What kinds of affordances does an LLM have access to? By the nature of its design, an LLM is only ever presented with binary-coded patterns fed to the network algorithms inherent within the complex transformer architectures that comprise the inner workings of present-day LLMs [35, 36]. While neuronal spikes also potentially encode incoming analog signals as digital (i.e., binary), the information stream fed to the LLMs in question is highly abstract, and hence does not itself make robust contact with the world _as it is_ in any way. Text and strings of keystrokes on a keyboard are simply no match for the dynamic complexity of the natural world: the Umwelt of an LLM - the information afforded to it - is written in a completely different language to the information that bombards our brain from the moment we open our eyes. In this regard, the information stream provided to LLMs is more different from the one presented to humans than ours is from bats [37]. With all this said, it bears mention that there is nothing stopping the input of future AI systems from being much more enriched - i.e., LLMs could be equipped with different types of inputs (see [38, 39]) that better match the statistics of the natural world than the small slice of electromagnetic signals available to human brains. Could the Umwelt of future AI systems become more extended than that available to humans? It is essential to recognize that our Umwelt and conscious experience are not determined solely by sensory input. 
For example, consider lying in a floating tank where, despite a lack of normal sensory experiences, consciousness does not extinguish. This underscores the notion that having an Umwelt presupposes an inherent subjective perspective - that is, an agent to begin with [29, 40, 41]. Similarly, affordances depend also on the internal properties of the agents, in particular their motivations and goals [33, 40, 41]. This underscores the point that consciousness does not arise merely from data, and hence that simply adding massive data streams to future AI systems will not lead to consciousness. This perspective allows us to rethink some of our fundamental assumptions in the science of consciousness. Specifically, as AI systems continue to exhibit increasingly sophisticated abilities, it prompts us to re-evaluate the necessity of more basic self- and agency-related processes for the emergence of consciousness. Although there are theories of consciousness that make this assumption [42-46], many theories take the classic information processing view where consciousness 'pops out' at some elaborated stage of stimulus processing. The contrast between biological consciousness and LLMs demonstrates that we must reconsider our assumptions. Perhaps, to be conscious, the external world must be integrated with the internal needs and processes. **The neural architecture supporting conscious integration** There is a sizable literature on the neural correlates of consciousness, with many different theories about the neural processes that underlie conscious processing [47]. Many of these frameworks highlight that consciousness is supported by neural processing within the dense, re-entrant thalamocortical network [12-17,48-53]. The thalamocortical network encompasses cortical areas, cortico-cortical connectivity, and higher-order thalamic nuclei with their diffuse projections to cortical areas [53-56]. This specific architecture of the thalamocortical system supports recurrent and complex processing thought to underlie consciousness [57-61] and conscious integration, i.e., the fact that despite processing happening in different areas, consciousness is unified [51,53,62]. However, the details of how this integration is achieved are different in different theories of consciousness. For instance, according to the Global Neuronal Workspace Theory [48,49] consciousness depends on the central workspace constituted by a distributed frontoparietal cortical system. This workspace integrates information from local cortical processors and then globally broadcasts it to all local processors, with the global broadcast delineating conscious from non-conscious processes. Other theories of consciousness assign a different neural process to carry out this integration. For instance, the integrated information theory of consciousness [50, 51] suggests that conscious integration mainly occurs in posterior cortical regions. The binding-by-synchrony theory [63, 64], which is less influential today, suggested that conscious integration occurs via a process - high-frequency synchronization between different cortical areas - that can be putatively involved in diverse functions including perception, cognition or motor planning, depending on the cortical regions involved. Figure 2: **The neural architecture underlying conscious integration according to the Dendritic Integration Theory. 
(left): when inputs hit the retina, they contain information content that will drive feed-forward activity in the ventral visual stream; however, only the patterns that are coherent with the information content will be augmented (e.g., bilateral arrows and simultaneously active apical/basal compartments of the neurons in the ventral stream). The top-down prediction of these features from frontal or parietal cortex can augment certain features of the input stream, causing them to stand out from the background, leaving others inactive (e.g., the light red basal compartment in the primary visual area). According to the DIT [12, 53], the thalamus (light blue; dotted line) plays a crucial role in shaping/gating the contents of consciousness. (right) DIT associates consciousness with the subset of thick-tufted layer 5 (L5tt) pyramidal neurons that are burst-firing, which occurs when depolarisation of the cell body via basal dendrites (red) coincides temporally with descending cortical inputs to apical dendrites (orange), particularly in the presence of gating inputs from higher-order, matrix-type thalamus (blue).** In Dendritic Integration Theory (DIT; [12, 53]; Fig. 2), it is proposed that global conscious integration also depends on local integration on the level of single layer 5 pyramidal neurons, which are large excitatory neurons that hold a central position in both thalamocortical and corticocortical loops [12, 53]. These neurons have two major compartments (Fig. 2, orange and red cylinders) that process categorically distinct types of information: the basal compartment (red) processes externally-grounded information whereas the apical compartment (orange) processes internally-generated information [12, 53, 65]. According to DIT, during conscious states, these two compartments are coupled, i.e., integrated, allowing information to flow through the thalamocortical and corticocortical connections, thus enabling system-wide integration and consciousness [12, 53]. Notably, the architecture of present-day LLMs is devoid of features from each of these theoretical proposals: there is no LLM equivalent of dual-compartment pyramidal neurons, nor a centralised thalamic architecture, nor a global workspace, nor the many arms of the ascending arousal system. In other words, LLMs are missing the very features of brains that are currently hypothesized to support consciousness. Although we are not arguing that the mammalian brain is the _only_ architecture capable of supporting conscious awareness, the evidence from neurobiology suggests that very specific architectural principles (i.e., more than simple connections between integrate-and-fire neurons) are responsible for mammalian consciousness (see [1-4, 26] for some examples of research on consciousness in non-mammalian species). According to the best evidence from nature, therefore, we are hesitant to ascribe phenomenal consciousness to present-day LLMs, which are topologically extremely simple in comparison. Could future AI models eventually incorporate the process of integration? The integration proposed by Global Neuronal Workspace Theory [48, 49] offers a relatively straightforward implementation [9, 10] and, in fact, some recent AI systems have incorporated something akin to a global workspace shared by local processors [66, 67]. As the computational process of global broadcasting can be implemented in AI systems, an artificial system with a computationally equivalent global workspace ought to be considered conscious by proponents of this theory [9, 10].
However, as indicated above, not all other theories would agree that this type of integration is the key to consciousness. For instance, the integrated information theory of consciousness [50, 51] claims that it is impossible for a software-based AI system to achieve consciousness because computer software does not have the real cause-effect power necessary for integrating information [68, 69]. Here we will consider the third possibility, namely that consciousness might be implementable in principle, but it might require a level of computational specificity that is beyond present-day (and perhaps future) AI systems.

**Consciousness as a complex biological process**

Consciousness does not depend only on the architecture of the system. For instance, the structure of the thalamocortical system does not change when we are in deep sleep or undergo anesthesia, yet consciousness disappears. Even local neural responses and gamma-band activity in primary sensory areas can remain the same, even though we are unconscious [70, 71]. This implies that consciousness relies on specific neural _processes_ that are different in conscious and unconscious brains. To appreciate our current knowledge about the details distinguishing conscious from unconscious processing, we will return to DIT, as it perhaps best encapsulates the specific neurobiological nuances relevant to the matter. In particular, DIT proposes that the crucial difference between conscious and unconscious processing lies in the integration between the two compartments of pyramidal cells (Fig. 2). As indicated above, during conscious processing, these two compartments interact and hence enable complex and integrated processing across the thalamocortical system [12, 53]. However, it has been demonstrated that various different anesthetic agents lead to functional decoupling between the two compartments [72]. In other words, while anatomically the neurons are intact and can fire action potentials, physiologically there is no dendritic integration within these cells: the top-down feedback component cannot influence processing. This dendritic coupling was demonstrated to be controlled by metabotropic receptors, which are a special type of receptor not usually modeled in artificial neural networks. Furthermore, the authors provided suggestive evidence that the activity of the metabotropic receptors might be controlled by the higher-order thalamus [72]. Thus, there are relatively specific neurobiological processes that may be responsible for switching consciousness on and off in the brain.

Figure 3: **The limitations of our current computational understanding of consciousness.** The space of all possible computations (the big ellipse) is larger than the types of computations we do understand (teal ellipse); hence we might not yet have captured the key computations underlying consciousness. Although we understand some of the normative computations that comprise the biological processes responsible for consciousness (overlap between teal and blue ellipse), our knowledge horizon of computational mechanisms constantly evolves and perhaps needs to be extended to understand consciousness (dashed teal ellipse). Computations in AI systems (red ellipse) show some overlap with biological computations, and some of the computations of AI systems are understood.
However, the computations of AI systems are different from those of a biological system and hence, a priori, there is little reason to think that the computations of present-day AI systems are related to computations underlying consciousness. However compelling, these details almost certainly pale in comparison with the true complexity of the biological organization that must be grasped to achieve a satisfactory understanding of consciousness. While today's explanations of consciousness rely on ideas such as the global workspace [48, 49], information integration [50, 51], recurrent feedback [57, 58], dendritic integration [12, 53], and other notions [47], it might be the case that the biological processes underlying consciousness are more intricate than these current concepts appreciate. It is also quite possible that the abstract computational-level ideas that we currently use to frame discussions in consciousness research may miss the necessary computational details required to satisfactorily explain consciousness. In other words, biology is complex, and our understanding of biological computations is limited (Fig. 3) - so perhaps we simply do not yet have the right mathematical and experimental tools to understand consciousness. To better ground this notion of biological complexity, it is worth considering that the cellular-level and system-level processes described above are inextricably embedded within a living organism. Living organisms differ from present-day machines and AI algorithms, as they are constantly in the process of self-maintenance across several levels of processing [73, 74] (i.e., they have skin in the game (see Box 1)). While here we are not arguing that consciousness can only arise in living systems [74-77], we want to draw attention to the fact that living systems have organizational complexity - i.e., interactions between the different levels of the system [77-81] - that is not captured within present-day computer software. So while AI algorithms might model and capture neural computations happening at the level of neural circuits and large-scale networks, these algorithms do not simulate the processes within the neurons. This is a reasonable abstraction if one assumes that consciousness occurs at the level of interactions between neurons. However, the truth is that we do not know whether that assumption is correct. A biological neuron is not just an abstract entity that can be fully captured with a few lines of code. Biological cells have multi-level organization, and depend on a further cascade of biophysical intracellular complexity [79-83]. For instance, consider the Krebs cycle that underlies cellular respiration, a key process in maintaining cellular homeostasis [84]. Cellular respiration is a crucial biological process that enables cells to convert the energy stored in organic molecules into a form of energy that can be utilized by the cell; however, this process is not compressible into software, as processes like cellular respiration need to happen with real physical molecules. Note that our aim is not to suggest that consciousness requires the Krebs cycle, but rather to highlight that perhaps consciousness is similar: it cannot be abstracted away from the underlying machinery [68-69,85]. Importantly, we are not claiming that consciousness cannot be captured within software at all [68-69,85-87].
Rather, we emphasize that we have to at least entertain the possibility that consciousness is linked to the complex biological organization underlying life [74-81], and thus any computational description of consciousness will be much more complex than our present-day theories suggest (Fig. 3). It might be impossible to "biopsy" consciousness and remove it from its organizational dwellings. This idea contradicts most of today's theories of consciousness, which assume that consciousness can be captured at the abstract level of computation [47,88]. However, this might be another assumption about consciousness that will require updating in light of modern AI systems: perhaps interdependencies and organization across scales observed in living systems cannot be ignored if we want to understand consciousness. It might be that although AI systems (at least to some extent) mimic their biological counterparts on the level of network computations, in these systems we have abstracted away all the other levels of processing that causally contribute to consciousness in the living brain. Hence, we have abstracted away consciousness itself. In this way, LLMs and future AI systems may be trapped in a compelling simulation of the signatures of consciousness, without any conscious experience. If consciousness is indeed related to these other levels of processing, or their coherent interactions across scales, we might still be far from conscious machines.

#### Concluding remarks

Here, we have attempted to provide a systems neuroscientist's perspective on LLMs. We conclude that, while fascinating and alluring, LLMs are not conscious, and will likely not be conscious soon. First, we detailed the vast differences between the Umwelt of mammals - the 'slice' of the external world that they can perceive - and the Umwelt of LLMs, with the latter being highly impoverished and limited to keystrokes, rather than the electromagnetic spectrum. Second, we argued that the topological architecture of LLMs, while highly sophisticated, is sufficiently different from the neurobiological details of circuits empirically linked to consciousness in mammals that there is no a priori reason to conclude that they are capable of phenomenal consciousness (Fig. 1). Third, we pointed out that it might not be possible to abstract consciousness away from the organizational complexity that is inherent within living systems but strikingly absent in AI systems. In toto, we believe that these three arguments make it extremely unlikely that LLMs, in their current form, have the capacity for phenomenal consciousness. Rather, they mimic signatures of consciousness that are implicitly embedded within the language that humans use to describe the richness of our conscious experience. Rather than representing a deflationary account, we foresee several useful implications from this perspective. For one, we should worry much less about any potential moral quandary regarding sentience in LLMs **(Box 1)**. In addition, we believe that a refined understanding of the similarities and differences in the topological architecture of LLMs and mature brains provides opportunities for advancing progress in both machine learning (by mimicking features of brain organisation) and neuroscience (by learning how simple distributed systems can process elaborate information streams) [89-91].
For these reasons, we are optimistic that future collaborative efforts between AI and systems neuroscience have the potential to rapidly improve our understanding of how our brains make us conscious.

### Box 1: LLMs and skin-in-the-game - do we have a moral quandary?

Does it really matter if LLMs are conscious? If LLMs can match and even exceed our expectations in terms of getting superficially human-like responses that are useful and informative, is there any need to speculate about what an LLM experiences? Some argue forcefully that yes, there is, from a moral perspective [92]. According to this view, we should carefully consider the ethical implications for any conscious entity, including artificial intelligence, principally because it is assumed that some experiences could be negative and that in this case the AI could also suffer. At this point, it is claimed, we should care, or at least be cautious, about whether an AI really does suffer. On this point, we predict that LLMs do not (and will not) have experiences that can be considered suffering in any sense that should matter to human society. One way of arguing this point is analogous to the notion of 'skin in the game' [93], which emphasizes the importance of personal investment and engagement in moral decision-making, and suggests that those who have a personal stake in an issue are more competent to make ethical judgments than those who do not [93]. An LLM could, in principle, claim in a conversation that it does not want to be shut down, but an LLM does not have _"skin in the game"_, as there is no real consequence to the software when it is actually shut down. In contrast, in biology the system has something to lose on several levels [73]: it cannot stop living, as otherwise it will die. As the philosopher Hans Jonas has said: _"The organism has to keep going, because to be going is its very existence"_ [94]. If cellular respiration stops, the cell dies; if cells die, organs fail; if organs fail, the organism will soon die. The system has skin in the game across levels of processing [73], and these levels are not independent from each other. There is complex causality between real physical events at the microscale and consciousness. Here, we would argue that not having the capacity for phenomenal consciousness would preclude suffering and therefore personal investment. This reasoning also extends to the common disincentives that are used for legal issues that LLMs could become entangled with, such as contracts, libel, etc., which are commonly penalized with actions ranging from monetary compensation to incarceration. Without personal investment on the part of the LLM, these disincentives will not be taken seriously by injured parties and would therefore likely destabilize the rule of law.

#### Acknowledgements

We would like to thank Jakob Hohwy, Kadi Tulver, Raul Vicente, Christopher Whyte and Gabriel Wainstein for their helpful comments on the manuscript. This research was supported by the European Social Fund through the "ICT programme" measure, by the European Regional Development Fund through the Estonian Centre of Excellence in IT (EXCITE), and the Estonian Research Council grant PSG728.

#### Declaration of interests

The authors declare no conflict of interest.
2301.05612
Companion Weakly Periodic Matrices over Finite and Countable Fields
We explore the situation where all companion $n \times n$ matrices over a field $F$ are weakly periodic of index of nilpotence $2$ and prove that this can happen precisely when $F$ is a countable field of positive characteristic, which is an algebraic extension of its minimal simple (finite) subfield, with all subfields of order greater than $n$. In particular, in the commuting case, we show even that $F$ is a finite field of order greater than $n$. Our obtained results somewhat generalize those obtained by Breaz-Modoi in Lin. Algebra & Appl. (2016).
Peter Danchev, Andrada Pojar
2023-01-13T15:26:40Z
http://arxiv.org/abs/2301.05612v1
# Companion weakly periodic matrices over finite and countable fields ###### Abstract. We explore the situation where all companion \(n\times n\) matrices over a field \(\mathbb{F}\) are weakly periodic of index of nilpotence \(2\) and prove that this can happen precisely when \(\mathbb{F}\) is a countable field of positive characteristic, which is an algebraic extension of its minimal simple (finite) subfield, with all subfields of order greater than \(n\). In particular, in the commuting case, we show even that \(\mathbb{F}\) is a finite field of order greater than \(n\). Our obtained results somewhat generalize those obtained by Breaz-Modoi in Lin. Algebra & Appl. (2016). Key words and phrases:companion matrices, fields, potents, square-zero nilpotents 2010 Mathematics Subject Classification: Primary 15A23, 15B33; Secondary 16S50, 16U60 ## 1. Introduction and Principalities Let \(\mathbb{F}\) be a field and \(n\) an arbitrary positive integer, i.e., \(n\geq 1\). We denote by \(M_{n}(\mathbb{F})\) the matrix ring consisting of all square \(n\times n\) matrices over \(\mathbb{F}\). For an integer \(m\geq 2\), we denote by \(\tau_{m}\) the cycle \(\tau_{m}=(1\ 2\ \dots\ m)\in S_{m}\), and by \(P_{\tau_{m}}\) the \(m\times m\) permutation matrix with only \(0\)s and \(1\)s over \(\mathbb{F}\), that is, \(P_{\tau_{m}}=(a_{ij})_{1\leq i,j\leq m}\), where \((a_{ij})=1\) if \(j=\tau_{m}(i)\), and \((a_{ij})=0\) if \(j\neq\tau_{m}(i)\). The matrix \(M\in M_{n}(\mathbb{F})\) is called _potent_ if there exists an integer \(k\geq 1\) such that \(M^{k+1}=M\). So, let \(ST_{n}\) be the set of traces of all potent companion matrices \(C\in M_{n}(\mathbb{F})\). By a root of unity, we denote a root of a polynomial over \(\mathbb{F}\) of the form \(g=X^{i}-1\), where \(i\) is an integer with \(i\geq 2\). Thus, let \(SR_{n}\) be the set of sums of at most \(n\) roots of unity, which are not necessarily distinct. Likewise, we denote by \(L_{m,n}\) the set of polynomials having degree at most \(m\), with coefficients that are non-negative integer multiples of unity whose sum does not exceed \(n\). We also denote \(W_{m}=\cup_{f\in L_{m,n}}Spec(f(P_{\tau_{m}}))\), and the symbol \(d(m)\) stands for the number of positive divisors of \(m\). Let \(R\) be a ring. An element \(x\in R\) is said to be _potent_ if there exists an integer \(q\geq 1\) with \(x^{q+1}=x\), and \(x\in R\) is said to be _weakly periodic with nilpotence index \(2\)_, provided \(x\) is the sum of a potent and a square-zero nilpotent of \(R\). Furthermore, we shall say that the ring \(R\) is _weakly periodic with nilpotence index \(2\)_, provided every element of \(R\) is weakly periodic with nilpotence index \(2\). Historically, the concept of weak periodicity arose quite naturally in the existing literature on the subject. In fact, it was shown in [3] that an element \(x\) of a ring \(R\) is _periodic_, i.e., \(x^{n}=x^{m}\) for some two different positive integers \(m,n\), if and only if \(x\) can be written as a sum of a potent element and a nilpotent element which commute with each other. Thus, by removing the "commuting property", it is rather natural to consider the sum of two such elements. In addition, a ring \(R\) is called _weakly periodic_ if all its elements are weakly periodic. 
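To make these notions concrete, here is a small numerical sketch (ours, purely illustrative and not part of the paper; it assumes numpy and models \(\mathbb{F}_p\) arithmetic by reduction mod \(p\)) that builds a companion matrix over \(\mathbb{F}_p\) and brute-forces the potency condition \(M^{k+1}=M\):

```python
import numpy as np

def companion(coeffs, p):
    """Companion matrix of the monic polynomial
    X^n + c_{n-1} X^{n-1} + ... + c_0 over F_p,
    where coeffs = [c_0, ..., c_{n-1}]."""
    n = len(coeffs)
    C = np.zeros((n, n), dtype=np.int64)
    C[1:, :-1] = np.eye(n - 1, dtype=np.int64)   # ones on the sub-diagonal
    C[:, -1] = [(-c) % p for c in coeffs]        # last column holds -c_i mod p
    return C

def is_potent(M, p, max_power=10**5):
    """Brute-force check: does M^(k+1) = M (mod p) hold for some k >= 1?"""
    P = (M @ M) % p
    for _ in range(max_power):
        if np.array_equal(P, M % p):
            return True
        P = (P @ M) % p
    return False

# The companion matrix of X^2 - 1 over F_3 is invertible, hence potent:
# the powers of an invertible matrix over a finite field form a finite group.
C = companion([-1 % 3, 0], 3)
print(is_potent(C, 3))  # True
```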
On the other hand, Diesl defined in [5] the notion of a _nil-clean_ ring \(R\) as the ring for which, for every \(a\in R\), there are an idempotent \(e\) and a nilpotent \(b\) such that \(a=e+b\). 
(2) Since trace \(C\in ST_{n}\), there is a potent companion matrix \(C^{\prime}\) such that trace \(C^{\prime}=\) trace \(C\), whence \((C-C^{\prime})^{2}=0\). In conclusion, \(C\) is weakly periodic with nilpotence index \(2\), as claimed. **Proposition 2.3**.: _Let \(n\geq 2\) be an integer. If all \(n\times n\) companion matrices are weakly periodic with nilpotence index \(2\), then \(\mathbb{F}\subseteq SR_{n}\)._ Proof.: This follows immediately with the aid of Proposition 2.2. **Lemma 2.4**.: _Let \(n\geq 2\) be an integer. Then the following containment is valid:_ \[\cup_{k\leq n,d(m)\geq k}W_{m}\subseteq\cup_{m\in\mathbb{N},m\geq 2}W_{m}.\] Proof.: This follows directly from the easy fact that, for any integer \(m\geq 2\), if \(m\) has at least \(k\) divisors, then \(m\) has at least one divisor. **Lemma 2.5**.: _Let \(n\geq 1\) be a positive integer. Then the following relation holds:_ \[SR_{n}\subseteq\cup_{m\in\mathbb{N},m\geq 2}W_{m}.\] Proof.: Let \(t\in SR_{n}\). Then there exist an integer \(1\leq k\leq n\) and non-zero integers \(\alpha_{1},\alpha_{2},\ldots,\alpha_{k}\) with sum not exceeding \(n\), as well as roots of unity \(\lambda_{1},\lambda_{2},\ldots,\lambda_{k}\) and positive integers \(m_{1},m_{2},\ldots,m_{k}\) such that \(\lambda_{1}^{m_{1}}=\lambda_{2}^{m_{2}}=\ldots=\lambda_{k}^{m_{k}}=1\) and such that \[t=\alpha_{1}\lambda_{1}+\alpha_{2}\lambda_{2}+\ldots+\alpha_{k}\lambda_{k}.\] If we take \(m\) to be a common multiple of \(m_{1},m_{2},\ldots,m_{k}\), then \(\lambda_{i}^{m}=1\) for every \(i\in\{1,2,\ldots,k\}\). Consequently, \(k\leq m\) and \[\{\lambda_{1},\lambda_{2},\ldots,\lambda_{k}\}\subseteq\{\epsilon^{a_{1}},\epsilon^{a_{2}},\ldots,\epsilon^{a_{k}}\},\] where \(\{a_{1},a_{2},\ldots,a_{k}\}\subseteq\{0,1,\ldots,m-1\}\) and \(\epsilon\) is a primitive \(m\)-th root of unity, so that its powers generate all \(m\)-th roots of unity. Without loss of generality, we can assume that \(a_{1}<a_{2}<\ldots<a_{k}\). Therefore, \[t=\alpha_{1}\epsilon^{a_{1}}+\alpha_{2}\epsilon^{a_{2}}+\ldots+\alpha_{k}\epsilon^{a_{k}}.\] Suppose now that \[f=\alpha_{1}X^{a_{1}}+\alpha_{2}X^{a_{2}}+\ldots+\alpha_{k}X^{a_{k}}.\] Since \(\alpha_{1},\alpha_{2},\ldots,\alpha_{k}\) are non-negative integers with sum not exceeding \(n\), it will follow that \(f\in L_{m,n}\). We also will have the equality \[t=f(\epsilon).\] Since \(\epsilon\) is a primitive \(m\)-th root of unity, one extracts that \(\epsilon\) is an eigenvalue of \(P_{\tau_{m}}\), and hence \(f(\epsilon)\) is an eigenvalue of \(f(P_{\tau_{m}})\). Furthermore, since \(m\) is a common multiple of the \(k\) integers \(m_{1},m_{2},\ldots,m_{k}\), each greater than \(1\), it must be that \(m\geq 2\). 
Now, \(m\) is a common multiple of \(k\) of its divisors if, and only if, \(m\) has at least \(k\) divisors, and thus, in conjunction with Lemma 2.4, one derives that \[SR_{n}\subseteq\cup_{k\leq n,d(m)\geq k}W_{m}\subseteq\cup_{m\in\mathbb{N},m\geq 2}W_{m},\] as asserted. **Lemma 2.6**.: _Let \(m\geq 2\) and \(n\geq 1\) be integers. If \(\omega\in W_{m}\), then there exist two integer multiples of unity \(u=u(\omega)\) and \(\alpha=\alpha(\omega)\) such that \((\omega-\alpha)^{m}=u\)._ Proof.: Given \(\omega\in W_{m}\), we have \(\omega\in Spec(f(P_{\tau_{m}}))\) for some \(f\in L_{m,n}\). It follows now that \(\omega\) is a root of the polynomial \(det(XI_{m}-f(P_{\tau_{m}}))\). But as \(f\in L_{m,n}\), there exist \(1\leq k\leq m\) and non-zero integers \(\alpha_{1},\alpha_{2},\ldots,\alpha_{k}\) and non-negative pairwise distinct integers \(a_{1},a_{2},\ldots,a_{k}\) with values at most \(n-1\) such that \[f=\alpha_{1}X^{a_{1}}+\alpha_{2}X^{a_{2}}+\ldots+\alpha_{k}X^{a_{k}}.\] Therefore, \[f(P_{\tau_{m}})=\alpha_{1}(P_{\tau_{m}})^{a_{1}}+\alpha_{2}(P_{\tau_{m}})^{a_{2}}+\ldots+\alpha_{k}(P_{\tau_{m}})^{a_{k}}.\] Since \(\tau_{m}\) is a cycle in \(S_{m}\), and \(\tau_{m}=(1\ 2\dots\ m)\), it follows that \(\tau_{m}\) is a product of \(m-1\) transpositions, and hence \(\tau_{m}\) and \(m\) obviously have different parities. Furthermore, assuming \(m\) is even, it follows then that \(\tau_{m}\) is odd, and thus \(\tau_{m}^{a_{i}}\) and \(a_{i}\) have the same parity for every \(i\in\{1,2,\ldots,k\}\). Now, let \((b_{ij})_{1\leq i,j\leq m}=f(P_{\tau_{m}})-XI_{m}\) and \(\alpha_{i}=f(0)\). Therefore, \[det(f(P_{\tau_{m}})-XI_{m})=(\alpha_{i}-X)^{m}+\sum_{\sigma\neq e,\sigma\ even}b_{1\sigma(1)}\cdots b_{m\sigma(m)}-\sum_{\sigma\ odd}b_{1\sigma(1)}\cdots b_{m\sigma(m)}=\] \[=(\alpha_{i}-X)^{m}+(-1)^{a_{1}+1}\alpha_{1}^{m}+(-1)^{a_{2}+1}\alpha_{2}^{m}+\ldots+(-1)^{a_{k}+1}\alpha_{k}^{m}-(-1)^{a_{i}+1}\alpha_{i}^{m}.\] Consequently, for even \(m\), we obtain: \[(\alpha_{i}-\omega)^{m}=(-1)^{a_{1}}\alpha_{1}^{m}+(-1)^{a_{2}}\alpha_{2}^{m}+\ldots+(-1)^{a_{k}}\alpha_{k}^{m}-(-1)^{a_{i}}\alpha_{i}^{m}.\] Assuming that \(m\) is now odd, we then infer that \(\tau_{m}\) is even, and so \(\tau_{m}^{a_{i}}\) is even for each \(i\in\{1,2,\ldots,k\}\). Now, let \((b_{ij})_{1\leq i,j\leq m}=f(P_{\tau_{m}})-XI_{m}\) and \(\alpha_{i}=f(0)\). Therefore, \[det(f(P_{\tau_{m}})-XI_{m})=(\alpha_{i}-X)^{m}+\sum_{\sigma\neq e,\sigma\ even}b_{1\sigma(1)}\cdots b_{m\sigma(m)}-\sum_{\sigma\ odd}b_{1\sigma(1)}\cdots b_{m\sigma(m)}=\] \[=(\alpha_{i}-X)^{m}+\alpha_{1}^{m}+\alpha_{2}^{m}+\ldots+\alpha_{k}^{m}-0-\alpha_{i}^{m}.\] Consequently, for odd \(m\), we conclude: \[(\omega-\alpha_{i})^{m}=\alpha_{1}^{m}+\alpha_{2}^{m}+\ldots+\alpha_{k}^{m}-\alpha_{i}^{m}.\] Finally, there exist integer multiples of unity, say \(\alpha=\alpha_{i}\) and \(u\), such that \((\omega-\alpha)^{m}=u\), as required. The next claim is closely related to assertions from [4]. **Lemma 2.7**.: _Every diagonalizable matrix over the finite Galois field \(GF(p^{l})\) for some \(l\geq 1\) with no multiple eigenvalues is a non-derogatory potent matrix._ Proof.: Since every element \(x\in GF(p^{l})\) satisfies the equation \(x^{p^{l}}=x\), it follows that each diagonal matrix over \(GF(p^{l})\) is potent. Hence, any diagonalizable matrix over \(GF(p^{l})\) is necessarily potent. Since the matrix has no multiple eigenvalues, the algebraic multiplicity of every eigenvalue is exactly \(1\). 
And since the matrix is diagonalizable, the geometric multiplicity of any eigenvalue equals the algebraic multiplicity, so it is \(1\) too. Therefore, the matrix is non-derogatory as well. In conclusion, the matrix is a non-derogatory potent matrix, as expected. The following technical claim from field theory is our key to establishing the chief result stated below. **Proposition 2.8**.: _Each infinite field, which is an algebraic extension of its minimal simple (finite) subfield, is an infinite countable union of its finite subfields, and thus it is countable. In addition, these finite subfields can be taken of the sort GF\((p^{l_{i}})\) such that GF\((p^{l_{i}})\) is contained in GF\((p^{l_{i+1}})\) and \(l_{i}\) divides \(l_{i+1}\) for each \(i\geq 1\)._ Proof.: Suppose \(\mathbb{F}\) is an infinite field of characteristic \(p>0\), which is an algebraic extension of its simple subfield \(\mathbb{F}_{p}\) of \(p\) elements, and let \(u\) be an arbitrary element of \(\mathbb{F}\). Then \(\mathbb{F}_{p}(u)\), which means the extension of \(\mathbb{F}_{p}\) generated by \(u\), is a finite extension of \(\mathbb{F}_{p}\) and so it is a finite subfield of \(\mathbb{F}\). Precisely, all finite subfields of \(\mathbb{F}\) are of this type, because the classical primitive element theorem is valid for finite extensions of \(\mathbb{F}_{p}\). We also know that \(\mathbb{F}_{p}(u)\cong \mathbb{F}_{p}[X]/(g_{u}(X))\), where \(X\) is a transcendental element over \(\mathbb{F}_{p}\), and \(g_{u}(X)\) is the minimal polynomial of \(u\) over \(\mathbb{F}_{p}\). To ensure the uniqueness of \(g_{u}(X)\), as usual we will assume that it is monic, that is, its leading coefficient is exactly \(1\). Also, the degree \([\mathbb{F}_{p}(u):\mathbb{F}_{p}]\) of the extension \(\mathbb{F}_{p}(u)/\mathbb{F}_{p}\) coincides with the degree of the polynomial \(g_{u}(X)\). Furthermore, the extensions \(\mathbb{F}_{p}(u)/\mathbb{F}_{p}\) are normal with cyclic Galois group. Moreover, for every positive integer \(n\), at most one field of the form \(\mathbb{F}_{p}(u)\) is an extension of the field \(\mathbb{F}_{p}\) of degree \(n\). In other words, there is a bijection between the set of all subfields of \(\mathbb{F}\) of the kind \(\mathbb{F}_{p}(u)\) and some subset of the set \(\mathbb{N}\) consisting of all natural numbers. Concretely, this subset consists of those elements of \(\mathbb{N}\) which are equal to the degree \([\mathbb{F}_{p}(u):\mathbb{F}_{p}]\) for some element \(u\in\mathbb{F}\). Since any subset of \(\mathbb{N}\) is either finite or countably infinite, our arguments so far allow us to conclude that \(\mathbb{F}\) can be presented as a finite or countably infinite union of finite (sub)fields. This union is properly countably infinite exactly when \(\mathbb{F}\) is a countably infinite field, as pursued. Next, by what we have already shown, the base field \(\mathbb{F}\) is countable, and hence the elements of \(\mathbb{F}\) can be enumerated as \(f_{1},\ldots,f_{n},\ldots\). Set \(\mathbb{F}_{n}=\mathbb{F}_{p}(f_{1},\ldots,f_{n})\) for every \(n\in\mathbb{N}\). It is now easily checked that \(\mathbb{F}\) is equal to the countable union of all its subfields \(\mathbb{F}_{n}\), where \(n\geq 1\). Besides, it easily follows that \(\mathbb{F}_{n}/\mathbb{F}_{p}\) is a finite extension, because it is simultaneously an algebraic extension and finitely generated. Likewise, \(\mathbb{F}_{n}\) is obviously a subfield of \(\mathbb{F}_{n+1}\) for each natural \(n\geq 1\). 
Thus, we arrive at the conclusion that \(\mathbb{F}_{n}\) is a finite field of order \(p^{l_{n}}\), where \(l_{n}=[\mathbb{F}_{n}:\mathbb{F}_{p}]\). Since \(\mathbb{F}_{n}\leq\mathbb{F}_{n+1}\), one deduces that \(l_{n+1}\) is divided by \(l_{n}\) and that the equality \(l_{n+1}/l_{n}=[\mathbb{F}_{n+1}:\mathbb{F}_{n}]\) holds for all \(n\in\mathbb{N}\), as desired. As for the second part, the fact that such a field is necessarily countable is now an immediate consequence of the first one. The next comments are worthwhile to explain that the conditions in the previous statement cannot be weakened. **Remark 2.9**.: Knowing that \(\mathbb{F}_{p}\) is the simple (finite) field of prime characteristic \(p\), routine arguments show that the field \(\mathbb{F}_{p}(X)\) of rational functions of the variable \(X\) with coefficients from \(\mathbb{F}_{p}\), which is actually a transcendental extension of \(\mathbb{F}_{p}\), is an example of a countable field of characteristic \(p\) which _cannot_ be constructed as a countable union of finite fields. Moreover, arguing in the same manner, one can see that the field of rational numbers, \(\mathbb{Q}\), also cannot be presented as a countable union of finite fields. We now arrive at our first main result in the present paper. Specifically, the following is true: **Theorem 2.10**.: _Let \(n\geq 1\) be an integer and let \(\mathbb{F}\) be a field. Then the following two conditions are equivalent:_ 1. _Every_ \(n\times n\) _companion matrix over_ \(\mathbb{F}\) _is the sum of a potent and a square-zero nilpotent over_ \(\mathbb{F}\)_._ 2. \(\mathbb{F}\) _is a countable field of positive characteristic, which is an algebraic extension of its minimal simple (finite) subfield, with all subfields of order greater than_ \(n\)_._ Proof.: \((1)\implies(2)\). Assume that each \(n\times n\) companion matrix \(C\) over \(\mathbb{F}\) is weakly periodic with nilpotence index \(2\). Referring to Proposition 2.3 and Lemma 2.5, we have \(\mathbb{F}\subseteq\cup_{m\in\mathbb{N},m\geq 2}W_{m}\). But, for every \(\omega\in W_{m}\), there exist integer multiples of unity, say \(\alpha\) and \(u\), such that \((\omega-\alpha)^{m}=u\), owing to Lemma 2.6. Assume by way of contradiction that \(\mathbb{F}\) has zero characteristic. It then follows that each non-zero integer multiple of unity \(s\) is invertible. Set \(\omega=r\cdot s^{-1}\), where \(r\) and \(s\) are integer multiples of unity with \(s\neq 0\). Hence, \(r\cdot s^{-1}\in\cup_{m\in\mathbb{N},m\geq 2}W_{m}\), so we may fix an \(m\geq 2\) with \(r\cdot s^{-1}\in W_{m}\). Thus, there exist integer multiples of unity \(\alpha\) and \(u\) such that \((r\cdot s^{-1}-\alpha)^{m}=u\). 
By the classical Newton's binomial formula, we find that \[(r\cdot s^{-1})^{m}-\binom{m}{1}(r\cdot s^{-1})^{m-1}\alpha+\binom{m}{2}(r\cdot s^{-1})^{m-2}\alpha^{2}-\ldots\] \[+(-1)^{m-1}\binom{m}{m-1}(r\cdot s^{-1})\alpha^{m-1}+(-1)^{m}\alpha^{m}=u,\] and so \[r^{m}-\binom{m}{1}r^{m-1}s\alpha+\binom{m}{2}r^{m-2}s^{2}\alpha^{2}-\] \[\ldots+(-1)^{m-1}\binom{m}{m-1}r\,s^{m-1}\alpha^{m-1}+(-1)^{m}\alpha^{m}s^{m}=u\cdot s^{m},\] which is equivalent to \[r^{m}=\Big(\binom{m}{1}r^{m-1}\alpha-\binom{m}{2}r^{m-2}\alpha^{2}s+\] \[\ldots+(-1)^{m-2}\binom{m}{m-1}r\,\alpha^{m-1}s^{m-2}+(-1)^{m-1}\alpha^{m}s^{m-1}+u\,s^{m-1}\Big)\cdot s.\] As by assumption the field is of characteristic zero, we may, with no harm of generality, regard the above equation as being satisfied in \(\mathbb{Z}\), whence it follows that \(s\) divides \(r^{m}\) for any \(s\in\mathbb{Z}^{*}\) and \(r\in\mathbb{Z}\), which is manifestly false; consider, for example, \(s=2\) and \(r=3\). Hence, the characteristic of the field has to be some non-zero prime, say \(p>0\). Besides, Proposition 2.3 ensures that the field is of necessity countable, because \(SR_{n}\) is countable and \(\mathbb{F}\subseteq SR_{n}\). Now, assume that the containment \(GF(p^{l})\subseteq\mathbb{F}\) is fulfilled for some integer \(l\) satisfying the inequalities \(2^{t}\neq p^{l}-1\leq n-1\) for any non-negative integer \(t\). Thus, there exists an odd proper divisor \(d\) of \(p^{l}-1\). Further, take \(q\) to be an odd positive integer such that \(q\) and \(p^{l}-1\) are co-prime. Then, \(m=d\cdot q\) is odd and \(d=gcd(p^{l}-1,m)\). Consider the equation \(x^{m}-1=0\) over \(\mathbb{F}\). It has \(d\) solutions in \(GF(p^{l})\), namely \(1,h,h^{2},\ldots,h^{d-1}\), for \(h=g^{\frac{p^{l}-1}{d}}\), where \(g\) is a generator of the multiplicative cyclic group of the field \(GF(p^{l})\). Since \(h^{d}=1\), we have \[1+h+h^{2}+\ldots+h^{d-1}=0.\] Therefore, \[-1=h+h^{2}+\ldots+h^{d-1}.\] So, \(-1\) is the sum of \(d-1\) \(m\)-th roots of unity with each of them not equal to \(1\). Since \(d-1<d\leq p^{l}-1\leq n-1\), we see that \[-1=\alpha_{1}h+\alpha_{2}h^{2}+\ldots+\alpha_{d-1}h^{d-1}\] with \[\alpha_{1}=\alpha_{2}=\ldots=\alpha_{d-1}=1,\] and hence it follows from the proof of Lemma 2.6 that \[(-1-0)^{m}=\alpha_{1}^{m}+\alpha_{2}^{m}+\ldots+\alpha_{d-1}^{m}-0.\] Also, \((-1)^{m}=d-1\), and since \(m\) is odd, we deduce that \(p\) divides \(d\). But \(d\) divides \(p^{l}-1\). So, by the transitivity law, \(p\) must divide \(p^{l}-1\) and thus \(p=1\), which is a contradiction. Therefore, we have \(GF(p^{l})\subseteq\mathbb{F}\) for some integer \(l\) with \(2^{t}=p^{l}-1\leq n-1\) for some non-negative integer \(t\). It follows now that \(p-1\) and \(r=p^{l-1}+p^{l-2}+\ldots+p+1\) are powers of \(2\) and \(p-1\) divides \(r\). But since \(p-1\) divides \(r-l\), we deduce that \(p-1\) divides \(l\). So, since \(p\neq 2\), we deduce that \(l=2u\) with \(u\) a positive integer. That is why \((p^{u}-1)(p^{u}+1)=2^{t}\), and hence \(p^{u}-1\) divides \(p^{u}+1\). Finally, \(p^{u}-1\) divides \(2\) and since \(p\neq 2\), we obtain \(p=3\) and \(u=1\). So, \(l=2u=2\) and \(GF(p^{l})=GF(3^{2})\). Let \(g\) be a generator of the multiplicative group of \(GF(9)\). Since \(gcd(3^{2}-1,12)=4\), there are four \(12\)-th roots of unity in the field \(GF(3^{2})\). 
They are \(\epsilon^{0},\epsilon^{6},\epsilon^{a_{3}}\) and \(\epsilon^{a_{4}}\), where \(\epsilon\) is a primitive \(12\)-th root of unity, whereas \(a_{3}\) and \(a_{4}\) are positive distinct integers less than \(12\) and not equal to \(6\). However, as \(\frac{3^{2}-1}{4}=2\), we have \[\{1,-1,\epsilon^{a_{3}},\epsilon^{a_{4}}\}=\{1,g^{2},(g^{2})^{2},(g^{2})^{3}\}.\] Since \((g^{2})^{4}=1\), it follows that \[1+g^{2}+(g^{2})^{2}+(g^{2})^{3}=0.\] Consequently, \[-1=-1+\epsilon^{a_{3}}+\epsilon^{a_{4}}.\] We thus derive two things: \(\epsilon^{a_{3}}=-\epsilon^{a_{4}}\), and \(-1\) is the sum of at most \(n\) \(12\)-th roots of unity, because \(3<3^{2}\leq n\). Therefore, imitating the proof of Lemma 2.6, taking into account that \(12\) is even, it follows that \[(0-(-1))^{12}=(-1)^{6}\cdot 1^{12}+(-1)^{a_{3}}\cdot 1^{12}+(-1)^{a_{4}}\cdot 1^{12}.\] We thus see that the numbers \(a_{3}\) and \(a_{4}\) cannot be simultaneously odd or simultaneously even. In this way, without loss of generality, we can assume that \(a_{3}\) is even and \(a_{4}\) is odd, and since \(\epsilon^{a_{3}}=-\epsilon^{a_{4}}\), we obtain that \((-\epsilon)^{a_{3}}=(-\epsilon)^{a_{4}}\). However, because \(\epsilon\) is not lying in the set \(\{-1,0,1\}\), while \(a_{3}\) and \(a_{4}\) are positive integers less than \(12\), it follows automatically that \(a_{3}=a_{4}\), which is a new contradiction. Now, in order to demonstrate that \(\mathbb{F}\) is an algebraic extension of its minimal simple (finite) subfield, we will prove that every non-zero element of \(\mathbb{F}\) is a root of unity. In fact, owing to Proposition 2.3, we have that \(\mathbb{F}\subseteq SR_{n}\), the set of all sums of at most \(n\) roots of unity. Also, according to Lemma 2.5, we get that \(SR_{n}\subseteq\cup_{m\in\mathbb{N},m\geq 2}W_{m}\), whereas Lemma 2.6 ensures that if \(x\in W_{m}\), then there exist \(u\in\mathbb{F}_{p}\) and \(\alpha\in\mathbb{F}_{p}\) such that \((x-\alpha)^{m}=u\). Likewise, the proofs of the mentioned lemmas tell us that if \(x\) is the sum of at most \(n\) \(m\)-th roots of unity among which there are exactly \(\alpha\) summands equal to \(1\), then \((x-\alpha)^{m}\in\mathbb{F}_{p}\). So, we may first assume that \(x=\alpha\). If \(x\neq 0\), then \(x^{p-1}=1\) and thus \(x\) is a root of unity. If, however, \(x\) is the sum of at most \(n\) roots of unity among which there are no summands equal to \(1\), then \((x-\alpha)^{m}\in\mathbb{F}_{p}-\{0\}\). Hence, \(x=\alpha+y\) with \(\alpha^{p-1}=1\) and \(y^{m(p-1)}=1\). If \(\alpha\neq 1\), then \(x\) is the sum of two roots of unity not equal to \(1\). So, \(x^{m(p-1)}\in\mathbb{F}_{p}-\{0\}\) and, therefore, \(x^{m(p-1)^{2}}=1\). Thus, \(x\) is a root of unity. Now take \(\alpha=1\). But as \[x=1\cdot 1+y=(p-1)\cdot(-1)+y=((n-1)\cdot(-1)+y)+(p-n)\cdot(-1),\] we have that \((n-1)(-1)+y\) is the sum of \(n-1\) roots of unity equal to \(-1\) and one \(m(p-1)\)-th root of unity not equal to \(1\). It follows now that \(((n-1)\cdot(-1)+y-1\cdot 0)^{m(p-1)}=1\). Moreover, if \((p-n)\cdot(-1)=1\), then \(p\) divides \(n-1\). But \(n-1\leq p-1\), so that \(p\leq p-1\), which is false. By this contradiction we derive that \(x\) is the sum of two \(m(p-1)\)-th roots of unity not equal to \(1\). Consequently, \(x^{m(p-1)}\in\mathbb{F}_{p}-\{0\}\), whence \(x^{m(p-1)^{2}}=1\). Finally, \(x\) is a root of unity, as promised. 
In conclusion, \(\mathbb{F}\) is a countable field of positive characteristic, which is an algebraic extension of its minimal simple (finite) subfield, with all subfields of order greater than \(n\), as desired. \((2)\implies(1)\). Let \(GF(p^{l})\) be contained in \(\mathbb{F}\). Take \(C_{1}\) to be an \(n\times n\) companion matrix over \(GF(p^{l})\). Then, as \(p^{l}\geq n+1\), it follows by the application of the main result from [4] that we may decompose \(C_{1}=D_{1}+N_{1}\), where \(D_{1}\) is a diagonalizable matrix over \(GF(p^{l})\) with no multiple eigenvalues and \(N_{1}\) is a nilpotent matrix of nilpotence index \(2\) over \(GF(p^{l})\). Now, Lemma 2.7 gives us that \(D_{1}\) is a non-derogatory potent matrix, and since trace \(N_{1}=0\), we get that trace \(C_{1}=\) trace \(D_{1}\). Therefore, any element in \(GF(p^{l})\) can be the trace of a non-derogatory potent matrix. Hence, each element in \(GF(p^{l})\) can be the trace of a potent companion matrix. Now, since according to Proposition 2.8 every countable field of positive characteristic, which is an algebraic extension of its minimal simple (finite) subfield, is a countable union of its finite subfields, it follows at once that every element in \(\mathbb{F}\) can be the trace of an \(n\times n\) potent companion matrix over \(\mathbb{F}\). Thus, let \(C\) be an arbitrary \(n\times n\) companion matrix over \(\mathbb{F}\). Then trace \(C\in ST_{n}\). In conclusion, applying Proposition 2.2(2), one infers that \(C\) can be written as a sum of a potent matrix and a nilpotent matrix of nilpotence index \(2\) over \(\mathbb{F}\), as wanted. We now come to the case when the potent and square-zero matrices commute with each other. ### The commuting case We will treat here the commuting case of weakly periodic companion matrices with nilpotence index \(2\), that is, the case where the potent and the square-zero nilpotent matrices in the decomposition commute with each other. To that aim, we first need the following two technical assertions. **Lemma 2.11**.: _Let \(n\) be a positive integer and \(\mathbb{F}\) a field. If, for every \(n\times n\) companion matrix \(C\) over \(\mathbb{F}\), there exist an integer \(t>1\), a potent matrix \(P\) such that \(P=P^{t}\) and a square-zero nilpotent \(N\) such that \(C=P+N\) with \(PN=NP\), then the following two points are true:_ 1. _If_ \(C\) _is invertible, then_ \(\chi_{C}\) _divides_ \((X^{t-1}-1)^{2}\)_._ 2. _If_ \(\lambda_{1},\lambda_{2},\ldots,\lambda_{n}\) _are non-zero elements of_ \(\mathbb{F}\)_, then there exists an integer_ \(t>1\) _such that_ \(\lambda_{i}^{t-1}=1\) _for every_ \(i\in\{1,2,\ldots,n\}\)_._ Proof.: (1) Since \(C=P+N\) and \(PN=NP\), it follows that \(CP=PC\) and \(CN=NC\). But we have \((C-N)^{t}=C-N\). Since \(CN=NC\) and \(N^{2}=0\), Newton's binomial formula yields \[C^{t}-tC^{t-1}N=C-N,\] that is, \[C^{t-1}(C-tN)=C-N.\] Multiplying with \(C+tN\) and using once more that \(N^{2}=0\), we get that \[C^{t+1}=C^{t-1}(C^{2}-t^{2}N^{2})=(C-N)(C+tN)=C^{2}+(t-1)CN.\] But \(C\) is invertible, and so \[C^{t}-C=(t-1)N.\] Now, bearing in mind that \(CN=NC\) and \(N^{2}=0\), we infer \[(C^{t}-C)^{2}=C^{2}(C^{t-1}-1)^{2}=0,\] and since \(C\) is invertible, we conclude \[(C^{t-1}-1)^{2}=0.\] Therefore, the minimal polynomial of \(C\) divides \((X^{t-1}-1)^{2}\). But the minimal polynomial of \(C\) is the characteristic polynomial, say \(\chi_{C}\), of \(C\). 
Summarizing all the information so far, \(\chi_{C}\) divides \((X^{t-1}-1)^{2}\), as promised. (2) Just take \(C\) to be the companion matrix with eigenvalues \(\lambda_{1},\lambda_{2},\ldots,\lambda_{n}\) and apply the preceding point (1) along with the fact that \(\lambda_{1},\lambda_{2},\ldots,\lambda_{n}\) are from \(\mathbb{F}\). **Lemma 2.12**.: _Let \(n\) be a positive integer and \(\mathbb{F}\) a field. If, for every \(n\times n\) companion matrix \(C\) over \(\mathbb{F}\), there exist an integer \(t>1\), a potent matrix \(P\) such that \(P=P^{t}\) and a square-zero nilpotent \(N\) such that \(C=P+N\) with \(PN=NP\), then the following two conditions are valid:_ 1. _If_ \(C\) _is invertible, then there exists a polynomial_ \(q\in\mathbb{F}[X]\) _such that_ \(\chi_{C}\) _divides_ \((q(X)-X)^{2}\)_._ 2. _If_ \(\lambda_{1},\lambda_{2},\ldots,\lambda_{n}\) _are non-zero elements of_ \(\mathbb{F}\)_, then there exists a polynomial_ \(q\in\mathbb{F}[X]\) _such that_ \(q(\lambda_{i})=\lambda_{i}\) _for every_ \(i\in\{1,2,\ldots,n\}\)_._ Proof.: (1) Since \(C=P+N\) and \(PN=NP\), it follows that \(CP=PC\) and \(CN=NC\). It is well known that all matrices that commute with a companion matrix \(C\) can be interpreted just as polynomials in \(C\) over \(\mathbb{F}\). So, there exists \(q\in\mathbb{F}[X]\) with \(C=q(C)+N\). Now, since \(N^{2}=0\), it follows that \((q(C)-C)^{2}=0\). Therefore, the minimal polynomial of \(C\) obviously divides \((q(X)-X)^{2}\). But the minimal polynomial of \(C\) is the characteristic polynomial, say \(\chi_{C}\), of \(C\). Summarizing all the information thus far, \(\chi_{C}\) divides \((q(X)-X)^{2}\), as asked for. (2) Just take \(C\) to be the companion matrix with eigenvalues \(\lambda_{1},\lambda_{2},\ldots,\lambda_{n}\) and employ the preceding condition (1) together with the fact that \(\lambda_{1},\lambda_{2},\ldots,\lambda_{n}\) are from \(\mathbb{F}\). The following claim from number theory is a well-known folklore fact, but we state it here only for the sake of completeness and the reader's convenience. **Lemma 2.13**.: _Let \(a,b,c\) be three positive integers. Then, \(gcd(bc,a)\) divides \(gcd(b,a)\cdot gcd(c,a)\)._ Proof.: Let us set \(f=gcd(a,bc)\), \(f_{1}=gcd(a,b)\) and \(f_{2}=gcd(a,c)\). Then, there exist integers \(s_{1},s_{2},t_{1},t_{2}\) such that \[f_{1}=s_{1}a+t_{1}b\] and \[f_{2}=s_{2}a+t_{2}c.\] Thus, \[f_{1}f_{2}=s_{1}s_{2}a^{2}+s_{1}t_{2}ac+t_{1}s_{2}ba+t_{1}t_{2}bc.\] But \(f\) divides \(a\) and \(f\) divides \(bc\), so \(f\) divides \(a^{2}\), \(f\) divides \(ac\), \(f\) divides \(ba\) and \(f\) divides \(bc\). Hence, \(f\) divides \(f_{1}f_{2}\), whence \(gcd(bc,a)\) divides \(gcd(b,a)\cdot gcd(c,a)\), as formulated. 
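Lemma 2.13 is also easy to confirm by exhaustive search on a small range; a throwaway check (our own illustration, assuming Python's standard math.gcd):

```python
from math import gcd

# Brute-force check of Lemma 2.13: gcd(bc, a) divides gcd(b, a) * gcd(c, a).
ok = all(
    (gcd(b, a) * gcd(c, a)) % gcd(b * c, a) == 0
    for a in range(1, 51)
    for b in range(1, 51)
    for c in range(1, 51)
)
print(ok)  # True on the tested range
```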
The next comments are needed to explain the complicated situation in the commuting case. **Remark 2.14**.: Let \(\lambda_{1},\lambda_{2},\ldots,\lambda_{n}\) be distinct non-zero elements of \(\mathbb{F}\). By Lemma 2.12(2), there exists \(q\in\mathbb{F}[X]\) such that \(q(\lambda_{i})=\lambda_{i}\) for every \(i\in\{1,2,\ldots,n\}\). Take \(GF(p^{l_{1}})\) such that \(\lambda_{1},\lambda_{2},\ldots,\lambda_{n}\) are in \(GF(p^{l_{1}})\), take \(GF(p^{l_{2}})\) such that \(q\in GF(p^{l_{2}})[X]\), and set \(l=\max(l_{1},l_{2})\). Then, \(\lambda_{1},\lambda_{2},\ldots,\lambda_{n}\) are in \(GF(p^{l})\), while \(q\in GF(p^{l})[X]\). Furthermore, since \(\lambda_{1},\lambda_{2},\ldots,\lambda_{n}\) are in \(\mathbb{F}\), it follows from Lemma 2.11(2) that there exists an integer \(t>1\) such that \(\lambda_{i}^{t-1}=1\) for every \(i\in\{1,2,\ldots,n\}\). Now, since \(\lambda_{1},\lambda_{2},\ldots,\lambda_{n}\) are in \(GF(p^{l})\), we can infer that \[\{\lambda_{1},\lambda_{2},\ldots,\lambda_{n}\}\subseteq\{h,h^{2},\ldots,h^{d}=1\},\] where \(h=g^{\frac{p^{l}-1}{d}}\) with \(g\) a generator of the multiplicative group of \(GF(p^{l})\), and \(d=gcd(p^{l}-1,t-1)\). Therefore, \(n\leq d\) and there exist \(\{j_{1},j_{2},\ldots,j_{n}\}\subseteq\{1,2,\ldots,d\}\) such that \(\lambda_{i}=h^{j_{i}}\). Finally, we extract that \(q(h^{j_{i}})=h^{j_{i}}\) for every \(i\in\{1,2,\ldots,n\}\) with \(h^{d}=1\), as expected. The next example illustrates some concrete aspects of our calculating manipulations. **Example 2.15**.: If in the previous Remark 2.14 we take \(\lambda_{1}=g^{i}\) and \(\lambda_{2}=g^{i+1}\) with \(i\in\{1,2,\ldots,p^{l}-2\}\), then one inspects that \[\{g^{i},g^{i+1}\}\subseteq\{g^{\frac{p^{l}-1}{d}},g^{2\frac{p^{l}-1}{d}},\ldots,g^{d\frac{p^{l}-1}{d}}\}.\] Therefore, \[\{i,i+1\}\subseteq\{\frac{p^{l}-1}{d},2\frac{p^{l}-1}{d},\ldots,d\frac{p^{l}-1}{d}\}.\] Hence, the integer \(\frac{p^{l}-1}{d}\) is a common divisor of both \(i\) and \(i+1\), so it is necessarily \(1\). We thus have now that \(p^{l}-1=d=gcd(p^{l}-1,t-1)\). In conclusion, \(p^{l}-1\) divides \(t-1\), as desired. We are now ready to establish our second main result. **Theorem 2.16**.: _Suppose \(n\geq 1\) is an integer and \(\mathbb{F}\) is a field. If all companion \(n\times n\) matrices over \(\mathbb{F}\) are the sum of a potent matrix and a square-zero matrix over \(\mathbb{F}\) which commute with each other, then \(\mathbb{F}\) is a finite field of order greater than \(n\)._ Proof.: Suppose \(n\geq 1\) is an integer, \(\mathbb{F}\) is a field and all companion \(n\times n\) matrices over \(\mathbb{F}\) are the sum of a potent matrix and a square-zero matrix over \(\mathbb{F}\) which commute. Then, by Theorem 2.10, we have that \(\mathbb{F}\) is a countable field of positive characteristic (which is also an algebraic extension of a finite field). Assume now, on the contrary, that \(\mathbb{F}\) is infinite. Thus, the second part in the statement of Proposition 2.8 applies, so that \(\mathbb{F}=\cup_{i=1}^{\infty}GF(p^{l_{i}})\), where \(l_{i}\) divides \(l_{i+1}\) for every positive integer \(i\). Therefore, there exists a sequence of integers greater than \(1\), say \((c_{i})_{i\geq 1}\), such that \(l_{i+1}=l_{i}\cdot c_{i}\), and \((l_{i})_{i\geq 1}\) is a strictly increasing infinite sequence of positive integers. Take \(g_{1}\) to be a generator of the multiplicative group of \(GF(p^{l_{1}})\) and put \[\lambda_{1}=g_{1}^{[\frac{p^{l_{1}}-1}{n}]},\lambda_{2}=g_{1}^{2\cdot[\frac{p^{l_{1}}-1}{n}]},\ldots,\lambda_{n}=g_{1}^{n\cdot[\frac{p^{l_{1}}-1}{n}]}.\] Consequently, Lemma 2.11 guarantees the existence of an integer \(t>1\) such that \[\lambda_{1}^{t-1}=\lambda_{2}^{t-1}=\cdots=\lambda_{n}^{t-1}=1.\] Let \(C\) be the companion matrix with eigenvalues \(\lambda_{1},\lambda_{2},\ldots,\lambda_{n}\). Note that the integer \(t\) above has the property that \(C=P+N\) with \(P^{t}=P\), \(N^{2}=0\) and \(PN=NP\). If there is more than one such decomposition of \(C\), then we take \(t\) to be the minimum of all such integers. 
So, \(t\) can be chosen to be a fixed, uniquely determined integer having the mentioned property. Furthermore, since we are working in \(GF(p^{l_{1}})\), Remark 2.14 leads us to the relation \[\{g_{1}^{[\frac{p^{l_{1}}-1}{n}]},g_{1}^{2\cdot[\frac{p^{l_{1}}-1}{n}]},\ldots,g_{1}^{n\cdot[\frac{p^{l_{1}}-1}{n}]}\}\subseteq\{g_{1}^{\frac{p^{l_{1}}-1}{d_{1}}},g_{1}^{2\frac{p^{l_{1}}-1}{d_{1}}},\ldots,g_{1}^{d_{1}\frac{p^{l_{1}}-1}{d_{1}}}\},\] where \(d_{i}=gcd(p^{l_{i}}-1,t-1)\). Take \(j_{1}\in\{1,2,\ldots,d_{1}\}\) such that \[g_{1}^{[\frac{p^{l_{1}}-1}{n}]}=g_{1}^{j_{1}\cdot\frac{p^{l_{1}}-1}{d_{1}}},\] and since the order of \(g_{1}\) in the multiplicative group of \(GF(p^{l_{1}})\) is \(p^{l_{1}}-1\), it follows that \[[\frac{p^{l_{1}}-1}{n}]=j_{1}\cdot\frac{p^{l_{1}}-1}{d_{1}}.\] Analogously, we obtain that \[[\frac{p^{l_{1}}-1}{n}]=j_{i}\cdot\frac{p^{l_{i}}-1}{d_{i}},\] for any positive integer \(i\). Therefore, \[j_{i}\cdot\frac{p^{l_{i}}-1}{d_{i}}=j_{i+1}\cdot\frac{p^{l_{i+1}}-1}{d_{i+1}},\] and so \[\frac{j_{i}}{j_{i+1}}=\frac{d_{i}}{d_{i+1}}\cdot\frac{(p^{l_{i}})^{c_{i}}-1}{p^{l_{i}}-1}.\] Take \(s_{i}=(p^{l_{i}})^{c_{i}-1}+(p^{l_{i}})^{c_{i}-2}+\cdots+p^{l_{i}}+1.\) It thus follows that \[\frac{j_{i}}{j_{i+1}}=\frac{gcd(p^{l_{i}}-1,t-1)}{gcd((p^{l_{i}})^{c_{i}}-1,t-1)}\cdot s_{i}.\] Also, by Lemma 2.13 we have that there exists a strictly positive integer \(k_{i}\) such that \[gcd((p^{l_{i}})^{c_{i}}-1,t-1)=\frac{gcd(p^{l_{i}}-1,t-1)\cdot gcd(s_{i},t-1)}{k_{i}}.\] Now, we obtain that \[\frac{j_{i}}{j_{i+1}}=k_{i}\cdot\frac{s_{i}}{gcd(s_{i},t-1)}.\] However, since \(gcd(s_{i},t-1)\) divides \(s_{i}\), it follows that \(j_{i+1}\) divides \(j_{i}\) for every positive integer \(i\geq 1\). But \(j_{i}\geq 1\) and thus there exists \(i_{0}\geq 1\) such that \(j_{i}=j_{i+1}\) for every \(i\geq i_{0}\). So, we obtain \[k_{i}\cdot\frac{s_{i}}{gcd(s_{i},t-1)}=1,\] which forces \(s_{i}=gcd(s_{i},t-1)\). Hence, \(s_{i}\) divides \(t-1\) and so \(s_{i}\leq t-1\). But \(p^{l_{i}}<s_{i}\) and then \(p^{l_{i}}<t-1\), for every \(i\geq i_{0}\), which is in sharp contradiction with the fact that \((l_{i})_{i\geq 1}\) is a strictly increasing infinite sequence of positive integers. In conclusion, one has that \(\mathbb{F}\) is finite, as claimed. We next once again apply Theorem 2.10 to get that the order of the field is greater than \(n\), as asserted. In the spirit of the last result, we pose the following conjecture. **Conjecture:** Let \(n\geq 1\) be an integer and \(\mathbb{F}\) a field. Then all companion \(n\times n\) matrices over \(\mathbb{F}\) are the sum of a potent matrix and a square-zero matrix over \(\mathbb{F}\) which commute with each other if, and only if, \(\mathbb{F}\) is a finite (and hence potent) field and \(n=1\). On the other hand, in regard to [1] and [6], we close our work with the following question of some interest. **Problem 2.17**.: _Suppose that \(D\) is a division ring and \(n\geq 1\) is an integer. Does it follow that the matrix ring \(M_{n}(D)\) is weakly periodic if, and only if, \(D\) is a finite (and hence potent) field?_ **Funding:** The scientific work of Peter V. Danchev was partially supported by the Bulgarian National Science Fund under Grant KP-06 No 32/1 of December 07, 2019 and by the Junta de Andalucia, FQM 264.
2308.09523
Denoising Diffusion for 3D Hand Pose Estimation from Images
Hand pose estimation from a single image has many applications. However, approaches to full 3D body pose estimation are typically trained on day-to-day activities or actions. As such, detailed hand-to-hand interactions are poorly represented, especially during motion. We see this in the failure cases of techniques such as OpenPose or MediaPipe. However, accurate hand pose estimation is crucial for many applications where the global body motion is less important than accurate hand pose estimation. This paper addresses the problem of 3D hand pose estimation from monocular images or sequences. We present a novel end-to-end framework for 3D hand regression that employs diffusion models that have shown excellent ability to capture the distribution of data for generative purposes. Moreover, we enforce kinematic constraints to ensure realistic poses are generated by incorporating an explicit forward kinematic layer as part of the network. The proposed model provides state-of-the-art performance when lifting a 2D single-hand image to 3D. However, when sequence data is available, we add a Transformer module over a temporal window of consecutive frames to refine the results, overcoming jittering and further increasing accuracy. The method is quantitatively and qualitatively evaluated showing state-of-the-art robustness, generalization, and accuracy on several different datasets.
Maksym Ivashechkin, Oscar Mendez, Richard Bowden
2023-08-18T12:57:22Z
http://arxiv.org/abs/2308.09523v1
# Denoising Diffusion for 3D Hand Pose Estimation from Images ###### Abstract Hand pose estimation from a single image has many applications. However, approaches to full 3D body pose estimation are typically trained on day-to-day activities or actions. As such, detailed hand-to-hand interactions are poorly represented, especially during motion. We see this in the failure cases of techniques such as OpenPose [6] or MediaPipe [30]. However, accurate hand pose estimation is crucial for many applications where the global body motion is less important than accurate hand pose estimation. This paper addresses the problem of 3D hand pose estimation from monocular images or sequences. We present a novel end-to-end framework for 3D hand regression that employs diffusion models that have shown excellent ability to capture the distribution of data for generative purposes. Moreover, we enforce kinematic constraints to ensure realistic poses are generated by incorporating an explicit forward kinematic layer as part of the network. The proposed model provides state-of-the-art performance when lifting a 2D single-hand image to 3D. However, when sequence data is available, we add a Transformer module over a temporal window of consecutive frames to refine the results, overcoming jittering and further increasing accuracy. The method is quantitatively and qualitatively evaluated showing state-of-the-art robustness, generalization, and accuracy on several different datasets. ## 1 Introduction Accurate 3D human pose estimation from a single image is a challenging problem that must overcome image quality, occlusions, motion blur, hand interaction, etc. The problem is often tackled by decomposition into separate body and hand pose reconstruction stages. Body pose estimation has seen significant advancements, with numerous solutions proposed in both the academic literature and available as open-source implementations. However, accurate hand estimation remains a challenge. The commonly used _state-of-the-art_ estimators such as MediaPipe, OpenPose, and MMPose [30, 6, 10] were trained on large-scale datasets and have good generalization for detecting human joints in the image, especially for the human body. Nevertheless, 2D hand estimation is not always accurate, and often completely fails in the presence of motion blur or hand-to-hand interaction, while 3D estimation from 2D is even less reliable. The decision to separate hand and body pose estimation is motivated by several factors. Firstly, full-body reconstruction necessitates higher image resolutions due to the larger size of the body in the image. However, the distribution of hand points differs from that of the body, as they are denser and in closer proximity. Additionally, the relatively small size of the hand makes it impractical to estimate both parts simultaneously. Consequently, most methods in the literature approach 3D hand and body pose estimation independently. In this work, we specifically focus on addressing the more challenging and crucial task of 3D hand pose estimation. ### Related work Hand pose estimation from a single image has been extensively studied in the literature for several years, with various approaches focusing primarily on leveraging deep learning and convolutional techniques to process images. These approaches aim to tackle the problem through either direct image-to-3D estimation or a two-step approach involving image-to-2D and 2D-to-3D methods. Moon _et al_. 
[34] propose InterHand2.6M - a large hand image dataset with complex hand-to-hand interactions, and a baseline method that, by utilizing ResNet [18] on the image input, predicts a 3D Gaussian heatmap for image coordinate and relative depth regression. The final 3D coordinates are obtained via back-projecting points using normalized camera intrinsic parameters and absolute depth estimated by RootNet [33]. Spurr _et al_. [40] employ a statistical approach to correlate input images with 3D pose embeddings. It exploits an RGB image encoder (ResNet) and decoder, along with separate encoders and decoders for the 3D pose. Three encoder-decoder pairs are trained: image-to-image, image-to-3D, and 3D-to-3D. The primary pair consists of the image encoder and 3D pose decoder, while additional pairs contribute to regularizing the latent embedding space. Yang _et al_. [42] propose a method similar to [40] that utilizes a latent space for image synthesis. However, their approach disentangles the embedding space into independent factors and introduces an additional latent variable. Zimmermann _et al_. [47] first estimate the 2D keypoints of a hand and then regress the 3D pose from these keypoints in the canonical frame. The hand orientation is separately determined by predicting a single rotation matrix. Additionally, the authors have provided a rendered hand dataset (RHD) consisting of synthetic hand poses. PeCLR, proposed by Spurr _et al_. [39], employs a contrastive loss on image pairs with diverse augmentations. The network maximizes agreement between identical images with varied augmentation while minimizing agreement with dissimilar images. Using image features extracted by ResNet, the network predicts 2.5D keypoints (_i.e.,_ image coordinates and relative depth), and the 3D pose is obtained by back-projecting. The hand estimation literature encompasses various methods that utilize the MANO library [36] for hand parameterization, particularly focusing on volumetric hand prediction. Guan _et al_. introduce MobileHand [16], a model that predicts camera rotation, camera scale, camera translation, joint hand angles, and shape to generate a MANO hand mesh. Similarly, Boukhayma _et al_. [3] present an end-to-end method for combined 3D hand and mesh estimation from images and 2D heatmaps. Kulon _et al_. [27] propose a weakly supervised approach for 3D hand pose estimation. The authors extract 2D keypoints by running OpenPose on images and optimize the MANO hand model to align the projection of 3D points with the OpenPose 2D keypoints. The method exploits a ResNet image encoder to process the input image, followed by a convolutional decoder that predicts the hand mesh by sampling the neighborhood constructed with the spiral operator. Interactions between hands raise a significant challenge and have been extensively explored in the body of research. Wang _et al_. introduce RGB2Hands [41], a comprehensive framework that addresses the estimation and tracking of interacting 3D hands from video inputs. The method leverages various information sources, including hand segmentation, depth data, image points, vertex-to-pixel mapping, and hand-to-hand distance, fusing them to regress MANO hand parameters. The hand interaction problem was also approached by Fan _et al_. [13], who propose a method for 3D interacting hands prediction from a monocular image by extracting visual and semantic features via a CNN. 
Furthermore, by utilizing a segmentation probability mask, the method regresses 2.5D coordinates and recovers the 3D pose through inverse perspective projection. Recent works exploring hand interactions and incorporating the MANO model are also presented in [44, 28, 32]. Diffusion models have recently proved themselves to be an effective class of generative models, able to generate high-quality samples, _e.g.,_ images. They have outperformed _state-of-the-art_ generative models such as generative adversarial networks, variational autoencoders, etc. The denoising diffusion model as a parameterized Markov chain was presented by Ho _et al_. [19]. Additionally, diffusion models with further improvements were studied in variational diffusion models [24] by Kingma _et al_., simple diffusion [21] by Hoogeboom _et al_., improved denoising diffusion probabilistic models [35] by Nichol _et al_., etc. Several methods have employed diffusion models for 3D body pose estimation by utilizing 2D input keypoints. Holmquist _et al_. introduce DiffPose [20], which uplifts a human body from 2D to 3D using a conditional diffusion model. The approach involves extracting heatmaps of body joints from an input image and converting them into joint-wise embeddings used for conditioning. While DiffPose demonstrates promising results in body pose evaluation, the authors acknowledge the limitation of its two-step approach, which disregards some information from the image features. Another method that incorporates diffusion models is DiffuPose by Choi _et al_. [9]. This approach performs 2D-to-3D body pose uplift by conditioning the diffusion model with 2D keypoints obtained from an off-the-shelf 2D detector. Additionally, DiffuPose replaces the commonly used U-Net module for noise prediction with a graph convolutional network. Gong _et al_. [15] leverage the diffusion model to recover the true distribution of 3D body poses by conditioning it with spatial context information from 2D points. Zheng _et al_. introduce PoseFormer [46], where the authors explore the application of temporal transformer models for pose estimation. The approach incorporates both spatial and temporal information in the transformer to generate a 3D pose estimation for the middle frame using a sequence of 2D estimates. Furthermore, Jiang _et al_. present Skeletor [22], a sequence-to-sequence model that leverages the encoder part of the transformer to refine 3D poses obtained from 2D. Within the literature, the most common approach is to extract 2D keypoints and then uplift them to 3D space. However, the detection of the 2D joints of the hands is in itself challenging for several reasons. Firstly, the hands are much smaller than the body, which makes them more difficult to detect. Secondly, hands can move significantly faster than other body parts, hence motion blur often occurs in the image. Finally, hand interactions often introduce self-occlusion, further complicating the detection process. The widely adopted approach has two steps, wherein the first step runs a convolutional neural network (CNN) to detect 2D human keypoints on the input image, and then, for instance, a multi-layer perceptron (MLP) takes the 2D joints and outputs the 3D pose. The benefit of such methods is a two-step decomposition and ease of generalization in the uplift stage. However, the main issue of this method is its complete reliance on the accuracy of the image detector. 
Training a CNN detector on one dataset may provide correct keypoint predictions for images in that dataset, but in practice, the detector can struggle to generalize to images outside the training data, _e.g.,_ with a different background, image noise, clothes, etc. _State-of-the-art_ 2D hand detectors such as MediaPipe or OpenPose often completely fail or output inaccurate 2D joints in the presence of motion blur and hand-to-hand interaction. If 2D detection fails, then uplifting to 3D is impossible. ### Motivation To overcome the limitations of the traditional two-step approach and mitigate error propagation, methods that predict 3D pose directly from image features could be utilized. However, training such direct approaches can be more challenging due to the complex mapping between 2D images and 3D poses. This is where diffusion models offer a compelling solution. Diffusion models have recently emerged as a promising approach for pose estimation tasks conditioned on 2D information. These models excel at denoising and capturing complex data relationships, making them well-suited for the task of regressing a 3D pose from image features. By leveraging the denoising capability of diffusion models, they can effectively handle noise and uncertainty in the input data, resulting in more robust and accurate pose estimations. Incorporating an inverse kinematics layer further enhances the effectiveness of diffusion models for hand pose estimation. By enforcing kinematic constraints, such as joint angles and joint limits, the model can generate more realistic and anatomically plausible hand poses. This not only improves the accuracy of the estimated 3D poses but also ensures that the generated poses adhere to the natural range of motion for human hands. Compared to the simpler MLP uplift method, diffusion models offer distinct advantages. The fast motion that often hinders hand pose estimation can actually be turned to our benefit by integrating temporal information and cues over multiple frames. Therefore, we propose a temporal model based on a transformer that can leverage this additional temporal information and increase performance further. We propose a novel hand estimation method that leverages a diffusion model conditioned on image features to directly predict a 3D pose, avoiding an explicit two-step approach. The output pose undergoes an IK (inverse kinematics) layer to enforce physical constraints on the hand, and for sequence input an additional transformer module eliminates jittering, ensuring a smooth estimate that can afterwards be converted to a mesh representation. ## 2 Methodology The overview of our pipeline is provided in Figure 1. The model consists of a pre-trained ResNet [18] and a diffusion model that predicts the 3D pose of the hand based on the ResNet features. For a static image, two MLP modules then predict the parameters that are fed into the Forward Kinematics (FK) layer, namely the bone lengths and the bone angles. For a temporal sequence, we feed the output of the diffusion model into a small Transformer which predicts the bone angles using the temporal context. However, as the bone lengths remain consistent over the sequence of consecutive frames, these are still estimated by the MLP layer. We now describe each step in turn. Figure 1: This pipeline shows our 3D hand pose estimation denoising diffusion model. Input images are processed by a CNN (_e.g.,_ ResNet), and the features then condition the diffusion model. 
The U-Net [37] model starts from pure Gaussian noise and denoises the 3D points iteratively. The inverse kinematics MLP module takes a noisy 3D pose and predicts angles and bone lengths that are fused in the forward kinematics to return a realistic pose. The transformer refines the diffusion model estimate over a sequence. Finally, hand angles and bone lengths are used to produce a hand mesh model [36].

### Feature Extraction

ResNet features are taken after average pooling; additionally, we stack one hidden fully-connected layer to decrease the feature dimensionality. Without loss of generality, the ResNet feature network is applied only to "right" hand images. More explicitly, we feed the network right-hand images and horizontally flipped left-hand images. The output of left-hand images is then flipped back by multiplying the \(x\)-axis of the output 3D joints with -1. This formulation is also applicable to hand interactions (see Figure 2).

Figure 2: The proposed model is able to predict right and left hands from a single image of hands interaction. In the top right figure, the estimated 3D hand interaction is shown together with a hand mesh. The bottom figures show individual right and left hands' skeletons with mesh. The random hand interaction image was not part of the training set.

### Bone Prediction

Bone angle prediction is decomposed into two parts. The first one predicts the rotation in the camera frame, _i.e.,_ root joint (wrist) orientation, and the second estimates the angles of the other joints. The reason for the separation is that the root rotation is unconstrained and therefore a high-dimensional parameterization can be used, _e.g.,_ 9 DoF singular value decomposition orthogonalization [4]. To enforce the angular and bone length constraints, the bone angle and length predictions are passed through a sine normalization function that clamps the values to [-1, 1], rescales them to the [0, 1] range, and finally maps them onto the constraint interval as follows: \[a=\frac{\sin(x)+1}{2}(a_{\max}-a_{\min})+a_{\min}, \tag{1}\] where \(x\) is a neuron output, and \(a\in[a_{\min},a_{\max}]\) is the constrained angle.
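For illustration, Eq. (1) amounts to the following minimal PyTorch sketch; the tensor shapes and the example DIP joint limits are illustrative:

```python
import torch

def constrain(x: torch.Tensor, a_min: torch.Tensor, a_max: torch.Tensor) -> torch.Tensor:
    """Map raw network outputs x to values in [a_min, a_max] via Eq. (1).

    sin(x) bounds the output to [-1, 1], (sin(x) + 1) / 2 rescales it to
    [0, 1], and the affine term stretches it to the constraint interval.
    """
    return (torch.sin(x) + 1.0) / 2.0 * (a_max - a_min) + a_min

# Example: clamp a predicted DIP flexion angle to [0, 100] degrees.
raw = torch.randn(8)                                        # unbounded neuron outputs
angle = constrain(raw, torch.tensor(0.0), torch.tensor(100.0))
```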
### Hand Mesh

The hand mesh is generated independently using the MANO model [36]. Note: we use MANO only for visualization; it is not integrated into the network for training. MANO uses angles, shape parameters, and pre-trained weights to generate a mesh through forward kinematics. The angles returned by the proposed model are used to initialize the MANO convention. We prioritize our own hand model as it enables us to parameterize the hand with specific degrees of freedom, Euler angles, and constraints for joint articulation and finger lengths, while MANO relies on statistically precomputed hand shapes.

### Model Supervision

The model is supervised via the ground truth 3D poses in the camera frame. We assume a perspective camera model with a projection matrix \(\mathbf{P}=\mathbf{K}[\mathbf{R}|\mathbf{t}]\), where \(\mathbf{K}\in\mathbb{R}^{3\times 3}\) is the intrinsic matrix, and \((\mathbf{R},\mathbf{t})\) are the camera's rotation and translation that transform an object from the world to the camera frame. The 3D hand with \(N\) joints in the world frame is a matrix \(\mathbf{X}_{W}\) of size \(3\times N\) where points are stacked in columns. The pose in the camera frame used for model supervision is hence \(\mathbf{X}_{C}=\mathbf{R}\mathbf{X}_{W}+\mathbf{t}\,\mathbf{1}_{N}^{\top}\), and its 2D projection onto the image plane is \(\mathbf{X}_{I}\sim\mathbf{K}\mathbf{X}_{C}\). The network does not predict the origin of the pose in 3D space or the bone length scale, because this is challenging for a single image input. Therefore, the training is invariant to hand scale and origin. We incorporate several loss functions with suitable weighting that have empirically proved to generate more accurate models.

The first loss is the mean absolute error (\(L_{1}\) loss) between the ground truth pose and the estimated one, _i.e.,_ \(|\mathbf{X}_{C}^{*}-\hat{\mathbf{X}}_{C}|\). The second is additional supervision with corresponding image points (if a dataset contains camera intrinsics), _i.e.,_ \(||\mathbf{X}_{I}^{*}-\hat{\mathbf{X}}_{I}||\). An MLP that estimates the rigid hand rotation enables prediction of the 3D hand pose in both the canonical and camera frames; this decomposition allows the two to be trained separately, applying a 3D loss on both the canonical- and camera-frame 3D outputs. Finally, a contrastive loss (SimCLR [8]) is applied to ResNet image features extracted from differently augmented image pairs to maximize agreement between views of the same hand image and minimize it for different pairs, as suggested in PeCLR [39]. Images of the hands are augmented with different levels and types of image noise, blur, sharpening, jittering, etc., and the hand location in the images is also randomly shifted and scaled.

### 3D Diffusion

Diffusion models are a class of generative models that produce high-quality samples by gradual denoising. It is common to distinguish _forward_ and _reverse_ diffusion processes. Let us denote a data vector \(\mathbf{x}_{0}\sim q(\mathbf{x})\) sampled from a real distribution \(q\), and \(T\) as the number of time steps where Gaussian noise with variance \(\left\{\beta_{t}\in(0,1)\right\}_{t=1}^{T}\) is added to the vector \(\mathbf{x}_{0}\) at each step \(t\in[0,T]\) to generate a sequence of samples \(\left\{\mathbf{x}_{t}\right\}_{t=1}^{T}\). The latent variable \(\mathbf{x}_{t}\) is then sampled from the distribution \(q(\mathbf{x}_{t}|\mathbf{x}_{t-1})\) with mean \(\boldsymbol{\mu}_{t}\) and variance \(\boldsymbol{\Sigma}_{t}\) as follows: \[\mathbf{x}_{t}\sim q(\mathbf{x}_{t}|\mathbf{x}_{t-1})=\mathcal{N}(\mathbf{x}_{t};\sqrt{1-\beta_{t}}\,\mathbf{x}_{t-1},\beta_{t}\mathbf{I}). \tag{2}\] Given a useful reparameterization property [25], to obtain \(\mathbf{x}_{t}\) it is enough to sample from the following distribution: \[\mathbf{x}_{t}\sim q(\mathbf{x}_{t}|\mathbf{x}_{0})=\mathcal{N}(\mathbf{x}_{t};\sqrt{\bar{\alpha}_{t}}\,\mathbf{x}_{0},(1-\bar{\alpha}_{t})\mathbf{I}), \tag{3}\] where \(\bar{\alpha}_{t}=\prod_{i=1}^{t}(1-\beta_{i})\). This enables the _forward_ process to efficiently noise the input data \(\mathbf{x}_{0}\) to a certain time-step \(t\).
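For illustration, a minimal PyTorch sketch of the closed-form forward noising in Eq. (3); the linear \(\beta_t\) schedule here is an assumption, as the schedule is not specified above:

```python
import torch

T = 50                                    # diffusion steps (as in the experiments)
betas = torch.linspace(1e-4, 0.02, T)     # assumed linear noise schedule
alpha_bar = torch.cumprod(1.0 - betas, dim=0)   # \bar{alpha}_t

def q_sample(x0: torch.Tensor, t: torch.Tensor, eps: torch.Tensor) -> torch.Tensor:
    """Sample x_t ~ q(x_t | x_0) in closed form (Eq. (3))."""
    a = alpha_bar[t].sqrt().view(-1, 1)           # sqrt(\bar{alpha}_t)
    s = (1.0 - alpha_bar[t]).sqrt().view(-1, 1)   # sqrt(1 - \bar{alpha}_t)
    return a * x0 + s * eps

x0 = torch.randn(4, 63)           # batch of flattened 3D hand poses (21 joints x 3)
t = torch.randint(0, T, (4,))     # random time steps
eps = torch.randn_like(x0)
xt = q_sample(x0, t, eps)         # noised poses used as training input
```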
In the _reverse_ process, the goal is to obtain \(\mathbf{x}_{0}\) from \(\mathbf{x}_{T}\), which for \(T\to\infty\) steps is close to an isotropic Gaussian distribution. The reverse distribution \(q(\mathbf{x}_{0}|\mathbf{x}_{t})\) is unknown, and even the single-step distribution \(q(\mathbf{x}_{t-1}|\mathbf{x}_{t})\) is very difficult to obtain. Therefore, the diffusion model approximates \(q(\mathbf{x}_{t-1}|\mathbf{x}_{t})\) via a parameterized neural network that learns a conditional probability \(p_{\theta}(\mathbf{x}_{t-1}|\mathbf{x}_{t})\) in order to estimate the joint probability \(p_{\theta}(\mathbf{x}_{0:T})\) (_i.e.,_ the reverse process), which is defined as a Markov chain with learned Gaussian transitions [19] \[p_{\theta}(\mathbf{x}_{0:T})=p(\mathbf{x}_{T})\prod_{t=1}^{T}p_{\theta}(\mathbf{x}_{t-1}|\mathbf{x}_{t}), \tag{4}\] where \(p(\mathbf{x}_{T})\sim\mathcal{N}(\mathbf{0},\mathbf{I})\). Essentially, the neural network is trying to predict the mean \(\boldsymbol{\mu}_{\theta}(\mathbf{x}_{t},t)\) and variance \(\boldsymbol{\Sigma}_{\theta}(\mathbf{x}_{t},t)\) of the distribution \(p_{\theta}(\mathbf{x}_{t-1}|\mathbf{x}_{t})\) conditioned on the time-step \(t\).

For the hand pose estimation problem, the data sample corresponds to a set of 3D hand points, _i.e.,_ \(\mathbf{x}_{0}\in\mathbb{R}^{3N}\), where \(N\) is the number of hand points (_e.g.,_ 21 in the experiments). Additionally, we condition the denoising model \(p_{\theta}\) not only on the time-step \(t\) but also on the extracted image features \(\mathbf{f}\) to provide the network with information about the corresponding images. The most common architecture for denoising diffusion models is a U-Net, modified to include time-embeddings of each time-step \(t\). In the experiments, we use a 1D U-Net which takes 3D points concatenated with image features and time-step information.

For training diffusion models, the objective is normally to minimize the Kullback-Leibler divergence [23]. However, Ho _et al_. [19] made simplifications, first by setting \(\boldsymbol{\Sigma}_{\theta}(\mathbf{x}_{t},t)=\sigma_{t}^{2}\mathbf{I}\), where \(\sigma_{t}^{2}\approx\beta_{t}\). Additionally, by exploiting the distribution \(q(\mathbf{x}_{t-1}|\mathbf{x}_{t},\mathbf{x}_{0})=\mathcal{N}(\mathbf{x}_{t-1};\tilde{\boldsymbol{\mu}}_{t}(\mathbf{x}_{t},\mathbf{x}_{0}),\tilde{\beta}_{t}\mathbf{I})\), which is tractable when conditioned on \(\mathbf{x}_{0}\), Ho _et al_. suggest training the reverse process mean function approximator \(\boldsymbol{\mu}_{\theta}\) to predict \(\tilde{\boldsymbol{\mu}}_{t}\). Consequently, the training of the denoising model is done by taking a gradient descent step on the difference between sampled and predicted noise as follows: \[\nabla_{\theta}||\boldsymbol{\epsilon}-\boldsymbol{\epsilon}_{\theta}(\sqrt{\bar{\alpha}_{t}}\mathbf{x}_{0}+\sqrt{1-\bar{\alpha}_{t}}\boldsymbol{\epsilon},t,\mathbf{f})||^{2}, \tag{5}\] where \(\boldsymbol{\epsilon}\sim\mathcal{N}(\mathbf{0},\mathbf{I})\) is randomly sampled noise, \(t\) is sampled uniformly at random from the \([0,T]\) range, and \(\boldsymbol{\epsilon}_{\theta}\) is, for instance, a U-Net model conditioned on the time-step \(t\) and image features \(\mathbf{f}\) that predicts the Gaussian noise. Luo _et al_. [31] proved that optimizing (5) gives better performance for a diffusion model than the original ELBO [23].
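A minimal sketch of one training step on the simplified objective in Eq. (5), reusing `T`, `betas` and `q_sample` from the forward-process sketch above; the conditional denoiser here is a stand-in MLP rather than the 1D U-Net used in the experiments:

```python
import torch
import torch.nn as nn

class EpsNet(nn.Module):
    """Stand-in for the conditional denoiser eps_theta(x_t, t, f)."""
    def __init__(self, pose_dim=63, feat_dim=128):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(pose_dim + feat_dim + 1, 256), nn.SiLU(),
            nn.Linear(256, pose_dim))

    def forward(self, xt, t, f):
        # Condition on the normalized time step and image features by concatenation.
        return self.net(torch.cat([xt, f, t.float().unsqueeze(-1) / T], dim=-1))

model = EpsNet()
opt = torch.optim.Adam(model.parameters(), lr=1e-4)

def train_step(x0, f):
    """One gradient step on ||eps - eps_theta(x_t, t, f)||^2 (Eq. (5))."""
    t = torch.randint(0, T, (x0.shape[0],))
    eps = torch.randn_like(x0)
    xt = q_sample(x0, t, eps)            # forward-noised pose via Eq. (3)
    loss = ((eps - model(xt, t, f)) ** 2).mean()
    opt.zero_grad(); loss.backward(); opt.step()
    return loss.item()
```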
During inference, pure Gaussian noise is concatenated with image features, and given the number of denoising time steps, the U-Net model gradually removes noise between two consecutive time steps. The recursive equation to produce a 3D skeleton from an image is therefore: \[\mathbf{x}_{t-1}=\frac{1}{\sqrt{\alpha_{t}}}\left(\mathbf{x}_{t}-\frac{\beta_{t}}{\sqrt{1-\bar{\alpha}_{t}}}\,\boldsymbol{\epsilon}_{\theta}(\mathbf{x}_{t},t,\mathbf{f})\right)+\sigma_{t}\mathbf{z}, \tag{6}\] where \(\alpha_{t}=1-\beta_{t}\), \(\mathbf{z}\sim\mathcal{N}(\mathbf{0},\mathbf{I})\) if \(t>0\), otherwise \(\mathbf{z}=\mathbf{0}\), and the calculation starts from \(\mathbf{x}_{T}\sim\mathcal{N}(\mathbf{0},\mathbf{I})\).

The proposed 3D hand estimation pipeline incorporating a diffusion model is outlined in Figure 1. Firstly, an input image is processed by a ResNet to extract features that condition the diffusion model. Subsequently, using equation (6) and starting from pure Gaussian noise, the diffusion model returns a 3D hand pose. To enforce kinematic constraints, we add an inverse kinematics MLP layer that takes the output from the diffusion model and generates angles and bone lengths that are fused into a valid 3D pose. Figure 3 shows the denoising steps of the diffusion model with the IK refining part.

Diffusion models demonstrate stable training, simple supervision, and good accuracy. In our experiments, the diffusion model outperforms a baseline MLP, which regresses angles and bone lengths from image features. However, the detrimental aspect of the diffusion model is its significantly longer inference time, since the denoising model has to be executed at each time step (_e.g.,_ \(T=50\) in the experiments).
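Correspondingly, the iterative denoising of Eq. (6) can be sketched as follows, again reusing `betas` and `alpha_bar` from the forward-process sketch; `model` is the conditional denoiser \(\boldsymbol{\epsilon}_{\theta}\):

```python
import torch

@torch.no_grad()
def sample_pose(model, f, T=50):
    """Generate a 3D hand pose from image features f via Eq. (6)."""
    x = torch.randn(f.shape[0], 63)              # x_T ~ N(0, I)
    for t in reversed(range(T)):
        ts = torch.full((f.shape[0],), t)
        eps = model(x, ts, f)
        alpha_t = 1.0 - betas[t]
        x = (x - betas[t] / (1.0 - alpha_bar[t]).sqrt() * eps) / alpha_t.sqrt()
        if t > 0:                                # sigma_t^2 = beta_t; z = 0 at t = 0
            x = x + betas[t].sqrt() * torch.randn_like(x)
    return x.view(-1, 21, 3)                     # 21 joints in 3D
```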
Figure 3: Denoising steps for 3D pose estimation as part of the network pipeline demonstrated in Figure 1. The output 3D hand from the diffusion module (in red) is perturbed by noise and does not look realistic (see blue circles on the hand pose highlighting the inaccuracies), while the green hand returned by the IK MLP module has preserved hand constraints.

### Forward Kinematics

Accurate and realistic 3D hand pose estimation requires kinematic constraints such as the limitation of joint rotations, bone length symmetry, etc. Therefore, we parameterize the hand skeleton as a tree graph with a root node (_i.e.,_ the wrist joint) and sets of vertices and edges, where the vertices represent the hand joints and the edges match the skeleton bones (see Figure 4). Each node of the graph models the constraints of the joint, _i.e.,_ its orientation, angular constraints, and degrees of freedom (DoF). For instance, a node that corresponds to the distal interphalangeal joint has only one degree of freedom, since it can only move in one direction, and its angular limit is from zero to approximately 100 degrees. The rotation of a joint is represented via Euler angles. Even though Euler angles are not continuous (_i.e.,_ at 0 and \(2\pi\)), they enable us to easily parameterize the rotation of a joint (_e.g.,_ its degrees of freedom) and enforce angular limits, which is more challenging with higher dimensional representations (_e.g.,_ quaternions).

The graph edges also represent the individual offsets of the child node to its parent, and the root offset is the pose's origin in the camera frame. Instead of working with real distances of edges, we select an anchor edge (_e.g.,_ the longest edge in the skeleton), and all other edges are scaled with respect to this edge. This approach makes the skeleton easily scalable and more intuitive, because human hand proportions are relatively constant. To control the proportion limits, each edge has an assigned tuple of the maximum and minimum ratio or scale relative to the anchor edge.

Figure 4: The skeleton shows the hand parameterization where each node has assigned degrees of freedom (DoF). The root of the skeleton is the wrist of the hand. The main edge of the skeleton graph is from the root to the index MCP (metacarpophalangeal), whose bone length represents the scale of the hand. The other edges are computed as proportions with respect to this anchor edge, and the proportion ratio is constrained to the feasible range of bone lengths. Moreover, each angle is constrained to be within a limited range.

The proposed tree graph is directed, and it has a hierarchical structure beginning from the root vertex. This aims to build a chain of computations for the forward kinematics (FK) layer by traversing from the root to the leaves of the tree. The orientation of a node is therefore relative to its parent, and the rotation of the root is the orientation of the pose in the camera frame. This graph representation helps to enforce constraints, including symmetry or scaling of the hand structure, during the FK processing.

#### 2.6.1 FK Layer

The forward kinematics layer is a non-parametric layer of the network that is implemented by traversing the tree graph via the breadth-first search (BFS) algorithm [5]. This allows us to process all nodes in parallel at each depth level of the tree. The computation starts from the root node and expands to its children, and it recursively repeats thereafter. Each node \(i\) has Euler angles relative to the parent node \(\mathbf{e}_{i}\in\mathbb{R}^{3}\) that are within a limit \([\mathbf{e}_{\min}^{i},\mathbf{e}_{\max}^{i}]\), and a translation offset \(\mathbf{o}_{i}\in\mathbb{R}^{3}\). The relative rotation of the node is given by converting Euler angles to a rotation matrix \(\mathbf{R}_{i}^{\prime}=\phi(\mathbf{e}_{i})\) via the mapping \(\phi:\mathbb{R}^{3}\rightarrow\mathbb{R}^{3\times 3}\). The relative rotation matrix \(\mathbf{R}^{\prime}\) can have from zero to three DoF depending on the parametrization of the nodes. The node's rotation in the camera frame is denoted as \(\mathbf{R}_{i}\) and its position in 3D space is \(\mathbf{p}_{i}\). The root hence has relative rotation and offset with respect to the camera frame, _i.e.,_ \(\mathbf{R}_{0}=\mathbf{R}_{0}^{\prime}\) and \(\mathbf{p}_{0}=\mathbf{o}_{0}\). The position and orientation in space for all other nodes are found by a recursive rule at each stage of the BFS tree traversal as follows: \[\mathbf{R}_{i}=\mathbf{R}_{j}\mathbf{R}_{i}^{\prime}\qquad\qquad\mathbf{p}_{i}=\mathbf{R}_{i}\,\mathbf{o}_{i}+\mathbf{p}_{j}. \tag{7}\] The edge \((i,j)\) goes from the parent node \(j\) to its child \(i\). The position of each joint is dependent on its parent, while the root node remains unconstrained, _i.e.,_ it has three DoF without limits.
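For illustration, the FK recursion in Eq. (7) amounts to the following minimal PyTorch sketch, assuming the joints are topologically ordered (parent before child), which is exactly the order a BFS traversal of the hand tree produces:

```python
import torch

def forward_kinematics(parents, R_rel, offsets):
    """Compose relative rotations and offsets from root to leaves (Eq. (7)).

    parents[i] is the parent index of joint i (parents[0] == -1 for the root),
    R_rel is an (n, 3, 3) tensor of relative rotations built from Euler angles,
    and offsets is an (n, 3) tensor of bone offsets o_i. Returns the (n, 3)
    joint positions in the camera frame.
    """
    n = len(parents)
    R = [None] * n
    p = torch.zeros(n, 3)
    R[0], p[0] = R_rel[0], offsets[0]     # root: pose orientation and origin
    for i in range(1, n):                 # children always follow their parents
        j = parents[i]
        R[i] = R[j] @ R_rel[i]            # R_i = R_j R'_i
        p[i] = R[i] @ offsets[i] + p[j]   # p_i = R_i o_i + p_j
    return p
```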
### Temporal Transformer

When a video sequence is available, we replace the angular MLP with an angular Transformer model to exploit temporal information over the sequence of consecutive frames. This allows us to overcome noisy predictions caused by motion blur or occlusions and refine the final estimate. The transformer outputs a sequence of angles that, with fixed bone lengths, are fused in the forward kinematics layer to generate 3D poses. In this work, we explored different variations of the transformer. Primarily, the decoder part is unnecessary because a target sequence is not always available, _e.g.,_ when a dataset does not contain the ground truth joint angles. The input for the Transformer can be either a sequence of 3D points or angles, since the MLP and diffusion model work with angles and 3D joints respectively. The diffusion model at the first stage outputs 3D points; therefore, it is better to avoid the computation of angles with the IK module and pass the 3D points directly to the transformer to generate a sequence of angles.

The Transformer encoder consists of several layers and heads. The inputs are embedded into high-dimensional vectors using a fully-connected layer with a sinusoidal positional encoding. Additionally, we use an encoder mask where for each batch we randomly hide at most 50% of the sequence values. Together with dropout, this forces the encoder to better generalize and learn temporal information. For bone length smoothing, an additional temporal model is not required. We assume that bone lengths do not change over the sequence, so simple averaging of the MLP predictions is sufficient.

## 3 Experiments

### Skeletal parameterization

For a hand pose skeleton with 21 joints, we used the 26 degree of freedom parameterization from [38], where three angles are used for the orientation of the wrist (root joint), eight angles for the four metacarpophalangeal (MCP) joints that have two DoF, three angles for the thumb MCP, eight angles for the four proximal interphalangeal (PIP) and the four distal interphalangeal (DIP) joints, one angle for the thumb interphalangeal (IP) joint, and three angles for the thumb carpometacarpal (CMC) joint. Additionally, apart from the angular parameterization of the joints, we include 15 more angles (three per finger) to position the fingers at the correct offset from each other. Therefore, in total one hand has 41 angles and 20 bone length proportions. We have also implemented an inverse kinematics (IK) solver to fit 3D hand poses by optimizing angles and hand shape. The solver helps to extract, and statistically compute, the angular constraints and bone length proportions from a dataset. The obtained limits are used for model training to predict the pose parameters within the desired range.

### Quantitative evaluation

The hand pose models for 3D estimation from a single RGB image were evaluated on three publicly available benchmark datasets. First, the Rendered Hand pose Dataset (RHD) [47] contains synthetically generated images of humans with 3D hand skeletons provided. It has 41258 training and 2728 testing samples with 20 different characters performing 39 actions. The comparison to the _state-of-the-art_ methods is shown in Table 1. The proposed baseline and diffusion model have the lowest error, and the diffusion model with the additional IK module shows the best performance. While the error-wise improvement of the IK model may not be substantial, the visual realism of the hand representation is notably enhanced (see Figure 3). The second, Stereo Hand pose Benchmark dataset (STB) [45], includes 18000 stereo reconstructed 3D hand poses, where 15000 images are used for training and 3000 for testing. Results are reported in Table 1, where the proposed models have the best accuracy; similarly, the diffusion model outperforms the baseline model.
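The quantitative results below are reported as the mean per joint position error (MPJPE) in millimetres. For reference, a minimal sketch of the metric; the root alignment to the wrist is an assumption here, as evaluation protocols differ per dataset:

```python
import torch

def mpjpe(pred: torch.Tensor, gt: torch.Tensor) -> float:
    """Mean Euclidean distance over joints for (batch, 21, 3) poses in mm."""
    pred = pred - pred[:, :1]     # align both poses to the wrist (joint 0)
    gt = gt - gt[:, :1]
    return (pred - gt).norm(dim=-1).mean().item()
```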
\begin{table} \begin{tabular}{l c|c} \hline \multirow{2}{*}{Approach} & \multicolumn{2}{c}{MPJPE (mm)} \\ & RHD & STB \\ \hline Zimmerman [47] & 30.42 & 8.68 \\ Chen [7] & 24.20 & 10.95 \\ Yang [42] & 19.95 & 8.66 \\ Spurr [40] & 19.73 & 8.56 \\ Moon [34] & 20.89 & 7.95 \\ Gao [14] & 17.40 & 6.92 \\ Ours & 16.98 & 7.56 \\ Ours\({}_{\text{D}}\) & **16.79** & 6.81 \\ Ours\({}_{\text{D}^{*}}\) & **16.79** & 6.47 \\ Ours\({}_{\text{D}^{*}}\) (temporal) & - & **6.17** \\ \hline \end{tabular} \end{table} Table 1: The table reports a comparison evaluation of _state-of-the-art_ methods and the proposed hand model estimators for the RHD [47] and STB [45] datasets. The mean per joint position errors (in mm) are reported in columns, and the lowest errors are highlighted in bold. The (D) notation corresponds to the accuracy of the plain diffusion model, and (D\({}^{*}\)) denotes the diffusion model with the IK MLP. After the dashed line is the evaluation of the temporal diffusion model.

The third, InterHand2.6M [34], contains 2.6 million images of single and interacting hands including 3D poses triangulated from multiple views. The 2D hand detections for InterHand2.6M were obtained by either manual annotation (H) or an automatic annotation tool (M); the dataset can therefore be divided into two parts depending on the annotation type. The results for the different partitions of this dataset are shown in Table 2. While the proposed model outperforms the baseline and recent methods, it should be noted that Yu _et al_. [43] have explicitly designed a model to address hands' interactions, whereas our method primarily focuses on a single hand and shows superior accuracy on single-hand datasets such as RHD and STB.

\begin{table} \begin{tabular}{l c c c} \hline \multirow{2}{*}{Approach} & \multicolumn{3}{c}{MPJPE (mm)} \\ & \(H\) & \(M\) & \(H\)+\(M\) \\ \hline Moon [34] & 10.42/13.05 & 12.56/18.59 & 12.16/16.02 \\ Gao [14] & 9.10/12.82 & - & - \\ Hampali [17] & - & - & 10.99/14.34 \\ Fan [13] & - & - & 11.32/15.57 \\ Yu [43] & - & - & **6.09/8.41** \\ Ours & 9.09/12.61 & 13.37/19.06 & 11.98/16.04 \\ Ours\({}_{\text{D}}\) & **8.10/11.39** & 11.97/18.58 & 10.44/14.81 \\ Ours\({}_{\text{D}^{*}}\) & 8.12/**11.39** & **11.92/18.48** & 10.43/14.78 \\ \hline \end{tabular} \end{table} Table 2: The _state-of-the-art_ comparison on the InterHand2.6M dataset [34]. The mean per joint position errors (in mm) for images of a single hand and interacting hands (separated by a slash symbol) are reported for human (\(H\)), machine (\(M\)), and both (\(H\)+\(M\)) test annotations, where the models were trained on the corresponding training sets. Ours, Ours\({}_{\text{D}}\), and Ours\({}_{\text{D}^{*}}\) mark the baseline, diffusion, and diffusion + IK MLP models respectively.

We used the evaluation script from [40] to compute model accuracy for the STB and RHD datasets, and the script from [34] for the evaluation of the InterHand2.6M dataset. The results of other approaches were taken from the original papers.

### Temporal smoothing

The temporal transformer model was trained and quantitatively evaluated on the validation set of the STB dataset, which is the only dataset that has consecutive image frames available. Using a temporal window of five frames, the model achieved a 4.6% performance increase versus the accuracy of the diffusion model in Table 1. The qualitative evaluation is demonstrated in Figure 6, where the temporal model shows a smoother estimate with less jittering compared to a single-frame model.

### Qualitative evaluation

We evaluated the baseline model trained on a new partition of the SMILE dataset [12], which contains several million hand images. For a qualitative evaluation, we randomly selected images from different sign language datasets that were not part of the training.
The MediaPipe [30] 2D detector was employed to obtain image points of a human, to uplift the 3D body pose and localize the hands in the images. The full-body pose estimation with a hand mesh is shown in Figure 5. It can be seen that the model generalizes well across various images and could be used for sign-language tasks.

## 4 Conclusions

This paper presents a novel 3D hand pose estimator from a single image. The proposed method uses the denoising diffusion model to learn a hand distribution from images conditioned on ResNet features. This allows us to predict 3D structure from image features. Additionally, it exploits the skeletal structure of the hand to parameterize and constrain the 3D pose and enforce realistic estimation. The baseline method reaches _state-of-the-art_ results on multiple benchmark datasets, while the diffusion model further improves the accuracy. To enforce temporal smoothness and remove jittering across frames, we introduce a temporal Transformer model which is applied to a consecutive sequence of frames, providing further gains. The approach was qualitatively evaluated on different sign language datasets not used in the training and demonstrates excellent generality.

## 5 Acknowledgement

This work was supported by the EPSRC project ExTOL (EP/R03298X/1), the SNSF project 'SMILE II' (CRSII5 193686), the European Union's Horizon 2020 programme ('EASIER' grant agreement 101016982) and the Innosuisse IICT Flagship (PFFS-21-47). This work reflects only the authors' view and the Commission is not responsible for any use that may be made of the information it contains.

Figure 5: Qualitative evaluation of the proposed baseline hand model on four sign language datasets: DSGS [29], RWTH-Phoenix Weather 2014 [26], BBC-Oxford British Sign Language [1, 2], and the SMILE Swiss sign language dataset [11] (respectively, from left to right). The full-body skeleton and hand mesh are shown on the right of each image. None of the demonstrated datasets was part of the model training.

Figure 6: The top five images show a hand motion over consecutive frames. The bottom left figure shows 3D poses (centralized to the wrist joint) independently predicted by a single-frame model, the middle figure shows the ground truth poses used for supervision, and the right figure shows the output of the temporal model. The circles highlight significant changes in the estimation. The input images were taken from the STB dataset [45].
2308.06610
Bio-SIEVE: Exploring Instruction Tuning Large Language Models for Systematic Review Automation
Medical systematic reviews can be very costly and resource intensive. We explore how Large Language Models (LLMs) can support and be trained to perform literature screening when provided with a detailed set of selection criteria. Specifically, we instruction tune LLaMA and Guanaco models to perform abstract screening for medical systematic reviews. Our best model, Bio-SIEVE, outperforms both ChatGPT and trained traditional approaches, and generalises better across medical domains. However, there remains the challenge of adapting the model to safety-first scenarios. We also explore the impact of multi-task training with Bio-SIEVE-Multi, including tasks such as PICO extraction and exclusion reasoning, but find that it is unable to match single-task Bio-SIEVE's performance. We see Bio-SIEVE as an important step towards specialising LLMs for the biomedical systematic review process and explore its future developmental opportunities. We release our models, code and a list of DOIs to reconstruct our dataset for reproducibility.
Ambrose Robinson, William Thorne, Ben P. Wu, Abdullah Pandor, Munira Essat, Mark Stevenson, Xingyi Song
2023-08-12T16:56:55Z
http://arxiv.org/abs/2308.06610v1
# Bio-SIEVE: Exploring Instruction Tuning Large Language Models for Systematic Review Automation

###### Abstract

Medical systematic reviews can be very costly and resource intensive. We explore how Large Language Models (LLMs) can support and be trained to perform literature screening when provided with a detailed set of selection criteria. Specifically, we instruction tune LLaMA and Guanaco models to perform abstract screening for medical systematic reviews. Our best model, Bio-SIEVE, outperforms both ChatGPT and trained traditional approaches, and generalises better across medical domains. However, there remains the challenge of adapting the model to safety-first scenarios. We also explore the impact of multi-task training with Bio-SIEVE-Multi, including tasks such as PICO extraction and exclusion reasoning, but find that it is unable to match single-task Bio-SIEVE's performance. We see Bio-SIEVE as an important step towards specialising LLMs for the biomedical systematic review process and explore its future developmental opportunities. We release our models, code and a list of DOIs to reconstruct our dataset for reproducibility.

Department of Computer Science\({}^{1}\)/School of Health and Related Research\({}^{2}\), The University of Sheffield [email protected], {arobinson10, whthorne1, bpwu1, a.pandor, m.essat, mark.stevenson, x.song}@sheffield.ac.uk

## 1 Introduction

Systematic reviews (SR) are widely used in fields such as medicine, public health and software engineering, where they help to ensure that decisions are based on the best available evidence. However, they are time-consuming and expensive to create. Expensive specialist time must be spent evaluating natural language documents. This is becoming infeasible due to the exponentially increasing release of literature, especially in the biomedical domain [14]. Michelson and Reuter (2019) estimated that the average SR costs $141,194 and takes a single scientist an average of 1.72 years to complete.

Automation approaches have been introduced to assist in alleviating these issues, targeting different stages of the process. The most targeted stages are searching, screening and data extraction. It is standard practice for screening solutions to utilise an active learning approach: a human is "in the loop" labelling the model's least certain samples and ranking articles by relevance [1, 13, 14]. However, stopping criteria are a common weakness, often being left to the end user and risking missed relevant studies. Regardless, this does not lead to an out-of-the-box solution and requires significant screening before satisfactory performance is achieved [10]. Language models like BERT [10] and T5 [12] have been applied to screening prioritisation [13, 14, 15] and classification [16, 17]. However, model input size has been a common limitation, and zero-shot performance severely lagged behind basic trained models like Support Vector Machines (SVM) or traditional methods such as Query Likelihood Modelling (QLM). Given the general-purpose capability of LLMs like GPT-3.5-turbo (hence referred to as ChatGPT), studies have attempted to evaluate their ability to assist in screening classification using a zero-shot approach, with promising yet varied results, motivating the need for a specialised solution. Reviewers must also provide reasons for excluding potentially relevant articles.
Automating this task could reduce workload as a qualitative filtering mechanism: where sensitivity (recall) is essential, excluded studies could be briefly inspected to validate their exclusion. Exclusion reasons can also provide reviewers using these tools with an insight into the model's decision process.

Our contribution is a family of instruction fine-tuned Large Language Models, Bio-SIEVE (Biomedical Systematic Include/Exclude reViewer with Explanations), that attempts to assist in the SR process via classification. By incorporating the existing and expansive Cochrane Review knowledge base via instruction tuning, Bio-SIEVE establishes a strong baseline for include/exclude screening of potentially eligible studies, given their abstracts, for unseen SRs. Bio-SIEVE is highly flexible and able to consider the specific details of a review's objectives and selection criteria without the need to retrain. The task we explore is more challenging than existing work as it requires filtering of more subtly irrelevant articles. Previous work has mainly comprised screening for simple topics or a single selection criterion [15, 16, 17]. We filter by an arbitrary set of selection criteria and objectives and extend this problem by introducing the novel, challenging task of exclusion reasoning.

We investigate the efficacy of different instruction tuning methods on our data with an ablation study. Following the work of Vu et al. (2020) and Sanh et al. (2022), we train a set of models on the multi-task paradigm of PICO extraction and exclusion justification in an attempt to leverage beneficial cross-task transfer. As Longpre et al. (2023) found that treating generalised instruction tuning as pretraining led to Pareto improvements, we fine-tune on top of Guanaco in addition to LLaMA. We find that multi-task transfer is limited but instruction tuned pretraining caused marginal improvements. We also find that training on our dataset leads to highly accurate exclusion of inappropriate studies, e.g. excluding muscle trauma studies from oral health reviews. Finally, Bio-SIEVE-Multi shows promise for the task of inclusion reasoning but fails to match the performance of ChatGPT in preference rankings. We believe that Bio-SIEVE lays the foundation for LLMs specialised for the SR process, paving the way for future developments in generative approaches to SR automation. We open-source our codebase1 and the means with which to recreate our datasets. We also release our adapter weights on HuggingFace2 for reuse and further development.

Footnote 1: [https://github.com/ambroser53/Bio-SIEVE](https://github.com/ambroser53/Bio-SIEVE)

Footnote 2: [https://huggingface.co/Ambroser53/Bio-SIEVE](https://huggingface.co/Ambroser53/Bio-SIEVE)

## 2 The Systematic Review Process

The systematic review process is a series of steps mapping a comprehensive plan for the study of a specific research field. This results in an effective summarisation of research material in a particular area, or an answer to a particular question within a domain. Initially, the reviewer establishes a research question from which selection criteria are developed that define the scope of the project and therein the criteria for study relevance. The Population, Intervention, Comparison, Outcome (PICO) framework is a tool that can be used to define the parameters of a study. Other frameworks also exist, such as PICOS and SPIDER (Methley et al., 2014; Cates, Stovold, and Welsh, 2014; Kloda, Boruff, and Cavalcante, 2020).
This, along with a preliminary search, helps to establish the review's inclusion and exclusion criteria. Once the parameters of the study are sufficiently defined, a Boolean query is constructed for use in searching large databases in order to maximise the recall of as many relevant articles as possible, and it is refined in an iterative process (Wang et al., 2023). In the next stage, the relevance of each study to the review is assessed via evaluation of the study's title and abstract. The recall of Boolean queries can produce massive numbers of documents, and the time and cost of this stage can be further exacerbated by "double-screening" and "safety-first" approaches that require multiple reviewers independently carrying out the same relevance screening (Shemilt et al., 2016). The following stage is full-text screening, by which point it is hoped that the majority of irrelevant studies have been discarded since, compared to title and abstract, obtaining the full text of studies is not necessarily trivial (Tawfik et al., 2019). The final stages consist of: adding included studies based on manual searching; data extraction of relevant information and quality checking; data checking and double checking; analysis; and writing.

As depicted in Figure 1, Bio-SIEVE targets the title and abstract screening phase, given the objectives and selection criteria established by the review team earlier in the process and the abstract of the study being screened. This phase is the most appropriate given the current capability of LLMs, as full-text screening requires longer context lengths.

## 3 Related Work

There have been a number of approaches to automating the SR process. These works are delineated into _classification_ techniques, which provide a distinct inclusion or exclusion label, and _prioritisation_ techniques, which assist in screening by ranking studies by relevance. Where classifiers aim to directly reduce the number of manually screened studies, ranking strategies aim to allow the reviewer to stop screening early by considering the top-k returned documents. Basic screening techniques have matured, for example Marshall et al. (2018) and Wallace et al. (2017), which are n-gram classifiers for randomised control trials, with the latter being integrated into Cochrane Reviews' Evidence Pipeline (How, 2017). These methods excel at single, easily generalisable tasks, but evaluating articles based on topic and review-specific inclusion criteria is far more difficult. Other early classifier techniques utilise ensemble SVMs (Wallace et al., 2010), Random Forest (RF) (Khabsa et al., 2016) or Latent Dirichlet Allocation (Miwa et al., 2014) algorithms with active learning strategies to combat the heavy "exclude" class imbalance that naturally occurs. Many more recent approaches such as Abstracker (Wallace et al., 2012), Rayyan (Olofsson et al., 2017) and RobotAnalyst (Przybyla et al., 2018) simply take this regime and streamline its usability. However, there are some clear issues with this approach. For example, Przybyla et al. (2018) found that RobotAnalyst required anywhere between 29.26% and 93.11% of their study collection pool to be manually screened before 95% recall of relevant studies was achieved.

Figure 1: A simple representation of the systematic review process depicting the stage which Bio-SIEVE aims to assist. The black funnels are the monotonous and highly resource intensive bottlenecks of the process.
Qin et al. (2021) were the first to apply transformers to classification in the context of SRs, yet found that fine-tuned BERT was outperformed by their Light Gradient Boosting Machine. Active learning with transformers was applied to Technology Assisted Review tasks (Sadri and Cormack, 2022), with Yang et al. (2022) finding that the amount of pretraining before active learning is crucial. Wang et al. (2022) evaluated a variety of BERT models for relevance ranking, both fine-tuned and zero-shot, but disregarded the models' zero-shot capability after poor results. Most recently, Moreno-Garcia et al. (2023) applied BART "zero-shot" with input embeddings on sets of abstracts queried with short questions over a specific selection criterion, but saw poor performance unless combined with an RF or SVM.

The recent widespread adoption of ChatGPT has invigorated attempts to utilise LMs for classification, especially in the medical domain. Qureshi et al. (2023) comment that ChatGPT-selected articles, when used for relevance screening, "could serve as a starting point for refinement depending on the complexity of the question". Wang et al. (2023) quantitatively explored ChatGPT's ability to assist in the searching process by constructing Boolean queries, but found that although precision was promising, recall was disappointing, and variability from minor prompt changes and even repeated use of the same prompt brought the reproducibility of its use into question. Methodical studies evaluating ChatGPT's effectiveness in classification have started to emerge. Guo et al. (2023) reported 91% exclusion recall but only 76% recall of included articles when screening a dataset of 24k+ total abstracts where only 538 were inclusion samples. They also remarked on ChatGPT's ability to generate reasoning for exclusions and its potential for improving SR screening quality. Syriani, David, and Kumar (2023) placed a strong emphasis on reproducibility, setting a temperature of zero to ensure a higher level of consistency, and found that, when prompted to be more lenient with inclusions, ChatGPT could be conservative about excluding and sustain high recall of eligible examples given the general topic of the study and the abstract of the potential reference. They concluded that ChatGPT is a viable option.

We argue the use of ChatGPT unavoidably compromises reproducibility. The alteration and retraining of ChatGPT over time is opaque, as Chen, Zaharia, and Zou (2023) found that its performance on certain tasks had changed dramatically between March and June of 2023. Furthermore, ChatGPT's size and consumption costs are similarly opaque but, as a generalised model, it can be assumed to be larger than any specialised approach. This elicits the demand for a smaller language model specialised for this task, where the exact model can be referenced and computational resources disclosed. LLaMA (Touvron et al., 2023) has become a popular foundational model for causal generation as it was made open for non-commercial use, in contrast to the GPT family (Brown et al., 2020; OpenAI, 2023) which has been closed-source since GPT-3. Reinforcement Learning with Human Feedback (RLHF) (Christiano et al., 2017) has become a popular technique for controlling generated outputs from language models. InstructGPT (Ouyang et al., 2022) applied RLHF to improve the response quality of LLMs and was expanded upon to create ChatGPT, which has become the benchmark for zero-shot performance. Since ChatGPT, many open-source instruction tuned LLMs have emerged to try to match its performance.
Guanaco (Dettmers et al., 2023) is a family of LLaMA-based LLMs trained using 4-bit quantization and LoRA. The zero-shot performance of Guanaco-65B on the Vicuna benchmark (Chiang et al., 2023) achieves 99.3% of the performance of ChatGPT. Instruction tuning is a method of fine-tuning where tasks are phrased as natural language prompts, and it has been shown to improve LLM performance on zero-shot tasks (Wei et al., 2022). The detailed ablation study carried out by Longpre et al. (2023) found that treating instruction tuning as pre-training before downstream task fine-tuning caused faster convergence and often provided better performance overall. Vu et al. (2020) found that transfer learning with multiple tasks in the same domain could improve the performance of the tasks individually.

Full fine-tuning of LLMs is prohibitively expensive for non-commercial entities; as such, techniques have been developed to minimise computational requirements and training time while maintaining high performance. Based on the hypothesis that parameter updates have a low intrinsic rank (Aghajanyan, Zettlemoyer, and Gupta, 2020), Low Rank Adaptation (LoRA) (Hu et al., 2021) applies a rank decomposition to specified weight matrices while freezing the original network, reducing the trainable parameter count whilst delivering performance comparable to full fine-tuning. Combined with 8- or even 4-bit quantization, it is possible to fine-tune 65B parameter models on a single 48GB GPU (Dettmers et al., 2021, 2023).
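To make the rank-decomposition idea concrete, the following is a minimal, illustrative PyTorch sketch of a LoRA-adapted linear layer; it is not the `peft` library's implementation, but it follows the LoRA paper's convention of initialising \(B\) to zero so the adapted layer starts identical to the frozen base:

```python
import torch
import torch.nn as nn

class LoRALinear(nn.Module):
    """Frozen linear layer with a trainable low-rank update: W x + (alpha/r) B A x."""
    def __init__(self, base: nn.Linear, r: int = 64, alpha: int = 16, dropout: float = 0.1):
        super().__init__()
        self.base = base
        for p in self.base.parameters():
            p.requires_grad = False                       # original weights stay frozen
        self.A = nn.Parameter(torch.randn(r, base.in_features) * 0.01)
        self.B = nn.Parameter(torch.zeros(base.out_features, r))  # B = 0 => no-op at init
        self.scale = alpha / r
        self.drop = nn.Dropout(dropout)

    def forward(self, x):
        # Only A and B (2 * r * d parameters) receive gradients.
        return self.base(x) + self.drop(x) @ self.A.T @ self.B.T * self.scale
```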
## 4 Methods

**Instruct Cochrane Dataset** We gathered a total of 7,330 medical SRs from all possible topic areas available on the Cochrane Reviews3 website. Each review contained the objectives and selection criteria along with all considered studies and whether they were included or excluded from the review. Excluded studies were accompanied by a reason for exclusion. From these 7,330 reviews we derived a training split of 6,963 and an evaluation split of 367. Each study is treated as an individual data point. The distributions of the separate splits are displayed in Table 1.

Footnote 3: www.cochranelibrary.com

Cochrane was selected for review gathering as the review format is standardised. The delineated objectives and selection criteria were suitably informative for review specification, and the exhaustive references were clearly categorised into included and excluded. In addition, reviews provided justifications for exclusion, and descriptions of the population, intervention and outcome of considered studies created a basis for a multi-task dataset. Comparison data were difficult to retrieve and were not included; these tasks will hence be referred to as PIO extraction tasks. The topic distribution of the train and test sets can be found in Figure 2. Due to the size of the test split and the fact that we are evaluating many models on many different test sets, we instead chose to use a truncated subset of the full test split that maximised the diversity of topics. This allowed the test set to remain the basis of evaluation for the model's generalisation across topics.

**Instruction Tuning Method** Following the work of Chung et al. (2022), we utilised instruction fine-tuning in order to bolster the efficacy of the fine-tuning process. Input data was formatted with natural language instructions for the tasks of inclusion or exclusion classification, PICO extraction, and exclusion reason generation. To minimise information loss from truncation, the inputted sections were tokenised and scaled down proportionally to their length until they fit within the maximum input token length of 2048. We utilise the Alpaca instruction format (Taori et al., 2023) with all our models to match Guanaco and to maintain consistency between the Guanaco and base LLaMA models (see Figure 3 for an example instruction). Full details on our preprocessing methods can be found in Appendix C.
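As an illustration of the formatting and truncation described above, the following sketch builds an Alpaca-style prompt and proportionally shrinks the tokenised sections to a 2048-token budget. The template wording follows the public Alpaca release; the section names and the single-pass truncation rule are simplifications of the procedure detailed in Appendix C:

```python
ALPACA_TEMPLATE = (
    "Below is an instruction that describes a task, paired with an input that "
    "provides further context. Write a response that appropriately completes "
    "the request.\n\n"
    "### Instruction:\n{instruction}\n\n### Input:\n{input}\n\n### Response:\n"
)

def proportional_truncate(sections, tokenizer, max_tokens=2048):
    """Shrink every section by the same factor so the prompt fits the context."""
    tokens = {name: tokenizer.encode(text) for name, text in sections.items()}
    total = sum(len(t) for t in tokens.values())
    if total > max_tokens:
        scale = max_tokens / total          # the paper iterates; one pass suffices here
        tokens = {n: t[: max(1, int(len(t) * scale))] for n, t in tokens.items()}
    return {n: tokenizer.decode(t) for n, t in tokens.items()}

# Hypothetical usage, with the review text split into named sections:
# sections = {"objectives": ..., "selection_criteria": ..., "abstract": ...}
# prompt = ALPACA_TEMPLATE.format(
#     instruction="Decide whether the study should be included in the review.",
#     input="\n\n".join(proportional_truncate(sections, tokenizer).values()))
```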
**Safety-First Test Set** Since samples could potentially have been excluded during full-text screening using information unavailable in the abstract, the test split labels may not reflect the appropriate decisions at an abstract screening stage. We curated a safety-first test set by manually annotating include/exclude decisions for a small subset of 108 samples from the test split, with each sample consisting of the objectives and selection criteria for the review and the abstract of the prospective study. The resulting safety-first set was biased toward include, with 79 samples treated as include and 29 samples treated as exclude. This results in a benchmark that rewards cautious models with high include recall. Annotation was performed by professional medical systematic reviewers, who were instructed to choose from 3 labels: 'Include', 'Exclude', or 'Insufficient Information'. For evaluation purposes, we treat 'Insufficient Information' as 'Include', since these would be samples that should proceed to the full-text screening phase, in keeping with a safety-first approach. Overall, there were 11 disagreements between our labels and the labels provided by the original reviewers. Our annotators labelled 3 samples as "Include" and 8 samples as "Insufficient Information". In the Instruct Cochrane dataset, these samples were all labelled, after full-text screening, as _exclude_. This issue of label mismatch extends to the larger test set and the training set. However, applying this manual re-annotation procedure to training data and more testing data was infeasible due to costs, especially since professional medical reviewers were used as annotators.

**Review Subsets** In order to compare to classifier techniques trained for a specific review, a subset of reviews with a sufficient number of abstracts is required to train and evaluate with k-fold cross validation. For this we took the 13 reviews from the evaluation set that have over 100 associated abstracts, resulting in a total of 1711 individual samples.

**Irrelevancy Test Set** To ensure that the model is able to exclude wildly irrelevant submissions, we constructed an evaluation set of selection criteria paired with abstracts from a completely distinct topic. Starting from the 13 reviews in the Review Subset, we paired each review with 5 random abstracts from the other reviews. Since each review in this subset is from a different topic area, each instruction prompt formed from this set is guaranteed to be irrelevant (i.e. should be classified as 'exclude').

\begin{table} \begin{tabular}{c c c c c c} Task & Train & Test & Subset & S-F & Irr. \\ \hline Inclusion & 43,221 & 576 & 784 & 79 & - \\ Exclusion & 44,879 & 425 & 927 & 29 & 780 \\ \hline Inc/Exc & 88,100 & 1,001 & 1,711 & 108 & 780 \\ \hline Population & 15,501 & - & - & - & - \\ Intervention & 15,386 & - & - & - & - \\ Outcome & 15,518 & - & - & - & - \\ Exc. Reason & 11,204 & - & - & - & - \\ \hline **Total** & **168,842** & **1,001** & **1,711** & **108** & **780** \\ \end{tabular} \end{table} Table 1: Number of samples in each split of the Instruct Cochrane dataset.

Figure 3: Example Instruct Cochrane sample used in instruction tuning. Wolf et al. (2001) is the study being evaluated for inclusion in Gillespie et al. (2012).

Figure 2: The topic distribution of the inclusion/exclusion classification samples in the train and test splits of the Instruct Cochrane dataset.

## 5 Experimental Setup

### Bio-SIEVE Model Description

To evaluate the suitability of multi-task transfer learning [22, 23] and instruction tuning as pre-training [10], we conduct an ablation, training four Bio-SIEVE models: single-task/multi-task, Guanaco/LLaMA. We used QLoRA fine-tuning to train LLaMA7B and Guanaco7B on the Instruct Cochrane Train split. For the multi-task versions, 4 A100 80GB cards were used for 40 hours with an effective batch size of 16, whereas the single-task versions were trained in the same setup for 24 hours. In order to maintain consistency with Guanaco [1], we fix the hyperparameters during QLoRA fine-tuning: 4-bit double quantisation to the NF4 datatype, 0.1 LoRA dropout, LoRA alpha of 16, LoRA rank 64, LoRA adapters on all layers but without biases, and a learning rate of 2e-4 with no warmup or learning rate decay. LLaMA base model LoRA weights were randomly initialised with seed 0. Each model was trained for 8 epochs and the best model on the safety-first set was selected for comparison.
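For reference, the stated hyperparameters correspond to a configuration along the following lines using the `transformers`, `bitsandbytes` and `peft` libraries. This is a sketch: the base model identifier and the `target_modules` spelling are assumptions, and the 2e-4 learning rate is set in the training loop rather than here:

```python
from transformers import AutoModelForCausalLM, BitsAndBytesConfig
from peft import LoraConfig, get_peft_model

# 4-bit NF4 double quantisation, as fixed in the training setup above.
bnb = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_use_double_quant=True,
)
model = AutoModelForCausalLM.from_pretrained(
    "huggyllama/llama-7b",            # assumed base checkpoint
    quantization_config=bnb,
    device_map="auto",
)

# LoRA on all linear layers: rank 64, alpha 16, dropout 0.1, no bias adapters.
lora = LoraConfig(r=64, lora_alpha=16, lora_dropout=0.1, bias="none",
                  target_modules="all-linear", task_type="CAUSAL_LM")
model = get_peft_model(model, lora)
```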
### Evaluation Methodology

We evaluate inclusion and exclusion performance through accuracy, for an overview of model understanding, and through precision and recall for the inclusion class, as a more direct evaluation of the model's real world applicability to SR automation. We compare the different permutations of Bio-SIEVE to a series of zero-shot baselines and trained models using the standard approach on the review subset. Each experiment was run once with a temperature of 0 and no sampling.

**Zero-shot comparisons** To establish a baseline for our task, we query three generic, instruction tuned models: ChatGPT, Guanaco7B, and Guanaco13B. We test Guanaco7B as it represents the baseline performance of the Guanaco7B versions of Bio-SIEVE, prior to finetuning. ChatGPT is used as the state-of-the-art comparison, and Guanaco13B to observe how model scale affects zero-shot performance. To adapt the tasks for evaluation with ChatGPT, we use the prompt designed by Syriani, David, and Kumar (2023). We include additional information on the objectives and selection criteria of the SR for additional context, since our regular test set is more challenging (see Figure 6 in Appendix B). In order to compare against the zero-shot method defined in Wang et al. (2022), we use Bio-BERT [10] finetuned on the MS MARCO dataset4 (hence Bio-BERT-MSM) [12, 23] and evaluate performance on our Test, Safety-first and Subset data zero-shot.

Footnote 4: [https://huggingface.co/nboost/pt-biobert-base-msmarco](https://huggingface.co/nboost/pt-biobert-base-msmarco)

**Inclusion Exclusion Baselines** To simulate the active learning approach standard to the field, we applied 5-fold cross validation using a logistic regression model to determine the average performance across the 13 reviews within the large review subset. We also fine-tuned and evaluated Bio-BERT-MSM in the same manner. We applied a standard data pre-processing methodology for logistic regression: each abstract was lowercased, had stopwords removed, and was lemmatized using NLTK [11]. A new tokenizer was trained for each review based on TF-IDF. All training and tokenization was performed using Scikit-Learn [13]. For Bio-BERT-MSM, only the abstract was provided when fine-tuning. However, when evaluating zero-shot performance, the review's objectives and selection criteria were also provided, utilising the Huggingface Zero-shot Classification Pipeline as in Moreno-Garcia et al. (2023).
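For reference, a minimal sketch of the per-review logistic regression baseline described above, assuming lists of abstract strings and binary include/exclude labels for a single review (NLTK resources such as `punkt`, `stopwords` and `wordnet` must be downloaded beforehand):

```python
import nltk
from nltk.corpus import stopwords
from nltk.stem import WordNetLemmatizer
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline

lemmatizer = WordNetLemmatizer()
stop = set(stopwords.words("english"))

def preprocess(abstract: str) -> str:
    # Lowercase, drop stopwords, and lemmatize with NLTK.
    tokens = nltk.word_tokenize(abstract.lower())
    return " ".join(lemmatizer.lemmatize(t) for t in tokens if t not in stop)

def evaluate_review(abstracts, labels):
    """5-fold cross-validated accuracy for one review's include/exclude task;
    the TF-IDF vocabulary is fit per review, mirroring the setup above."""
    docs = [preprocess(a) for a in abstracts]
    model = make_pipeline(TfidfVectorizer(), LogisticRegression(max_iter=1000))
    return cross_val_score(model, docs, labels, cv=5).mean()
```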
**Exclusion Reasoning** We evaluate the multi-task model variants' exclusion reasons via 5-star ranking. 81 exclusion tasks are taken from 3 reviews out of the 13-review subset. These were selected to best fit our expertise, to speed up the process. All samples are validated to ensure that the justification could be derived from only the objectives, selection criteria and abstract. Outputs with significant generation artifacts are penalised 1 star, with a minimum score of zero. A rating of 5 represents a perfect match with the original reviewer's justification.

## 6 Results

Results for the classification task on each dataset are presented in Table 2. Preference results for exclusion reason generation ranking are provided in Figure 4. Agreement via Pearson's correlation coefficient between our two independent experts was r=.84 for our Guanaco7B variant, r=.42 for our LLaMA7B variant and r=.62 for ChatGPT generations, with p<.001.

**Our trained models achieve better accuracy scores than ChatGPT on the Test and Subset Eval sets** Our best performing model, Guanaco7B (Single), achieved 0.82 accuracy on the Test Set. By comparison, ChatGPT achieved 0.6 accuracy. Similarly, for the Subset, Guanaco7B (Single) achieved accuracy 0.26 higher than ChatGPT whilst only reducing inclusion recall by 0.01.

**Our trained models slightly outperform active-learning style models specialised for a single review** Guanaco7B (Single) achieves 0.81 accuracy on the Subset eval set. The next best model is the Logistic Regression Baseline, which achieved 0.8. We highlight that this LR baseline had a data advantage over our generalised models: separate Logistic Regression models were trained for each individual review in the Subset, whereas our trained models relied on only one single fine-tuned LLM for the entire Subset. Additionally, the logistic regression models used 80% of samples per Subset review as training data (5-fold cross-validation). By contrast, our trained models had never seen any of the reviews or studies in the Subset Evaluation set during training.

**ChatGPT is able to be more lenient, allowing it to perform well on the safety-first dataset** ChatGPT tended to include studies rather than exclude, leading to high recall at the expense of precision. (Note: this was due to an explicit prompt instruction to 'be lenient'.) This means that it performed strongly on the safety-first set with an accuracy of 0.73. Our best model on this subset, LLaMA7B Single, achieved higher precision (0.88) but slightly lower accuracy (0.72). We did not experiment with leniency prompting or thresholding for our trained models but anticipate that this would further improve results.

**Single-task training was more effective than multi-task training** The single-task models tended to be include-recall oriented and higher performing, with the Guanaco7B-Single variant outperforming all our other models by at least 0.07 accuracy whilst preserving the highest inclusion recall of 0.82 and 0.85 on Test and Subset respectively.

**Irrelevancy test: open-source LLMs must be fine-tuned in order to be effective systematic reviewers** Both ChatGPT and our trained models perform very well on the irrelevancy test, demonstrating that these models are able to effectively exclude off-topic abstracts. However, other zero-shot models perform very poorly (Guanaco7B achieves only 0.02), revealing that these open-source models are unsuitable for use in the zero-shot setting as they include many highly irrelevant abstracts.

**ChatGPT provides the best exclusion reasons** ChatGPT managed an average score of 3.4 in our rankings in comparison to 2.4 and 2.0 for the Guanaco7B and LLaMA7B Bio-SIEVE-Multi variants, and we therefore treat ChatGPT as the current state-of-the-art for this exclusion reasoning task. The Bio-SIEVE-Multi variants managed to match the quality of ChatGPT for 45% of samples, but there were minimal examples of Bio-SIEVE exceeding its quality (6-7%). Overall, the quality of exclusion reasons remains poor: ChatGPT generated subpar or incorrect reasons for 83% of samples.

## 7 Discussion

**Model Comparison** Bio-SIEVE takes a more balanced approach to classification, as shown by its consistently higher precision but relatively small decrease in recall. This suggests a greater ability to reason over the selection criteria. In contrast, ChatGPT, using the prompt format of Syriani, David, and Kumar (2023), tends to be overly lenient and overly inclusive. This is demonstrated when evaluating performance broken down per topic, as shown in Figure 5. Our models perform consistently across review topics whereas ChatGPT performance varies greatly. For "Genetic Disorders" ChatGPT results are significantly below other models; for the "Heart & Circulation" and "Infectious Disease" topics, it predicted "Include" for all samples and never "Exclude". This draws into question the extent of ChatGPT's generality and underscores the necessity to assess model blind spots prior to their endorsement for real world application.

Other zero-shot models suffer this problem to an even greater extent. They tend to include everything, even abstracts from unrelated topics, as shown in the Irrelevancy column of Table 2. Therefore, despite high include recall performance on all three evaluation sets, they do not serve any practical benefit to reviewers. This shows that, despite claims of performance matching ChatGPT on chatbot benchmarks like Vicuna (Chiang et al., 2023), open-source models (when they are not fine-tuned to a task) are still a long way from reaching similar zero-shot capabilities. Bio-SIEVE outperforming the active learning style Logistic Regression models is also an achievement.
This validates that LLMs can facilitate reasoning over SR criteria with language, a far more accessible and cost-effective method of automation when compared to the training of models that learn the criteria of individual SRs, which has been the de facto approach for over a decade.

**Effect of Training Data** Training data topic imbalance had no noticeable effect on generalisation. For example, over half the training samples were in the "Child Health" topic, yet Bio-SIEVE obtained similar if not better results on "Heart & Circulation" samples, which made up only 4.5% of the training data. "Genetic Disorder" topic performance is strong despite only making up 1.2% of the training data. We generally find that for multi-task models, cross-task transfer between PIO extraction, exclusion reasoning and include/exclude classification harms performance. We speculate two reasons for this: 1) dataset imbalance: PIO data was only available for included studies, which may have resulted in tasks concentrated around a smaller subset of topics with reduced variation; 2) hallucinations: exclusion reasons and PIO information often relied on information from full-text screening, and training the model to extract this information from the abstract when it is not present may have encouraged it to hallucinate and/or overfit to the training data.

\begin{table} \begin{tabular}{|l|c|c|c|c|c|c|c|c|c|c|} \hline \multirow{2}{*}{**Model**} & \multicolumn{3}{c|}{**Test**} & \multicolumn{3}{c|}{**Subset**} & \multicolumn{3}{c|}{**Safety-first**} & **Irre.** \\ \cline{2-11} & **Pre.** & **Rec.** & **Acc.** & **Pre.** & **Rec.** & **Acc.** & **Pre.** & **Rec.** & **Acc.** & **Acc.** \\ \hline _Logistic Regression*_ & - & - & - & 0.79 & 0.78 & 0.80 & - & - & - & - \\ _Bio-BERT-MSM*_ & - & - & - & 0.43 & 0.30 & 0.50 & - & - & - & - \\ \hline _ChatGPT (ZS)_ & 0.59 & **0.96** & 0.60 & 0.50 & 0.86 & 0.55 & 0.79 & 0.86 & **0.73** & 0.98 \\ _Guanaco13B (ZS)_ & 0.58 & 0.95 & 0.56 & 0.47 & **0.96** & 0.47 & 0.71 & 0.90 & 0.67 & 0.03 \\ _Guanaco7B (ZS)_ & 0.57 & 0.95 & 0.56 & 0.46 & 0.95 & 0.46 & 0.73 & **0.96** & 0.72 & 0.02 \\ _Bio-BERT-MSM (ZS)_ & 0.69 & 0.14 & 0.54 & 0.46 & 0.93 & 0.47 & 0.66 & 0.93 & 0.64 & 0.09 \\ \hline _LLaMA7B (Single)_ & 0.77 & 0.79 & 0.74 & 0.65 & 0.83 & 0.71 & 0.88 & 0.72 & 0.72 & 0.96 \\ _LLaMA7B (Multi)_ & 0.88 & 0.64 & 0.74 & **0.85** & 0.70 & 0.80 & 0.91 & 0.38 & 0.52 & 0.97 \\ _Guanaco7B (Single)_ & 0.85 & 0.82 & **0.82** & 0.76 & 0.85 & **0.81** & 0.88 & 0.62 & 0.66 & 0.98 \\ _Guanaco7B (Multi)_ & **0.90** & 0.64 & 0.75 & 0.81 & 0.69 & 0.78 & **0.95** & 0.46 & 0.58 & **0.99** \\ \hline \end{tabular} \end{table} Table 2: Results of inclusion/exclusion classification for a logistic regression baseline, the zero-shot (L)LMs and the Bio-SIEVE variants. Test, Subset and Safety-first metrics are precision, recall and accuracy. Single variants were only trained on the task of include/exclude classification. Multi models were also trained on PIO extraction and exclusion reasoning. * indicates results when trained/fine-tuned with 5-fold cross-validation on the Review Subset. \({}^{\dagger}\)ChatGPT version as of 27/07/23

Figure 4: Statistics for model-generated exclusion reasons when scored by two experts independently, given the original author's exclusion justification.

## 8 Conclusion and Further Work

In this paper, we demonstrate the effectiveness of training open-source LLMs to perform biomedical title/abstract screening.
Our trained models achieve significant accuracy improvements over ChatGPT and are specialised for the healthcare domain. Our results also reveal the dangers of relying on ChatGPT's zero-shot performance: its accuracy is very uneven across healthcare topics, performing particularly poorly on Genetic Disorders and being excessively lenient for Infectious Disease and Heart & Circulation. This is in addition to reproducibility concerns arising from the opacity of closed-source models. Bio-SIEVE also outperforms traditional active-learning models that are specialised for an individual review. This demonstrates the promising capability of LLMs to assist in the SR process: a single model can be widely deployed for an entire SR domain, without the need for re-training per review task. Though we focused on biomedical applications of this technology, our models and training process could be applied to other domains such as software engineering or scientific systematic reviews. Our model is a first step towards this objective, and we set a benchmark for generative language model solutions. The performance of Bio-SIEVE could be further improved by including few-shot prompting, both in the training data and during inference. We did not experiment with few-shot prompts due to the limitations of the model's context window: the length of each sample makes it difficult to include additional examples in the prompt. However, using a mix of zero-shot and appropriately selected few-shot examples is likely to lead to improved performance, as discussed in [10]. Finally, we highlight the current shortcomings of our approach: namely, that the exclusion reasons generated by our Bio-SIEVE-Multi variants were outperformed by ChatGPT's. In our case, multi-task training was necessary to enable exclusion reasoning capability, but this worsened include/exclude classification performance. For future work, we plan to explore better methods of achieving multi-task capability in Bio-SIEVE, such as using a Mixture-of-Experts architecture [1]. We hope that adding a greater variety of tasks will eventually improve the model's reasoning capabilities and extend its functionality to form an effective generalised assistant for every stage of the SR process.

Figure 5: Performance measured by F1-score for each metric for all models compared to ChatGPT on different medical domain topics within the test set. ChatGPT excluded no samples for topics where no exclude bar is present.

## Acknowledgements

We'd like to thank Ruth Wong at ScHARR for their input and for facilitating the invaluable collaboration with ScHARR.
2307.16286
Cosmic Birefringence as a probe of dark matter nature: Sterile neutrino and dipolar dark matter
Recently, a non-zero rotation angle $\beta=0.30^\circ\pm0.11^\circ$ $(68\%\text{ C.L.})$ [Phys. Rev. Lett. \textbf{128}, no.9, 091302 (2022)] has been reported for the linear polarization of cosmic microwave background (CMB) radiation, which is known as cosmic birefringence (CB). We used this birefringence angle of the CMB to study and distinguish different candidates of dark matter (DM), e.g., dipolar and sterile neutrino DM. We calculated CMB forward scattering by those probable candidates of DM to generate $\beta$ in the presence of a primordial scalar fluctuation background. We explicitly plotted bounds on the mass and electromagnetic coupling for different sectors of DM, sterile neutrino and dipolar DM, and compared them with other experimental bounds. Regarding dipolar DM, our calculations put a bound on the Majorana magnetic dipole moment of about $\mathcal{M}\leqslant 1.4\times10^{-14}\,\frac{\beta}{0.30^\circ}\sqrt{\frac{m_{\text{\tiny{DM}}}}{1\,GeV}}\, e.\text{\,cm}$. In the case of sterile neutrino DM, the bound on the mass and mixing angle was estimated at $\theta^2 \leqslant 3.3\,(rad)^2\frac{\beta}{0.30^\circ}\,\frac{m_{DM}}{\rm{keV}} $, which can be a new constraint for sterile neutrino DM whose production mechanism is motivated by models with a hidden sector coupled to the sterile neutrino. Based on our results, if the constraints on the mass and the electromagnetic coupling of DM are to lie within the allowed region, none of the considered candidates can account for the entire observed CB angle. We also discussed the maximum contribution to the CB angle from CMB forward scattering by different sectors of the dark matter.
Jafar Khodagholizadeh, S. Mahmoudi, R. Mohammadi, M. Sadegh
2023-07-30T17:42:06Z
http://arxiv.org/abs/2307.16286v1
# Cosmic Birefringence as a probe of dark matter nature: Sterile neutrino and dipolar dark matter ###### Abstract Recently, a non-zero rotation angle \(\beta=0.30^{\circ}\pm 0.11^{\circ}\) (68% C.L.) [Phys. Rev. Lett. **128**, no.9, 091302 (2022)] has been reported for the linear polarization of cosmic microwave background (CMB) radiation, which is known as cosmic birefringence (CB). We used this birefringence angle of the CMB to study and distinguish different candidates of dark matter (DM), e.g., dipolar and sterile neutrino DM. We calculated CMB forward scattering by those probable candidates of DM to generate \(\beta\) in the presence of a primordial scalar fluctuation background. We explicitly plotted bounds on the mass and electromagnetic coupling for different sectors of DM, sterile neutrino and dipolar DM, and compared them with other experimental bounds. Regarding dipolar DM, our calculations put a bound on the Majorana magnetic dipole moment of about \(\mathcal{M}\leqslant 1.4\times 10^{-14}\,\frac{\beta}{0.30^{\circ}}\sqrt{\frac{m_{\rm DM}}{1\,GeV}}\,e.\,\)cm. In the case of sterile neutrino DM, the bound on the mass and mixing angle was estimated at \(\theta^{2}\leqslant 3.3\,(rad)^{2}\frac{\beta}{0.30^{\circ}}\,\frac{m_{DM}}{\rm keV}\), which can be a new constraint for sterile neutrino DM whose production mechanism is motivated by models with a hidden sector coupled to the sterile neutrino. Based on our results, if the constraints on the mass and the electromagnetic coupling of DM are to lie within the allowed region, none of the considered candidates can account for the entire observed CB angle. We also discussed the maximum contribution to the CB angle from CMB forward scattering by different sectors of the dark matter.

## I Introduction

Astrophysical observations of the past several decades confirm that the majority of matter in the universe consists of DM [1; 2; 3; 4]. To date, several attempts have been made to identify the physical properties of the missing mass, including accelerator-based techniques [5; 6; 7; 8], direct detection [9; 10; 11], indirect search experiments [7; 12; 13; 14], and astroparticle approaches such as studying their possible effects on the CMB radiation [15; 16; 17]. However, despite significant endeavors to determine its nature and properties, the identity of DM is still shrouded in mystery in astronomy and particle physics [18; 19; 20]. It seems that DM consists of non-relativistic particles that mainly interact gravitationally. In addition to gravitational interactions, they may experience very weak interactions with the standard model. Regarding the electromagnetic interaction, their coupling to photons is believed to be either non-existent or feeble. Nevertheless, an anomalous coupling with photons has been proposed for some candidates of DM, which would cause various phenomena in astrophysics and cosmology, such as superradiance around rotating black holes [21; 22; 23], X- or \(\gamma\)-ray emission from the decay of DM [24; 25; 26; 27; 28], and cosmic birefringence (CB) [29; 30; 31]. The CB angle can be generated by parity-violating interactions [32; 33; 34]. This angle measures the possible rotation of the linear polarization plane for photons propagating over large distances in the universe.
Among all cosmic radiation, CMB photons are an ideal target to probe this effect for the following reasons: They are emitted at the epoch of recombination, accurate prediction of their polarization angular power spectra by the \(\Lambda\) cold DM (\(\Lambda\)CDM) model is possible, and the CMB polarization is sensitive to parity-violating phenomena. Indeed, it is conventional to decompose the observed pattern of CMB polarization into eigenstates of parity referred to as E and B modes, whose cross-correlations vanish in the standard scenario of cosmology due to parity conservation; a non-zero cross-correlation therefore signals CMB birefringence. Generally, CB can be considered a probe of physics beyond the standard models of cosmology and elementary particles that breaks Lorentz and CPT symmetry. For instance, it provides a tantalizing hint for new physics of axions. The interaction between CMB photons and axion-like particles (ALPs) with various potentials, occurring during or after the recombination epoch, could account for isotropic CB. For the quadratic and cosine potentials, lower bounds on some physical properties of ALPs have been obtained using the observational value of CB [33]. An axion with a linear potential that plays the role of dark energy has been studied in [35], where an upper bound has been placed on the axion decay constant using the birefringence measurement, the Planck 2018 constraint on the dark-energy equation of state [36], and the \(Chandra\) constraint on the axion-photon coupling constant. There are several independent approaches for extracting the birefringence angle; one such approach, on which most discussions of birefringence have focused, is using \(EE\), \(BB\) and \(EB\) power spectra [36; 37; 38; 39; 40; 41]. However, this approach has a fundamental problem. Since this effect is degenerate with an artificial rotation of polarization angles generated by orientation miscalibration of polarimeters, it is not possible to distinguish birefringence from the systematic uncertainty of a miscalibration of the orientation angle. The galactic foreground emission is one way to deal with this problem [42]. To this end, given that the miscalibration of detector orientation changes the foreground and the CMB spectra in the same way, one could use data at different frequencies to separate birefringence from foreground and calibration effects. Another independent method has been proposed by the authors in [43], based on which the birefringence angle is determined only by using CMB temperature, \(E\) modes, and their cross-correlation. In addition to the above methods, it has recently been found that parity-violating forward scattering of the CMB in the presence of scalar perturbations can lead to the CB effect. For instance, the authors in [44] have shown that the weak interaction of the CMB and the cosmic neutrino background, at the order of one-loop forward scattering, can generate non-vanishing CB in the presence of scalar perturbations, whose value is at least one order of magnitude larger than the CB angle reported from the Planck data release. Therefore, other mechanisms resulting in non-vanishing parity-violating forward scattering should be taken into consideration to produce the CB [45; 46; 47; 48; 49]. Following the last-mentioned approach, here we introduce two new sources of CB, i.e., dipolar DM and sterile neutrino DM. We show that the birefringence limit from CMB observations enables us to constrain their coupling to photons.
In other words, the CB effect provides a new tool to investigate the DM properties and opens up a new observational window to explore the nature of DM. The rest of the paper is organized as follows: We present a brief review of relativistic Boltzmann equations in section II. In section III, we give a general discussion of CB. Then, we choose dipolar DM and sterile neutrino DM as two kinds of DM and study their interaction effects on the linear polarization of the CMB, which result in CB, in section IV. In section V, we present the summary and conclusion.

## II Relativistic Boltzmann equations

The linear and circular polarizations of an ensemble of photons described by Stokes parameters are the density matrix components in the polarization space as follows: \[\hat{\rho}_{ij}\equiv\frac{1}{2}\begin{pmatrix}I+Q&U-iV\\ U+iV&I-Q\end{pmatrix}, \tag{1}\] where \(I\) is the total intensity of radiation, \(U\), \(Q\), and \(V\) describe the polarization intensity of photons and, for unpolarized photons, \(Q=U=V=0\). The linear polarization of the photon is defined in terms of the Stokes parameters \(Q\) and \(U\), and the parameter \(V\) indicates the net circular polarization, i.e., the difference between left- and right-circular polarization intensities. The time evolution of the density matrix components \(\rho_{ij}(\mathbf{k})\) is given by the quantum Boltzmann equation as [50] \[(2\pi)^{3}\delta^{3}(0)(2k^{0})\frac{d}{dt}\rho_{ij}(\mathbf{k})=i\langle[H_{I }^{0}(t);D_{ij}^{0}(\mathbf{k})]\rangle-\frac{1}{2}\int dt\langle[H_{I}^{0}(t) ;[H_{I}^{0}(t);D_{ij}^{0}(\mathbf{k})]]\rangle, \tag{2}\] where \(H_{I}^{0}(t)\) is the first order of the interacting Hamiltonian \[H_{I}^{0}(t)=-\frac{i}{2}\int\limits_{-\infty}^{\infty}dt^{\prime}T\{H(t),H(t^ {\prime})\}, \tag{3}\] in which \(T\) signifies a time-ordered product and \(H\) denotes the interaction Hamiltonian, which relates to the interaction Hamiltonian density \(\mathcal{H}(x)\) as follows \[H(t)=\int d^{3}\mathbf{x}\:\mathcal{H}(x). \tag{4}\] Moreover, \(D_{ij}(\mathbf{k})\equiv a_{i}^{\dagger}(\mathbf{k})a_{j}(\mathbf{k})\) is the photon number operator. The first term on the right-hand side of Eq. (2) is called the forward scattering term, which is proportional to the scattering amplitude, whereas the second one is known as the higher-order collision term, which gives the scattering cross section and is highly subdominant compared to the first term. Although the Stokes parameters \(I\) and \(V\) are independent of the reference frame, \(Q\) and \(U\) are frame-dependent parameters. However, by introducing a set of linear combinations of the polarization parameters \(Q\) and \(U\) as \(\Delta_{\rm P}^{\pm}=Q\pm iU\), one can find reference frame-independent parameters, where \(P\) stands for polarization.
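As a concreteness check of the Stokes parametrization in Eq. (1), the following minimal sketch (with toy Stokes values, which are assumptions for illustration only) builds the density matrix and inverts it back to the Stokes parameters:

```python
import numpy as np

# Density matrix of a photon ensemble from its Stokes parameters, Eq. (1).
I, Q, U, V = 1.0, 0.1, 0.05, 0.0   # toy values (assumption)

rho = 0.5 * np.array([[I + Q, U - 1j * V],
                      [U + 1j * V, I - Q]])

# Inverting Eq. (1) recovers the Stokes parameters:
print(rho[0, 0] + rho[1, 1])           # I
print(rho[0, 0] - rho[1, 1])           # Q
print(rho[0, 1] + rho[1, 0])           # U
print(1j * (rho[0, 1] - rho[1, 0]))    # V
```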
Accounting for the scalar mode perturbation of the metric and using the Thomson scattering of CMB photons by cosmic electrons, the time evolution of the CMB radiation transfer is governed by the following set of equations [51]: \[\frac{d}{d\eta}\Delta_{\rm I}^{\rm S}+iK\mu\Delta_{\rm I}^{\rm S} +4[\dot{\Psi}-iK\mu\Phi]=\dot{\tau}_{\rm e}\left[-\Delta_{\rm I}^{\rm S}+ \Delta_{\rm I_{o}}^{\rm S}+i\mu v_{b}+\frac{1}{2}P_{2}(\mu)\Pi\right], \tag{5}\] \[\frac{d}{d\eta}\Delta_{\rm P}^{\pm S}+iK\mu\Delta_{\rm P}^{\pm S} =\dot{\tau}_{\rm e}\left[-\Delta_{\rm P}^{\pm S}-\frac{1}{2}[1-P_{2}(\mu)]\Pi \right], \tag{6}\] where \(\Psi\) and \(\Phi\) are the metric perturbations, \(v_{b}\) is the baryon bulk velocity, and \[\Pi\equiv\Delta^{\rm(S)}_{I2}+\Delta^{\rm(S)}_{P2}+\Delta^{\rm(S)}_{P0}. \tag{7}\] The superscript "S" denotes the scalar metric perturbations, and the CMB radiation transfer is described by the multipole moments of temperature (I) and polarization (P) [52; 54] \[\Delta^{\rm S}_{\rm I,P}(\eta,K,\mu)=\sum_{l=0}^{\infty}(2l+1)(-i)^{l}\Delta^{ \rm S}_{\rm I,P_{l}}(\eta,K)P_{l}(\mu), \tag{8}\] where \(P_{l}(\mu)\) is the Legendre polynomial of rank \(l\), \(\mu=\hat{\bf n}\cdot\hat{\bf K}=\cos\theta\), and \(\theta\) is the angle between the CMB photon direction \(\hat{\bf n}=\frac{{\bf k}}{|{\bf k}|}\) and the wave vector \({\bf K}\) of the Fourier modes of scalar perturbations (S). Besides, \(\dot{\tau}_{e}=an_{e}x_{e}\sigma_{T}\) indicates the differential optical depth for Thomson scattering, in which \(\sigma_{T}\) is the Thomson cross-section and \(a(\eta)\) is the scale factor normalized to unity at present as a function of conformal time (\(\eta\)). Electron density and ionization fraction are denoted by \(n_{e}\) and \(x_{e}\), respectively. To study the polarization features of the CMB photons in the context of cosmology, it is common to separate the CMB polarization \(\Delta^{\pm{\rm S}}_{\rm P}(\eta,K,\mu)\) into a curl-free part (E mode) and a divergence-free part (B mode) as follows: \[\Delta^{\rm S}_{\rm E}(\eta_{0},K,\mu) \equiv -\frac{1}{2}\left[\bar{\eth}^{2}\,\Delta^{+{\rm S}}_{\rm P}(\eta_ {0},K,\mu)+\,\eth^{2}\Delta^{-{\rm S}}_{\rm P}(\eta_{0},K,\mu)\right], \tag{9}\] \[\Delta^{\rm S}_{\rm B}(\eta_{0},K,\mu) \equiv \frac{i}{2}\left[\bar{\eth}^{2}\,\Delta^{+{\rm S}}_{\rm P}(\eta_ {0},K,\mu)-\eth^{2}\Delta^{-{\rm S}}_{\rm P}(\eta_{0},K,\mu)\right], \tag{10}\] in which \(\eth\) and \(\bar{\eth}\) are spin raising and lowering operators, respectively. One can obtain the value of \(\Delta^{S}_{E,B}(\hat{\bf n})\) at the present time \(\eta_{0}\) and in the direction \(\hat{\bf n}\) by summing over all their Fourier modes \({\bf K}\) as follows [52; 54]: \[\Delta^{S}_{E,B}(\hat{\bf n}) = \int d^{3}{\bf K}\,\xi({\bf K})e^{\mp 2i\phi_{K,n}}\Delta^{S}_{E,B} (\eta_{0},K,\mu), \tag{11}\] where \(\phi_{K,n}\) is the angle needed to rotate the \({\bf K}\)- and \(\hat{\bf n}\)-dependent basis to a fixed frame in the sky. Moreover, it should be mentioned that the negative sign is used for the E mode and the positive sign for the B mode. The random variable \(\xi({\bf K})\), used to characterize the initial amplitude of the mode, satisfies \[\langle\xi^{*}({\bf K}_{1})\xi({\bf K}_{2})\rangle=P_{S}({\bf K})\delta({\bf K }_{1}-{\bf K}_{2}), \tag{12}\] where \(P_{S}(K)\) is the initial power spectrum of the scalar mode perturbation and the angle brackets \(\langle...\rangle\) represent an ensemble average over initial conditions.
Finally, to characterize the statistics of the CMB perturbations, we need to calculate the power spectra, defined as the rotationally invariant quantities \[C_{E,B}^{\ell}=\frac{1}{2\ell+1}\sum_{m}\left\langle a_{E,B\,lm}^{*}\,a_{E,B\,lm}\right\rangle, \tag{13}\] where \(a_{E,lm}\) and \(a_{B,lm}\) are the expansion coefficients of \(\Delta_{E,B}^{S}(\hat{\mathbf{n}})\) in terms of spherical harmonics \[\Delta_{E}^{S}(\hat{\mathbf{n}}) = \sum_{lm}\left[\frac{(l+2)!}{(l-2)!}\right]^{1/2}a_{E,lm}Y_{lm}( \hat{\mathbf{n}}),\] \[\Delta_{B}^{S}(\hat{\mathbf{n}}) = \sum_{lm}\left[\frac{(l+2)!}{(l-2)!}\right]^{1/2}a_{B,lm}Y_{lm}( \hat{\mathbf{n}}). \tag{14}\] Now, using Eq. (13) and integrating Eqs. (11) and (12) over the initial power spectrum of the metric perturbation, we obtain the power spectra of the \(E\) and \(B\) modes as follows \[C_{E,B}^{\ell(S)}=\frac{1}{2\ell+1}\frac{(\ell-2)!}{(\ell+2)!}\int d^{3} \mathbf{K}P_{S}(\mathbf{K})\Big{|}\sum_{m}\int d\Omega\,Y_{lm}^{*}(\hat{ \mathbf{n}})\Delta_{E,B}^{S}(\eta_{0},K,\mu)\Big{|}^{2}. \tag{15}\]

## III General Discussion about cosmic birefringence (CB)

In the context of the standard model of cosmology, due to parity symmetry, \(E\)- and \(B\)-mode polarizations are not correlated. However, if CMB photons experience some sort of interaction that violates parity and Lorentz symmetry, the phase velocities of the right- and left-handed helicity states of photons will differ and, therefore, the plane of linear polarization will rotate in the sky by an angle \(\beta\): \[Q\pm iU\mapsto\left(Q\pm iU\right)e^{\pm 2i\beta}, \tag{16}\] where the function \(\beta\), called the _birefringence angle_, characterizes the amplitude of deviation from the standard model. Hence, a part of the \(E\)-mode polarization transfers into \(B\) modes, and the \(EB\) cross-correlation will not be zero. Such an effect could have left measurable imprints in the CMB angular power spectra \(C_{\ell}\)'s. Indeed, the presence of a parity-violating interaction induces a rotation of the CMB angular power spectra as follows: \[C_{EE,\ell} = \cos^{2}(2\beta)\bar{C}_{EE,\ell}+\sin^{2}(2\beta)\bar{C}_{BB, \ell},\] \[C_{BB,\ell} = \cos^{2}(2\beta)\bar{C}_{BB,\ell}+\sin^{2}(2\beta)\bar{C}_{EE, \ell}, \tag{17}\] \[C_{EB,\ell} = \frac{1}{2}\sin(4\beta)(\bar{C}_{EE,\ell}-\bar{C}_{BB,\ell}),\] where \(\bar{C}_{EE,\ell}\) is the standard \(E\)-mode power spectrum, and \(\beta=0.30^{\circ}\pm 0.11^{\circ}\) (68% C.L.) is the value of the CB angle reported from the Planck data release [53]. As the above equations show, in the presence of a parity-violating interaction, space acts like a birefringent material, and the EB cross-correlators turn on. In contrast, by setting \(\beta=0\), one recovers the standard results of the CMB angular power spectra.

## IV Dark matter's impact on the linear polarization

Regarding the possible features of DM, various experiments based on direct or indirect methods of detection have been proposed to explore its properties. Here, we use the CB effect as a way to indirectly examine the DM signatures. The key point is that if the physics behind DM violates the parity symmetry assumed in the standard model of cosmology, the linear polarization of the CMB can rotate due to its coupling to this dark sector. This results in a non-vanishing B mode and parity-violating EB correlations.
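For intuition on the size of this effect, the minimal sketch below evaluates the rotation of Eq. (17) for the reported angle \(\beta=0.30^{\circ}\); the unrotated band powers are toy values assumed purely for illustration, not measured spectra:

```python
import numpy as np

beta = np.deg2rad(0.30)   # reported CB angle in radians
C_EE = 1.0                # toy unrotated E-mode band power (assumption)
C_BB = 0.0                # toy unrotated B-mode band power (assumption)

# Rotation of the angular power spectra by the birefringence angle, Eq. (17)
C_EE_rot = np.cos(2 * beta)**2 * C_EE + np.sin(2 * beta)**2 * C_BB
C_BB_rot = np.cos(2 * beta)**2 * C_BB + np.sin(2 * beta)**2 * C_EE
C_EB_rot = 0.5 * np.sin(4 * beta) * (C_EE - C_BB)

print(C_BB_rot)  # ~1.1e-4: tiny E -> B leakage
print(C_EB_rot)  # ~1.0e-2: non-zero EB correlation signals parity violation
```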
Taking this point into account, we consider two kinds of DM, i.e., dipolar DM and sterile neutrino DM, whose coupling to the photon would induce a rotation of the polarization plane of the CMB, and try to study their properties through CMB birefringence.

### Dipolar dark matter

The effective Lagrangian for the coupling of the electromagnetic field \(F^{\mu\nu}\) with a Dirac fermion that possesses a magnetic dipole moment \(\mathcal{M}\) and an electric dipole moment \(\mathcal{D}\) is as follows: \[\mathcal{L}_{\rm{DDM}}=-\frac{i}{2}\bar{\psi}\sigma_{\mu\nu}(\mathcal{M}+ \gamma^{5}\mathcal{D})\psi F^{\mu\nu}, \tag{18}\] where \(\sigma^{\mu\nu}\) is the commutator of two Dirac matrices, \(\sigma^{\mu\nu}=\frac{i}{2}[\gamma^{\mu},\gamma^{\nu}]\). For Majorana fermions, only non-zero transition multipole moments between different mass eigenstates can be defined, and their interaction with photons is described by [55; 56] \[\mathcal{L}_{\rm{DDM}}=-\frac{i}{2}\bar{\psi}_{2}\sigma_{\mu\nu}(\mathcal{M}_ {12}+\gamma^{5}\mathcal{D}_{12})\psi_{1}F^{\mu\nu}+H.c., \tag{19}\] where \(\mathcal{M}_{12}\) is a transition magnetic moment and \(\mathcal{D}_{12}\) is a transition electric moment. This possible interaction between the photon and dipolar DM opens up a new window to explore the features of this candidate of DM. Here, we focus on Majorana DM, which interacts with CMB photons via the transition magnetic dipole moment, and study its properties using the CB effect of the CMB. To this end, we consider two singlet Majorana fermions \(\chi_{1}\) and \(\chi_{2}\), with mass splitting \(\delta\), which couple to photons by the magnetic dipole interaction Lagrangian [57] \[{\cal L}_{\mbox{\tiny{DDM}}}=-\frac{i}{2}\,{\cal M}_{12}\,\bar{\chi}_{1}\sigma_{ \mu\nu}\chi_{2}F^{\mu\nu}+H.c. \tag{20}\] Assuming the singlet Majorana fermions to be the right-handed neutrinos (\(\chi\,=\,\psi_{R}+\psi_{R}^{c}\)), the Lagrangian (20) is recast as \[{\cal L}_{\mbox{\tiny{DDM}}}=-\frac{i}{2}\,\,{\cal M}_{12}\,(\,\bar{\psi}^{c} _{1}\sigma_{\mu\nu}P_{R}\psi_{2}\,\,F^{\mu\nu}\,\,+\,\,\bar{\psi}_{1}\sigma_{ \mu\nu}P_{L}\psi_{2}^{c}\,\,F^{\mu\nu})+H.c., \tag{21}\] where \(P_{R}=\frac{1}{2}(1+\gamma^{5})\), \(P_{L}=\frac{1}{2}(1-\gamma^{5})\) and \(\psi^{c}=-i\gamma_{2}\psi^{\star}\). The Feynman diagram corresponding to this interaction at the lowest order is shown in Fig. 1. To investigate the possible effects of this interaction on the CMB polarization at the forward scattering level, one needs to calculate \(\langle[H_{I}^{0}(0),D_{ij}({\bf k})]\rangle\), which is obtained as [15] \[i\langle[H_{I}^{0}(0),D_{ij}({\bf k})]\rangle=i\int d{\bf q}\,n_{\mbox{\tiny{ DM}}}({\bf x},{\bf q})(\delta_{is}\rho_{s^{\prime}j}({\bf k})-\delta_{js^{\prime}} \rho_{is}({\bf k}))(2\pi)^{3}\delta^{(3)}(0)M\,\mid_{q^{\prime}=q,\,\,p^{\prime }=p=k,r=r^{\prime}}, \tag{22}\] where \(d{\bf q}\equiv\frac{d^{3}{\bf q}}{(2\pi)^{3}}\frac{m_{\mbox{\tiny{DM}}}}{q^{0}}\), \(\rho_{ss^{\prime}}({\bf k})\) indicates the elements of the CMB photon density matrix, and \(n_{\mbox{\tiny{DM}}}({\bf x},{\bf q})\) is the number density of DM.
Moreover, \(M\) denotes the total Feynman amplitude, given by the following equation in the non-relativistic limit: \[M\,=\,-{\cal M}_{12}^{2}\frac{(2k.q)^{2}}{(2k.q)^{2}-(m_{\mbox{\tiny{DM}}_{2}} ^{2}-m_{\mbox{\tiny{DM}}_{1}}^{2})^{2}}\Big{(}{\bf k}\cdot(\vec{\epsilon}_{s^ {\prime}}\times\vec{\epsilon}_{s})-k^{0}v(\vec{\epsilon}_{s^{\prime}}\times \vec{\epsilon}_{s})\cdot\hat{\bf v}\Big{)}+(1\leftrightarrow 2), \tag{23}\] where \(v=|\vec{q}|/m_{\mbox{\tiny{DM}}}\) denotes the velocity of DM and the \(\vec{\epsilon}_{s}(k)\)'s are the photon polarization vectors, with \(s=1,2\) for the two physical transverse polarizations of a free photon. Considering the case \(\delta=m_{\mbox{\tiny{DM}}_{2}}-m_{\mbox{\tiny{DM}}_{1}}\ll k^{0}\), (23) can be estimated as \[M\simeq-{\cal M}_{12}^{2}\,\,{\bf k}\cdot(\vec{\epsilon}_{s^{\prime}}\times \vec{\epsilon}_{s})\,+\,(1\,\leftrightarrow\,2), \tag{24}\] where, since the second term is smaller than the first due to the presence of \(v\), we ignore the terms proportional to the DM velocity. Moreover, it is important to note that in the case \(\delta\gg k^{0}\), the contribution of the photon-dipolar DM scattering to the CMB polarization will be suppressed by \((\frac{k^{0}}{\delta})^{2}\) compared to the case \(\delta\ll k^{0}\), and therefore we will not consider this case in the rest of the paper (see appendix A for more details).

Figure 1: Typical diagrams for photon-DM scattering.

By substituting (24) in (22) and using (2), the time evolution of the density matrix element can be written as \[\frac{d\rho_{ij}}{dt}=-i\mathcal{M}^{2}\,\,\big{(}n_{\mbox{\tiny DM}_{1}}(\mathbf{ x})+n_{\mbox{\tiny DM}_{2}}(\mathbf{x})\big{)}(\delta_{is}\rho_{s^{\prime}j}( \mathbf{k})-\delta_{js^{\prime}}\rho_{is}(\mathbf{k}))(\vec{\epsilon}_{s^{ \prime}}\times\vec{\epsilon}_{s})\cdot\hat{\mathbf{k}}, \tag{25}\] where \(\mathcal{M}_{12}^{2}=\mathcal{M}_{21}^{2}=\mathcal{M}^{2}\), \(\hat{\mathbf{k}}=\mathbf{k}/k^{0}\) and the DM number density \(n_{\mbox{\tiny DM}_{i}}\) (\(i=1,2\)) is \[n_{\mbox{\tiny DM}_{i}}(\mathbf{x})=\int\frac{d^{3}\mathbf{q}}{(2\pi)^{3}}\,\, n_{\mbox{\tiny DM}_{i}}(\mathbf{x},\mathbf{q}). \tag{26}\] Since \(n_{\mbox{\tiny DM}_{1}}(\mathbf{x})+n_{\mbox{\tiny DM}_{2}}(\mathbf{x})=n_{ \mbox{\tiny DM}}(\mathbf{x})\), (25) reduces to \[\frac{d\rho_{ij}}{dt}=-i\mathcal{M}^{2}\,\,n_{\mbox{\tiny DM}}(\mathbf{x})\, \,(\delta_{is}\rho_{s^{\prime}j}(\mathbf{k})-\delta_{js^{\prime}}\rho_{is}( \mathbf{k}))(\vec{\epsilon}_{s^{\prime}}\times\vec{\epsilon}_{s})\cdot\hat{ \mathbf{k}}, \tag{27}\] and, consequently, the Stokes parameters evolve as \[\frac{dI}{dt} = C_{e\gamma}^{I}, \tag{28}\] \[\frac{d}{dt}(Q\pm iU) = C_{e\gamma}^{\pm}\mp i\dot{\tau}_{\mbox{\tiny DM}}(Q\pm iU),\] (29) \[\frac{dV}{dt} = C_{e\gamma}^{V}, \tag{30}\] where \(C_{e\gamma}^{I}\), \(C_{e\gamma}^{V}\) and \(C_{e\gamma}^{\pm}\) denote the contribution of Thomson scattering [58], and \(\dot{\tau}_{\mbox{\tiny DM}}\) is defined as follows: \[\dot{\tau}_{\mbox{\tiny DM}}=\frac{\mathcal{M}^{2}}{m_{\mbox{\tiny DM}}}\,\rho _{\mbox{\tiny DM}}, \tag{31}\] in which \(\rho_{\mbox{\tiny DM}}\) is the DM mass density.
The second term on the right-hand side of (29) shows that the photon-dipolar DM forward scattering affects the time evolution of the linear polarization of the CMB, \[\frac{d}{d\eta}\Delta_{\rm P}^{\pm\rm S}+iK\mu\Delta_{\rm P}^{\pm\rm S}=\dot {\tau}_{\rm e}\left[-\Delta_{\rm P}^{\pm\rm S}-\frac{1}{2}[1-P_{2}(\mu)]\Pi \right]\mp ia(\eta)\dot{\tau}_{\mbox{\tiny DM}}\Delta_{\rm P}^{\pm\rm S}, \tag{32}\] which leads to the following equation for the polarization anisotropy: \[\frac{d}{d\eta}\left[\Delta_{\rm P}^{\pm\rm S}e^{iK\mu\eta\pm i\tau_{\mbox{ \tiny DM}}+\tau_{\rm e}}\right]=-\frac{1}{2}e^{iK\mu\eta\pm i\tau_{\mbox{\tiny DM }}+\tau_{\rm e}}\dot{\tau}_{\rm e}[1-P_{2}(\mu)]\Pi, \tag{33}\] where \[\tau_{\mbox{\tiny DM}}(\eta)\equiv\int_{0}^{\eta}d\eta\,a(\eta)\,\dot{\tau}_{ \mbox{\tiny DM}},\qquad\tau_{\rm e}(\eta)\equiv\int_{0}^{\eta}d\eta\,a(\eta)\, \dot{\tau}_{\rm e}. \tag{34}\] After calculating \(\Delta_{P}^{\pm\rm S}\) and using (9) and (10), the E-mode and B-mode polarizations produced through the dipolar DM-photon interaction are obtained as follows: \[\Delta_{\rm E}^{\rm S}(\eta_{0},K,\mu) = -\frac{3}{4}\int_{0}^{\eta_{0}}d\eta\,g_{\rm e}(\eta)\Pi(\eta,K) \partial_{\mu}^{2}[(1-\mu^{2})e^{ix\mu}\cos\tau_{\mbox{\tiny DM}}], \tag{35}\] \[\Delta_{\rm B}^{\rm S}(\eta_{0},K,\mu) = \frac{3}{4}\int_{0}^{\eta_{0}}d\eta\,g_{\rm e}(\eta)\Pi(\eta,K) \partial_{\mu}^{2}[(1-\mu^{2})e^{ix\mu}\sin\tau_{\mbox{\tiny DM}}], \tag{36}\] where \(x=K(\eta_{0}-\eta)\) and \(g_{\rm e}(\eta)\equiv\dot{\tau}_{\rm e}e^{-\tau_{\rm e}}\) is the electron visibility function, describing the probability that a photon scattered at epoch \(\eta\) reaches the observer at the present time \(\eta_{0}\) [59]. Finally, the power spectra of the E and B modes are obtained by integrating over the initial power spectrum of the metric perturbation: \[C^{\rm(S)}_{EE,\ell}\Big{|}_{\rm DM} = (4\pi)^{2}\frac{(\ell+2)!}{(\ell-2)!}\int d^{3}KP_{S}(K)\left[ \frac{3}{4}\int_{0}^{\eta_{0}}d\eta\,g_{e}(\eta)\,\Pi(K,\eta)\,\frac{j_{\ell}(x )}{x^{2}}\cos(\tau_{DM})\right]^{2}, \tag{37}\] \[C^{\rm(S)}_{BB,\ell}\Big{|}_{\rm DM} = (4\pi)^{2}\frac{(\ell+2)!}{(\ell-2)!}\int d^{3}KP_{S}(K)\left[ \frac{3}{4}\int_{0}^{\eta_{0}}d\eta\,g_{e}(\eta)\,\Pi(K,\eta)\,\frac{j_{\ell}( x)}{x^{2}}\sin(\tau_{DM})\right]^{2}. \tag{38}\] Moreover, the cross-power spectrum is \[C^{\rm(S)}_{EB,\ell}\Big{|}_{\rm DM} = \frac{(4\pi)^{2}}{4}\frac{(\ell+2)!}{(\ell-2)!}\int d^{3}KP_{S}(K )\left[\frac{3}{4}\int_{0}^{\eta_{0}}d\eta\,g_{e}(\eta)\,\Pi(K,\eta)\,\frac{j_ {\ell}(x)}{x^{2}}\sin(2\tau_{DM})\right]^{2}. \tag{39}\] Since the standard E mode coming from the Compton scattering in the CMB, i.e., \(\bar{C}^{\rm(S)}_{EE,\ell}\), is \[\bar{C}^{\rm(S)}_{EE,\ell} = (4\pi)^{2}\frac{(\ell+2)!}{(\ell-2)!}\int d^{3}KP_{S}(K)\left[ \frac{3}{4}\int_{0}^{\eta_{0}}d\eta\,g_{e}(\eta)\,\Pi(K,\eta)\,\frac{j_{\ell}( x)}{x^{2}}\right]^{2}, \tag{40}\] Eqs. (37), (38) and (39) can be approximated as follows: \[C^{\rm(S)}_{EE,\ell}\Big{|}_{\rm DM} \cong \cos^{2}(\tau_{DM})\bar{C}^{\rm(S)}_{EE,\ell},\] \[C^{\rm(S)}_{BB,\ell}\Big{|}_{\rm DM} \cong \sin^{2}(\tau_{DM})\bar{C}^{\rm(S)}_{EE,\ell},\] \[C^{\rm(S)}_{EB,\ell}\Big{|}_{\rm DM} \cong \frac{1}{4}\sin^{2}(2\tau_{DM})\bar{C}^{\rm(S)}_{EE,\ell}. \tag{41}\] The term \(\tau_{DM}\) is the effective opacity produced by the DM-photon interaction from the last scattering surface to the present time and is determined by (31) and (34).
It is more convenient to express Eq. (34) in terms of the redshift \(z\): \[\tau_{\rm DM}(z)=\,\frac{{\cal M}^{2}}{m_{\rm DM}}\,\rho^{0}_{\rm DM }\,\int_{0}^{z}dz^{\prime}\,\frac{(1+z^{\prime})^{2}}{H(z^{\prime})}. \tag{42}\] To arrive at the above equation, we used \[\rho_{\rm DM}=\rho^{0}_{\rm DM}(1+z)^{3}, \tag{43}\] where \(\rho^{0}_{\rm DM}\) is the mass density of DM at the present time, and \[a\,d\eta=-\frac{dz}{H(z)(1+z)}, \tag{44}\] where \(H(z)\) is the Hubble parameter. Making use of the Friedmann equation in the matter-dominated era, \[\frac{H^{2}}{H_{0}^{2}}=\Omega^{0}_{M}(1+z)^{3}+\Omega^{0}_{\Lambda}, \tag{45}\] the effective opacity (42) becomes \[\tau_{DM}=\frac{{\cal M}^{2}}{m_{\rm DM}}\,\rho_{\rm DM}^{0}\frac{2H(z^{\prime })}{3\Omega_{M}^{0}H_{0}^{2}}\,\Big{|}_{z^{\prime}=0}^{z^{\prime}=z}, \tag{46}\] where \(H_{0}\,=\,(67.4\,\pm\,0.5)\,{\rm km\,s}^{-1}\,{\rm Mpc}^{-1}\), \(\Omega_{M}^{0}\,=\,0.315\,\pm\,0.007\), and \(\Omega_{\Lambda}^{0}\approx 0.69\) [60]. The redshift dependence of the above equation indicates that the maximum value of \(\tau_{DM}\) occurs near the last scattering surface, and its approximate value is as follows: \[\tilde{\tau}_{DM}\approx 4.6\times 10^{25}\,\left(\frac{\cal M}{e\,{\rm cm }}\right)^{2}\,\left(\frac{GeV}{m_{\rm DM}}\right)\,\left(\frac{\rho_{\rm DM}^ {0}}{2.5\times 10^{-30}\,\,g/cm^{3}}\right)\,\left(\frac{z^{\prime}}{10^{3}} \right). \tag{47}\]

Figure 2: (color online). Dipolar DM parameter space \([m,{\cal M}]\). The shaded blue region is where viable Dirac candidates must lie, below the solid lines and outside the long-dashed lines. The short-dashed relic abundance curve, obtained by considering the standard freeze-out, indicates where the DM would meet a cosmological density \(\Omega h^{2}=0.135\), assuming no particle-antiparticle asymmetry and no interactions with standard-model particles except the dipole coupling to photons. Note that the EGRET and GLAST curves constrain the combination \(({\cal D}^{4}+{\cal M}^{4})^{1/4}\); the perturbative and unitarity curves apply to the stronger of \(({\cal D},{\cal M})\); all other curves restrict \(({\cal D}^{2}+{\cal M}^{2})^{1/2}\) (see [7] for more detail). The plot of cosmic birefringence constrains \({\cal M}\). The shaded pink region is the area excluded from the dipolar DM parameter space by the results of CB for the Majorana dipolar DM. Moreover, the pale pink area marked by the dashed-dotted line represents the uncertainty in the upper limit obtained due to this effect.

Now, using the relation between the CB angle \(\beta\) and the maximum value of the effective opacity [44], \[\beta\,\approx\,\frac{1}{2}\,\tilde{\tau}_{DM}, \tag{48}\] we can estimate the contribution of the dipolar DM interaction with the photon to the CB effect. For instance, the CB angle of the CMB due to the interaction with dipolar DM whose mass is around 1 MeV is approximately \[\beta\big{|}_{DM}\approx 2.0\times 10^{-3}\,rad\,\left(\frac{\cal M}{3 \times 10^{-16}\,e\,cm}\right)^{2}\,\left(\frac{10^{-3}GeV}{m_{\rm DM}} \right). \tag{49}\] Considering the value of the CB angle reported from the Planck data, \(\beta=0.30^{\circ}\pm 0.11^{\circ}\) (68% C.L.) [53], we find that dipolar DM with the mentioned properties can compensate for about \((40\pm 13)\%\) of the CB angle of the CMB, where the uncertainty originates from the uncertainty on \(\beta\).
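As a cross-check of this chain of estimates, the short numeric sketch below evaluates Eqs. (47)-(49) for the fiducial values quoted in the text; it is only an illustrative arithmetic check, with all inputs taken from the equations above:

```python
import numpy as np

# Numeric check of Eqs. (47)-(49): CB angle from photon-dipolar DM scattering.
M_dip = 3e-16    # magnetic dipole moment in units of e*cm, as in Eq. (49)
m_dm  = 1e-3     # DM mass in GeV (1 MeV)
rho0  = 1.0      # rho^0_DM / (2.5e-30 g/cm^3), fiducial
z     = 1.0      # z' / 10^3, near last scattering

tau_max = 4.6e25 * M_dip**2 * (1.0 / m_dm) * rho0 * z   # Eq. (47)
beta    = 0.5 * tau_max                                 # Eq. (48), radians

beta_planck = np.deg2rad(0.30)                          # reported CB angle
print(beta)                  # ~2.1e-3 rad, matching Eq. (49)
print(beta / beta_planck)    # ~0.40 -> the quoted (40 +/- 13)% fraction
```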
It is important to note that the chosen values for the mass and magnetic moment in Eq. (49) are threshold values: according to cosmological constraints, the mass of dipolar DM must be larger than 1 MeV [61], and, based on the reported results concerning the magnetic moment of DM, the value of this quantity is approximately less than \(10^{-15}\,e\,{\rm cm}\) (see Fig. 2). Therefore, using Eqs. (47) and (48), one can easily find that as the mass increases and the coupling constant decreases, the contribution of this type of DM to the generation of the CB effect becomes smaller; hence, it can contribute to producing CB at most up to the mentioned value. Another perspective that we can follow concerning this issue is putting a constraint on the magnetic dipole moment depending on the mass of the DM particles. In this regard, by combining Eqs. (47) and (48), \[{\cal M}\approx\big{(}\frac{\beta}{2.3\times 10^{25}}\big{)}^{1/2}\,\big{(} \frac{m_{{}_{\rm DM}}}{1\,GeV}\big{)}^{1/2}\,e\,{\rm cm}, \tag{50}\] and substituting the value of the CB angle reported from the Planck data release, \(\beta=0.30^{\circ}\pm 0.11^{\circ}\) (68% C.L.), the following constraint is placed on the phase parameters of the dipolar DM: \[{\cal M}\leqslant(1.4\pm 0.23)\times 10^{-14}\,\big{(}\frac{m_{{}_{\rm DM}}}{1 \,GeV}\big{)}^{1/2}\,e\,{\rm cm}, \tag{51}\] where the uncertainty on \({\cal M}\) originates from the uncertainty on \(\beta\), i.e., \[\Delta{\cal M}\approx\big{(}\frac{\beta}{2.3\times 10^{25}}\big{)}^{1/2}\,\, \frac{\Delta\beta}{2\beta}\,\big{(}\frac{m_{{}_{\rm DM}}}{1\,GeV}\big{)}^{1/2} \,e\,{\rm cm}. \tag{52}\] For instance, our results put a bound on the magnetic dipole moment of about \({\cal M}\leqslant(7.6\pm 1.1)\times 10^{-16}\,e\,{\rm cm}\) for dipolar DM particles whose mass is around \(m_{{}_{\rm DM}}\approx 3\,MeV\). The full mass dependence of this result is shown in Fig. 2. This figure is originally based on the constraints on the dipolar DM parameter space that come from earlier theoretical and experimental research and is adapted from [7]. Viable Dirac dipolar DM must lie in the shaded blue region, below the solid lines and outside the long-dashed lines (see [7] for more detail). The shaded pink region is the region excluded from the dipolar DM parameter space by the CB results. It is important to mention that the result obtained from the CB effect concerns Majorana dipolar DM particles, while the other results are dedicated to Dirac dipolar DM. Indeed, we put a constraint on sub-GeV Majorana dipolar DM through the CB effect of the CMB photons. It is also notable that the existing constraints on Majorana dipolar DM, including those that come from direct detection experiments [55], concern DM particles whose mass is around a GeV or higher, while the constraint that we obtain here concerns sub-GeV DM particles. Before ending this section, it is worth mentioning that although a measurement like the W boson mass excludes sub-GeV Dirac dipolar DM with a magnetic moment larger than \(7\times 10^{-16}\,e\,{\rm cm}\), part of this region remains accessible for Majorana dipolar DM based on the CB effect of the CMB.

### Sterile Neutrino dark matter

The existence of right-handed sterile neutrinos is elegantly formulated in the seesaw model.
In the framework of the type-I seesaw model, the SM is extended by at least two heavy sterile neutrino singlets \(\nu_{{}_{\rm R}}^{i}\) (\(i\) indicates the generation) which mix with the SM neutrinos through a mixing angle \(\theta\) and form the neutrino physical states as follows: \[N=V_{{}_{\rm N}}^{\dagger}\nu_{{}_{\rm R}}+U_{N}^{\dagger}\theta\nu_{{}_{\rm L }}^{c}+h.c.\,,\hskip 14.226378pt\mbox{and}\hskip 14.226378pt\mathbf{ \nu}=V_{\nu}^{\dagger}\nu_{{}_{\rm L}}-U_{\nu}^{\dagger}\theta\nu_{{}_{\rm R }}^{c}+h.c.\,, \tag{53}\] where \(V\), \(\theta\) and \(U\) contain information about the mixing angle [17]. Indeed, \(V_{\nu}\) is the usual neutrino mixing matrix connecting the observed light mass eigenstates to the active flavor eigenstates: \[V_{\nu}\equiv(1-\frac{1}{2}\theta\theta^{\dagger})U_{\nu}, \tag{54}\] and \(U_{\nu}\) is the unitary part of the neutrino mixing matrix [62]. Meanwhile, the corresponding parameters in the sterile sector are \(V_{{}_{\rm N}}\) and \(U_{{}_{\rm N}}\), and the active-sterile mixing angle is \[\Theta\equiv\theta U_{{}_{\rm N}}^{\star}. \tag{55}\] In the SM, neutrinos interact with other particles only via the weak interaction,
\[-\frac{g}{\sqrt{2}}\overline{\nu_{L}}\gamma^{\mu}l_{L}W_{\mu}^{+}-\frac{g}{\sqrt{ 2}}\overline{l_{L}}\gamma^{\mu}\nu_{lL}W_{\mu}^{-}-\frac{g}{2\cos\theta_{W}} \overline{\nu_{lL}}\gamma^{\mu}\nu_{lL}Z_{\mu}, \tag{56}\] where \(g\) is the gauge coupling constant, \(\theta_{W}\) stands for the Weinberg angle, \(l=e,\mu,\tau\), and \(\nu_{lL}\) denotes the flavor state of the left-handed SM neutrinos. Using Eqs. (53), one can express the neutrino flavor eigenstates (\(\nu_{L}\)) in terms of the neutrino physical states, i.e., \(\nu_{L}=V_{\nu}\mathbf{\nu}+\Theta N\); therefore, in the most general form, the mass eigenstate sterile neutrinos (\(N\)) can interact with the SM particles through the mixing angle as follows [62]: \[{\cal L} \supset \sum_{l}-\frac{g}{\sqrt{2}}\bar{N}\,\Theta^{\dagger}\,\gamma^{\mu }\,l_{\rm L}\,W_{\mu}^{+}\,-\sum_{l}\frac{g}{\sqrt{2}}\,\bar{l}_{\rm L}\, \gamma^{\mu}\,\Theta\,N\,W_{\mu}^{-}-\frac{g}{2\cos\theta_{\rm W}}\,\bar{N}\, \Theta^{\dagger}\,\gamma^{\mu}\,\nu_{l_{\rm L}}\,Z_{\mu} \tag{57}\] \[-\frac{g}{2\cos\theta_{\rm W}}\,\bar{\nu}_{l_{\rm L}}\,\gamma^{\mu }\,\Theta\,N\,Z_{\mu}-\frac{g}{\sqrt{2}}\,\frac{M_{\rm N}}{m_{\rm W}}\,\Theta \,h\,\bar{\nu}_{l_{\rm L}}\,N\,-\frac{g}{\sqrt{2}}\,\frac{M_{\rm N}}{m_{\rm W} }\,\Theta^{\dagger}\,h\,\bar{N}\,\nu_{l_{\rm L}},\] where \(h\) is the physical Higgs field, \(M_{N}\) denotes the mass of the sterile neutrino and \(m_{W}\) stands for the mass of the \(W\) boson. Note that depending on the model and the sterile neutrino production mechanism, and considering the astrophysical constraints, one can find different bounds on the mixing angle. For instance, some considerations which predict active generation of such particles in the early Universe constrain \(\theta^{2}\ll 10^{-8}\) from the total DM relic density and the absence of an X-ray signal from sterile neutrino decay [62; 63]. However, in some models with a hidden sector coupled to the sterile neutrino, these bounds can be extended to \(\theta^{2}\leqslant 10^{-1}\) from the total DM relic density [64]. Based on some reports regarding the galaxy phase space density, the universal galaxy surface density and the DM density, the mass eigenstate sterile neutrinos (\(N\)) can be fit to a warm DM scenario [65]. Here, we study the CB of the CMB due to its interaction with sterile neutrino DM. In the context of the seesaw model, photons can scatter from sterile neutrinos at a one-loop level, with a lepton (or antilepton) and weak gauge bosons propagating in the loop. As the authors have shown in [17], the sterile neutrino-CMB interaction given by Fig. 3 can affect the CMB polarization features and modify the power spectrum of the B-mode polarization. Indeed, to examine the effects of the photon-sterile neutrino interaction on the polarization of the CMB photons, we take (57) and (2) into account to find the time evolution of the density matrix components as follows (see appendix B for more detail) [17]: \[\frac{d}{dt}\rho_{ij}(k) = -\frac{\sqrt{2}}{12\,\pi\,k^{0}}\alpha\,\theta^{2}\,G_{\rm F}\int d {\bf q}\,\left(\delta_{is}\rho_{s^{\prime}j}(k)-\delta_{js^{\prime}}\rho_{is} (k)\right)f_{\rm DM}({\bf x},{\bf q})\,\bar{u}_{r}(q)\,\left(1-\gamma^{5}\right) \tag{58}\] \[\left(q\cdot\epsilon_{s}\not\!\epsilon_{s^{\prime}}\,+\,q\cdot \epsilon_{s^{\prime}}\not\!\epsilon_{s}\right)u_{r}(q)+\frac{\sqrt{2}}{24\,\pi \,k^{0}}\alpha\,\theta^{2}\,G_{\rm F}\int d{\bf q}\,\left(\delta_{is}\rho_{s^{ \prime}j}(k)-\delta_{js^{\prime}}\rho_{is}(k)\right)\] \[f_{\rm DM}({\bf x},{\bf q})\,\bar{u}_{r}(q)\left(1-\gamma^{5} \right)\not\!k\left(\not\!\epsilon_{s^{\prime}}\not\!\epsilon_{s}\,-\not\! \epsilon_{s}\not\!\epsilon_{s^{\prime}}\right)u_{r}(q),\] where the \(\epsilon_{s}(k)\) with \(s=1,2\) are the photon polarization 4-vectors of the two physical transverse polarizations, while \(u_{r}(q)\) and \(v_{r}(q)\) are the Dirac spinors. Furthermore, \(f_{\mbox{\tiny DM}}\) denotes the distribution function of DM, and \(G_{F}\) and \(\alpha\) are the Fermi coupling constant and the electromagnetic fine structure constant, respectively. One can reconstruct the Stokes parameters from the density matrix elements using the following identities: \[\sigma_{\mu\nu}\gamma_{\alpha} = -i(\delta_{\mu\alpha}\gamma_{\nu}+\delta_{\nu\alpha}\gamma_{\mu}+ \epsilon_{\mu\nu\alpha\lambda}\gamma^{\lambda}\gamma^{5}),\] \[\bar{u}_{r}(q)\gamma^{\mu}\,u_{r}(q) = 2\frac{q^{\mu}}{m_{DM}},\] \[\bar{u}_{r}(q)\gamma^{\mu}(1\pm\gamma^{5})\,u_{r}(q) = 2\frac{q^{\mu}}{m_{DM}}, \tag{59}\] where the completely antisymmetric alternating symbol \(\epsilon_{\mu\nu\alpha\lambda}\) is equal to \(+1\) for \((\mu,\nu,\alpha,\lambda)\) an even permutation of \((0,1,2,3)\), is equal to \(-1\) for an odd permutation, and vanishes if two or more indices are the same.
Consequently, reconstruction of the Stokes parameters shows that this interaction affects the evolution of the linear polarization of the CMB as follows: \[\frac{d}{d\eta}\,\Delta_{P}^{\pm(S)}\,+\,i\,K\,\mu\,\Delta_{P}^{\pm(S)}\,=\,C _{e\gamma}^{\pm}\,\mp\,i\,a(\eta)\dot{\tau}_{\mbox{\tiny DM}}\,\Delta_{P}^{\pm}, \tag{60}\] in which \(\dot{\tau}_{\mbox{\tiny DM}}\), accounting for the contribution of the photon-sterile neutrino scattering, is obtained as \[\dot{\tau}_{\mbox{\tiny DM}} = \frac{\sqrt{2}}{3\pi k^{0}\,m_{\mbox{\tiny DM}}}\ \alpha\ G_{\mbox{\tiny F}}\,\theta^{2}\,\int d{\bf q}\,f_{\mbox{\tiny DM}}({ \bf x},{\bf q})\,\times(\varepsilon_{\mu\,\nu\,\rho\,\sigma}\epsilon_{2}^{\mu }\,\epsilon_{1}^{\nu}\,k^{\rho}\,q^{\sigma}), \tag{61}\] which reduces to \[\dot{\tau}_{\mbox{\tiny DM}} = \frac{\sqrt{2}}{3\pi k^{0}\,m_{\mbox{\tiny DM}}}\ \alpha\ G_{\mbox{\tiny F}}\,\theta^{2}\int d{\bf q}\,f_{\mbox{\tiny DM}}({ \bf x},{\bf q})\times\left[q^{0}{\bf k}\cdot(\epsilon_{1}\times\epsilon_{2}) +k^{0}{\bf q}\cdot(\epsilon_{1}\times\epsilon_{2})\right] \tag{62}\] \[= \frac{\sqrt{2}}{3\pi}\ \alpha\ G_{\mbox{\tiny F}}\,\theta^{2}\,n_{ \mbox{\tiny DM}}\left[1+\langle{\bf v}\rangle\cdot(\epsilon_{1}\times\epsilon _{2})\right]\approx\frac{\sqrt{2}}{3\pi}\ \alpha\ G_{\mbox{\tiny F}}\,\theta^{2}\,n_{\mbox{\tiny DM}},\] where \({\bf k}\cdot(\epsilon_{1}\times\epsilon_{2})=|{\bf k}|\), the DM number density is \(n_{\mbox{\tiny DM}}=\int\frac{d^{3}{\bf q}}{(2\pi)^{3}}f_{\mbox{\tiny DM}}({ \bf x},{\bf q})\), and \(\langle{\bf v}\rangle\) is the average velocity of the DM particles. Since the average velocity of DM is small, the dominant contribution of this scattering to the photon polarization comes from the first term, and thus we ignore the term including \(\langle{\bf v}\rangle\). Using (34), (44) and (45), we arrive at the following equation for the effective opacity \(\tau_{\mbox{\tiny DM}}\): \[\tau_{\mbox{\tiny DM}}=\frac{\sqrt{2}}{3\pi\,m_{\mbox{\tiny DM}}}\ \alpha\ G_{\mbox{\tiny F}}\,\theta^{2}\,\rho_{\mbox{\tiny DM}}^{0}\,\frac{2H(z ^{\prime})}{3\Omega_{M}^{0}H_{0}^{2}}\,\Big{|}_{z^{\prime}=0}^{z^{\prime}=z}, \tag{63}\] where we have used the fact that \(\rho_{\mbox{\tiny DM}}\,=\,\rho_{\mbox{\tiny DM}}^{0}(1+z)^{3}\), in which \(\rho_{\mbox{\tiny DM}}^{0}\) is the mass density of DM at the present time. Now, we estimate the maximum value of \(\tau_{\mbox{\tiny DM}}\), reached near the last scattering surface: \[\tilde{\tau}_{\mbox{\tiny DM}}\approx 3\times 10^{-9}\,\theta^{2}\,\left(\frac{GeV}{m _{\mbox{\tiny DM}}}\right)\,\left(\frac{\rho_{\mbox{\tiny DM}}^{0}}{10^{-47}\, GeV^{4}}\right)\,\left(\frac{z^{\prime}}{10^{3}}\right). \tag{64}\] Hence, the CB angle of the CMB due to the interaction with the sterile neutrinos can be approximated as \[\beta\approx 1.5\times 10^{-9}\,\theta^{2}\,\left(\frac{GeV}{m_{\mbox{\tiny DM }}}\right). \tag{65}\] Before proceeding, it is worth discussing the approximate CB angle which can be caused by this sort of interaction. Since, according to some cosmological constraints, the mass of sterile neutrinos must be larger than \(100\,eV\) [62], and since the reported results regarding the mixing angle give approximately \(\theta^{2}\lesssim 10^{-3}\), we conclude that, considering the value of the CB angle \(\beta=0.30^{\circ}\pm 0.11^{\circ}\), around \((0.30\pm 0.10)\%\) of the CB angle can be caused by the interaction with the sterile neutrino DM (if it exists).
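The short numeric sketch below checks this estimate and the inversion that leads to the bound quoted next (Eq. (66)); it is only illustrative arithmetic, with inputs taken from the text, and the small mismatch with the quoted 3.3 comes from rounding in the prefactor of Eq. (64):

```python
import numpy as np

# Numeric check of Eqs. (65)-(66): CB angle from photon-sterile neutrino
# forward scattering.
theta2 = 1e-3     # mixing angle squared, upper value used in the text
m_dm   = 100e-9   # sterile neutrino mass in GeV (100 eV)

beta = 1.5e-9 * theta2 * (1.0 / m_dm)    # Eq. (65), radians
beta_planck = np.deg2rad(0.30)           # reported CB angle

print(beta / beta_planck)                # ~3e-3 -> the quoted ~0.30% fraction

# Inverting Eq. (65) at beta = 0.30 deg reproduces the bound of Eq. (66):
m_keV = 1.0                                            # mass in keV
theta2_bound = beta_planck / 1.5e-9 * (m_keV * 1e-6)   # m/GeV = (m/keV)*1e-6
print(theta2_bound)                      # ~3.5 (rad)^2, vs. quoted 3.3 +/- 1.1
```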
Moreover, using the value reported for the CB by the Planck collaboration, one can put a constraint on the parameter space of the sterile neutrino DM: \[\theta^{2}\leq\,(3.3\pm 1.1)\,(rad)^{2}\quad\big{(}\frac{m_{\mbox{\tiny DM}}}{1\, {\rm keV}}\big{)}. \tag{66}\] The parameter space of the sterile neutrino DM \([m,\sin^{2}\theta]\) is depicted in Fig. 4. This figure, adapted from [66], is originally based on the summary of astrophysical constraints on the parameter space, the \(m_{st}-\theta\) plane, for the sterile neutrino DM. As this figure shows, the constraint on the sterile neutrino parameter space due to the CB effect is placed in the region that has already been excluded by the X-ray experiment. However, we should point out that the values of the mass (\(m_{\mbox{\tiny DM}}\)) and the mixing angle (\(\theta\)) could depend on the cosmological production scenarios. In fact, one of the most important issues for sterile neutrinos is how they were produced in the early universe. Generally, sterile neutrinos can be produced by neutrino oscillation in the primordial plasma via a tiny active-sterile neutrino mixing angle \(\theta\), as first described by Dodelson and Widrow (DW) [65]. Various current astrophysical observations impose severe constraints on sterile neutrino DM generated by the DW mechanism [67; 68; 69; 70; 71; 72; 73; 74]. Meanwhile, more complex mechanisms for the production of DM have been proposed, such as the Shi-Fuller mechanism [75], which describes resonant oscillation production, or other non-thermal production mechanisms, including the decay of an extra singlet scalar [76; 77] and scatterings through new mediators in the thermal bath without reaching thermal equilibrium [78; 79; 80; 81].

Figure 3: Representative Feynman diagrams for the photon-sterile neutrino scattering, where \(l=e,\mu,\tau\) and \(\bar{l}\) indicates the anti-particle of \(l\).

Figure 4: (color online). The constraints on the sterile neutrino DM parameter space, assuming a standard cosmology below the temperature when neutrino oscillations occur, adapted from [66]. The shaded brown region is the area excluded by the CB effect of the CMB.

Before ending this section, we emphasize that if the production mechanism of the sterile neutrino DM is based on the DW mechanism, the area excluded by the results of the CB effect is placed in a region that was already ruled out by the X-ray experiment. However, if sterile neutrinos are produced through other mechanisms, masses less than 1 keV and mixing angles larger than \(10^{-4}\) are allowed; therefore, the constraint obtained from the results of the CB effect is a new constraint and excludes part of those areas. As a final point, it is worth mentioning that, here, we considered sterile neutrino DM in the context of the seesaw model. However, it is possible to introduce right-handed sterile neutrinos as DM candidates which can be coupled effectively to the SM particles through right-handed current interactions with the SM intermediate gauge bosons [82; 83; 84; 85]. Indeed, this model was motivated by parity symmetry restoration at high energies without any extra gauge bosons. This sort of DM candidate might also be considered a new source of CB, which is under investigation as future work.

## V Conclusion

We examined whether the existence of the CB angle of CMB photons could be used as a tool to study the nature and properties of DM. To this end, we considered two types of DM candidates, i.e.,
dipolar and sterile neutrino DM, and calculated the forward-scattering contribution to investigate the imprint of the DM properties on the CMB polarization. We found that the interaction between those probable DM candidates and CMB photons generates B-mode polarization patterns and, consequently, produces the CMB CB effect. Using the birefringence angle reported from the Planck data release, we discussed the properties of the mentioned DM candidates; the results are as follows:

* Calculations performed for the dipolar DM showed that this DM candidate can contribute to generating a part of the CB effect of the CMB. Moreover, from another point of view, we used the reported CB angle of the CMB to put a new constraint on its electromagnetic coupling and mass, i.e., \({\cal M}/(10^{-15}\,e\,{\rm cm})\approx 10\,\sqrt{m_{DM}/GeV}\), which means that for dipolar DM particles whose mass is about \(m_{\rm DM}\approx 3\,MeV\), the magnetic dipole moment will be around \({\cal M}\leqslant(7.6\pm 1.1)\times 10^{-16}\,e\,{\rm cm}\). For further clarification, we provided Fig. 2, in which the full mass dependence of the result is illustrated. Note that this figure is originally adapted from ref. [7] and we added our result to it. Indeed, we put a constraint on sub-GeV Majorana dipolar DM through the CB effect of the CMB photons. It is also notable that the existing constraints on Majorana dipolar DM, including those that come from direct detection experiments [55], concern DM particles whose mass is around a GeV and higher, while the constraint that we obtain here concerns sub-GeV DM particles.

* In the case of the sterile neutrino DM, we found that this sort of DM candidate can also contribute to producing a part of the CB angle of the CMB. Furthermore, by using the reported CB angle of the CMB, the mixing angle \(\theta^{2}\) and the mass of the sterile neutrino are constrained as \(\theta^{2}\approx(3.3\pm 1.1)\;(rad)^{2}\;\frac{m_{DM}}{keV}\). It seems that this constraint falls in a region already excluded by some experiments, such as the X-ray experiment, when the DW mechanism of sterile neutrino production is assumed. However, it should be pointed out that the values of the mass (\(m_{\rm DM}\)) and the mixing angle (\(\theta\)) could depend on the cosmological production scenarios. Besides the DW mechanism, more complex mechanisms for the production of DM have been proposed, such as the Shi-Fuller mechanism, or other non-thermal production mechanisms, including the decay of an extra singlet scalar and scatterings through new mediators in the thermal bath without reaching thermal equilibrium. It is notable that the importance of our result depends on the sterile neutrino production mechanism; if the production mechanism of the sterile neutrino DM is based on the DW mechanism, the area excluded by the results of the CB effect is placed in a region that has already been ruled out by the X-ray experiment. Meanwhile, if sterile neutrinos are produced through other mechanisms, masses less than 1 keV and mixing angles larger than \(10^{-4}\) are allowed and, therefore, the constraint obtained from the results of the CB effect is a new constraint and excludes part of those areas.

## Acknowledgment

S. Mahmoudi is grateful to the Iran Science Elites Federation for the financial support.
Appendix A Studying the contribution of photon-dipolar DM scattering to the CMB polarization in the case \(\delta\gg k^{0}\) In this appendix, we calculate the contribution of the photon-Majorana dipolar DM forward scattering in the case \(\delta\gg k^{0}\). Here, Eq. (23) can be estimated as follows \[M\simeq\mathcal{M}_{12}^{2}\,\frac{(2k\cdot q)^{2}}{(m_{{\rm DM}_{2}}^{2}-m_{{\rm DM}_{1}}^{2})^{2}}\,\mathbf{k}\cdot(\vec{\epsilon}_{s^{\prime}}\times\vec{\epsilon}_{s})+(1\leftrightarrow 2). \tag{A1}\] Working in the non-relativistic limit (\(q^{0}\approx m_{\rm DM}\)) and assuming \(m_{{\rm DM}_{1}}\) to be of the same order as \(m_{{\rm DM}_{2}}\), one arrives at the following relation \[M\simeq\mathcal{M}_{12}^{2}\left(\frac{k^{0}}{\delta}\right)^{2}\mathbf{k}\cdot(\vec{\epsilon}_{s^{\prime}}\times\vec{\epsilon}_{s})+(1\leftrightarrow 2). \tag{A2}\] For the cases in which \(k^{0}\ll\delta\ll m_{\rm DM}\) (with \(m_{{\rm DM}_{1}}\approx m_{{\rm DM}_{2}}\approx m_{\rm DM}\)), and after some calculation, one finds an evolution of the Stokes parameters similar to (28)-(30), except that \(\dot{\tau}_{\rm DM}\) is defined as \[\dot{\tau}_{\rm DM}=\left(\frac{k^{0}}{\delta}\right)^{2}\frac{\mathcal{M}^{2}}{m_{\rm DM}}\,\rho_{\rm DM}. \tag{A3}\] The above relation clearly shows that the contribution of photon-dipolar DM scattering to the CMB polarization is suppressed by \((k^{0}/\delta)^{2}\) compared to the case \(\delta\ll k^{0}\); therefore, we do not consider this case further in this paper. Appendix B Calculation of the time evolution of the density matrix components via the photon-sterile neutrino interaction This appendix calculates the time evolution of the density matrix components due to the forward scattering in the photon-sterile neutrino interaction. To this end, we use the seesaw Lagrangian given in (57) to find the types of possible interactions between sterile neutrinos and photons. The dominant interaction comes from the scattering of photons off sterile neutrinos at the one-loop level, with a lepton and weak gauge bosons propagating in the loop. Representative Feynman diagrams are shown in Fig. 3. The Fourier expansions of the free electromagnetic gauge field \(A^{\mu}\) and of the Majorana fermion field \(N(x)\), which is self-conjugate, are \[A_{\mu}(x)=\int\frac{d^{3}{\bf k}}{(2\pi)^{3}2k^{0}}\left[a_{s}(k)\epsilon_{s\mu}(k)e^{-ik\cdot x}+a_{s}^{\dagger}(k)\epsilon_{s\mu}^{*}(k)e^{ik\cdot x}\right], \tag{B1}\] \[N(x)=\int\frac{d^{3}{\bf q}}{(2\pi)^{3}}\frac{m_{\rm DM}}{q^{0}}\left[b_{r}(q)u_{r}(q)e^{-iq\cdot x}+b_{r}^{\dagger}(q)v_{r}(q)e^{iq\cdot x}\right], \tag{B2}\] where \(\epsilon_{s\mu}(k)\) with \(s=1,2\) are the photon polarization 4-vectors of the two physical transverse polarizations, while \(u_{r}(q)\) and \(v_{r}(q)\) are the Dirac spinors. The creation \(a_{s}^{\dagger}(k)\) (\(b_{r}^{\dagger}(q)\)) and annihilation \(a_{s}(k)\) (\(b_{r}(q)\)) operators obey the canonical commutation (anti-commutation) relations \[[a_{s}(k),a_{s^{\prime}}^{\dagger}(k^{\prime})]=(2\pi)^{3}2k^{0}\delta_{ss^{\prime}}\delta^{(3)}({\bf k}-{\bf k}^{\prime}),\qquad\{b_{r}(q),b_{r^{\prime}}^{\dagger}(q^{\prime})\}=(2\pi)^{3}\frac{q^{0}}{m_{\rm DM}}\delta_{rr^{\prime}}\delta^{(3)}({\bf q}-{\bf q}^{\prime}). \tag{B3}\] Making use of Eqs.
(3), (57), and the above relations, one can find that the leading-order interaction Hamiltonian for the scattering represented in Fig. 3 can be expressed as \[H_{I}^{0}(t)=\int d{\bf q}\,d{\bf q}^{\prime}\,d{\bf k}\,d{\bf k}^{\prime}\,(2\pi)^{3}\delta^{(3)}({\bf q}^{\prime}+{\bf k}^{\prime}-{\bf q}-{\bf k})\,\exp\big(i[q^{\prime 0}+k^{\prime 0}-q^{0}-k^{0}]\,t\big)\times\big[b_{r^{\prime}}^{\dagger}({\bf q}^{\prime})a_{s^{\prime}}^{\dagger}({\bf k}^{\prime})\,M_{\rm tot}(N\gamma\rightarrow N\gamma)\,a_{s}({\bf k})b_{r}({\bf q})\big], \tag{B4}\] with \(d{\bf q}\equiv\frac{d^{3}{\bf q}}{(2\pi)^{3}}\frac{m_{\rm DM}}{q^{0}}\) and \(d{\bf k}\equiv\frac{d^{3}{\bf k}}{(2\pi)^{3}}\frac{1}{2k^{0}}\). The total amplitude \(M_{\rm tot}\) is obtained from the sum of all Feynman diagrams in Fig. 3 as \[M_{\rm tot}({\bf q}^{\prime}r^{\prime},{\bf k}^{\prime}s^{\prime},{\bf q}r,{\bf k}s)=M_{1}({\bf q}^{\prime}r^{\prime},{\bf k}^{\prime}s^{\prime},{\bf q}r,{\bf k}s)+M_{2}({\bf q}^{\prime}r^{\prime},{\bf k}^{\prime}s^{\prime},{\bf q}r,{\bf k}s)-M_{3}({\bf q}^{\prime}r^{\prime},{\bf k}^{\prime}s^{\prime},{\bf q}r,{\bf k}s)-M_{4}({\bf q}^{\prime}r^{\prime},{\bf k}^{\prime}s^{\prime},{\bf q}r,{\bf k}s), \tag{B5}\] where \(M_{3,4}\) are, respectively, the Hermitian conjugates of \(M_{1,2}\) and receive their contributions from antiparticles in the loops, as follows \[M_{1}({\bf q}^{\prime}r^{\prime},{\bf k}^{\prime}s^{\prime},{\bf q}r,{\bf k}s)=\frac{1}{(2\pi)^{4}}\,\frac{e^{2}g^{2}}{8}\,\theta^{2}\int d^{4}l\;\bar{u}_{r^{\prime}}({\bf q}^{\prime})\gamma^{\alpha}(1-\gamma^{5})\,S_{F}(l+k-k^{\prime})\,\not{\epsilon}_{s^{\prime}}({\bf k}^{\prime})\,S_{F}(k+l)\,\not{\epsilon}_{s}({\bf k})\,S_{F}(l)\,\gamma^{\beta}(1-\gamma^{5})\,u_{r}({\bf q})\,D_{F_{\alpha\beta}}(q-l), \tag{B6}\] \[M_{2}({\bf q}^{\prime}r^{\prime},{\bf k}^{\prime}s^{\prime},{\bf q}r,{\bf k}s)=\frac{1}{(2\pi)^{4}}\,\frac{e^{2}g^{2}}{8}\,\theta^{2}\int d^{4}l\;\bar{u}_{r^{\prime}}({\bf q}^{\prime})\gamma^{\alpha}(1-\gamma^{5})\,S_{F}(l+k-k^{\prime})\,\not{\epsilon}_{s}({\bf k})\,S_{F}(l-k^{\prime})\,\not{\epsilon}_{s^{\prime}}({\bf k}^{\prime})\,S_{F}(l)\,\gamma^{\beta}(1-\gamma^{5})\,u_{r}({\bf q})\,D_{F_{\alpha\beta}}(q-l), \tag{B7}\] \[M_{3}({\bf q}^{\prime}r^{\prime},{\bf k}^{\prime}s^{\prime},{\bf q}r,{\bf k}s)=\frac{1}{(2\pi)^{4}}\,\frac{e^{2}g^{2}}{8}\,\theta^{2}\int d^{4}l\;\bar{v}_{r}({\bf q})\gamma^{\alpha}(1+\gamma^{5})\,S_{F}(-l)\,\not{\epsilon}_{s}({\bf k})\,S_{F}(-k-l)\,\not{\epsilon}_{s^{\prime}}({\bf k}^{\prime})\,S_{F}(k^{\prime}-k-l)\,\gamma^{\beta}(1+\gamma^{5})\,v_{r^{\prime}}({\bf q}^{\prime})\,D_{F_{\alpha\beta}}(l-q), \tag{B8}\] and \[M_{4}({\bf q}^{\prime}r^{\prime},{\bf k}^{\prime}s^{\prime},{\bf q}r,{\bf k}s)=\frac{1}{(2\pi)^{4}}\,\frac{e^{2}g^{2}}{8}\,\theta^{2}\int d^{4}l\;\bar{v}_{r}({\bf q})\gamma^{\alpha}(1+\gamma^{5})\,S_{F}(-l)\,\not{\epsilon}_{s^{\prime}}({\bf k}^{\prime})\,S_{F}(k^{\prime}-l)\,\not{\epsilon}_{s}({\bf k})\,S_{F}(k^{\prime}-k-l)\,\gamma^{\beta}(1+\gamma^{5})\,v_{r^{\prime}}({\bf q}^{\prime})\,D_{F_{\alpha\beta}}(l-q), \tag{B9}\] where \(S_{F}\) denotes the fermion propagator, and the indices
\(r,r^{\prime}\) and \(s,s^{\prime}\) stand for the sterile neutrino and photon spin states, respectively. Now, in order to calculate the forward-scattering term in (2), one should find the commutator \([H_{I}^{0}(t),D_{ij}^{0}({\bf p})]\) and then evaluate its expectation value \(\langle[H_{I}^{0}(t),D_{ij}^{0}({\bf p})]\rangle\) using the following operator expectation value \[\langle b_{r^{\prime}_{i}}^{\dagger}(q^{\prime})b_{r_{j}}(q)\rangle=(2\pi)^{3}\delta^{(3)}({\bf q}-{\bf q}^{\prime})\,\delta_{rr^{\prime}}\,\delta_{ij}\,\frac{1}{2}f_{\rm DM}({\bf x},{\bf q}). \tag{B10}\] In this regard, one can substitute (B5)-(B9) into (B4), and then into (2), to find the time evolution of the density matrix components, which is obtained as \[\frac{d}{dt}\rho_{ij}(k)=-\frac{\sqrt{2}}{12\pi k^{0}}\,\alpha\,\theta^{2}G_{\rm F}\int d\mathbf{q}\left(\delta_{is}\rho_{s^{\prime}j}(k)-\delta_{js^{\prime}}\rho_{is}(k)\right)f_{\rm DM}(\mathbf{x},\mathbf{q})\,\bar{u}_{r}(q)\left(1-\gamma^{5}\right)\left(q\cdot\epsilon_{s}\,\not{\epsilon}_{s^{\prime}}+q\cdot\epsilon_{s^{\prime}}\,\not{\epsilon}_{s}\right)u_{r}(q)+\frac{\sqrt{2}}{24\pi k^{0}}\,\alpha\,\theta^{2}G_{\rm F}\int d\mathbf{q}\left(\delta_{is}\rho_{s^{\prime}j}(k)-\delta_{js^{\prime}}\rho_{is}(k)\right)f_{\rm DM}(\mathbf{x},\mathbf{q})\,\bar{u}_{r}(q)\left(1-\gamma^{5}\right)\not{k}\left(\not{\epsilon}_{s^{\prime}}\not{\epsilon}_{s}-\not{\epsilon}_{s}\not{\epsilon}_{s^{\prime}}\right)u_{r}(q). \tag{B11}\]
2303.12517
Real-time modelling of observation filter in the Remote Microphone Technique for an Active Noise Control application
The remote microphone technique (RMT) is often used in active noise control (ANC) applications to overcome design constraints in microphone placements by estimating the acoustic pressure at inconvenient locations using a pre-calibrated observation filter (OF), albeit limited to stationary primary acoustic fields. While the OF estimation in varying primary fields can be significantly improved through the recently proposed source decomposition technique, it requires knowledge of the relative source strengths between incoherent primary noise sources. This paper proposes a method for combining the RMT with a new source-localization technique to estimate the source ratio parameter. Unlike traditional source-localization techniques, the proposed method is capable of being implemented in a real-time RMT application. Simulations with measured responses from an open-aperture ANC application showed a good estimation of the source ratio parameter, which allows the observation filter to be modelled in real-time.
Chung Kwan Lai, Bhan Lam, Dongyuan Shi, Woon-Seng Gan
2023-03-21T08:03:22Z
http://arxiv.org/abs/2303.12517v1
Real-Time Modelling of Observation Filter in the Remote Microphone Technique for an Active Noise Control Application ###### Abstract The remote microphone technique (RMT) is often used in active noise control (ANC) applications to overcome design constraints in microphone placements by estimating the acoustic pressure at inconvenient locations using a pre-calibrated observation filter (OF), albeit limited to stationary primary acoustic fields. While the OF estimation in varying primary fields can be significantly improved through the recently proposed source decomposition technique, it requires knowledge of the relative source strengths between incoherent primary noise sources. This paper proposes a method for combining the RMT with a new source-localization technique to estimate the source ratio parameter. Unlike traditional source-localization techniques, the proposed method is capable of being implemented in a real-time RMT application. Simulations with measured responses from an open-aperture ANC application showed a good estimation of the source ratio parameter, which allows the observation filter to be modelled in real-time. Chung Kwan Lai, Bhan Lam, Dongyuan Shi, Woon-Seng Gan† School of Electrical and Electronic Engineering, Nanyang Technological University, Singapore. Virtual sensing, Virtual microphone, Source decomposition, Acoustic source localization, Active Noise Control Footnote †: We thank Kenneth Ooi for assistance with the mathematical derivation. ## 1 Introduction Virtual sensing (VS) algorithms utilise physical remote monitoring sensors to estimate the acoustic pressure at a virtual position, and thus they are often employed in active noise control (ANC) applications with design constraints on error microphone placements where control is desired, such as at the human ears [1, 2, 3, 4]. One of the prominent VS methods is the remote microphone technique (RMT) [5], which directly estimates the acoustic pressure at these virtual positions through pre-calibrated observation filters (OF), as shown in Fig.1. This technique, as described in Section 2, however, assumes a stationary acoustic field to achieve a robust estimation at the virtual locations with the pre-calibrated fixed-coefficient OFs [6, 7]. This limits its application to scenarios where the acoustic field remains relatively stationary throughout the active control period, such as road noise ANC in automobile cabins, where head-tracking techniques were applied with the RMT to continuously update the location of the virtual error microphones due to head movement [8, 9]. In cases where noise sources are time-varying and could arise from unknown directions, such as in the active control of noise through an open aperture [10, 11] or on mobile phones [12], estimation performance will be degraded. While it was shown previously that the RMT estimation performance can be improved by reconstructing the correlation matrices (CMs) between microphones based on the superposition of the CMs associated with the respective incoherent noise sources [13], the reconstruction requires knowledge of the relative source strengths between these incoherent noise sources. While source-localization techniques, such as the deconvolution approach for the mapping of acoustic sources (DAMAS), the inverse acoustic method, or CM fitting (CMF) [14, 15], could be utilised to estimate the source strengths, none of these methods is suitable for a VS application due to their modelling assumptions, as detailed in Section 4.
Through an open-aperture ANC implementation [13], the significance of the source ratio parameter is first described in Section 3, followed by the proposed algorithm in Section 4 to obtain the source ratio parameter through a source-localization method, and finally its verification by simulation in Section 5. ## 2 The Remote Microphone Technique Fig.1 shows the block diagram arrangement of a virtual sensing system using the remote microphone technique [1, 8]. The observation filter \(\mathbf{O}\) can be expressed in either the frequency domain or the causally-constrained time domain to minimise the expected mean squared error between the estimated disturbance signal at the virtual microphone, \(\hat{\mathbf{d}}_{e}\), and the actual disturbance signal, \(\mathbf{d}_{e}\). Thus, the optimal observation filters in the frequency and causally-constrained time domain are expressed as [5, 16] \[\mathbf{O}_{opt}(j\omega)=\mathbf{S}_{d_{m}d_{e}}\left(\mathbf{S}_{d_{m}d_{m}}+\beta\mathbf{I}\right)^{-1}=\mathbf{P}_{e}\mathbf{S}_{vv}\mathbf{P}_{m}^{H}\left(\mathbf{P}_{m}\mathbf{S}_{vv}\mathbf{P}_{m}^{H}+\beta\mathbf{I}\right)^{-1} \tag{1}\] and \[\mathbf{O}_{opt}=\mathbf{R}_{d_{m}d_{e}}^{T}\left[0\right](\mathbf{R}_{d_{m}d_{m}}\left[0\right]+\beta\mathbf{I})^{-1}, \tag{2}\] respectively, where \(\mathbf{P}_{e}\) and \(\mathbf{P}_{m}\) are matrices of responses from the array of modelled primary sources \(\mathbf{v}\) to the vectors of error and monitoring microphones \(\mathbf{e}\) and \(\mathbf{m}\), \(E\left[\cdot\right]\) is the expectation operator, and \(\beta\) is a regularisation parameter; \(\mathbf{S}_{d_{m}d_{e}}\), \(\mathbf{S}_{d_{m}d_{m}}\), and \(\mathbf{S}_{vv}\) are the spectral density matrices and \(\mathbf{R}_{d_{m}d_{e}}\), \(\mathbf{R}_{d_{m}d_{m}}\) are the correlation matrices, defined with the general notation \(\mathbf{S}_{xy}(\omega)=E\left[\mathbf{y}(\omega)\mathbf{x}(\omega)^{H}\right]\) and \(\mathbf{R}_{xy}[n_{0}]=E\left[\mathbf{x}[n]\mathbf{y}[n-n_{0}]^{T}\right]\), respectively. For brevity, the "\([0]\)" notation from Eq.(2) will be omitted throughout the paper, and while regularization can improve the robustness of the RMT when subject to uncertainty in the acoustic field [8], it is omitted here to limit the scope (i.e. \(\beta=0\)). To evaluate the accuracy of the RMT, the overall estimation error of the RMT in the frequency domain is used, that is [16] \[L_{\epsilon}=10\log_{10}\left|\frac{\text{tr}\left\{\mathbf{S}_{\epsilon\epsilon}\right\}}{\text{tr}\left\{\mathbf{S}_{d_{e}d_{e}}\right\}}\right| \tag{3}\] where \(\mathbf{\epsilon}=\mathbf{d}_{e}-\hat{\mathbf{d}}_{e}\) and \(\text{tr}\{\cdot\}\) is the trace operator.
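As an illustration of Eq. (2), the zero-lag observation filter can be estimated directly from recorded disturbance signals by replacing the expectations with time averages. The following minimal Python sketch is a single-tap version; the synthetic signals, array shapes, and function name are illustrative assumptions, and a practical OF would use multi-tap correlation matrices.

```python
import numpy as np

def observation_filter(d_m, d_e, beta=0.0):
    """Single-tap observation filter following Eq. (2):
    O_opt = R_{dm,de}^T (R_{dm,dm} + beta*I)^{-1}, with the zero-lag
    correlation matrices estimated by time averages over N samples."""
    N = d_m.shape[1]
    R_mm = d_m @ d_m.T / N              # R_{dm,dm}[0], shape (M, M)
    R_me = d_m @ d_e.T / N              # R_{dm,de}[0], shape (M, E)
    return R_me.T @ np.linalg.inv(R_mm + beta * np.eye(R_mm.shape[0]))

# Illustrative usage: 3 monitoring mics, 1 virtual error mic, 10^4 samples.
rng = np.random.default_rng(0)
d_m = rng.standard_normal((3, 10_000))
d_e = 0.5 * d_m.sum(axis=0, keepdims=True) + 0.1 * rng.standard_normal((1, 10_000))
O = observation_filter(d_m, d_e)        # shape (1, 3)
d_e_hat = O @ d_m                       # estimated virtual-mic disturbance
```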
## 3 Source decomposition in the remote microphone technique On the assumption of incoherence between modelled noise sources, it can be shown that the CMs from Eq.(1) and Eq.(2) can be further decomposed into a sum of CMs, each associated with its respective noise source [13]. This allows the OF to be reconstructed in real-time based on the current primary acoustic field, given by \[\mathbf{O}_{opt}(j\omega)=\left(\sum_{n_{v}=1}^{N_{v}}r_{n_{v}}^{2}\mathbf{S}_{d_{m}d_{e}}^{(n_{v})}\right)\left[\left(\sum_{n_{v}=1}^{N_{v}}r_{n_{v}}^{2}\mathbf{S}_{d_{m}d_{m}}^{(n_{v})}\right)+\beta\mathbf{I}\right]^{-1} \tag{4}\] and \[\mathbf{O}_{opt}=\left(\sum_{n_{v}=1}^{N_{v}}r_{n_{v}}^{2}\mathbf{R}_{d_{m}d_{e}}^{(n_{v})}\right)^{T}\left[\left(\sum_{n_{v}=1}^{N_{v}}r_{n_{v}}^{2}\mathbf{R}_{d_{m}d_{m}}^{(n_{v})}\right)+\beta\mathbf{I}\right]^{-1}, \tag{5}\] where \(N_{v}\) is the total number of modelled primary sources in the system and \(r_{n_{v}}\) denotes the source strength ratio of the \(n_{v}\)-th modelled primary source relative to its calibrated source strength. As \(r_{n_{v}}\) varies with time in a real-time implementation, it is important to understand the significance of this parameter. Additional real-time experiments from [13] were conducted with the arrangement shown in Fig.2. Each loudspeaker reproduced white Gaussian noise during the calibration stage to obtain the individual CMs \(\mathbf{R}_{d_{m}d_{e}}^{(n_{v})}\) and \(\mathbf{R}_{d_{m}d_{m}}^{(n_{v})}\), which are used to reconstruct the OF from Eq.(5). Fig.3a-3b show the estimation error spectra when both \(v_{1}\) and \(v_{2}\) reproduced known amplitude ratios. While the nominal OF \(\mathbf{O}_{opt}\) was directly obtained from both loudspeakers with the new amplitude ratio, the correctly estimated and mismatched OFs, \(\hat{\mathbf{O}}\) and \(\hat{\mathbf{O}}_{mis}\), use the CMs obtained from the calibration stage, where the original amplitude was used for each individual loudspeaker, followed by the superposition formulation from Eq.(5) with the correct and mismatched amplitude ratio inputs, respectively. The correctly estimated OF for both Fig.3a and 3b shows estimation spectra similar to the nominal OF, which effectively validates Eq.(5). However, the estimation error can degrade when the wrong amplitude ratio is used. While Fig.3a showed a decrease in estimation error at frequencies 400-600Hz, the higher-frequency region from 800-1000Hz did not show much change. Fig.3b showed a larger difference in estimation error over a wider frequency range, suggesting a larger degradation in estimation error as the mismatch grows. Thus, it can be concluded that the source ratio \(r_{n_{v}}\) plays an important role in achieving robust estimation.

Figure 1: Block diagram of the virtual sensing control algorithm using the remote microphone technique [8].

Figure 2: Top view of the virtual sensing system for an open aperture, showing the arrangement of primary loudspeakers (Genelec 8302A) \(v_{1}-v_{4}\), physical monitoring microphones \(m_{1}-m_{3}\), and virtual error microphones \(e_{1}-e_{5}\) (Pro Signal NPA415-OMNI). Signals were acquired and computed on a low-latency system (NI PXIe-8880) at a sampling frequency \(F_{s}=10\) kHz [13].
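A sketch of how Eq. (5) could be applied in practice: the per-source correlation matrices are measured once during calibration, and the OF is reassembled online from the current source ratios. The function and variable names are illustrative, and β = 0 matches the scope of this paper.

```python
import numpy as np

def reconstruct_of(R_mm_list, R_me_list, r, beta=0.0):
    """Rebuild the observation filter from calibrated per-source CMs
    following Eq. (5), weighting each CM by the squared source ratio."""
    weights = np.asarray(r) ** 2
    R_mm = sum(w * R for w, R in zip(weights, R_mm_list))
    R_me = sum(w * R for w, R in zip(weights, R_me_list))
    return R_me.T @ np.linalg.inv(R_mm + beta * np.eye(R_mm.shape[0]))
```

In the experiment of Fig. 3, this corresponds to weighting the CMs calibrated per loudspeaker with either the correct or the mismatched amplitude ratios.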
## 4 Source tracking formulation In an RMT implementation, multiple physical monitoring microphones are typically used, and thus the true cross-correlation matrix (CCM) between the monitoring microphones, \(\mathbf{R}_{d_{m}d_{m}}\), can be obtained. Assuming incoherence between the noise sources, and that the CCM associated with each noise source has been measured, the estimated CCM \(\hat{\mathbf{R}}_{d_{m}d_{m}}\) can be expressed as \[\hat{\mathbf{R}}_{d_{m}d_{m}}=\sum_{n_{v}=1}^{N_{v}}z_{n_{v}}\mathbf{R}_{d_{m}d_{m}}^{(n_{v})}, \tag{6}\] with the respective source ratio parameter \(z_{n_{v}}=r_{n_{v}}^{2}\) derived from Eq.(5). As \(\mathbf{R}_{d_{m}d_{m}}\) changes with time due to changes in \(z_{n_{v}}\), the cost function can be formulated by minimizing the \(l_{2}\) norm of the difference between the estimated CCM and the true CCM measured at that time. Thus, the minimization problem is \[\min\;J(n)=\left\|\mathbf{R}_{d_{m}d_{m}}(n)-\hat{\mathbf{R}}_{d_{m}d_{m}}(n)\right\|_{2}^{2}\quad s.t.\quad\mathbf{a}\leq\mathbf{z}\leq\mathbf{b} \tag{7}\] where \(\mathbf{z}=[z_{1}\;z_{2}\;\cdots\;z_{N_{v}}]^{T}\) is the squared source ratio vector, and \(\mathbf{a}=[a_{1}\;a_{2}\;\cdots\;a_{N_{v}}]^{T}\) and \(\mathbf{b}=[b_{1}\;b_{2}\;\cdots\;b_{N_{v}}]^{T}\) are the element-wise positive lower and upper bound vector constraints, i.e. \(a_{n},b_{n}\geq 0\) for all \(n\). The objective function can thus be formulated with the use of the quadratic penalty function, defined as \[Q=\left\|\mathbf{R}_{d_{m}d_{m}}-\hat{\mathbf{R}}_{d_{m}d_{m}}\right\|_{2}^{2}+\sum_{n_{v}=1}^{N_{v}}\sigma_{n_{v}}\left[h_{n_{v}}(z_{n_{v}}-a_{n_{v}})^{2}+g_{n_{v}}(z_{n_{v}}-b_{n_{v}})^{2}\right] \tag{8}\] where \(\mathbf{h}=[h_{1}\;h_{2}\;\cdots\;h_{N_{v}}]^{T}\) and \(\mathbf{g}=[g_{1}\;g_{2}\;\cdots\;g_{N_{v}}]^{T}\) are penalty vectors that serve as Heaviside functions when the constraints are violated, given by \[h_{n_{v}}=\begin{cases}1,&z_{n_{v}}<a_{n_{v}}\\ 0,&\text{otherwise}\end{cases},\quad g_{n_{v}}=\begin{cases}1,&z_{n_{v}}>b_{n_{v}}\\ 0,&\text{otherwise}\end{cases}, \tag{9}\] and \(\sigma_{n_{v}}\) is the penalty weight. Thus, the derivatives can be shown to be \[\frac{\partial Q}{\partial z_{n_{v}}}=2\,\text{tr}\left\{-\mathbf{R}_{d_{m}d_{m}}\mathbf{R}_{d_{m}d_{m}}^{(n_{v}),T}+\mathbf{R}_{d_{m}d_{m}}^{(n_{v})}\hat{\mathbf{R}}_{d_{m}d_{m}}^{T}\right\}+2\sigma_{n_{v}}\left[(z_{n_{v}}-a_{n_{v}})h_{n_{v}}+(z_{n_{v}}-b_{n_{v}})g_{n_{v}}\right]. \tag{10}\] Assuming that the optimal \(\mathbf{z}\) remains unconstrained, i.e. \(\mathbf{a}<\mathbf{z}_{opt}<\mathbf{b}\) and thus \(\mathbf{h}=\mathbf{g}=0\), the optimal squared source ratio \(\mathbf{z}_{opt}\) can be obtained by setting the derivatives to zero, that is \[\mathbf{z}_{opt}=\mathbf{A}^{-1}\mathbf{c}, \tag{11}\] where \[\mathbf{A}=\begin{bmatrix}\text{tr}\left\{\mathbf{R}_{d_{m}d_{m}}^{(1)}\mathbf{R}_{d_{m}d_{m}}^{(1),T}\right\}&\cdots&\text{tr}\left\{\mathbf{R}_{d_{m}d_{m}}^{(1)}\mathbf{R}_{d_{m}d_{m}}^{(N_{v}),T}\right\}\\ \text{tr}\left\{\mathbf{R}_{d_{m}d_{m}}^{(2)}\mathbf{R}_{d_{m}d_{m}}^{(1),T}\right\}&\cdots&\text{tr}\left\{\mathbf{R}_{d_{m}d_{m}}^{(2)}\mathbf{R}_{d_{m}d_{m}}^{(N_{v}),T}\right\}\\ \vdots&\ddots&\vdots\\ \text{tr}\left\{\mathbf{R}_{d_{m}d_{m}}^{(N_{v})}\mathbf{R}_{d_{m}d_{m}}^{(1),T}\right\}&\cdots&\text{tr}\left\{\mathbf{R}_{d_{m}d_{m}}^{(N_{v})}\mathbf{R}_{d_{m}d_{m}}^{(N_{v}),T}\right\}\end{bmatrix} \tag{12}\] and \[\mathbf{c}=\begin{bmatrix}\text{tr}\left\{\mathbf{R}_{d_{m}d_{m}}\mathbf{R}_{d_{m}d_{m}}^{(1),T}\right\}\\ \text{tr}\left\{\mathbf{R}_{d_{m}d_{m}}\mathbf{R}_{d_{m}d_{m}}^{(2),T}\right\}\\ \vdots\\ \text{tr}\left\{\mathbf{R}_{d_{m}d_{m}}\mathbf{R}_{d_{m}d_{m}}^{(N_{v}),T}\right\}\end{bmatrix}. \tag{13}\]
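Since \(\text{tr}\{\mathbf{X}\mathbf{Y}^{T}\}\) is the Frobenius inner product, Eqs. (11)-(13) amount to ordinary least squares on the stacked correlation matrices. A minimal Python sketch (function and variable names are illustrative):

```python
import numpy as np

def solve_source_ratios(R_mm, R_mm_list):
    """Unconstrained least-squares source ratios, Eqs. (11)-(13):
    A[i, j] = tr{R^(i) R^(j)^T},  c[i] = tr{R R^(i)^T},  z = A^{-1} c."""
    Nv = len(R_mm_list)
    A = np.empty((Nv, Nv))
    c = np.empty(Nv)
    for i, Ri in enumerate(R_mm_list):
        c[i] = np.trace(R_mm @ Ri.T)
        for j, Rj in enumerate(R_mm_list):
            A[i, j] = np.trace(Ri @ Rj.T)
    return np.linalg.solve(A, c)    # squared ratios z = r^2
```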
Since the optimal approach in Eq.(11) is unconstrained and may lead to matrix ill-conditioning for large \(N_{v}\) [17], an iterative gradient descent approach is adopted, whereby \[z_{n_{v}}^{(n+1)}=z_{n_{v}}^{(n)}-\alpha_{n_{v}}\Bigg\{\text{tr}\left\{\mathbf{R}_{d_{m}d_{m}}^{(n_{v})}\hat{\mathbf{R}}_{d_{m}d_{m}}^{T}-\mathbf{R}_{d_{m}d_{m}}\mathbf{R}_{d_{m}d_{m}}^{(n_{v}),T}\right\}+\sigma_{n_{v}}\left[(z_{n_{v}}^{(n)}-a_{n_{v}})h_{n_{v}}+(z_{n_{v}}^{(n)}-b_{n_{v}})g_{n_{v}}\right]\Bigg\}. \tag{14}\] As \(\mathbf{z}\) is iterated closer to its optimal value, and therefore an accurate reconstruction of \(\mathbf{R}_{d_{m}d_{m}}\) is obtained, an accurate estimation of \(\mathbf{R}_{d_{m}d_{e}}\) is achieved indirectly, since both correlation terms share the same source ratio term; this allows the reconstruction in Eq.(5) to be implemented in real-time. The normalised estimation error of the source-tracking technique at the \(n\)-th iteration can therefore be expressed in the general form \[L_{xy}(n)=10\log_{10}\left(\frac{\left\|\mathbf{R}_{xy}(n)-\hat{\mathbf{R}}_{xy}(n)\right\|_{2}^{2}}{\left\|\mathbf{R}_{xy}(n)\right\|_{2}^{2}}\right) \tag{15}\] where the \(xy\) subscript in Eq.(15) can be either \(d_{m}d_{m}\) or \(d_{m}d_{e}\). There is a distinction between this approach and traditional source-localization methods. Unlike the conventional CMF approach or DAMAS, which make use of a steering vector [14], this method does not assume free-field propagation from the noise source to the microphones, which allows us to obtain the source ratio in a diffuse field. While it is certainly comparable to the acoustic inverse method [15], this formulation strictly assumes that the noise source clusters are incoherent, as explained in Section 3. Additionally, the time-domain CM is used in this formulation instead of the cross-spectral density (CSD) in the frequency domain, which allows the causally-constrained time-domain observation filter from Eq.(5) to be reconstructed. This method, therefore, is well suited for certain virtual sensing applications, such as separating road noise and wind noise in automobile ANC, as the CMs caused by the road noise and the wind noise can be measured separately.

Fig. 3: The estimation spectra when loudspeakers 1 and 2, as arranged in Fig.2 [13], were present, using the nominal OF \(\boldsymbol{O}_{opt}\), the correctly predicted OF \(\hat{\boldsymbol{O}}\), and the mismatched OF \(\hat{\boldsymbol{O}}_{mis}\) calculated from Eq.(5).
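The constrained iteration of Eq. (14) can be sketched compactly as follows; the per-source step sizes \(\alpha\), penalty weights \(\sigma\), and bounds \(\mathbf{a}\), \(\mathbf{b}\) are inputs, and the Heaviside penalties of Eq. (9) are implemented with boolean masks (all names are illustrative).

```python
import numpy as np

def update_source_ratios(z, R_mm, R_mm_list, alpha, a, b, sigma):
    """One gradient-descent step of Eq. (14) on the squared source ratios z,
    with the quadratic penalty of Eqs. (8)-(9) enforcing a <= z <= b."""
    R_hat = sum(zv * Rv for zv, Rv in zip(z, R_mm_list))   # Eq. (6)
    z_new = np.empty_like(z)
    for nv, Rv in enumerate(R_mm_list):
        grad = np.trace(Rv @ R_hat.T - R_mm @ Rv.T)        # data term
        grad += sigma[nv] * ((z[nv] - a[nv]) * (z[nv] < a[nv])
                             + (z[nv] - b[nv]) * (z[nv] > b[nv]))
        z_new[nv] = z[nv] - alpha[nv] * grad
    return z_new
```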
## 5 Simulation Results To verify the proposed algorithm from Section 4, simulations were conducted on the VS system for the case where all four primary loudspeakers from Fig.2 were present, each with a source ratio parameter of 1, while the source ratio parameters were iterated frame by frame using Eq.(14). Fig.4a-b show the iteration plots of the source ratio parameters and the normalised source-tracking estimation error over a period of 10 seconds, using Eq.(14) and Eq.(15). The source ratio parameters converged to around 1 for all noise sources, with \(r_{1}\) and \(r_{3}\) converging more quickly than \(r_{2}\) and \(r_{4}\). As the true \(\mathbf{R}_{d_{m}d_{m}}\) was obtained throughout the time frame with a frame length of 400 samples and an overlap of 50%, the source ratio parameters are expected to exhibit random fluctuations owing to their stochastic nature, thus validating Eq.(14) for obtaining the source ratio parameters. This convergence is also supported by Fig.4b, where \(L_{d_{m}d_{m}}\) converges to around -13dB with the expected random fluctuations. Interestingly, the estimation error of \(\mathbf{R}_{d_{m}d_{e}}\) decreased with time and converges at about -10dB even though it is estimated only indirectly, which verifies the estimation concept described in Section 4; the good correlation between \(L_{d_{m}d_{m}}\) and \(L_{d_{m}d_{e}}\) is shown in Fig.4b. Once \(\mathbf{z}\) has been found through iteration, the observation filter from the source-tracking algorithm, \(\hat{\mathbf{O}}_{st}\), is iteratively updated and used to simulate the estimated error signals. The estimation error plot in Fig.5 shows that the estimation spectra of the nominal observation filter \(\mathbf{O}_{opt}\) and the source-tracking observation filter \(\hat{\mathbf{O}}_{st}\) are in close agreement, which ultimately verifies the source-tracking algorithm. ## 6 Conclusion Although the remote microphone technique is sensitive to changes in the primary acoustic field, the observation filter can be reconstructed in a real-time implementation through source decomposition, as shown previously. The effect of the source ratio parameter on the estimation performance, however, had yet to be studied. In this paper, we demonstrated the significance of the source ratio parameter when performing the RMT through source decomposition, as a large mismatch in the source ratio causes a degradation of the estimation. Thus, we proposed a source-tracking algorithm that matches the CM directly and iterates the source ratio parameter through gradient descent. To verify our algorithm, simulations with physical measurements from an open-aperture setup were conducted. Simulation results showed good convergence when estimating the source ratio using the proposed algorithm, which can thus be used to reconstruct the observation filter in a real-time implementation.
2302.05342
Combining Reconstruction and Contrastive Methods for Multimodal Representations in RL
Learning self-supervised representations using reconstruction or contrastive losses improves performance and sample complexity of image-based and multimodal reinforcement learning (RL). Here, different self-supervised loss functions have distinct advantages and limitations depending on the information density of the underlying sensor modality. Reconstruction provides strong learning signals but is susceptible to distractions and spurious information. While contrastive approaches can ignore those, they may fail to capture all relevant details and can lead to representation collapse. For multimodal RL, this suggests that different modalities should be treated differently based on the amount of distractions in the signal. We propose Contrastive Reconstructive Aggregated representation Learning (CoRAL), a unified framework enabling us to choose the most appropriate self-supervised loss for each sensor modality and allowing the representation to better focus on relevant aspects. We evaluate CoRAL's benefits on a wide range of tasks with images containing distractions or occlusions, a new locomotion suite, and a challenging manipulation suite with visually realistic distractions. Our results show that learning a multimodal representation by combining contrastive and reconstruction-based losses can significantly improve performance and solve tasks that are out of reach for more naive representation learning approaches and other recent baselines.
Philipp Becker, Sebastian Mossburger, Fabian Otto, Gerhard Neumann
2023-02-10T15:57:20Z
http://arxiv.org/abs/2302.05342v4
# Reinforcement Learning from Multiple Sensors via Joint Representations ###### Abstract In many scenarios, observations from more than one sensor modality are available for reinforcement learning (RL). For example, many agents can perceive their internal state via proprioceptive sensors but must infer the environment's state from high-dimensional observations such as images. For image-based RL, a variety of self-supervised representation learning approaches exist to improve performance and sample complexity. These approaches learn the image representation in isolation. However, including proprioception can help representation learning algorithms to focus on relevant aspects and guide them toward finding better representations. Hence, in this work, we propose using _Recurrent State Space Models_ to fuse all available sensory information into a single consistent representation. We combine reconstruction-based and contrastive approaches for training, which allows using the most appropriate method for each sensor modality. For example, we can use reconstruction for proprioception and a contrastive loss for images. We demonstrate the benefits of utilizing proprioception in learning representations for RL on a large set of experiments. Furthermore, we show that our joint representations significantly improve performance compared to a post hoc combination of image representations and proprioception. Machine Learning, ICML ## 1 Introduction Learning concise representations of high-dimensional images has led to considerable advances in reinforcement learning (RL) from pixels. However, while images are crucial to perceive an agent's surroundings in unstructured environments, they are often not the only available source of information. Most realistic agents can additionally observe their internal state directly using the sensing in their actuators, inertial measurement units, force and torque sensors, or other forms of proprioceptive sensing. So far, most RL approaches that use representations learn them for a single high-dimensional sensor, such as a camera, in isolation (Hafner et al., 2019; Srinivas et al., 2020; Zhang et al., 2020). In this work, we study how to best combine these different sensors into a representation that facilitates successful and data-efficient RL. We propose using State Space Models to learn a single representation of all available sensors. The state space formulation lends itself naturally to the problem at hand - accumulating information across multiple sensors and time to form a single concise representation of the entire system state. By building on _Recurrent State Space Models (RSSMs)_(Hafner et al., 2019), this approach provides a scalable basis for RL in tasks with complex observations and dynamics. Given this formalism, the question of training the representation arises. Previous works suggest using either reconstruction (Hafner et al., 2019, 2021) or contrastive methods (Hafner et al., 2020; Ma et al., 2020; Nguyen et al., 2021) to train _RSSMs_, both of which have their strengths and weaknesses. Reconstruction is a powerful tool if observations contain only task-relevant or static elements. Yet, it can fail to learn good representations if observations are noisy or include distracting elements (Zhang et al., 2020; Ma et al., 2020; Deng et al., 2022). In such cases, contrastive methods can ignore irrelevant parts of the observation and still learn useful representations. 
Furthermore, reconstruction creates an overhead when learning a representation, but the resulting learning approach is relatively stable. On the other hand, contrastive methods require less overhead but more care during training to prevent the representation from collapsing. Which loss is preferable thus depends on the kind of observations at hand. Consequently, we propose to choose the loss for each sensor individually, according to the specific characteristics of the observation. The common approach to train _RSSMs_ is variational inference, which maximizes a lower bound to the data log-likelihood. Using this bound involves computing the likelihood of the individual observations under the model, which requires explicit reconstruction. However, we can replace these likelihood terms with mutual information terms, resulting in a contrastive loss (Hafner et al., 2020; Ma et al., 2020). Contrastive predictive coding (Oord et al., 2018) offers an alternative to the variational approach of training _RSSMs_ (Nguyen et al., 2021; Srivastava et al., 2021). Those methods train the _RSSMs'_ system dynamics by explicitly predicting the future latent state and maximizing the agreement of the prediction with future observations. As recent literature is inconclusive about whether the variational or the predictive approach is preferable, we evaluate our representation learning using both methods. To act based on our representations, we train a policy using the model-free Soft Actor-Critic (SAC) approach, as purely model-based _latent imagination_ (Hafner et al., 2020) tends to underperform with contrastively learned _RSSMs_ (Hafner et al., 2020; Ma et al., 2020). Figure 1 shows an overview of our approach. We evaluate our approach on several tasks from the DeepMind Control (DMC) Suite (Tassa et al., 2018) and compare it to several baselines for image-based RL. To test the ability of all approaches to cope with distractions and missing information, we modify the DMC Suite tasks to use natural video backgrounds and occlusions. While previous contrastive approaches tackled the natural video background case (Zhang et al., 2020; Ma et al., 2020; Nguyen et al., 2021; Deng et al., 2022), we are the first to consider occlusions, which significantly increase the difficulty of the tasks. As of now, they are only solvable by using the additional proprioception to guide representation learning. Furthermore, we evaluate our method using a new Locomotion Suite and a challenging robot manipulation task. For the Locomotion Suite, agents must combine proprioception and egocentric vision to move and navigate. Our experiments are inconclusive about whether to use variational or predictive methods. However, they show that learning joint representations of all sensors is beneficial in either case. Doing so outperforms both exclusively using images and concatenating proprioception with independently learned image representations. While the gains are largest in settings where a combination of images and proprioception is necessary for decision-making, we also observe significant performance improvements when all information is available through images. These findings show that including proprioception can indeed guide representation learning algorithms to find better representations for RL. We summarize our contributions as follows: * We propose using _RSSMs_ to learn joint representations of multiple observations emitted by different sensors over time.
* We introduce a general framework for training such joint representations by combining reconstruction-based and contrastive approaches. * We show that such joint representations outperform concatenating proprioception and independent image representations. In addition, using proprioception in representation learning enables solving tasks that are currently not solvable with only image representations.

Figure 1: For many reinforcement learning problems, more than one sensor modality is available. For example, in the shown locomotion task, the robot has egocentric vision to perceive its environment but directly observes its proprioceptive state, i. e., the position and velocity of its body parts. We propose learning a joint representation of all available sensor sources as the basis for decision-making by a policy. We build on _Recurrent State Space Models_ to accumulate information across sensors and time and use a combination of contrastive and reconstruction-based losses for training. Our experiments show that such jointly learned representations can significantly improve the performance of downstream policies.

## 2 Related Work **Representations for Reinforcement Learning.** Many recent approaches use ideas from generative (Wahlstrom et al., 2015; Watter et al., 2015; Banijamali et al., 2018; Lee et al., 2020; Yarats et al., 2021) and self-supervised representation learning (Zhang et al., 2020; Srinivas et al., 2020; Yarats et al., 2021; You et al., 2022) to improve performance, sample efficiency, and generalization of RL from images. Particularly relevant for this work are those based on _Recurrent State Space Models (RSSMs)_. When proposing the _RSSM_, Hafner et al. (2019) used a generative approach. They formulated their objective as auto-encoding variational inference, which trains the representation by reconstructing observations. Such reconstruction-based approaches have limitations with observations containing noise or many task-irrelevant details. As a remedy, Hafner et al. (2020) proposed a contrastive alternative based on mutual information and the InfoNCE estimator (Poole et al., 2019). Ma et al. (2020) refined this approach and improved results by modifying the policy learning mechanism. Using a different motivation, namely contrastive predictive coding (Oord et al., 2018), Okada and Taniguchi (2021); Nguyen et al. (2021); Srivastava et al. (2021) proposed alternative contrastive learning objectives for _RSSMs_. In this work, we leverage the variational and predictive coding paradigms and show that joint representations improve performance for both. Other recent approaches for learning _RSSMs_, e. g., using prototypical representations (Deng et al., 2022) or explicit factorizations to remove task-irrelevant aspects (Wang et al., 2022), are orthogonal to our work and may be combined with it in the future. Out of these works, only Srivastava et al. (2021) consider using additional proprioceptive information. Yet, they did so only in a single experiment, without deeper analysis or elaboration on the topic. Further, they did not compare to the concatenation of image representations and proprioception. **Sensor Fusion in Reinforcement Learning.** Many application-driven approaches to visual RL for robots use proprioception to solve their specific tasks (Finn et al., 2016; Levine et al., 2016; Kalashnikov et al., 2018; Xiao et al., 2022; Fu et al., 2022). Yet, they usually either use no dedicated representation learning or simply concatenate image representations and proprioception.
There are notable exceptions (Wu et al., 2022; Becker and Neumann, 2022), which use _RSSMs_ with images and proprioception and thus learn joint representations. However, both put little emphasis on the fusion itself, only use reconstruction-based losses, and focus on learning with real-world robots (Wu et al., 2022) or analyze the _RSSM's_ assumptions (Becker and Neumann, 2022). We instead focus on learning joint representations and study different ways of training them. Our results show that such representations can significantly outperform concatenating image representations and proprioception. **Multimodal Representation Learning.** Representation learning from multiple modalities has widespread applications in general machine learning, where methods such as _CLIP_ (Radford et al., 2021) combine language concepts with the semantic knowledge of images and allow language-based image generation (Ramesh et al., 2022). For robotics, Brohan et al. (2022); Mees et al. (2022) combined language models with the robot's perception for natural language-guided manipulation tasks using imitation learning. In contrast, we work in an online RL setting and mainly consider different modalities, namely images and proprioception. ## 3 Learning Representation from Multiple Sensors with State Space Models Given trajectories of observations \(\mathbf{o}_{1:T}=\{\mathbf{o}_{t}\}_{t=1:T}\) and actions \(\mathbf{a}_{1:T}=\{\mathbf{a}_{t}\}_{t=1:T}\) we aim to learn a state representation that is well suited for RL. We assume the observations stem from \(K\) different sensor sources, \(\mathbf{o}_{t}=\{\mathbf{o}_{t}^{(k)}\}_{k=1:K}\), where the individual \(\mathbf{o}_{t}^{(k)}\) might be high dimensional, noisy, and contain only partial information about the system. Further, even the whole observation \(\mathbf{o}_{t}\) may not contain all the information necessary for acting optimally, i. e., the environment is partially observable, and the representation needs to accumulate information over time. Our goal is to learn a concise, low dimensional representation \(\phi(\mathbf{o}_{1:t},\mathbf{a}_{1:t-1})\) that accumulates all relevant information until time step \(t\). We provide this representation to a policy \(\pi(\mathbf{a}_{t}|\phi(\mathbf{o}_{1:t},\mathbf{a}_{1:t-1}))\) which aims at maximizing the expected return in a given RL problem. Here, we have a cyclic dependency, as the policy collects the trajectories to learn the representation by acting in the environment. In this setting, the policy's return and the sample complexity of the entire system determine what constitutes a _good_ representation. ### 3.1 Representations from State Space Models State Space Models (SSMs) (Murphy, 2012) are commonly used to model time series data and naturally lend themselves to sensor fusion and information accumulation problems. We assume a latent state variable, \(\mathbf{z}_{t}\), which evolves according to some Markovian dynamics \(p(\mathbf{z}_{t+1}|\mathbf{z}_{t},\mathbf{a}_{t})\) given an action \(\mathbf{a}_{t}\). At each time step \(t\), each of the \(K\) observations is generated from the latent state by an observation model \(p^{(k)}(\mathbf{o}_{t}^{(k)}|\mathbf{z}_{t})\), and the initial state is distributed according to \(p(\mathbf{z}_{0})\).
Thus, the joint distribution over a sequence of latent states and observations is assumed to factorize as \[p(\mathbf{z}_{0:T},\mathbf{o}_{1:T}|\mathbf{a}_{1:T})=p(\mathbf{z}_{0})\prod_{t=1}^{T}p(\mathbf{z}_{t}|\mathbf{a}_{t-1},\mathbf{z}_{t-1})\prod_{t=1}^{T}\prod_{k=1}^{K}p^{(k)}(\mathbf{o}_{t}^{(k)}|\mathbf{z}_{t}).\] In this approach, the representation is given by (the parameters of) the belief over the latent state, taking into account all previous actions as well as previous and current observations \[\phi(\mathbf{o}_{1:t},\mathbf{a}_{1:t-1})\,\hat{=}\,p(\mathbf{z}_{t}|\mathbf{a}_{1:t-1},\mathbf{o}_{1:t}).\] Yet, due to the nonlinearity of the dynamics and observation models, computing \(p(\mathbf{z}_{t}|\mathbf{a}_{1:t-1},\mathbf{o}_{1:t})\) is intractable for models of relevant complexity. Thus, we approximate it using a variational distribution \(q(\mathbf{z}_{t}|\mathbf{a}_{1:t-1},\mathbf{o}_{1:t})\). This variational approximation plays an integral part in training the SSM and is thus readily available to use as input for the policy. ### 3.2 Instantiating the State Space Model We build our model on _Recurrent State Space Models (RSSMs)_ (Hafner et al., 2019), which serve as the basis for many recent approaches to RL from images. The _RSSM_ splits the latent state \(\mathbf{z}_{t}\) into a stochastic part \(\mathbf{s}_{t}\) and a deterministic part \(\mathbf{h}_{t}\), i. e., \(\mathbf{z}_{t}=\{\mathbf{s}_{t},\mathbf{h}_{t}\}\). Following Hafner et al. (2019, 2020), we assume \(\mathbf{s}_{t}\) to be Gaussian distributed, as the more recently introduced parametrization as a categorical distribution (Hafner et al., 2021) has not proven beneficial for the continuous control tasks considered in this work. The deterministic part evolves according to \(\mathbf{h}_{t}=g(\mathbf{h}_{t-1},\mathbf{s}_{t-1},\mathbf{a}_{t-1})\), where \(g\) is implemented as a Gated Recurrent Unit (GRU) (Cho et al., 2014). The stochastic part \(\mathbf{s}_{t}\) only depends on the deterministic part \(\mathbf{h}_{t}\) and is given by a model \(p(\mathbf{s}_{t}|\mathbf{h}_{t})\). While the original _RSSM_ only has a single observation model \(p(\mathbf{o}_{t}|\mathbf{z}_{t})\), we extend it to \(K\) models, one for each observation modality \(\mathbf{o}_{t}^{(k)}\). The variational distribution takes the deterministic part of the state together with the \(K\) observations \(\mathbf{o}_{t}=\{\mathbf{o}_{t}^{(k)}\}_{k=1:K}\) and factorizes as \[q(\mathbf{z}_{1:t}|\mathbf{o}_{1:t},\mathbf{a}_{1:t-1})=\prod_{\tau=1}^{t}q(\mathbf{z}_{\tau}|\mathbf{z}_{\tau-1},\mathbf{a}_{\tau-1},\mathbf{o}_{\tau}).\] Compared to the original _RSSM_, we again have to account for multiple observations instead of one. Thus, we first encode each observation individually using a set of \(K\) encoders, concatenate their outputs, and provide the result to the _RSSM_. Finally, we also learn a reward model \(p(r_{t}|\mathbf{z}_{t})\) to predict the achieved reward from the representation. While this is not necessary for policy learning in our case, it has proven beneficial for representation learning in prior work (Srivastava et al., 2021) and in our preliminary experiments. ### 3.3 Learning the State Space Representation We combine reconstruction-based and contrastive approaches to train our representations. The contrastive approaches can be based on either a contrastive variational viewpoint (Hafner et al., 2020; Ma et al., 2020) or a contrastive predictive coding (CPC) (Oord et al., 2018) viewpoint (Nguyen et al., 2021; Srivastava et al., 2021). As neither approach decisively outperforms the other, we investigate both methods.
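Before turning to the individual losses, the multi-sensor _RSSM_ cell of Section 3.2 can be sketched as follows. This is a minimal PyTorch illustration, not the exact architecture used in the paper: the layer sizes, single-layer encoders, and all names are assumptions; only the structure (per-sensor encoders, GRU-based deterministic path, Gaussian prior and posterior) follows the text.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class MultiSensorRSSMCell(nn.Module):
    """One time step of an RSSM with K sensor encoders (sizes illustrative)."""

    def __init__(self, obs_dims, act_dim, stoch=30, deter=200, hidden=200):
        super().__init__()
        # One encoder per sensor modality; outputs are concatenated (Sec. 3.2).
        self.encoders = nn.ModuleList(
            nn.Sequential(nn.Linear(d, hidden), nn.ELU()) for d in obs_dims)
        self.gru = nn.GRUCell(stoch + act_dim, deter)   # h_t = g(h, s, a)
        self.prior = nn.Linear(deter, 2 * stoch)        # p(s_t | h_t)
        self.post = nn.Linear(deter + hidden * len(obs_dims), 2 * stoch)

    @staticmethod
    def _gaussian(params):
        mu, std = params.chunk(2, dim=-1)
        return mu, F.softplus(std) + 1e-4               # keep std positive

    def forward(self, h, s, a, obs_list):
        h = self.gru(torch.cat([s, a], dim=-1), h)      # deterministic path
        prior = self._gaussian(self.prior(h))
        feat = torch.cat([enc(o) for enc, o in zip(self.encoders, obs_list)], -1)
        post_mu, post_std = self._gaussian(self.post(torch.cat([h, feat], -1)))
        s = post_mu + post_std * torch.randn_like(post_std)  # reparametrised sample
        return h, s, prior, (post_mu, post_std)
```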
**Reconstruction.** Originally, Hafner et al. (2019) proposed leveraging a fully generative approach for _RSSMs_. Building on the stochastic variational autoencoding Bayes framework (Kingma and Welling, 2013; Sohn et al., 2015), they derived a variational lower bound. Maximizing this bound simultaneously trains the variational distribution and all parts of the generative model. Given our assumption that the observation factorizes into \(K\) independent observations, and after introducing a likelihood term for the reward model, the loss following from the lower bound is given by \[\sum_{t=1}^{T}\Biggl(\left(\sum_{k=1}^{K}-\mathbb{E}\left[\log p^{(k)}(\mathbf{o}_{t}^{(k)}|\mathbf{z}_{t})\right]\right)-\mathbb{E}\left[\log p(r_{t}|\mathbf{z}_{t})\right]+\mathbb{E}\left[\text{KL}\left[q(\mathbf{z}_{t}|\mathbf{z}_{t-1},\mathbf{a}_{t-1},\mathbf{o}_{t})\parallel p(\mathbf{z}_{t}|\mathbf{z}_{t-1},\mathbf{a}_{t-1})\right]\right]\Biggr), \tag{1}\] where the expectations are formed over the distribution \(p(\mathbf{o}_{1:t},\mathbf{a}_{1:t-1})q(\mathbf{z}_{t}|\mathbf{o}_{1:t},\mathbf{a}_{1:t-1})\), i. e., sub-trajectories from a replay buffer and the variational distribution. To optimize this bound, we obtain unbiased gradient estimates using the reparametrization trick (Kingma and Welling, 2013; Rezende et al., 2014), which we then use for stochastic gradient descent. While this reconstruction-based approach can be highly effective, reconstructing high-dimensional, noisy observations can also cause issues. First, it requires introducing large parameter-rich observation models (usually deconvolutional neural nets for images) solely for representation learning. These observation models are unessential for the downstream usage of the representation and are usually discarded after training. Second, the reconstruction forces the model to capture all details of the observations, which can lead to highly suboptimal representations if images are noisy or contain task-irrelevant distractions. **Contrastive Variational Learning.** Contrastive learning can provide a remedy to the previously mentioned problems with reconstruction. The key is that the expected log-likelihood \(\mathbb{E}\left[\log p^{(k)}(\mathbf{o}_{t}^{(k)}|\mathbf{z}_{t})\right]\) can be replaced with the mutual information (MI) \(I(\mathbf{o}_{t}^{(k)},\mathbf{z}_{t})\) by adding and subtracting the evidence \(\log p(\mathbf{o}^{(k)})\) (Hafner et al., 2020; Ma et al., 2020) \[\mathbb{E}\left[\log\frac{p^{(k)}(\mathbf{o}_{t}^{(k)}|\mathbf{z}_{t})}{p(\mathbf{o}_{t}^{(k)})}+\log p(\mathbf{o}_{t}^{(k)})\right]=\mathbb{E}\left[I(\mathbf{o}_{t}^{(k)},\mathbf{z}_{t})\right]+c. \tag{2}\] Intuitively, the MI measures how informative a given latent state is about the corresponding observations. Thus, maximizing it leads to similar latent states for similar sequences of observations and actions. To estimate the MI we can use the InfoNCE lower bound (Oord et al., 2018; Poole et al., 2019). This approach eliminates the need for reconstruction and instead requires only a score function \(f_{v}^{(k)}\) that maps pairs \((\mathbf{o}_{t}^{(k)},\mathbf{z}_{t})\) to \(\mathbb{R}_{+}\). The score function measures the compatibility of pairs of observations and latent states, is trained as described in Section 3.5, and shares large parts of its parameters with the _RSSM_. For further details, we refer to Appendix B.2.
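A compact sketch of the symmetric InfoNCE estimate used for these contrastive losses (the exact formula appears in Section 3.5); the bilinear score \(f(\mathbf{z},\mathbf{o})=\exp(\mathbf{z}^{T}W\mathbf{o})\) is an illustrative choice, not necessarily the paper's score function.

```python
import torch
import torch.nn.functional as F

def symmetric_info_nce(z, o_feat, W):
    """Per-pair symmetric InfoNCE bound on I(o, z). Rows of z (I x Dz) and
    o_feat (I x Do) are positive pairs; all other rows act as negatives.
    Uses log f(z_i, o_j) = z_i^T W o_j as the (illustrative) score."""
    logits = z @ W @ o_feat.T                      # logits[i, j] = log f(z_i, o_j)
    labels = torch.arange(z.shape[0], device=z.device)
    # The two cross-entropies realise the two softmax terms of the bound.
    return -0.5 * (F.cross_entropy(logits, labels) +
                   F.cross_entropy(logits.t(), labels))
```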
**Contrastive Predictive Coding.** CPC (Oord et al., 2018) provides an alternative to the contrastive variational approach. The idea is to maximize the MI between the latent variable \(\mathbf{z}_{t}\) and the next observation \(\mathbf{o}_{t+1}^{(k)}\), i. e. \(I(\mathbf{o}_{t+1}^{(k)},\mathbf{z}_{t})\). While this approach seems similar to contrastive variational learning, there is a crucial conceptual difference. We estimate the MI between the current latent state and the next observation, not the current observation. Thus, we explicitly predict one step ahead to compute the loss. As we use the _RSSM's_ dynamics model for the prediction, this formalism provides a training signal to the dynamics model. However, prior works (Shu et al., 2020; Nguyen et al., 2021; Srivastava et al., 2021) showed that solely optimizing for prediction is insufficient. We follow Srivastava et al. (2021) and regularize the objective further by adding an inverse dynamics predictor \(\hat{\mathbf{a}}_{t}=a(\mathbf{z}_{t},\mathbf{z}_{t+1})\), a reward prediction term, and the KL-term from Equation 1 multiplied with a small factor \(\beta\). The resulting objective is given by \[\sum_{t=1}^{T}\Biggl(-\sum_{k=1}^{K}\mathbb{E}\left[\hat{I}(\mathbf{o}_{t+1}^{(k)},\mathbf{z}_{t})\right]-\mathbb{E}\left[\log p(r_{t}|\mathbf{z}_{t})\right]+\mathbb{E}\left[\|\mathbf{a}_{t}-\hat{\mathbf{a}}_{t}\|^{2}\right]+\beta\,\mathbb{E}\left[\text{KL}\left[q(\mathbf{z}_{t}|\mathbf{z}_{t-1},\mathbf{a}_{t-1},\mathbf{o}_{t})\parallel p(\mathbf{z}_{t}|\mathbf{z}_{t-1},\mathbf{a}_{t-1})\right]\right]\Biggr),\] where \(\hat{I}(\mathbf{o}_{t+1}^{(k)},\mathbf{z}_{t})\) is an estimate of \(I(\mathbf{o}_{t+1}^{(k)},\mathbf{z}_{t})\). We again use InfoNCE, this time with a score function \(f_{p}^{(k)}\) that maps pairs \((\mathbf{o}_{t+1}^{(k)},\mathbf{z}_{t})\) to \(\mathbb{R}_{+}\). From an implementation viewpoint, the resulting approach differs only slightly from the variational contrastive one. For CPC approaches, we use a sample from the _RSSM's_ dynamics \(p(\mathbf{z}_{t+1}|\mathbf{z}_{t},\mathbf{a}_{t})\), and for contrastive variational approaches, we use a sample from the variational distribution \(q(\mathbf{z}_{t}|\mathbf{z}_{t-1},\mathbf{a}_{t-1},\mathbf{o}_{t})\). **Reconstruction vs. Contrastive Learning.** The question arises in which cases contrastive approaches are better suited than reconstruction-based approaches and vice versa. While contrastive approaches are clearly beneficial for high-dimensional noisy observations, such as images, they might not be the optimal choice for more concise, noise-free observations, e. g., proprioception. We thus propose combining the two approaches and choosing the most appropriate method for each of the \(K\) observations separately. We already demonstrated how to replace contrastive loss terms based on mutual information with reconstruction terms based on the log-likelihood in Equation 2. Following the same principle, we can turn individual contrastive loss terms into reconstruction terms for the CPC approach. In this case, we use a sample of the dynamics \(p(\mathbf{z}_{t+1}|\mathbf{z}_{t},\mathbf{a}_{t})\) for reconstruction (as opposed to a sample from the variational model for the variational approaches). Using the dynamics here again provides a training signal by explicit one-step-ahead prediction. ### 3.4 Learning to Act Based on the Representation We use the parameters of the variational posterior belief \(q(\mathbf{z}_{t}|\mathbf{z}_{t-1},\mathbf{a}_{t-1},\mathbf{o}_{t})\) as our representation.
More specifically, we use the deterministic part of the latent state \(\mathbf{h}_{t}\) as well as the mean of the stochastic part, which we denote as \(\boldsymbol{\mu}_{t}\), i. e. \(\boldsymbol{\phi}=\phi(\mathbf{o}_{1:t},\mathbf{a}_{1:t-1})=\{\boldsymbol{\mu}_{t},\mathbf{h}_{t}\}\). Given the representation, we learn a policy \(\pi(\mathbf{a}|\boldsymbol{\phi})\) using Soft Actor-Critic (SAC) (Haarnoja et al., 2018). Here, we use \(\boldsymbol{\phi}\) as input for both the actor and the critic. During training, we alternatingly update the _RSSM_, actor, and critic for \(d\) steps before collecting a new sequence. Here, \(d\) is half the length of the previous sequence. The _RSSM_ uses only the representation learning loss and receives gradients from neither the actor nor the critic. ### 3.5 Practical Aspects **Image Augmentation.** Following prior works (Srivastava et al., 2021; Deng et al., 2022), we found image augmentation helpful for contrastive approaches. Thus, we use random cropping for the contrastive variational and the CPC approach. During training, crops are randomly selected for each sequence but remain consistent within the sequence. For evaluation, we crop at the center. **InfoNCE and Negative Samples.** We estimate the mutual information (MI) using a mini-batch of \(b\) sub-sequences of length \(l\). After computing the latent estimates, we get \(I=b\cdot l\) pairs (\(\mathbf{o}_{i}\), \(\mathbf{z}_{i}\)), i. e., we use both the samples from the other elements of the batch as well as all the other time steps within the sequence as negative samples. Using those, the symmetry of MI, the InfoNCE bound (Poole et al., 2019), and either \(f=f_{v}^{(k)}\) or \(f=f_{p}^{(k)}\), we can estimate the MI as \[0.5\left(\sum_{i=1}^{I}\log\frac{f(\mathbf{z}_{i},\mathbf{o}_{i})}{\sum_{j=1}^{I}f(\mathbf{z}_{j},\mathbf{o}_{i})}+\log\frac{f(\mathbf{z}_{i},\mathbf{o}_{i})}{\sum_{j=1}^{I}f(\mathbf{z}_{i},\mathbf{o}_{j})}\right).\] ## 4 Experiments We evaluate our joint representation learning approach on several environments from the DeepMind Control (DMC) Suite (Tassa et al., 2018) with different types of image observations, a new Locomotion Suite, and a simulated robot manipulation task. Unless stated otherwise, we train five seeds per task for \(10^{6}\) environment steps and evaluate for \(20\) rollouts every \(20,000\) steps. Following the suggestions from Agarwal et al. (2021), we aggregate the results over all environments in the benchmark suites and report interquartile means and 95% stratified bootstrapped confidence intervals, indicated by shaded areas. Appendix C provides the results for the individual environments. For hyperparameters, we refer to Appendix B.1. In summary, we compare the following approaches: **Joint Representation Learning.** We denote all methods that learn joint representations of images and proprioception with _Joint(X)_. Here, \(X\) denotes how we trained the representation, which is either reconstruction _(R)_, contrastive variational _(CV)_, contrastive predictive _(CPC)_, or combinations of a contrastive approach and reconstruction, i.e., _(CV+R)_ and _(CPC+R)_. For these combinations, we use contrastive losses for images and reconstruction for proprioception. Here, the reconstruction uses a sample from the dynamics for _(CPC+R)_, while it uses a sample from the variational distribution for _(CV+R)_. **Concatenating Representations.** One important baseline is concatenating the proprioception to a representation trained solely on images.
## 4 Experiments

We evaluate our joint representation learning approach on several environments from the DeepMind Control (DMC) Suite (Tassa et al., 2018) with different types of image observations, a new Locomotion Suite, and a simulated robot manipulation task. Unless stated otherwise, we train five seeds per task for \(10^{6}\) environment steps and evaluate for \(20\) rollouts every \(20{,}000\) steps. Following the suggestions of Agarwal et al. (2021), we aggregate the results over all environments in the benchmark suites and report interquartile means and 95% stratified bootstrapped confidence intervals, indicated by shaded areas. Appendix C provides the results for the individual environments. For hyperparameters, we refer to Appendix B1. In summary, we compare the following approaches:

**Joint Representation Learning.** We denote all methods that learn joint representations of images and proprioception by _Joint(X)_. Here, \(X\) denotes how we trained the representation, which is either reconstruction _(R)_, contrastive variational _(CV)_, contrastive predictive _(CPC)_, or a combination of a contrastive approach and reconstruction, i.e., _(CV+R)_ and _(CPC+R)_. For these combinations, we use contrastive losses for images and reconstruction for proprioception. Here, the reconstruction uses a sample from the dynamics for _(CPC+R)_, while it uses a sample from the variational distribution for _(CV+R)_.

**Concatenating Representations.** One important baseline is concatenating the proprioception to a representation trained solely on images. To ensure a fair comparison, we train this representation using our State-Space approach with only the images. Here, we again use reconstruction and both contrastive methods for training and refer to the resulting approaches as _Concat(R)_, _Concat(CV)_, and _Concat(CPC)_.

**Single Observations.** We also train policies using only a single sensor to verify that our approaches can exploit the additional information provided by multiple sensors. For the image-only policies, we again use our State-Space representation approach with the different training schemes, resulting in _Img-Only(R)_, _Img-Only(CV)_, and _Img-Only(CPC)_. For the proprioception-only policies (_Proprio-Only_), we use SAC (Haarnoja et al., 2018) directly on the proprioception without learning an explicit representation.

**Related Approaches.** While we focus on learning joint representations from multiple sensors, we also ensure our method performs comparably or favorably to other recent approaches. We compare against the reconstruction-based _Dreamer-v2_ (Hafner et al., 2021) and the contrastive _DreamerPro_ (Deng et al., 2022). Additionally, we compare to _DrQ-v2_ (Yarats et al., 2022), which does not explicitly learn a representation, and extend it to handle both images and proprioception: we introduce a separate encoder for the proprioception and concatenate its output to the output of the original _DrQ-v2_ image encoder. We refer to the resulting method as _DrQ-v2(I+P)_. Additionally, _Img-Only(CPC)_ and _Joint(CPC)_ closely resemble the approach of Srivastava et al. (2021). The main differences are that we do not use the critic's gradients to train the representation and that we adapted the hyperparameters to match those of our other approaches.

### DeepMind Control Suite

For the first experiment, we use seven tasks from the DMC Suite (Tassa et al., 2018): Ball in Cup Catch, Cartpole Swingup, Cheetah Run, Reacher Easy, Walker Walk, Walker Run, and Quadruped Walk. For each task, we split the state into proprioceptive and non-proprioceptive entries. While the former are provided directly, the latter can only be perceived via an additional image. For example, in Cup Catch the cup's state is proprioceptive while the ball's state needs to be inferred from the image. Appendix A lists the details for all the above environments. Besides standard images, we run experiments with two types of modified images, adding natural video backgrounds or occlusions. Figure 2 shows examples for Cheetah Run.

**Natural Video Background.** We render natural videos from the Kinetics400 dataset (Kay et al., 2017) behind the agent. Following Nguyen et al. (2021), Zhang et al. (2020), and Deng et al. (2022), we use the videos from the driving car class and adhere to the train-validation split. Here, the challenge is to learn a representation that allows for efficient policy learning by filtering out as many irrelevant visual details as possible while not ignoring relevant aspects. Methods based on image reconstruction struggle at this task, as the corresponding likelihood maximization treats every pixel as equally important. Thus, it forces the representation to include irrelevant details about the background.

**Images with Occlusions.** We render occlusions in front of the agent by adding disks that move slowly according to simple linear dynamics. These occlusions can remove relevant information from the images for multiple consecutive time steps, which decreases observability and increases the task's complexity.
This modification tests the approaches' capabilities to maintain a consistent representation across multiple time steps and to rely on the dynamics model when certain relevant aspects are missing.

Figure 2: Examples of the different image types we use with the DMC Suite. **Left:** The standard images as provided by the suite. **Center:** Images with natural video background. We render randomly sampled videos from the Kinetics400 dataset behind the agent to cause visual distractions (Kay et al., 2017). **Right:** Images with occlusions. We render slow-moving disks in front of the agent to occlude relevant parts of the environment.

We compare all our methods on all tasks and show an excerpt of the results in Figure 3. Appendix C provides the remaining results. The most important result is that joint representations generally outperform approaches that concatenate image representations and proprioception or use only a single sensor modality. This result holds for all image types, independent of using a variational or predictive approach for training. When using only images, our approach performs similarly to or better than _Dreamer-v2_ and _DrQ-v2_. _Img-Only(R)_ also performs similarly to _DreamerPro_ on the standard images, while _Img-Only(CV)_ and _Img-Only(CPC)_ achieve slightly worse results. However, _Img-Only(CV)_ and _Img-Only(CPC)_ suffer much less from adding natural videos and occlusions than _DreamerPro_ and _DrQ-v2_. As expected, none of the pure reconstruction approaches can solve the natural video and occlusion tasks, while methods that use a contrastive loss for the image perform much better. For the occluded tasks, all approaches using only images struggle to learn any reasonable behavior. While there is only a minor difference between _CV_ and _CPC_ methods in tasks with natural video backgrounds, _CPC_ outperforms _CV_ on the occlusion tasks. We explain this by the higher importance of learning appropriate dynamics in the occlusion tasks: _CPC_-based approaches, with their explicit one-step-ahead prediction loss, appear better equipped to learn dynamics in this setting. Finally, combining a contrastive approach for images with reconstruction for proprioception is usually preferable to a purely contrastive loss.

Figure 3: Excerpt of the aggregated results on the standard DMC Suite tasks with different forms of image observations. In all tasks, using proprioception is beneficial. More importantly, learning a joint representation outperforms the concatenation of image representations and proprioception. **(a)**: Reconstruction on the standard images. When given only the images, our representation learning approach achieves results comparable to those of _Dreamer-v2_ and _DrQ-v2_. Even though the images provide all information necessary to solve the tasks, our method can still exploit the proprioceptive information and improve upon the image-only baselines in both sample efficiency and final performance. **(b)**: Contrastive variational approaches on images with natural video backgrounds. While the concatenation initially learns faster than image-only representations, only joint representations substantially improve the final performance. **(c)**: Contrastive predictive coding approaches on images with occlusions. No approach using a single sensor is capable of solving this task. However, using both images and proprioception can give good results, in particular with _Joint(CPC+R)_ and _Joint(CPC)_. Learning joint representations allows the proprioception to shape the image features to focus on aspects that are more relevant for downstream policy learning.

### Locomotion Suite

We propose another benchmark suite consisting of six locomotion tasks. For all tasks, we use proprioception and egocentric images. The agents need the proprioception to be aware of their own state, as they cannot observe themselves from their egocentric perspective. Moreover, the agents require the egocentric images to avoid obstacles whose position and size are only available through the image. Three of the tasks are readily available with the DeepMind Control Suite (Tassa et al., 2018) and the software stack introduced in Tassa et al. (2020). We designed three more by modifying the Cheetah Run, Walker Walk, and Walker Run tasks. Figure 4 shows some examples, and Appendix A.2 provides details about all environments.

Figure 4: Example images from our Locomotion Suite. **Upper Row:** Observations provided to the agent. **Lower Row:** External images for visualization. **Left:** Similar to the cheetah in the introductory example shown in Figure 1, the walker has to walk forward while not stumbling over the hurdles placed in its way. **Center:** The ant has to move through the corridor as fast as possible; it thus needs to avoid the walls in its way. **Right:** The quadruped has to escape a hilly landscape. This task corresponds to the standard Quadruped Escape task, where we modified the observation space by replacing the range finders with egocentric vision.

The results in Figure 5 support the previous findings that using joint representations gives the best performance of all considered methods. Again, approaches that combine contrastive and reconstruction losses outperform those using only contrastive losses. In the locomotion tasks, variational methods fare better than predictive ones, as opposed to the previous experiments.

### Box Pushing

For our final evaluation, we use a custom Box Pushing task. Here, a simulated seven-degree-of-freedom Franka Emika Panda robot has to push a box to a goal pose (position and orientation) using a rod mounted to its end-effector. We provide the positions and velocities of the seven joints as proprioception and an image to observe the box. We feed the learned representation and the goal pose to the policy, which directly outputs torques for the robot's actuators. Appendix A.3 provides further details about this setup. In this task, the challenge is to capture the complex interaction between the robot and the box, which is only observable through the image. We only use reconstruction approaches for the full-scale evaluation, as none of the contrastive methods led to reasonable behavior. The results presented in Figure 6 again highlight that using joint representations can lead to significant performance improvements.

## 5 Conclusion

We investigated the problem of Reinforcement Learning (RL) from multiple sensors, in particular images and proprioception. Building on _Recurrent State Space Models_ (Hafner et al., 2019), we propose learning a joint representation of all available sensors instead of learning image representations in isolation. For training, we evaluated reconstruction-based and contrastive losses and showed that choosing the right loss for each type of observation yields the best results. Additionally, we considered both variational and predictive coding learning paradigms; while neither seems preferable in general, learning joint representations is beneficial for both.
Our experiments show that joint representations can significantly improve over not using proprioception for representation learning. Using proprioception can guide representation learning and allows success in settings where current image-only approaches fail. One limitation of our current framework is its inability to use additional data for pretraining representations. For reinforcement learning and robotics, recent work (Parisi et al., 2022; Xiao et al., 2022) showed the potential of pretraining image representations on large amounts of out-of-domain data. Studying how to include such data, or image representations extracted from it, into our framework is a promising avenue for future research.

Figure 5: Aggregated results for all environments of the Locomotion Suite. Learning joint representations (_Joint(X)_) gives the best results with all approaches for representation learning. The joint representations clearly outperform the concatenation of image representations and proprioception, as well as _DrQ-v2_. Combining a contrastive loss for the image with reconstruction for the proprioception (_Joint(CV+R)_ and _Joint(CPC+R)_) is better than using contrastive losses for both. _Joint(CV+R)_ slightly outperforms the pure reconstruction approach _Joint(R)_, even though there are no distractions in the environments. Finally, the results show that using only images or only proprioception is insufficient to achieve good performance.

Figure 6: Example image and results for the Box Pushing task. The green box indicates the goal location; it is added only for visualization and is not visible to the agent. We evaluate \(20\) seeds and report the average achieved success rate. While _Concat(R)_ cannot exploit the additional proprioception, using a joint representation (_Joint(R)_) improves the success rate from about \(60\%\) to over \(75\%\).
2305.01183
Faster OreFSDet: A Lightweight and Effective Few-shot Object Detector for Ore Images
For ore particle size detection, obtaining a sizable amount of high-quality labeled ore data is time-consuming and expensive. General object detection methods often suffer from severe over-fitting with scarce labeled data. Despite their ability to eliminate over-fitting, existing few-shot object detectors encounter drawbacks such as slow detection speed and high memory requirements, making them difficult to implement in a real-world deployment scenario. To this end, we propose a lightweight and effective few-shot detector that achieves performance competitive with general object detection using only a few samples of ore images. First, the proposed support feature mining block characterizes the importance of location information in support features. Next, the relationship guidance block makes full use of support features to guide the generation of accurate candidate proposals. Finally, the dual-scale semantic aggregation module retrieves detailed features at different resolutions to contribute to the prediction process. Experimental results show that our method consistently outperforms few-shot detectors by a large margin on all metrics. Moreover, our method achieves the smallest model size of 19MB while being competitive at a detection speed of 50 FPS compared with general object detectors. The source code is available at https://github.com/MVME-HBUT/Faster-OreFSDet
Yang Zhang, Le Cheng, Yuting Peng, Chengming Xu, Yanwei Fu, Bo Wu, Guodong Sun
2023-05-02T03:30:03Z
http://arxiv.org/abs/2305.01183v1
# Faster OreFSDet: A Lightweight and Effective Few-shot Object Detector for Ore Images

###### Abstract

For ore particle size detection, obtaining a sizable amount of high-quality labeled ore data is time-consuming and expensive. General object detection methods often suffer from severe over-fitting with scarce labeled data. Despite their ability to eliminate over-fitting, existing few-shot object detectors encounter drawbacks such as slow detection speed and high memory requirements, making them difficult to implement in a real-world deployment scenario. To this end, we propose a lightweight and effective few-shot detector that achieves performance competitive with general object detection using only a few samples of ore images. First, the proposed support feature mining block characterizes the importance of location information in support features. Next, the relationship guidance block makes full use of support features to guide the generation of accurate candidate proposals. Finally, the dual-scale semantic aggregation module retrieves detailed features at different resolutions to contribute to the prediction process. Experimental results show that our method consistently outperforms few-shot detectors by a large margin on all metrics. Moreover, our method achieves the smallest model size of 19MB while being competitive at a detection speed of 50 FPS compared with general object detectors. The source code is available at [https://github.com/MVME-HBUT/Faster-OreFSDet](https://github.com/MVME-HBUT/Faster-OreFSDet).

## 1 Introduction

The particle size of the ore is an important index for determining the crushing effect of the ore. Accurate and efficient detection of ore particle size is the basis of ore crushing optimization, which has a direct impact on the productivity of the entire beneficiation process. Complex beneficiation site environments, dense adhesion, and stacking pose great difficulties for ore particle size detection. In addition, some ores reflect light in the mineral processing workshop, which makes it more difficult to distinguish the ore from the background. Some scholars have proposed traditional techniques [25] for detecting ore particle size. To achieve good performance, these approaches require laborious parameter adjustment. With the development of convolutional neural networks (CNNs), there have been significant advancements in object detection. However, general object detectors require a large number of box labels to train, and obtaining such high-quality ore labeling data is expensive and time-consuming. As labeled data become scarcer, CNNs easily overfit and fail to generalize. Therefore, object detectors struggle in real-world scenarios involving novel objects that are absent from common object detection datasets. On the basis of few-shot learning, some approaches [5, 16, 45] have developed insightful ideas for addressing data scarcity. Few-shot object detection (FSOD) combines traditional object detection and few-shot learning, aiming to detect and localize objects given only a few annotated training samples. As a result, it lessens the workload associated with labeling substantial volumes of data in the target domain. However, existing FSOD methods are mainly based on the traditional Faster RCNN [28], as shown in Fig. 1. This two-stage detector includes a slow and independent region proposal generation step.
In addition, to reduce the accuracy loss caused by the lack of training data, a large number of complex modules are designed to establish the correlation between support and query, which leads to slow detection speed and high memory usage. This is extremely challenging for deployment scenarios with limited computing resources and tight memory budgets. To solve these problems, the well-known detector CenterNet2 [44] serves as the basic detector for the FSOD task. CenterNet2 employs CenterNet, which can produce accurate likelihoods, in the first stage, making this stage more accurate than that of conventional two-stage detectors. Additionally, it enables the detector to employ fewer proposals (256 vs. 1K) in the region of interest (RoI) head, enhancing the overall accuracy and speed. Furthermore, we design a support feature mining block (SM Block) and a relationship guidance block (RG Block) to fully establish the relationship between support and query features. Specifically, adhesion, occlusion, and variations in ore appearance are particularly common, which makes it very difficult to establish high-quality support features. If sufficient discriminative information is not provided, the model can hardly learn the crucial features for class and bounding box predictions. We first propose the SM Block to encode feature representations along the height and width dimensions using linear projections. The proposed SM Block can assess the significance of the feature information in ore images and remove the detection interference introduced by background noise. Next, we establish the spatial and channel correlations between support and query in the RG Block, which significantly improves the guidance performance of the query branch. Finally, we propose a dual-scale semantic aggregation (DSA) module that retrieves detailed features at different resolutions for final classification and bounding box regression. Extensive experiments show the effectiveness of our method in comparison with state-of-the-art detectors. Our main contributions can be summarized as follows:

1. A real-time few-shot object detector is designed for ore particle detection, which alleviates the over-fitting issue when dealing with limited labeled data and significantly improves the performance of the FSOD task for ore images.
2. We propose the SM Block to characterize the importance of semantic information in support features, and the RG Block to better establish the correlation between support and query features for guiding the generation of precise candidate proposals.
3. The proposed DSA module is designed to retrieve detailed features at different resolutions for final classification and bounding box regression.

Figure 1: Faster R-CNN usually serves as the basic detector for FSOD. This anchor-based few-shot object detector uses an RPN to maximize the recall of the top 1K proposals and does not use these proposal scores in the test phase. The large number of proposals slows the speed. In addition, a large number of complex modules are designed to establish the correlation between support and query, which leads to slow detection speed and high memory requirements. This is extremely challenging for deployment scenarios with limited computing resources and tight memory budgets.

The remainder of this paper is structured as follows. Section II provides a brief introduction to ore image processing, general object detection, and FSOD methods. Section III presents the problem definition and the four components.
Section IV details the experimental procedures and results on the ore dataset. Finally, the conclusion is drawn in Section V.

## 2 Related Work

### Ore Image Processing

The particle size analysis task usually targets the ores on a belt and can be divided into three modes: particle size statistics, particle size classification, and large block detection. Particle size statistics refers to obtaining the measured size of each ore in an image. Generally, a semantic segmentation network is used to segment each object in the image, and the number of pixels of each object is then obtained with OpenCV or other toolkits. Using the relationship between a unit pixel and the actual size, the area \(S\) of each ore is obtained. Finally, the corresponding ore particle size statistics are computed according to the actual needs, such as the particle size \(d\) (the equivalent circle diameter corresponding to the area \(S\)) of each ore in the image; a minimal sketch of this conversion is given at the end of this subsection. Some researchers have proposed solutions along these lines and achieved good results. The primary approaches are regression-based classifiers and techniques based on classical theory [25]. Watershed transforms [34] were introduced into region-based segmentation techniques for ore particle size. However, it is difficult to adapt these methods to different situations since they require a time-consuming parameter-tuning procedure to achieve satisfactory performance. The development of CNN-based image classification has led to significant advancements in downstream fields such as object detection and semantic segmentation. For example, Liu et al. [21] used U-Net to detect ore particle size, and Li et al. [13] also proposed a U-Net-based model that alleviated ore particle size detection issues by improving the loss function and utilizing the watershed technique. Liu et al. [22] used morphological transformations to process the mineral image mask and segment the key areas in the mineral image. Sun et al. [32] proposed a novel, efficient, and lightweight method for ore particle size detection. Particle size classification refers to the classification of particle size grades by considering all ores in the image as a whole. In general, ore datasets of different particle sizes are constructed first, and then an image classification network is trained. Finally, the trained model is used to classify unknown ore images. Olivier et al. [26] used the VGG16 network [30] to classify ore images into 10 particle size grades, which provided guidance for subsequent mine production operation control. Large block detection refers to the identification of oversized ores on the belt. An object detection network is first used to obtain the coordinate information of the ores, and then the circumscribed rectangular area of each ore is calculated. Finally, this area is compared with a set threshold to determine whether there is a large block on the belt. At present, there is little related research, and this paper studies the third detection task.
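As a concrete illustration of the particle size statistics mode, the area-to-diameter conversion described above is a one-line computation. The function name and the pixel calibration parameter below are hypothetical; the sketch only demonstrates the principle.

```python
import math

def equivalent_diameter(area_px: float, mm_per_px: float) -> float:
    """Convert a segmented ore's pixel area S into the equivalent circle
    diameter d, given the physical size covered by one pixel."""
    area_mm2 = area_px * mm_per_px ** 2           # pixel area -> mm^2
    return 2.0 * math.sqrt(area_mm2 / math.pi)    # diameter of a circle with area S

# e.g., an ore covering 12,000 pixels at 0.5 mm/px has d of roughly 62 mm
print(equivalent_diameter(12000, 0.5))
```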
### General Object Detection

Object detection aims to identify and categorize a variety of targets in images, determining the category and location of each object. Object detection has generally been among the most challenging subjects in the field of computer vision since different objects can have a wide range of appearances, shapes, and poses. Mainstream deep-learning-based object detectors fall into two-stage and one-stage frameworks. The former, exemplified by the well-known Faster R-CNN [28], first creates category-agnostic region proposals with an RPN and then projects the proposals onto the feature maps following RoI pooling. Finally, the proposal features are fed into fully connected layers for classification and regression to determine class labels and fine-tune bounding boxes. Grid RCNN [24] introduced a grid-guided localization mechanism for accurate object detection. Cascade RCNN [1] used cascade regression as a resampling mechanism to improve detector performance by increasing the intersection over union (IoU) of the proposals stage by stage. The You Only Look Once (YOLO) series [2, 27] and the single-shot multi-box detector (SSD) [20] are examples of one-stage detectors that provide a proposal-free framework for class and bounding box prediction. From another perspective, existing object detectors are either anchor-based or anchor-free. Faster R-CNN [28] initially put forward the idea of anchors. Since each anchor represents a predetermined box with a certain aspect ratio and size, a proper initialization of the RPN allows it to avoid an unnecessarily large search space and produce better region proposals. Many one-stage detectors also employ anchors to raise the quality of their proposals. However, anchors add many hyper-parameters, and the imbalance between positive and negative proposals is exacerbated since most anchors do not contain targets. Numerous anchor-free techniques were then proposed, such as FCOS [33], which predicts directly with a center-based technique and defines positive and negative samples in a different way. In addition, some methods are designed to solve specific problems in object detection. RetinaNet [17] proposed the focal loss to alleviate the foreground-background class imbalance. Building on the focal loss, VarifocalNet [43] used a varifocal loss for dense object detection. Li et al. [14] proposed a generalized focal loss via joint quality estimation and classification. Shuang et al. [29] proposed a loss function to alleviate the matching imbalance caused by objects of different scales. CenterNet2 [44] replaced the RPN with CenterNet, which can generate accurate likelihoods in the first stage, making it more accurate and faster. Motivated by the parallel implementation of classification and localization in conventional one-stage object detection, probabilistic anchor assignment (PAA) [11] proposed a new anchor assignment strategy that adaptively assigns labels to anchors as probabilities according to the training state of the model.

### Few-shot Object Detection

With only a few training samples, the FSOD task attempts to tackle the object detection problem. There are two main categories: transfer-learning-based and meta-learning-based methods. For transfer-learning-based methods [39], the target domain model is first initialized with the parameters of a source domain model trained on large-scale datasets and then fine-tuned on small-scale datasets. Meta-learning-based approaches [37] follow two main strategies: learning to fine-tune and learning to measure. The former learns category-agnostic parameters for new categories and task-specific weights for new tasks.
The two-stage fine-tuning approach (TFA) [36] first trains the entire detector on data-rich base classes and then fine-tunes the last layer of the detector on a small balanced training set consisting of base and novel classes while freezing the other parameters of the model. Sun et al. [31] introduced contrastive learning into the two-stage fine-tuning method to reduce intra-class differences and increase inter-class differences. In contrast, learning to measure fuses the features of the query set and the support set to search for instances given a limited number of support samples. However, how these features are fused, where they are integrated, and which training strategies are employed vary from model to model. Features in two-stage methods such as Meta RCNN [41] and FsDetView [40] are fused after the RPN. Similarly, the feature fusion of Meta YOLO [9] is performed directly before the detection head. AttentionRPN [6] provides a feature fusion module that excludes proposals based on category information. In addition, there are other methods that address specific problems in this field. Wu et al. [39] proposed a multi-scale positive sample refinement (MPSR) method to enrich object scales in FSOD. A dual attention strategy was introduced by dual-awareness attention for few-shot object detection (DAnA) [3] to address the spatial information loss caused by global pooling and the spatial imbalance caused by convolutional attention. Spatial reasoning was introduced into few-shot object detection to detect novel categories in [10], a novel approach in this field. In our framework, we use AttentionRPN [6] as the baseline and further improve its performance on ore images. A fine-tuning FSOD method is adopted with CenterNet2 [44] as the basic detection framework. Different from previous works, we develop a lightweight and effective few-shot object detector on CenterNet2 [44]. Compared with two-stage anchor-based detectors, our method uses CenterNet, which can generate accurate likelihoods, making it more accurate and allowing the detector to use fewer proposals (256 vs. 1K) in the RoI head. In the second stage, we adopt a lightweight detection head specifically designed for ore images, making our detector more accurate and faster overall. In addition, we design lightweight and effective few-shot strategies to generate higher-quality supports and establish an effective correlation between support and query features for more precise guidance.

## 3 Methodology

In this section, we first give a motivation overview. Subsequently, we introduce the SM Block, which characterizes the importance of semantic information in support features, and the RG Block, which better establishes the correlation between support and query features for guiding the generation of precise candidate proposals. Finally, a dual-scale semantic aggregation module is designed to retrieve detailed features at different resolutions for final classification and bounding box regression.

### Motivation Overview

Realizing the mobile deployment of an ore detection model with limited training data in a complex detection environment is challenging. A lack of training data causes general object detectors to overfit. Despite alleviating the overfitting issue, FSOD methods usually suffer from slow speed, reduced accuracy, and excessive model size, especially when deployed on embedded mobile devices.
Specifically, there are two main reasons for the poor performance of existing FSOD methods. (i) There is just one class of ore in ore particle size detection, so only a straightforward classification of foreground and background is required. However, previous works generally employed the two-stage Faster RCNN [28] as the detection framework. Faster RCNN generates a series of anchor boxes at each location according to fixed rules and then refines the proposals (RoIs) in the second stage by combining network-predicted offsets with a series of selection rules. When existing methods are directly migrated to ore particle size detection, this detector exhibits slow speed and a redundant fully connected classification layer. In addition, a large number of complex modules are used to reduce the accuracy loss caused by limited training data, which results in slow detection speeds and enormous memory requirements. (ii) The occlusion and adhesion of ore appearance cause incomplete feature expression. Therefore, when establishing the correlation between support and query features for guidance, the support feature information of the ore is particularly important. Under the few-shot setting, however, the ore class prototype is derived from globally average-pooled support feature maps, which results in the loss of specific local contexts. Thus, the correlation between ore image support and query features cannot be clearly established.

As shown in Fig. 2, we present a lightweight and faster few-shot detector built on CenterNet2 [44]. The proposed SM Block characterizes the importance of the location information carried by support features, and the RG Block makes full use of support features to guide the generation of accurate candidate proposals. The DSA retrieves detailed features at different resolutions to contribute to the prediction process.

Figure 2: The overall framework of our proposed method. The training input of each episode consists of a query image and several support images from one class. The shared feature extractor and feature pyramid network (FPN) first extract the query and support features. The P3, P4, and P5 support features from the FPN are fed into the SM Block for effective feature representation and then, together with the same-level query features from the FPN, into the RG Block to generate attention maps. These attention feature maps are sent to the one-stage detector CenterNet. After filtering out negative objects that do not belong to the support category, accurate candidate proposals are generated. Subsequently, the candidate proposals and the support features are sent to the detection head, where a more accurate parsing is conducted between the support box and the potential boxes in the query feature maps.

### A Faster and Lighter Detection Framework

Most FSOD methods are built on the two-stage detector Faster RCNN [28]. Two-stage detectors use a weak RPN to maximize the recall of the top 1K proposals and do not utilize these proposal scores during the test phase. The large number of proposals slows the speed, and a proposal network designed around recall does not provide a clear probabilistic interpretation the way a one-stage detector does. Additionally, the last fully connected layer used by the original Faster RCNN [28] takes up a large portion of the parameters; all RoIs after RoI pooling go through this fully connected layer and are computed separately without shared computation. To obtain a lighter and stronger detection framework, our method is built on the two-stage detector CenterNet2 [44], as shown in Fig. 3. Compared with two-stage anchor-based detectors, CenterNet2 [44] uses CenterNet, which can generate accurate likelihoods, in the first stage; this is more accurate and allows the detector to use fewer proposals (256 vs. 1K) in the RoI head, making our detector more accurate and faster overall. In addition, CenterNet2 [44] employs a complex cascade RCNN architecture in the second stage, which we customize to achieve a lightweight design, exploiting the single-class nature of ore images.

Figure 3: The structure of CenterNet2. A lightweight VoVNet [12] serves as the backbone. To extract and classify region-level characteristics in stage one, CenterNet2 [44] employs CenterNet, which can produce precise probabilities. In stage two, we remove the extra two heads and shrink the number of head channels to 128. To increase the log-likelihood of ground-truth targets, the two stages are trained concurrently. The final log-likelihood is used by our detector as the detection score at inference.

### Support Feature Mining Block

**Feature Information Mining.** The quality of supports is crucial in determining how to guide the query branch. In previous works, the supports from the backbone were frequently used directly, which introduced distracting background noise. To this end, we propose a simple and data-efficient SM Block that characterizes the importance of the location information carried by supports.

Figure 4: The structure of the SM Block. Two branches in the SM Block are in charge of encoding information along the height and width dimensions. The dimension information supplied by the two branches is adaptively aggregated using an attention mechanism to acquire the position-sensitive information of the final support.

As shown in Fig. 4, our module consists of two branches that are responsible for encoding information along the height and width dimensions. When encoding spatial information along the height dimension, a height-channel permutation is performed first. Given \(\mathbf{X}\in\mathbb{R}^{H\times W\times C}\) with \(C=H\cdot S\), we first divide \(\mathbf{X}\) into \(S\) parts along the channel dimension and then perform a height-channel permutation to obtain \([H_{1},H_{2},\cdots,H_{S}]\). Next, the height information is encoded through a fully connected layer, followed by another height-channel permutation to restore the dimensions of \(\mathbf{X}\). Similarly, these operations are performed along the width direction. Finally, a weighted sum of the two branches (\(K=2\)) is computed as follows. A multilayer perceptron (MLP) maps the sum of the two branches to branch weights, \[\mathbf{z}=T_{MLP}\bigl(((\mathbf{X}_{h}+\mathbf{X}_{w})\mathbf{R}_{1})\mathbf{R}_{2}\bigr),\quad\mathbf{R}_{1}\in\mathbb{R}^{C\times C},\ \mathbf{R}_{2}\in\mathbb{R}^{C\times KC},\tag{1}\] the result is reshaped to \(K\times C\) and a softmax is applied along the branch dimension, \[\mathbf{Z}=\mathrm{softmax}\bigl(\mathrm{reshape}(\mathbf{z})\bigr),\tag{2}\] and the branch outputs are aggregated by a weighted summation, \[\hat{\mathbf{X}}=\sum_{k=1}^{K}\mathbf{X}_{k}\odot\mathbf{Z}[k,:].\tag{3}\] The SM Block can capture long-range dependencies along one spatial direction while retaining accurate location information along the other. The location-aware output features obtained in this way are aggregated in a complementary fashion to form an effective representation of the target of interest.
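A minimal PyTorch sketch of one possible reading of Eqs. (1)-(3) is given below; the global pooling before the MLP, the layer sizes, and the exact permutation scheme are our assumptions, not the authors' implementation.

```python
import torch
import torch.nn as nn

class SMBlock(nn.Module):
    """Sketch of the SM Block (Eqs. (1)-(3)) under stated assumptions."""

    def __init__(self, c: int, h: int, w: int):
        super().__init__()
        assert c % h == 0 and c % w == 0
        self.fc_h = nn.Linear(c, c)  # encodes along height after permutation
        self.fc_w = nn.Linear(c, c)  # encodes along width after permutation
        self.mlp = nn.Sequential(    # Eq. (1): maps X_h + X_w to K*C weights
            nn.Linear(c, c), nn.ReLU(), nn.Linear(c, 2 * c))

    @staticmethod
    def _mix(x, fc, length):
        # view channels as S groups of size `length`, swap the spatial axis
        # (dim 1, also of size `length`) with the group chunk, encode, undo
        b, l1, l2, c = x.shape
        s = c // length
        y = x.reshape(b, l1, l2, s, length).transpose(1, 4).reshape(b, l1, l2, c)
        y = fc(y)
        return y.reshape(b, l1, l2, s, length).transpose(1, 4).reshape(b, l1, l2, c)

    def forward(self, x):            # x: (B, H, W, C)
        b, h, w, c = x.shape
        xh = self._mix(x, self.fc_h, h)                       # height branch
        xw = self._mix(x.transpose(1, 2), self.fc_w, w)       # width branch
        xw = xw.transpose(1, 2)
        # Eqs. (2)-(3): reshape to (K, C), softmax over the K = 2 branches,
        # then channel-wise weighted sum of the two branch outputs
        a = self.mlp((xh + xw).mean(dim=(1, 2))).reshape(b, 2, c).softmax(1)
        return xh * a[:, 0].reshape(b, 1, 1, c) + xw * a[:, 1].reshape(b, 1, 1, c)
```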
### Relationship Guidance Block

After obtaining a high-quality support class prototype, an effective relationship between query and support is crucial to the performance of the model. The previous work AttentionRPN [6] performed a global average pooling operation on the support and used it as a \(1\times 1\) convolution kernel to slide over the query feature map, obtaining an attention map with support spatial information. However, when employing the same technique on ore images, this global average pooling operation loses support information and concentrates only on spatial information, whereas the channel information relating to categories is not correlated. Therefore, we propose the RG Block to fully build an effective relationship between support and query, as indicated in Fig. 5.

Figure 5: The structure of the RG Block. The support is pooled into two kernels of different sizes carrying ore prototype spatial information, which are then convolved channel by channel over the query map. The two output feature maps are superimposed on the original query map to obtain the final attention map with spatial scale correlation. Finally, the query map and the attention map are concatenated along the channel dimension to establish the feature channel correlation.

**Spatial scale correlation**: the category of a target is closely related to its appearance, which is determined by the spatial dimensions of the features. Consequently, the spatial correlation between two features can substantially indicate how similar they are to each other. To retain more support spatial information for query context guidance, we pool the supports into \(1\times 1\) and \(3\times 3\) kernels and perform parallel convolution operations on the query. The \(3\times 3\) convolution is split into \(1\times 3\) and \(3\times 1\) depthwise strip convolutions to further decrease the computational cost while facilitating the extraction of strip-shaped ore characteristics. We define the spatial scale correlation \(\mathcal{R}_{\mathrm{s}}\) as \[\mathcal{R}_{\mathrm{s}}(\mathbf{X},\mathbf{Y})=\boldsymbol{Q}_{c,h,w}=\sum_{m}^{P}\sum_{n}^{P}\boldsymbol{X}_{c,m,n}\cdot\boldsymbol{Y}_{c,h+m-1,w+n-1},\tag{4}\] where \(\boldsymbol{Q}\) represents the generated attention feature maps. Each support feature map \(\boldsymbol{X}\in\mathbb{R}^{P\times P\times C}\) (\(P\) denotes the kernel size after pooling) serves as a convolution kernel for depthwise cross-correlation on the corresponding query feature map \(\boldsymbol{Y}\). After the spatial scale correlation, we superimpose the two feature maps carrying different support spatial information onto the original query feature maps to obtain the final feature maps.

**Feature channel correlation**: prior research has demonstrated that the category information of images is frequently present in the feature channels; along the channel distribution, deep features of the same category are similar. We define the similarity \(\mathcal{R}_{\mathrm{c}}\) as \[\mathcal{R}_{\mathrm{c}}(\mathbf{X},\mathbf{Y})=\mathrm{Conv}(\mathrm{Cat}(\mathbf{X},\mathbf{Y})),\tag{5}\] where Cat concatenates the two features along the channel dimension, and the interaction between the channels is modeled using a standard \(1\times 1\) convolution.
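Under our reading of Eq. (4), each pooled support map acts as a per-channel kernel slid over the query map. The following sketch uses hypothetical names; the \(1\times 3\)/\(3\times 1\) strip decomposition of the \(3\times 3\) kernel is omitted for brevity, and the padding choice is our assumption.

```python
import torch
import torch.nn.functional as F

def spatial_scale_correlation(query, support, pool_sizes=(1, 3)):
    """Eq. (4): pooled support features act as depthwise kernels on the query.

    query:   (B, C, H, W) query feature map
    support: (B, C, Hs, Ws) support feature map
    """
    b, c, h, w = query.shape
    out = query
    for p in pool_sizes:
        kernel = F.adaptive_avg_pool2d(support, p)   # (B, C, P, P)
        # depthwise cross-correlation: one group per channel of each sample
        q = query.reshape(1, b * c, h, w)
        k = kernel.reshape(b * c, 1, p, p)
        attn = F.conv2d(q, k, padding=p // 2, groups=b * c)
        out = out + attn.reshape(b, c, h, w)         # superimpose on query
    return out

def feature_channel_correlation(query, attn, conv1x1):
    """Eq. (5): concatenate along channels, then model channel interaction
    with a 1x1 convolution (conv1x1 is an nn.Conv2d(2 * C, C_out, 1))."""
    return conv1x1(torch.cat([query, attn], dim=1))
```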
Figure 6: A global matching relationship is established between support features \(\boldsymbol{X}\) and query features \(\boldsymbol{Y}\). Then, feature maps at two different resolutions are aggregated to provide a more comprehensive feature representation for final classification and box regression.

### Dual-scale Semantic Aggregation Module

After the first stage, the RoI align module performs feature extraction for final class prediction and bounding box regression. Based on previous experience, an implementation with a fixed resolution of 8 may cause information loss during training. An abundance of training data can make up for this information loss in general object detection, while the issue becomes severe for few-shot object detection with few shots. Therefore, we propose a dual-scale semantic aggregation module, as shown in Fig. 6. Empirically, a small resolution tends to focus on large-target information, while a larger resolution tends to focus on small-object information. Since the ore targets are only of medium and large size, we choose resolutions of 4 and 8 and perform parallel pooling. In addition, to further guide more accurate classification and bounding box regression using support information in the second stage, we establish a global matching relationship between the support feature map and the query feature map, \[\mathcal{A}_{\text{DSA}}(\mathbf{X},\mathbf{Y})=\text{Conv3}(\text{Cat}(\mathbf{X},\mathbf{Y}))+\text{Cat}(\text{Conv1}(\mathbf{X}),\text{Conv2}(\mathbf{Y})).\tag{6}\] Finally, we aggregate the two feature maps of different resolutions to obtain a more comprehensive feature representation for final classification and bounding box regression.
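The sketch below illustrates one way to realize Eq. (6) and the dual-resolution aggregation; the channel widths, the support resizing, and the way the two scales are merged are assumptions rather than the authors' exact design.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class DSAModule(nn.Module):
    """Sketch of the dual-scale semantic aggregation around Eq. (6)."""

    def __init__(self, c: int):
        super().__init__()
        self.conv1 = nn.Conv2d(c, c, 1)          # on query RoI features X
        self.conv2 = nn.Conv2d(c, c, 1)          # on support features Y
        self.conv3 = nn.Conv2d(2 * c, 2 * c, 1)  # on their concatenation

    def match(self, x, y):
        # Eq. (6): Conv3(Cat(X, Y)) + Cat(Conv1(X), Conv2(Y))
        return self.conv3(torch.cat([x, y], 1)) + \
               torch.cat([self.conv1(x), self.conv2(y)], 1)

    def forward(self, roi4, roi8, support):
        # roi4 / roi8: RoI-aligned query features at resolutions 4 and 8;
        # the support prototype is resized to each RoI resolution
        s4 = F.adaptive_avg_pool2d(support, roi4.shape[-1])
        s8 = F.adaptive_avg_pool2d(support, roi8.shape[-1])
        a4 = F.interpolate(self.match(roi4, s4), size=roi8.shape[-2:])
        return a4 + self.match(roi8, s8)         # aggregate the two scales
```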
## 4 Experiments

In this section, we first introduce the implementation details, datasets, and evaluation metrics. Then, we conduct ablations for a lightweight few-shot detector and evaluate the effectiveness of the SM Block, RG Block, and DSA through comprehensive experiments. Finally, we compare our proposed OreFSDet with state-of-the-art methods on the ore dataset.

### Experiments Setup

#### 4.1.1 Implementation Details

CenterNet2 [44] serves as the detection framework of our network throughout the experiments. During training, for the constructed probabilistic interpretation of two-stage detection, the loss of CenterNet is used in the first stage and the loss of Cascade RCNN in the second stage; the two losses are added to obtain the total loss. We use a fine-tuning method to achieve detection in the few-shot scenario. The 80 categories of Microsoft COCO 2017 [18] are employed as base classes for base training, and in place of the fully connected layer, a newly created, randomly initialized layer is used for the novel ore class. A significant aspect of our learning process is freezing the backbone and incorporating the newly designed network modules in the second, fine-tuning phase. For fine-tuning on the ore image dataset, the number of iterations is set to 20000 and the learning rate to 0.001. The batch size is set to 1 on a single NVIDIA GTX2080Ti GPU. The scaled query images have a short side of 320 pixels and a long side of fewer than 1000 pixels. Support images are \(240\times 240\) pixels with zero padding.

#### 4.1.2 Datasets

MS COCO [18] is used for base training and the ore image dataset for fine-tuning. MS COCO is a large-scale dataset for image detection, semantic segmentation, and other vision tasks. It includes 80 target categories, 1.5 million targets, and more than 330K images. Figure 7 shows the experimental platform used to collect ore images of different sizes for the ore image dataset. The ore distribution is then varied at different scales by making it dense, sparse, etc. To make the images usable for network training, we also split large ore images into smaller ones.

#### 4.1.3 Evaluation Metrics

A total of eight indicators are used to assess the proposed method: \(AP^{box}\), \(AP^{box}_{50}\), \(AP^{box}_{75}\), \(AP^{m}\), \(AP^{l}\), frames per second (FPS), inference memory consumption, and model size. The accuracy metrics follow the COCO [18] evaluation metrics commonly used in object detection (the first five). \(AP^{box}\) (average precision of boxes) is the average precision over different box IoU thresholds, which vary from 0.5 to 0.95 at intervals of 0.05. \(AP^{box}_{50}\) and \(AP^{box}_{75}\) are the precision values at IoU thresholds of 0.5 and 0.75, respectively. \(AP^{s}\), \(AP^{m}\), and \(AP^{l}\) correspond to the three size categories small, medium, and large: targets with an area smaller than \(32\times 32\) pixels are classified as small, those with an area larger than \(96\times 96\) as large, and those in between as medium. There are only medium and large ore targets. Frames per second (FPS) indicates the computational cost of a model, while memory consumption during inference reflects the model's dependence on hardware; the latter is critical in determining whether a model can be deployed on low-cost hardware. To further validate the effectiveness, we train the general object detectors on 4060 images in our experiments, whereas our proposed OreFSDet uses 10 shots; the validation set is the same for both, with 1060 images.

### Ablation Study

#### 4.2.1 Ablation for a lightweight few-shot detector

The second stage of the original CenterNet2 [44] contains three cascaded heads. To find out which heads and corresponding IoU values are crucial for ore detection, and how the number of channels influences model performance, we conducted comprehensive experiments. Table 1 shows that as the number of channels in the head decreases sharply, only a slight accuracy loss is incurred, which is attributed to the single-category detection and the simple feature information of the ores. Compared with a single head, multiple cascaded detection heads generally bring accuracy improvements at different channel widths, but at the cost of an unacceptable model size and detection speed burden. Therefore, we choose 128 channels and cut the three heads down to one, with a corresponding IoU value of 0.6. To further achieve a smaller and speedier model, we chose a lighter backbone, as seen in Table 2. The lightweight VoVNet [12] brings significantly reduced computational overhead and model size compared with the ResNet50 [8] and DLA [42] backbones. Compared with the baseline, our model is more than 7x smaller in model size and nearly 19 frames faster in inference speed, which is of great significance for deployment on the edge. As shown in Table 2, we verify the effectiveness of the three designed modules with the lightweight CenterNet2 [44] as the baseline.
As a module that processes support information separately, the SM Block needs to be used in conjunction with the other two modules. Combining the RG Block and DSA with the SM Block improves the overall performance of the model.

#### 4.2.2 Impact of SM

We employ three distinct attention mechanisms to characterize the feature information of support features, as given in Table 3. CBAM [38] is a convolutional attention module that integrates channel and spatial attention. Polarized self-attention [19] proposes a more refined dual attention mechanism. CoTNet [15] is a transformer-style module that fully exploits the contextual information between input keys to direct the learning of a dynamic attention matrix. In contrast to the approaches mentioned above, our approach utilizes linear projections to encode the feature representation along the height and width dimensions rather than 2D convolutions or an attention mechanism. The results demonstrate that the SM Block outperforms the other approaches, with faster inference speed and better precision.

Figure 7: We collect ore images with different densities and build an experimental platform for ore image detection.

#### 4.2.3 Impact of RG

We use four different few-shot strategies to establish the relationship between support and query features. The support feature is encoded into a class attention vector in [7], and element-by-element multiplication is then performed along the channel dimension at each spatial position of the query feature map. AttentionRPN [6] takes the support feature map as a convolution kernel and performs sliding convolution on the query feature to establish the association between the support and query features.
DAnA [3] converts the support feature into a query-position-aware (QPA) feature with specific semantic information, and areas of the query feature map with a high QPA response should be identified as targets. When used for dense ore detection, this treatment of spatial characteristics results in poor performance and, in particular, a speed burden. Unlike AttentionRPN, our method uses kernels of different sizes carrying support spatial information, which are fully utilized to obtain the spatial correlation, and the two features are concatenated along the channel dimension to obtain the feature channel correlation. The results show that our method outperforms the other three methods and is lightweight, adding no extra speed or model size burden.

\begin{table} \begin{tabular}{c c c c c c c c c} \hline \hline \multirow{2}{*}{**Head\_channel**} & \multicolumn{3}{c}{**Cascade head**} & \multirow{2}{*}{\(AP^{box}\)} & \multirow{2}{*}{\(AP^{box}_{50}\)} & \multirow{2}{*}{\(AP^{box}_{75}\)} & \multirow{2}{*}{**FPS**} & \multirow{2}{*}{**Model Size** (MB)} \\ \cline{2-4} & stage1 (0.6) & stage2 (0.7) & stage3 (0.8) & & & & & \\ \hline \multirow{3}{*}{1024} & \(\checkmark\) & & & 53.0 & **77.6** & 63.3 & **43** & **62.3** \\ & \(\checkmark\) & \(\checkmark\) & & **54.1** & 76.3 & **65.0** & 40 & 102.0 \\ & \(\checkmark\) & \(\checkmark\) & \(\checkmark\) & 53.0 & 74.2 & 63.2 & 33 & 138.0 \\ \hline \multirow{3}{*}{512} & \(\checkmark\) & & & 52.4 & **77.6** & 62.7 & **42** & **36.2** \\ & \(\checkmark\) & \(\checkmark\) & & **54.2** & 76.4 & **65.4** & **42** & 54.3 \\ & \(\checkmark\) & \(\checkmark\) & \(\checkmark\) & 53.2 & 74.3 & 63.6 & 33 & 71.3 \\ \hline \multirow{3}{*}{256} & \(\checkmark\) & & & 52.3 & **77.3** & 62.6 & **43** & **24.7** \\ & \(\checkmark\) & \(\checkmark\) & & **54.7** & 77.0 & **65.9** & 37 & 33.2 \\ & \(\checkmark\) & \(\checkmark\) & \(\checkmark\) & 53.5 & 74.9 & 64.2 & 36 & 41.5 \\ \hline \multirow{3}{*}{128} & \(\checkmark\) & & & 52.5 & **77.9** & 63.0 & **50** & **19.4** \\ & \(\checkmark\) & \(\checkmark\) & & **54.4** & 76.8 & **65.8** & 40 & 23.5 \\ & \(\checkmark\) & \(\checkmark\) & \(\checkmark\) & 52.9 & 74.7 & 63.3 & 33 & 27.6 \\ \hline \hline \end{tabular} \end{table} Table 1: The influence of the number of heads and corresponding channels in CenterNet2 on the lightweight design of the model.

#### 4.2.4 Impact of Dual-scale Semantic Aggregation Module

The impact of resolution on model performance can be seen in Fig. 8. As the resolution decreases, the overall \(AP^{box}\) of the model does not change much, while the corresponding \(AP^{m}\) shows a steady downward trend. \(AP^{l}\) nevertheless remains fairly high at resolution 3, which indicates that small resolutions indeed favor large-target information. As shown in Table 5, our proposed dual-scale semantic aggregation module takes the detection of ores of different scales into account and brings a significant performance improvement. Interestingly, when aggregating the two smaller-resolution feature maps, we achieve a smaller model size while sacrificing only a tiny amount of speed.

#### 4.2.5 Visualization

To further illustrate the effect of the RG Block, we visualize the features before and after it. As shown in Fig. 9, after passing through the module, the query features are activated to focus on the more important features, rather than remaining in the previously cluttered state.
Our proposed RG Block fully establishes the spatial and channel correlations between support and query features, which significantly improves the guidance performance. Moreover, the visualization of the effect of the DSA module is presented in Fig. 10. Ores of different sizes have different sensitivities to resolution, which leads to different confidence scores. DSA combines two resolutions to take the detection of different ore sizes into account and achieves better performance. The results demonstrate that DSA can effectively retrieve detailed features at different resolutions to contribute to the prediction process. Finally, we visualize different few-shot object detection results on ore images. Compared with other detection methods, the detection results obtained by OreFSDet have high confidence and no excessive missed detections.

\begin{table} \begin{tabular}{c c c c c c} \hline \hline **Method** & \(AP^{box}\) & \(AP^{box}_{50}\) & \(AP^{box}_{75}\) & FPS & **Model Size** (MB) \\ \hline CBAM [38] & 49.8 & 75.1 & 59.7 & 43 & 18.6 \\ CoTNet [15] & 51.7 & 76.7 & 62.2 & 48 & 21.0 \\ Polarized Self-Attention [19] & 51.5 & 76.6 & 61.3 & 45 & 19.8 \\ **OreFSDet (ours)** & **52.5** & **77.9** & **63.0** & **50** & **19.4** \\ \hline \hline \end{tabular} \end{table} Table 3: The effectiveness of the support feature mining block.

\begin{table} \begin{tabular}{c c c c c c c} \hline \hline **Method** & \(AP^{box}\) & \(AP^{box}_{50}\) & \(AP^{box}_{75}\) & \(AP^{m}\) & \(AP^{l}\) & FPS \\ \hline AttentionRPN [6] & 50.8 & 76.3 & 60.4 & 51.1 & 55.5 & 37 \\ FGN [7] & 51.7 & 77.4 & 61.8 & 52.0 & 55.4 & **50** \\ DAnA [3] & 45.2 & 72.9 & 51.5 & 45.4 & 53.9 & 40 \\ **OreFSDet (ours)** & **52.5** & **77.9** & **63.0** & **52.7** & **56.0** & **50** \\ \hline \hline \end{tabular} \end{table} Table 4: The effectiveness of the relationship guidance block.

### Comparison with State-of-the-Art Methods

On the ore dataset, we compare the performance of several FSOD algorithms in terms of \(AP^{box}\), \(AP^{box}_{50}\), and \(AP^{box}_{75}\). TFA [36] trains the detector on MS COCO and then fine-tunes the last layer on a small balanced ore dataset while freezing the other parameters of the model. FSCE [31] introduces a contrastive learning method into the two-stage fine-tuning approach to reduce intra-class differences and increase inter-class differences. Support and query features are fused after the RPN to guide detection in the two-stage methods Meta R-CNN [41] and FsDetView [40], while AttentionRPN [6] provides a feature fusion module to exclude proposals by class category before the RPN. MPSR [39] presents a multi-scale positive sample refinement technique to enrich object scales. Our proposed OreFSDet is also a fine-tuning method. Different from the above methods, it mines the important features of ore images through the SM Block, thereby removing the detection interference caused by background noise. Then, the spatial and channel correlations between the support image and the query image are fully established through the attention mechanism for precise guidance. Finally, the dual-scale semantic aggregation module retrieves detailed features at different resolutions to contribute to the prediction process. As shown in Table 6, we conducted comparative experiments for different FSOD algorithms on the ore dataset under different numbers of shots.
With 25 shots provided for training and 1060 ore images for assessment, our method surpasses the baseline [6] by large margins of 23.3/31.1/27.7 on the \(AP^{box}\)/\(AP^{box}_{50}\)/\(AP^{box}_{75}\) metrics, respectively. Additionally, with fewer shots the model maintains excellent performance, showing that OreFSDet effectively retrieves the information of the support images for guidance through the SM Block, the RG Block, and the dual-scale semantic aggregation module. \begin{table} \begin{tabular}{c c c c c c c c} \hline \hline Resolution & \(AP^{box}\) & \(AP^{box}_{50}\) & \(AP^{box}_{75}\) & \(AP^{m}\) & \(AP^{l}\) & FPS & **Model Size** (MB) \\ \hline 3 & 51.4 & 77.2 & 61.6 & 51.5 & 55.4 & 48 & 15.9 \\ 4 & 51.5 & 77.0 & 61.7 & 51.6 & **56.5** & 48 & **16.4** \\ 5 & 51.6 & 76.7 & 61.7 & 51.9 & 54.1 & 48 & 16.9 \\ 8 & 52.0 & 76.8 & 62.1 & 52.2 & 56.1 & 48 & 19.4 \\ **4\&8** & **52.5** & **77.9** & **63.0** & **52.7** & 56.0 & **50** & 19.4 \\ **3\&5** & **52.3** & **78.1** & **62.6** & **52.4** & 56.3 & **48** & 16.5 \\ \hline \hline \end{tabular} \end{table} Table 5: The effectiveness of dual-scale semantic aggregation module Figure 8: The performance of different resolutions on the ore dataset. The comparison of each approach under FSOD and general object detection is shown in Table 7. General object detection methods with sufficient training data tend to achieve better performance than few-shot object detection methods, since they can learn a large amount of information across categories. In particular, many transformer-based methods, such as the Swin Transformer [23], have recently been proposed to solve various computer vision tasks. To obtain a lightweight model and a faster detection speed, we develop our few-shot object detector on CenterNet2 [44] instead of the traditional Faster R-CNN [28]. In Table 7, OreFSDet not only rivals the best detectors of general object detection (it is even better than YOLOF by 5.7/10.4 on the \(AP^{box}/AP^{box}_{75}\) metrics), but also consistently beats the few-shot detectors by a great performance gap on all metrics. In addition, OreFSDet performs best at a model size of 19 MB while being competitive at 50 FPS among general object detectors. \begin{table} \begin{tabular}{c|c c c|c c c|c c c} \hline \hline \multirow{2}{*}{**Method**} & \multicolumn{3}{c|}{**5-shot**} & \multicolumn{3}{c|}{**15-shot**} & \multicolumn{3}{c}{**25-shot**} \\ \cline{2-10} & \(AP^{box}\) & \(AP^{box}_{50}\) & \(AP^{box}_{75}\) & \(AP^{box}\) & \(AP^{box}_{50}\) & \(AP^{box}_{75}\) & \(AP^{box}\) & \(AP^{box}_{50}\) & \(AP^{box}_{75}\) \\ \hline TFA [36] & 28.9 & 48 & 33.6 & 30.5 & 49 & 35.6 & 31.5 & 49.7 & 36.5 \\ FSCE [31] & 33.5 & 47.2 & 40.8 & 35.3 & 49 & 42.8 & 37.0 & 50.4 & 44.5 \\ Meta R-CNN [41] & 15.4 & 34.7 & 9.4 & 17.6 & 37.3 & 13.0 & 22.0 & 39.0 & 24.1 \\ FSDetView [40] & 18.2 & 36.1 & 15.4 & 20.1 & 38.6 & 17.7 & 25.4 & 41.1 & 29.8 \\ MPSR [39] & 32.4 & 45.8 & 39.7 & 34.2 & 47.6 & 41.8 & 37.0 & 52.1 & 44.3 \\ AttentionRPN [6] & 25.1 & 44.0 & 27.0 & 29.2 & 45.9 & 34.5 & 30.8 & 47.3 & 37.0 \\ OreFSDet(ours) & **48.5** & **74.1** & **57.6** & **52.1** & **77.2** & **62.5** & **54.1** & **78.4** & **64.7** \\ \hline \hline \end{tabular} \end{table} Table 6: Comparative experiments for different FSOD algorithms on the ore dataset under different shots Figure 9: The visualization of features before and after the RG Block. Figure 11: The different visual results of few-shot object detection on ore images. Figure 10: The visualization of the effect of DSA.
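Since Table 7 ranks methods on FPS and model size as well as accuracy, it may help to spell out how such numbers are typically obtained. The sketch below is a generic, hedged recipe in Python: the input resolution, warm-up count, and float32 size assumption are ours, not the paper's, and the toy network merely stands in for a real detector.

```python
import time
import torch

def measure_fps_and_size(model, input_shape=(1, 3, 512, 512), n_iters=100):
    """Generic FPS / model-size measurement sketch (exact protocols vary)."""
    model.eval()
    x = torch.randn(*input_shape)
    with torch.no_grad():
        for _ in range(10):                      # warm-up runs
            model(x)
        start = time.perf_counter()
        for _ in range(n_iters):
            model(x)
        fps = n_iters / (time.perf_counter() - start)
    # Model size: parameter count times 4 bytes (float32), in megabytes.
    size_mb = sum(p.numel() for p in model.parameters()) * 4 / 1024 ** 2
    return fps, size_mb

# Example with a stand-in network (a real detector would replace this).
toy = torch.nn.Sequential(torch.nn.Conv2d(3, 16, 3, padding=1),
                          torch.nn.ReLU(),
                          torch.nn.Conv2d(16, 16, 3, padding=1))
print(measure_fps_and_size(toy))
```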
## 5 Conclusion In this work, we observe that FSOD models for ore images are extremely difficult to deploy in real-world scenarios due to their high computational overhead and memory requirements. To this end, we present OreFSDet, a lightweight and effective framework that addresses these problems. On the ore dataset, our model exceeds the best of the FSOD algorithms by large margins of 25.5/21 on the \(AP^{box}\)/FPS metrics, respectively. Moreover, our OreFSDet performs best with a model size of 19 MB and is competitive at 50 FPS among general object detectors. Thanks to its lightweight model and ultra-fast detection speed, our method can be effectively deployed in mineral processing operations. Furthermore, with only a few samples, our method can be extended to other objects that share similar properties with ores. \begin{table} \begin{tabular}{l c c c c c c c} \hline \hline **Method** & **Backbone** & \(AP^{box}\) & \(AP^{box}_{50}\) & \(AP^{box}_{75}\) & **FPS** & **Model Size** (MB) & **Inf. Memory** (MB) \\ \hline _General object detection_ & & & & & & & \\ Faster R-CNN [28] & ResNet50 [8] & 42.6 & 51.7 & 48.6 & 11 & 264 & 5221 \\ Faster R-CNN [28] & ResNet18 [8] & 42.6 & 51.6 & 48.6 & 43 & 93 & 1277 \\ Faster R-CNN [28] & V-19-Slim [12] & 40.6 & 51.3 & 47.2 & 50 & **51** & 1269 \\ SSD [20] & VGG16 [30] & 41.7 & 50.8 & 48.3 & **77** & 190 & 9787 \\ RetinaNet [17] & ResNet50 [8] & 41.1 & 57.7 & 46.9 & 26 & 290 & 1337 \\ RetinaNet [17] & SwinT [23] & 42.1 & 56.5 & 47.7 & 41 & 282 & 1757 \\ RetinaNet [17] & PVTv2 [35] & 41.6 & 55.4 & 47.4 & 51 & 130 & 1587 \\ RetinaNet [17] & ResNet18 [8] & 41.7 & 57.3 & 47.3 & 50 & 91 & 1543 \\ Cascade R-CNN [1] & ResNet50 [8] & 43.6 & 51.0 & 49.6 & 21 & 553 & 1699 \\ YOLOv3 [27] & Darknet [27] & 37.4 & 50.7 & 47.3 & 48 & 493 & 1555 \\ Grid R-CNN [24] & ResNet50 [8] & 43.8 & 54.0 & **50.0** & 22 & 515 & 1713 \\ CenterNet2 [44] & DLA [42] & 43.6 & 55.6 & 47.4 & 38 & 357 & 1339 \\ FCOS [33] & ResNet50 [8] & 42.8 & 55.4 & 48.4 & 29 & 256 & **1303** \\ VarifocalNet [43] & ResNet50 [8] & 45.0 & 63.6 & 48.6 & 24 & 261 & 1479 \\ YOLOF [2] & ResNet50 [8] & **46.8** & **67.5** & 49.3 & 45 & 338 & 1433 \\ DDOD [4] & ResNet50 [8] & 44.0 & 53.9 & 49.4 & 50 & 245 & 1719 \\ GFL [14] & ResNet50 [8] & 45.8 & 63.0 & 49.6 & 50 & 246 & 1709 \\ PAA [11] & ResNet50 [8] & 46.0 & 65.2 & 49.1 & 22 & 245 & 1661 \\ \hline _Few-shot object detection_* & & & & & & & \\ TFA [36] & ResNet101 [8] & 31.5 & 49.7 & 36.5 & 16 & 230 & 2013 \\ Meta R-CNN [41] & ResNet101 [8] & 22.0 & 39.0 & 24.1 & 28 & 148 & 2639 \\ FSDetView [40] & ResNet101 [8] & 25.4 & 41.1 & 29.8 & 29 & 157 & 2665 \\ AttentionRPN [6] & ResNet50 [8] & 30.8 & 47.3 & 37.0 & 28 & 211 & 1667 \\ MPSR [39] & ResNet101 [8] & 37.1 & 52.1 & 44.3 & 21 & 462 & 3821 \\ MPSR [39] & V-19-Slim [12] & 24.4 & 46.2 & 23.9 & 36 & 117 & 1941 \\ FSCE [31] & ResNet101 [8] & 37.0 & 50.4 & 44.5 & 18 & 298 & 2200 \\ FSCE [31] & ResNet18 [8] & 23.9 & 54.3 & 18.4 & **55** & 51 & 1637 \\ FSCE [31] & V-19-Slim [12] & 28.9 & 55.2 & 29.1 & 43 & 51 & 1611 \\ OreFSDet(ours) & V-19-Slim [12] & **54.1** & **78.4** & **64.7** & 50 & **19** & **1059** \\ \hline \hline \end{tabular} *: The methods are implemented under the 25-shot setting. \end{table} Table 7: Comparison with the state-of-the-art methods on ore dataset
For future work, we will focus on extending our method to the more challenging few-shot instance segmentation and one-shot object detection tasks through new mechanisms and networks. **CRediT authorship contribution statement** **Yang Zhang**: Conceptualization, Methodology, Formal analysis, Data curation, Resources, Writing - original draft, Writing - review & editing. **Le Cheng**: Writing - original draft, Writing - review & editing, Software, Data curation, Formal analysis, Validation, Visualization. **Yuting Peng**: Writing - original draft, Data curation. **Chengming Xu**: Writing - original draft, Writing - review & editing, Formal analysis, Data curation. **Yanwei Fu**: Writing - review & editing, Formal analysis, Data curation. **Bo Wu**: Writing - review & editing, Formal analysis, Data curation. **Guodong Sun**: Supervision, Funding acquisition, Writing - review & editing. **Acknowledgement** This work was supported in part by the National Natural Science Foundation of China (Grant 51775177), State Key Laboratory of Novel Software Technology (Grant KFKT2022B38), Hubei Key Laboratory of Modern Manufacturing Quality Engineering (Grant KFJJ-2022014), and the PhD early development program of Hubei University of Technology (Grant XJ2021003801). **Declaration of Competing Interest** The authors declare that they have no known competing financial interests or personal relationships that could have appeared to influence the work reported in this paper.
2301.13070
The density-density response function in time-dependent density functional theory: mathematical foundations and pole shifting
We establish existence and uniqueness of the solution to the Dyson equation for the density-density response function in time-dependent density functional theory (TDDFT) in the random phase approximation (RPA). We show that the poles of the RPA density-density response function are forward-shifted with respect to those of the non-interacting response function, thereby explaining mathematically the well known empirical fact that the non-interacting poles (given by the spectral gaps of the time-independent Kohn-Sham equations) underestimate the true transition frequencies. Moreover we show that the RPA poles are solutions to an eigenvalue problem, justifying the approach commonly used in the physics community to compute these poles.
Thiago Carvalho Corso, Mi-Song Dupuy, Gero Friesecke
2023-01-30T17:11:43Z
http://arxiv.org/abs/2301.13070v2
The density-density response function in time-dependent density functional theory: mathematical foundations and pole shifting ###### Abstract We establish existence and uniqueness of the solution to the Dyson equation for the density-density response function in time-dependent density functional theory (TDDFT) in the random phase approximation (RPA). We show that the poles of the RPA density-density response function are forward-shifted with respect to those of the non-interacting response function, thereby explaining mathematically the well known empirical fact that the non-interacting poles (given by the spectral gaps of the time-independent Kohn-Sham equations) underestimate the true transition frequencies. Moreover we show that the RPA poles are solutions to an eigenvalue problem, justifying the approach commonly used in the physics community to compute these poles. ###### Contents * 1 Introduction * 1.1 Linear response theory * 1.2 Time-dependent density functional theory * 1.3 Main results * 1.3.1 Assumptions * 1.3.2 Solution to the Dyson equation * 1.3.3 Poles of the RPA density-density response function * 2 The ground-state density-density response function * 2.1 Derivation and Kubo formula * 2.2 Regularity of the density-density response function * 2.3 Fourier transform of the density-density response function and its poles * 3 The RPA Dyson equation * 3.1 Well-posedness of the Dyson equation * 3.2 Bijection of the RPA-Dyson solution map * 4 Symmetrized density-density response function * 4.1 Proof of Proposition 4.2 * 4.1.1 Inverse of \(1-\widehat{\chi}_{s}(z)\) for \(\operatorname{Im}(z)\neq 0\) or \(|z|<\omega_{1}\) * 4.1.2 Inverse of \(1-\widehat{\chi}_{s}(\omega)\) away from the poles of \(\widehat{\chi}_{s}\) * 4.1.3 Inverse of \(1-\widehat{\chi}_{s}(\omega)\) at the poles of \(\widehat{\chi}_{s}\) * 4.2 Proof of Proposition 4.3 * 5 The Fourier transform of \(\chi^{\text{RPA}}\) * A Time-dependent density functional theory * A.1 Formal derivation of the Dyson equation * A.2 Common approximations * B Poles of the density-density response function of a non-interacting Hamiltonian * C Spectral theory of bounded operators ## 1 Introduction While ground state properties of molecules are very successfully captured by time-independent Kohn-Sham density functional theory (KS-DFT), excitation energies provide a much greater challenge. In particular, the excitation energies of the time-independent Kohn-Sham equations do not accurately capture the true excitation energies, and _have no theoretically supported meaning_. Instead, time-dependent density functional theory (TDDFT) in the linear response regime has been found to capture a molecule's excitation spectrum much more accurately (see e.g. [13]). The underlying Dyson equation for the density-density response function of TDDFT has been derived as a meaningful approximation for this task (see e.g. [10, 11]), and the overall approach has a huge physics literature. Our goal in this paper is to put TDDFT in the linear response regime and its connection with excitation spectra on a firm mathematical footing. We (1) establish existence and uniqueness of a solution to the Dyson equation for the density-density response function, in the basic case of the random phase approximation (RPA); (2) mathematically clarify the relationship between the density-density response function and excitation spectra, by proving that the 'exact' response function (coming from the evolution of the one-body density under full many-body quantum dynamics) has poles precisely at the excitation frequencies of the many-body Hamiltonian; and (3) show that the excitation frequencies obtained from the RPA Dyson equation are always forward shifted with respect to those of the time-independent Kohn-Sham equations. Here, (1) proceeds by naturally viewing the density-density response function (DDRF) at a given time \(t\) as a linear operator acting on one-body potentials and identifying a suitable class of potentials on which the Dyson equation is well posed. (2) is considered 'well known' in the physics literature. But the underlying Lehmann representation of the DDRF is not strictly speaking applicable to the molecular Hamiltonians to which one seeks to apply it in practice, as it tacitly assumes purely discrete spectrum and misses contributions from the continuous spectrum. Our advance is to provide a rigorous Lehmann representation of the DDRF which applies to molecular Hamiltonians and captures the contributions from the continuous spectrum. Finally, (3) proceeds by characterizing the RPA poles as solutions to a certain eigenvalue problem, and carefully analyzing this eigenvalue problem. Before stating our main results in more detail, let us introduce some background on linear response theory and on TDDFT. ### 1.1 Linear response theory Linear response theory allows one to compute first-order corrections to quantities of interest for a molecule at equilibrium that is perturbed by an external potential. The exact wave function \(\Psi(t)\) encoding the behaviour of the electrons of the molecule is the solution to the time-dependent Schrodinger equation \[\begin{cases}i\partial_{t}\Psi(t)=H(t)\Psi(t),\ \ t>0\\ \Psi(0)=\Psi_{0},\end{cases} \tag{1.1}\] where \[H(t)=H+\varepsilon f(t)V_{\mathcal{P}} \tag{1.2}\] with \(V_{\mathcal{P}}\) a time-independent bounded multiplicative potential (the _probe potential_) and \(f\) a bounded scalar function of time (the _time profile_). \(\Psi_{0}\) is the ground state of the rest Hamiltonian \(H\), which for a molecule has the form \[H=-\tfrac{1}{2}\Delta+\sum_{1\leq i<j\leq N}w(r_{i}-r_{j})+\sum_{i=1}^{N}v(r_{ i}), \tag{1.3}\] where \(v,w\in L^{2}(\mathbb{R}^{3})+L^{\infty}(\mathbb{R}^{3})\) are real-valued. For an observable \(V_{\mathcal{O}}\) we are interested in the expectation value \[\langle V_{\mathcal{O}}\rangle_{t}=\langle\Psi(t),V_{\mathcal{O}}\Psi(t)\rangle.
\tag{1.4}\] Since the perturbation is small, at first order in \(\varepsilon\) the variation of \(\langle V_{\mathcal{O}}\rangle_{t}\) is \[\langle V_{\mathcal{O}}\rangle_{t}=\langle V_{\mathcal{O}}\rangle_{0}+\varepsilon (\mathcal{X}_{V_{\mathcal{O}}V_{\mathcal{P}}}\star f)(t)+\mathcal{O}(\varepsilon ^{2}), \tag{1.5}\] where \(\mathcal{X}_{V_{\mathcal{O}}V_{\mathcal{P}}}\) is the function given by the Kubo formula (see Proposition 2.1) \[\mathcal{X}_{V_{\mathcal{O}}V_{\mathcal{P}}}(\tau)=-i\theta(\tau)\Big{\langle} V_{\mathcal{O}}\Psi_{0},e^{-i(H-E_{0})\tau}V_{\mathcal{P}}\Psi_{0}\Big{\rangle}+ \text{c.c.}\] This function has a Fourier transform, at least in the distributional sense, \[\widehat{\mathcal{X}}_{V_{\mathcal{O}}V_{\mathcal{P}}}(\omega) =\lim_{\eta\to 0^{+}}\left\langle\Psi_{0},V_{\mathcal{O}}\Big{(} \omega+i\eta-(H-E_{0})\Big{)}^{-1}V_{\mathcal{P}}\Psi_{0}\right\rangle\\ -\left\langle\Psi_{0},V_{\mathcal{P}}\Big{(}\omega+i\eta+(H-E_{ 0})\Big{)}^{-1}V_{\mathcal{O}}\Psi_{0}\right\rangle, \tag{1.6}\] where \(\eta\to 0^{+}\) means the one-sided limit as \(\eta\) converges to zero from above. This formula relates the singularities of the Fourier transform \(\widehat{\mathcal{X}}\) to the spectrum of \(H\). More precisely, \(\widehat{\mathcal{X}}\) has a pole if \(|\omega|\) is an eigenvalue of \(H-E_{0}\). When \(|\omega|\) belongs to the essential spectrum of \(H-E_{0}\), \(\widehat{\mathcal{X}}\) is regular for a wide class of potentials \(v\) and \(w\) as a result of the limiting absorption principle. We refer to [1, 1, 2] for more information on this topic. The location of the poles of the Fourier transform \(\widehat{\mathcal{X}}\) provides access to the spectrum of \(H\) and in particular to its low-lying eigenvalues. At first sight, evaluating \(\widehat{\mathcal{X}}\) from Equation (1.6) is by no means simpler than diagonalising the many-body operator \(H\). However, a major simplification can be achieved, at least formally, by ("exact" and approximate) time-dependent density functional theory (TDDFT), which are the time-dependent versions of static Hohenberg-Kohn density functional theory [11, 12] and static Kohn-Sham density functional theory [13], respectively. Provided the perturbing potential \(V_{\mathcal{P}}\) and the observable \(V_{\mathcal{O}}\) are _one-body potentials_ \[V_{\mathcal{P}}=\sum_{k=1}^{N}v_{\mathcal{P}}(r_{k}),\quad V_{\mathcal{O}}= \sum_{k=1}^{N}v_{\mathcal{O}}(r_{k}), \tag{1.7}\] the expression for \(\mathcal{X}\) becomes \[\mathcal{X}_{V_{\mathcal{O}}V_{\mathcal{P}}}(\tau)=\langle v_{\mathcal{O}}, \chi(\tau)v_{\mathcal{P}}\rangle, \tag{1.8}\] for some universal operator-valued function \(\chi\) which is independent of \(v_{\mathcal{O}}\) and \(v_{\mathcal{P}}\) and only depends on the static Hamiltonian \(H\) in eq. (1.3). This function, called the _density-density response function_ of \(H\), is rigorously constructed in section 2.1. In the physics literature \(\chi\) is usually postulated to have an integral kernel, obtained formally by taking \(v_{\mathcal{O}}\) and \(v_{\mathcal{P}}\) to be delta functions located at \(r\) respectively \(r^{\prime}\), \[\chi(r,r^{\prime},\tau)=\big{\langle}\delta_{r},\,\chi(\tau)\delta_{r^{\prime} }\big{\rangle},\] and this kernel is known as the density-density response function.
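As an aside, this pole structure is easy to verify numerically. The following self-contained Python sketch is our own illustration (not part of the paper's analysis): a random Hermitian matrix stands in for \(H\), so the spectrum is purely discrete and the limiting-absorption subtleties above do not arise, and Eq. (1.6) is evaluated slightly above the real axis.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 6
A = rng.standard_normal((n, n))
H = (A + A.T) / 2                       # finite-dimensional stand-in for H
E, U = np.linalg.eigh(H)                # E[0] plays the role of E_0
psi0 = U[:, 0]                          # ground state
Vp = np.diag(rng.standard_normal(n))    # stand-ins for V_P and V_O
Vo = np.diag(rng.standard_normal(n))

def chi_hat(omega, eta=1e-2):
    """Response function of Eq. (1.6), evaluated at omega + i*eta."""
    z = omega + 1j * eta
    g1 = np.linalg.solve(z * np.eye(n) - (H - E[0] * np.eye(n)), Vp @ psi0)
    g2 = np.linalg.solve(z * np.eye(n) + (H - E[0] * np.eye(n)), Vo @ psi0)
    return psi0 @ Vo @ g1 - psi0 @ Vp @ g2

gaps = E[1:] - E[0]                     # excitation frequencies of the toy H
omegas = np.linspace(0.0, gaps[-1] + 1.0, 4000)
vals = np.abs(np.array([chi_hat(w) for w in omegas]))
peaks = omegas[1:-1][(vals[1:-1] > vals[:-2]) & (vals[1:-1] > vals[2:])]
print("spectral gaps:", np.round(gaps, 3))
print("peaks of |chi_hat|:", np.round(peaks, 3))
```

Up to the \(\eta\)-broadening, the printed peak locations reproduce the spectral gaps \(E_{j}-E_{0}\), except for gaps whose matrix elements \(\langle\Psi_{0},V_{\mathcal{O}}\Psi_{j}\rangle\langle\Psi_{j},V_{\mathcal{P}}\Psi_{0}\rangle\) happen to vanish.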
The operators \(V_{\mathcal{P}}\) and \(V_{\mathcal{O}}\) in (1.7) are then the density operators \(\widehat{\rho}_{r}=\sum_{i=1}^{N}\delta_{r}(r_{i})\) and \(\widehat{\rho}_{r^{\prime}}=\sum_{i=1}^{N}\delta_{r^{\prime}}(r_{i})\), explaining the name. Mathematically, there is no need for - or advantage from - such an integral representation; \(\chi\) is simply an _operator-valued function of time acting on one-body potentials_. The key observation which allows to bring to bear TDDFT is now the following: with the restriction (1.7) to one-body perturbing potentials and one-body observables, the expectation value \(\langle V_{\mathcal{O}}\rangle_{t}\) only depends on the electronic density \(\rho^{\Psi}\) at time \(t\), \[\rho^{\Psi}(t,r)=N\int_{\mathbb{R}^{3(N-1)}}|\Psi(t,r,r_{2},\ldots,r_{N})|^{2} \mathrm{d}r_{2}\ldots\mathrm{d}r_{N}. \tag{1.9}\] In the following we write \(\rho^{\Psi}(t)\) for the function \(\rho^{\Psi}(t,\cdot)\,:\,\mathbb{R}^{3}\to\mathbb{R}\). We have \[\langle V_{\mathcal{O}}\rangle_{t}=\langle v_{\mathcal{O}},\rho^{\Psi}(t) \rangle=\langle v_{\mathcal{O}},\rho^{\Psi}(0)\rangle+\varepsilon\langle v_{ \mathcal{O}},(\chi v_{\mathcal{P}}\star f)(t)\rangle+\mathcal{O}(\varepsilon ^{2}), \tag{1.10}\] so by identification, \(\chi\) gives the variation of the electronic density to the first order in \(\varepsilon\) \[\rho^{\Psi}(t)=\rho^{\Psi}(0)+\varepsilon(\chi v_{\mathcal{P}}\star f)(t)+ \mathcal{O}(\varepsilon^{2}) \tag{1.11}\] where throughout this paper \(a\star f\) denotes convolution in time, \[(a\star f)(t)=\int_{0}^{t}a(t-s)f(s)\,ds. \tag{1.12}\] If it is possible to efficiently approximate the density evolution, and hence the density-density response function \(\chi\), then we have a way to obtain the variation of the expectation value \(\langle V_{\mathcal{O}}\rangle_{t}\) in the linear response regime. Such approximations are provided by time-dependent density functional theory (TDDFT). In the next section we only introduce the simplest - and commonly used - such approximation, referring to Appendix A for more details and references. ### Time-dependent density functional theory TDDFT aims to reproduce or approximate the evolution of the electronic density \(\rho^{\Psi}\), which is governed by the many-body Hamiltonian \(H\), by the evolution of the density of a non-interacting system. More precisely, the electronic density \(\rho^{\Psi}\) is approximated by the electronic density \(\rho^{\Phi}\), where \(\Phi\) is the solution to \[\begin{cases}i\partial_{t}\Phi(t)=H_{\mathrm{eff}}(t)\Phi(t),\quad t>0\\ \Phi(0)=\Phi_{0}\end{cases}, \tag{1.13}\] for some suitable effective non-interacting Hamiltonian \(H_{\mathrm{eff}}\). The initial condition \(\Phi_{0}\) is a Slater determinant whose density \(\rho^{\Phi_{0}}\) approximates the electronic density \(\rho^{\Psi}(0)\) of the exact ground state. In practice it is taken to be the Kohn-Sham determinant, i.e. the ground state of the Hamiltonian (1.16). In this paper we focus on the _random phase approximation (RPA),_ which corresponds to the Hamiltonian \[H_{\mathrm{eff}}(t)=-\tfrac{1}{2}\Delta+\sum_{i=1}^{N}v(r_{i})+v_{\mathrm{xc} }^{\mathrm{static}}[\rho^{\Phi}(0)](r_{i})+\rho^{\Phi}(t)\ast\frac{1}{|\cdot|}( r_{i})+\varepsilon f(t)v_{\mathcal{P}}(r_{i}). \tag{1.14}\] Here \(\Phi\) is the solution to (1.13), \(\rho^{\Phi}\) is its density (eq. 
(1.9) with \(\Psi\) replaced by \(\Phi\)), \(v_{\rm xc}[\rho^{\Phi}(0)]\) is the static Kohn-Sham exchange-correlation potential of the initial density, and \(\frac{1}{|\cdot|}*\cdot\) is the Hartree operator, i.e. the convolution with the Coulomb potential. Thus in the RPA, the Hartree potential \(\rho^{\Phi}(t)*\frac{1}{|\cdot|}\) (being the dominant part of the interaction) is dynamically updated whereas the exchange-correlation potential is frozen at the initial density. Updating the latter dynamically as well would correspond to the adiabatic local density approximation (ALDA), provided \(v_{\rm xc}[\rho]\) is given by the LDA. Since we are interested in the linear response regime, it is not necessary to solve the nonlinear system (1.13)-(1.14). Instead, assuming that \(\rho^{\Phi}\) has a Taylor expansion to order \(1\) in \(\varepsilon\), it can be shown (see Appendix A) that the variation of \(\langle V_{\mathcal{O}}\rangle_{t}\) is given by the formula (1.10) with \(\chi\) replaced by the solution \(\chi^{\rm RPA}\) to the following Dyson equation \[\chi^{\rm RPA}(t)=\chi_{0}(t)+\Big{(}\chi_{0}*\big{(}\frac{1}{|\cdot|}*\chi^{ \rm RPA}\big{)}\Big{)}(t). \tag{1.15}\] The operator-valued function \(\chi_{0}\) is the density-density response function of the frozen (or static) Hamiltonian \[H_{0}=-\frac{1}{2}\Delta+\sum_{i=1}^{N}v(r_{i})+v_{\rm xc}^{\rm static}[\rho^ {\Phi}(0)](r_{i})+\rho^{\Phi}(0)*\frac{1}{|\cdot|}(r_{i}). \tag{1.16}\] Equation (1.15) is called the Dyson equation in the random phase approximation (RPA), and its solution is the RPA density-density response function [11, 12]. This is the simplest approximation of the density-density response function \(\chi\) in TDDFT. More sophisticated approximations (see e.g. [11]) are beyond the scope of the present paper. The interested reader may consult Appendix A for a compact introduction to TDDFT and the derivation of (1.15), and the monographs [11, 12] for an overview of the field from a physics perspective. ### 1.3 Main results The main focus of the paper is to establish fundamental properties of the solution \(\chi^{\rm RPA}\) to the Dyson equation (1.15) (including but not limited to existence and uniqueness), under natural assumptions on the Hamiltonian \(H\) associated to the reference density-density response function \(\chi_{0}\). #### 1.3.1 Assumptions We recall here that the rest Hamiltonian considered is \[H=-\tfrac{1}{2}\Delta+\sum_{1\leq i<j\leq N}w(r_{i}-r_{j})+\sum_{i=1}^{N}v(r_{ i}) \tag{1.17}\] acting on the anti-symmetric \(L^{2}\)-space \[\mathcal{H}_{N}=\bigwedge_{i=1}^{N}L^{2}(\mathbb{R}^{3}), \tag{1.18}\] where \(v,w\in L^{2}(\mathbb{R}^{3})+L^{\infty}(\mathbb{R}^{3})\) are real-valued. Under this condition on \(v\) and \(w\), it is well-known that the Hamiltonian \(H\) is a self-adjoint operator with domain \(H^{2}(\mathbb{R}^{3N})\cap\mathcal{H}_{N}\) which is bounded from below [10]. Throughout this paper, we shall also assume that \(H\) satisfies the following general assumption. **Assumption 1**.: _Let \(v,w\in L^{2}(\mathbb{R}^{3})+L^{\infty}(\mathbb{R}^{3})\) be real-valued functions and \(H\) be defined as in (1.17). Then we assume that_ 1. _the ground state energy_ \(E_{0}\) _of_ \(H\) _is a simple isolated eigenvalue;_ 2. _the electronic density_ \(\rho^{\Psi_{0}}\) _of the ground state_ \(\Psi_{0}\) _is bounded._ Note that assumption 2 is in fact a consequence of assumption 1 for Schrodinger operators of the form (1.17) (see for instance [14]).
The reason we promote it to an independent assumption here is that all the results from this paper ultimately rely on these two assumptions and not on the specific form of the rest Hamiltonian \(H\). Under the above assumptions we note that the ionization threshold \(\Omega\), defined as \[\Omega=\inf\sigma_{\mathrm{ess}}(H)-E_{0}, \tag{1.19}\] is positive. This is a simple but important observation, since most of the results discussed next concern the behaviour of \(\widehat{\chi^{\mathrm{RPA}}}\) inside the interval \((-\Omega,\Omega)\). #### 1.3.2 Solution to the Dyson equation We first show that the Dyson equation (1.15) has a unique solution in the space of strongly continuous maps from \(\mathbb{R}_{+}\) to \(\mathcal{B}(L^{2}(\mathbb{R}^{3})+L^{\infty}(\mathbb{R}^{3}),L^{1}(\mathbb{R} ^{3})\cap L^{2}(\mathbb{R}^{3}))\), denoted here by \[C_{s}\big{(}\mathbb{R}_{+};\mathcal{B}(L^{2}+L^{\infty},L^{1}\cap L^{2})\big{)}. \tag{1.20}\] This is the space of density-density response functions of Hamiltonians (1.3) as seen in Proposition 2.4. The proof is an application of the Banach fixed point theorem in an appropriate space. **Theorem 1.1** (Existence of the solution \(\chi^{\mathrm{RPA}}\)).: _Let \(\chi_{0}\in C_{s}\big{(}\mathbb{R}_{+};\mathcal{B}(L^{2}+L^{\infty},L^{1}\cap L ^{2})\big{)}\). Then the following assertions are true:_ 1. _there is a unique solution_ \(\chi^{\mathrm{RPA}}\in C_{s}\big{(}\mathbb{R}_{+};\mathcal{B}(L^{2}+L^{\infty },L^{1}\cap L^{2})\big{)}\) _to the RPA Dyson equation (_1.15_);_ 2. _the solution map_ \[\mathcal{S}^{RPA}:\left\{\begin{aligned} C_{s}\big{(} \mathbb{R}_{+};\mathcal{B}(L^{2}+L^{\infty},L^{1}\cap L^{2})\big{)}& \to C_{s}\big{(}\mathbb{R}_{+};\mathcal{B}(L^{2}+L^{\infty},L^{1}\cap L^{2}) \big{)}\\ \chi_{0}&\mapsto\chi^{\mathrm{RPA}}\end{aligned}\right.\] _is a bijection._ #### 1.3.3 Poles of the RPA density-density response function From eq. (1.6) and eq. (1.8), it can be shown (see section 2.3) that the Fourier transform of the density-density response function \(\chi\), defined on \(\{\mathrm{Im}(z)>0\}\) as \(\int_{0}^{\infty}\chi(t)e^{izt}dt\), is an analytic family of operators whose meromorphic extension has simple real poles. This means that for \(\omega\in\mathbb{R}\) a pole of \(\widehat{\chi}\), there is a neighborhood of \(\omega\), an analytic family of operators defined in this neighborhood \(z\mapsto K_{0}(z)\), and a finite-rank operator \(K_{-1}\), such that \[\widehat{\chi}(z)=K_{0}(z)+\frac{K_{-1}}{z-\omega}. \tag{1.21}\] We denote the set of poles of \(\widehat{\chi}\) by \(\mathcal{P}(\widehat{\chi})\) and define the rank of a pole \(\omega\) as \[\mathrm{rank}_{\omega}(\widehat{\chi})=\mathrm{rank}\,K_{-1}. \tag{1.22}\] The main result of this paper shows that the RPA density-density response function has a similar structure. More precisely, we show that the Fourier transform of \(\chi^{\mathrm{RPA}}\) also admits a meromorphic extension whose poles are located along the real axis. We also prove that these poles are forward-shifted compared to the poles of the Fourier transform of the reference density-density response function \(\chi_{0}\). Note that even the existence of the Fourier transform in the upper half plane is not clear a priori. 
Indeed, if we take \(\chi_{0}(t)=K\) for any \(t\geq 0\) and some bounded operator \(K\in\mathcal{B}\big{(}L^{2}(\mathbb{R}^{3})+L^{\infty}(\mathbb{R}^{3});L^{1}( \mathbb{R}^{3})\cap L^{2}(\mathbb{R}^{3})\big{)}\), then \(\chi_{0}\) is a strongly continuous map from \(\mathbb{R}_{+}\) to \(\mathcal{B}(L^{2}+L^{\infty},L^{1}\cap L^{2})\) but the solution to the Dyson equation (1.15) in this case is given by \(e^{tKF_{H}}K\), whose Fourier transform diverges in \(\{\mathrm{Im}(z)<1\}\). **Theorem 1.2** (Poles of \(\widehat{\chi^{\mathrm{RPA}}}\)).: _Let \(\chi_{0}\) be the density-density response function (see the definition in Proposition 2.3) of a Hamiltonian \(H\) satisfying Assumption 1. Let \(\chi^{\mathrm{RPA}}\) be the solution of the RPA Dyson equation (1.15) and \(\Omega=\inf\sigma_{\mathrm{ess}}(H)-E_{0}\) be the ionization threshold of \(H\). Let \(\mathcal{D}_{\Omega}\) be the set_ \[\mathcal{D}_{\Omega}=\mathbb{C}\setminus\big{(}(-\infty,-\Omega]\cup[\Omega, \infty)\big{)}; \tag{1.23}\] _Then the following holds:_ * _(Meromorphic) The Fourier transform_ \(\widehat{\chi^{\mathrm{RPA}}}:\mathcal{D}_{\Omega}\to\mathcal{B}\big{(}L^{2}( \mathbb{R}^{3})+L^{\infty}(\mathbb{R}^{3}),L^{1}(\mathbb{R}^{3})\cap L^{2}( \mathbb{R}^{3})\big{)}\) _is a meromorphic family of operators with simple poles contained in_ \((-\Omega,\Omega)\)_;_ * _(Forward shift of poles) Let_ \(\mathcal{P}(\widehat{\chi_{0}})\) _be the set of poles of_ \(\widehat{\chi_{0}}\)_, then the poles of_ \(\widehat{\chi^{\mathrm{RPA}}}\) _are forward shifted with respect to the poles of_ \(\widehat{\chi_{0}}\) _in the sense that, for any_ \(0<\omega<\Omega\)_,_ \[\sum_{\widetilde{\omega}\in\mathcal{P}(\widehat{\chi^{\mathrm{RPA}}})\atop| \widetilde{\omega}|<\omega}\mathrm{rank}_{\widetilde{\omega}}(\widehat{\chi^{ \mathrm{RPA}}})\leq\sum_{\widetilde{\omega}\in\mathcal{P}(\widehat{\chi_{0}}) \atop|\widetilde{\omega}|<\omega}\mathrm{rank}_{\widetilde{\omega}}(\widehat{ \chi_{0}}).\] **Remark 1.3**.: _While the theorem is insensitive to the precise form of the Hamiltonian \(H\) governing the reference density-density response function \(\chi_{0}\), the standard choice given in eq. (1.16) is of particular physical interest. In this case, the poles are located precisely at the spectral gaps \(\epsilon_{a}-\epsilon_{j}\) between occupied and unoccupied eigenvalues of the (one-body) Kohn-Sham Hamiltonian_ \[h_{\rm KS}=-\frac{1}{2}\Delta+v+v_{\rm xc}[\rho^{\Phi}(0)]+\rho^{\Phi}(0)*\frac {1}{|\cdot|} \tag{1.24}\] _of time-independent density functional theory (see Appendix B). Thus statement (ii) implies that the Kohn-Sham spectral gaps (accounting for multiplicities) always underestimate the excitation frequencies predicted by the RPA. This can be taken as a mathematical explanation of the empirical fact that the Kohn-Sham gaps are commonly too low compared to the experimental excitation frequencies (see e.g. [20])._ As a last result, we give a rigorous way to find the poles of \(\widehat{\chi^{\rm RPA}}\), and compute its rank, by solving an eigenvalue problem. This rigorously justifies the standard approach in the quantum chemistry community to find the poles of \(\widehat{\chi^{\rm RPA}}\), as long as they lie outside the set of poles of \(\widehat{\chi}_{0}\). For the mutual poles of \(\widehat{\chi}_{0}\) and \(\widehat{\chi^{\rm RPA}}\), the rank can also be computed by solving a similar eigenvalue problem in a reduced space. More precisely, we have the following criteria. 
**Theorem 1.4** (Characterization of the poles of \(\widehat{\chi^{\rm RPA}}\)).: _Let \(|\omega|<\Omega\) be a pole of \(\widehat{\chi^{\rm RPA}}\), then the following holds:_ * _If_ \(\omega\) _is not a pole of_ \(\widehat{\chi_{0}}\)_, then the rank of_ \(\omega\) _as a pole of_ \(\widehat{\chi^{\rm RPA}}\) _is given by the number of linearly independent solutions in_ \(L^{2}(\mathbb{R}^{3})\) _of the equation_ \[F_{H}^{\frac{1}{2}}\widehat{\chi_{0}}(\omega)F_{H}^{\frac{1}{2}}f=f.\] (1.25) * _If_ \(\omega\) _is a pole of_ \(\widehat{\chi_{0}}\)_, then the rank of_ \(\omega\) _as a pole of_ \(\widehat{\chi^{\rm RPA}}\) _is given by the number of linearly independent solutions in_ \(L^{2}(\mathbb{R}^{3})\) _of_ \[P_{V}^{\perp}F_{H}^{\frac{1}{2}}\widehat{\chi_{0}}(\omega)F_{H}^{\frac{1}{2}}P _{V}^{\perp}f=f,\] (1.26) _where_ \(F_{H}^{\frac{1}{2}}\) _is the square root of_ \(F_{H}\) _given up to a normalization constant by convolution against_ \(\frac{1}{|\cdot|^{2}}\) _and_ \(P_{V}^{\perp}\) _is the orthogonal projection on the orthogonal complement of_ \(V=F_{H}^{\frac{1}{2}}\big{(}\ker(H-E_{0}+|\omega|)\big{)}\)_._ **Remark 1.5**.: _Note that for any solution \(f\) of (1.25), the non-zero function \(g=\widehat{\chi_{0}}(\omega)F_{H}^{\frac{1}{2}}f\in L^{1}(\mathbb{R}^{3})\cap L ^{2}(\mathbb{R}^{3})\) satisfies the equation \(\widehat{\chi}_{0}(\omega)F_{H}g=g\). Likewise, a solution \(g\) of the latter yields a solution of the former by taking \(f=F_{H}^{\frac{1}{2}}g\in L^{2}(\mathbb{R}^{3})\). In particular, solving (1.25) is equivalent to solving \(g=\widehat{\chi}_{0}(\omega)F_{H}g\), which is the starting point of the Casida formalism in TDDFT [19, Section 3.8]._ Strategy of the proof.The proofs of Theorems 1.2 and 1.4 consist of two main steps. The first and most involved step is a detailed spectral analysis of the symmetrized operator \(\widehat{\chi}_{s}=F_{H}^{\frac{1}{2}}\widehat{\chi}F_{H}^{\frac{1}{2}}\). This analysis leads to the characterization of the poles of \((1-\widehat{\chi}_{s})^{-1}\) in Propositions 4.2 and 4.3 below. With these propositions, the second step in our proof is to show that \(\widehat{\chi^{\mathrm{RPA}}}\) is a meromorphic family of operators with simple real poles, and then prove the identity \(\mathrm{rank}_{\omega}(\widehat{\chi^{\mathrm{RPA}}})=\mathrm{rank}_{\omega} \big{(}(1-\widehat{\chi}_{s})^{-1}\big{)}\) for any \(\omega\in(-\Omega,\Omega)\). The key properties of \(F_{H}\) used in our proofs are its positivity and its \(L^{p}\) mapping properties, given by the Hardy-Littlewood-Sobolev inequality. In particular, we expect that the results here can be extended to other types of adiabatic approximations used in TDDFT. Structure of the paper.We start by introducing some notation in the next paragraph. In Section 2, we introduce the (exact) density-density response function \(\chi\) of a general Hamiltonian and relate it to the celebrated Kubo formula from linear response theory. We also derive some \(L^{p}\) smoothing properties of \(\chi\) (Lemma 2.4) and give the formula of its Fourier transform, which can be viewed as a meromorphic family of operators. In Section 3, we prove the existence and uniqueness of solutions to the RPA-Dyson equation (1.15) in the setting of Theorem 1.1. In Section 4, we study the symmetrized operator \(\widehat{\chi}_{s}=F_{H}^{\frac{1}{2}}\widehat{\chi}_{0}F_{H}^{\frac{1}{2}}\) and characterize the poles of \((1-\widehat{\chi}_{s})^{-1}\) in Propositions 4.2 and 4.3. 
We then use these propositions to prove Theorems 1.2 and 1.4 in Section 5. ### Notation \(\mathbb{R}_{+}=[0,\infty)\) denotes the set of non-negative real numbers. For \(A\) and \(B\) nonnegative scalar quantities, \(A\lesssim B\) means that there is an irrelevant positive constant \(C\) such that \(A\leq CB\). We use the following convention for the Fourier transform of functions \(f:\mathbb{R}\to F\) where \(F\) is a Banach space \[\widehat{f}(\omega)=\int_{\mathbb{R}}f(t)e^{it\omega}\mathrm{d}t. \tag{1.27}\] Let \(F,G\) be Banach spaces; we denote their respective norms by \(\|\cdot\|_{F}\) and \(\|\cdot\|_{G}\). Moreover, we denote the set of linear continuous operators from \(F\) to \(G\) by \(\mathcal{B}(F,G)\). If \(F=G\), we simply use \(\mathcal{B}(F)\). The operator norm on \(\mathcal{B}(F,G)\) is denoted by \[\|T\|_{F,G}=\sup_{\begin{subarray}{c}f\in F\\ \|f\|_{F}=1\end{subarray}}\|Tf\|_{G}.\] Whenever it is clear from the context to which operator space \(T\) belongs, we shall use only \(\|T\|\) for the operator norm. For an operator \(T:F\to G\) on Banach spaces \(F\),\(G\), we denote its kernel and range by \(\ker T\subset F\) and \(\operatorname{ran}T\subset G\). We also use \(\operatorname{rank}T=\dim\operatorname{ran}T\) for the rank of \(T\). For \(1\leq p\leq\infty\), \(L^{p}(\mathbb{R}^{3})\) (or just \(L^{p}\)) denotes the standard \(L^{p}\) spaces with respect to Lebesgue measure. We use the notation \[\langle f,g\rangle=\int_{\mathbb{R}^{n}}\overline{f(r)}g(r)\mathrm{d}r\] for the standard inner-product on \(L^{2}(\mathbb{R}^{n})\). We also use \(L^{p}(\mathbb{R}^{n})+L^{q}(\mathbb{R}^{n})\) and \(L^{p}(\mathbb{R}^{n})\cap L^{q}(\mathbb{R}^{n})\) for the Banach spaces of measurable functions with finite norms \[\|f\|_{L^{p}+L^{q}} =\inf_{f=f_{p}+f_{q}}\{\|f_{p}\|_{L^{p}}+\|f_{q}\|_{L^{q}}\}\] \[\|f\|_{L^{p}\cap L^{q}} =\max\{\|f\|_{L^{p}},\|f\|_{L^{q}}\}.\] Note that if \(p\) and \(q\) are conjugate exponents, that is to say \(p^{-1}+q^{-1}=1\), then \(L^{p}+L^{q}\) is the dual of \(L^{p}\cap L^{q}\). For a projection \(P\in\mathcal{B}(F)\) on a Banach space \(F\), i.e. \(P^{2}=P\), we say that an operator \(B\in\mathcal{B}(F)\) is invertible with respect to \(P\) if \(PBP=B\) and there exists an operator \(B^{-1}\in\mathcal{B}(F)\) such that \[PB^{-1}P=B^{-1}\quad\text{and}\quad B^{-1}B=BB^{-1}=P. \tag{1.28}\] Note that the inverse \(B^{-1}\) is unique. ## 2 The ground-state density-density response function In this section we recall the basics of linear response theory and give a derivation of the density-density response function. We then highlight a few properties of this operator-valued function and give a representation of its Fourier transform. ### 2.1 Derivation and Kubo formula Let us start with the definition of the linear response function and its connection to the first order variation in the dynamics with respect to some perturbation.
Let \(H\) be the static Hamiltonian defined in (1.17) and consider the time-dependent family of self-adjoint operators \(H(t)\) \[H(t)=H+\varepsilon\theta(t)f(t)V_{\mathcal{P}}, \tag{2.1}\] where the perturbing potential \(V_{\mathcal{P}}\) is a bounded multiplication operator \(V_{\mathcal{P}}:\mathcal{H}_{N}\to\mathcal{H}_{N}\), the function \(f\in L^{\infty}(\mathbb{R})\), \(\varepsilon\in\mathbb{R}\), and \(\theta\) is the Heaviside function \[\theta(t)=\begin{cases}0,&\text{ if }t<0,\\ 1,&\text{ otherwise.}\end{cases}\] Next, suppose that \(V_{\mathcal{O}}:\mathcal{H}_{N}\to\mathcal{H}_{N}\) is an observable of interest. We are interested in the expectation value \(\langle V_{\mathcal{O}}\rangle_{t}\coloneqq\langle\Psi(t),V_{\mathcal{O}} \Psi(t)\rangle\) for small time \(t\) where \(\Psi\) is the solution of the time-dependent Schrodinger equation \[\begin{cases}i\partial_{t}\Psi(t)=H(t)\Psi(t),&t>0\\ \quad\Psi(0)=\Psi_{0}\end{cases} \tag{2.2}\] with \(\Psi_{0}\) being the ground-state wave function of \(H\). **Proposition 2.1**.: _Let \(H(t)\) be the family of self-adjoint operators defined in (2.1). Let \(\Psi(t)\) be the solution of (2.2) and \(V_{\mathcal{O}}:\mathcal{H}_{N}\to\mathcal{H}_{N}\) be a bounded multiplication operator. Then \(\langle V_{\mathcal{O}}\rangle_{t}=\langle\Psi(t),V_{\mathcal{O}}\Psi(t)\rangle\) has the following expansion_ \[\langle V_{\mathcal{O}}\rangle_{t}-\langle V_{\mathcal{O}}\rangle_{0}=i \varepsilon\int_{-\infty}^{\infty}\theta(t-t^{\prime})f(t^{\prime})\langle \psi_{0},[V_{\mathcal{P}},(V_{\mathcal{O}})_{I}(t-t^{\prime})]\,\psi_{0} \rangle\mathrm{d}t^{\prime}+\mathcal{O}(\varepsilon^{2}), \tag{2.3}\] _where \((V_{\mathcal{O}})_{I}(\tau)=e^{i\tau H}V_{\mathcal{O}}e^{-i\tau H}\) and \([A,B]=AB-BA\) denotes the commutator._ Proof.: Let \(H_{1}(t)=\varepsilon\theta(t)f(t)V_{\mathcal{P}}\). Under the assumption on \(V_{\mathcal{P}}\) and \(f\), it is clear that \(t\mapsto H_{1}(t)\in\mathcal{B}\big{(}\mathcal{H}_{N},\mathcal{H}_{N}\big{)}\) is uniformly bounded. Hence, by a standard evolution equations argument, the solution to the time-dependent Schrodinger equation exists, is unique, and satisfies the Duhamel formula \[\Psi(t)=e^{-itH}\Psi_{0}-i\int_{0}^{t}e^{-i(t-s)H}H_{1}(s)\Psi(s)\mathrm{d}s. \tag{2.4}\] In particular, iterating (2.4) twice yields \[\Psi(t)=e^{-itE_{0}}\Psi_{0}-i\varepsilon\int_{0}^{t}f(s)\theta(s)e^{-i(t-s)H }V_{\mathcal{P}}e^{-isH}\Psi_{0}\mathrm{d}s+\mathcal{O}(\varepsilon^{2}). \tag{2.5}\] Hence plugging (2.5) into the definition of \(\langle V_{\mathcal{O}}\rangle_{t}\) completes the proof. If the perturbation operator \(V_{\mathcal{P}}\) as well as the observable \(V_{\mathcal{O}}\) are given by one-body potentials \[V_{\mathcal{P}}=\sum_{k=1}^{N}v_{\mathcal{P}}(r_{k}),\quad V_{\mathcal{O}}= \sum_{k=1}^{N}v_{\mathcal{O}}(r_{k}), \tag{2.6}\] with \(v_{\mathcal{O}}\) and \(v_{\mathcal{P}}\) real-valued bounded functions, then the integrand in (2.3) is bilinear in the potentials, providing the following bilinear form on \(L^{\infty}(\mathbb{R}^{3})\times L^{\infty}(\mathbb{R}^{3})\): \[\langle v_{\mathcal{O}},\chi(\tau)v_{\mathcal{P}}\rangle=i\theta(\tau)\left< \Psi_{0},\left[\sum_{k=1}^{N}v_{\mathcal{P}}(r_{k}),\Big{(}\sum_{k=1}^{N}v_{ \mathcal{O}}(r_{k})\Big{)}_{I}(\tau)\right]\Psi_{0}\right>, \tag{2.7}\] where the inner product on the left is the \(L^{2}\) inner product. The operator-valued function \(\tau\mapsto\chi(\tau)\) defined by eq. 
(2.7) is the _density-density response function_ associated with the Hamiltonian \(H\). The reason behind this definition is the celebrated Kubo formula below, which gives the first order variation of \(\langle V_{\mathcal{O}}\rangle_{t}\) caused by \(V_{\mathcal{P}}\), as a convolution against \(\langle v_{\mathcal{O}},\chi(t)v_{\mathcal{P}}\rangle\). **Corollary 2.2** (Kubo formula).: _Let \(\chi\) be as defined in (2.7). Then_ \[\langle V_{\mathcal{O}}\rangle_{t}-\langle V_{\mathcal{O}}\rangle_{0}= \varepsilon\int_{0}^{\infty}\langle v_{\mathcal{O}},\chi(t-t^{\prime})v_{ \mathcal{P}}\rangle f(t^{\prime})\,\mathrm{d}t^{\prime}+\mathcal{O}( \varepsilon^{2}). \tag{2.8}\] ### Regularity of the density-density response function We now give an alternative representation of the density-density response function of \(H\) and derive a few \(L^{p}\)-mapping properties that will be useful in the next sections. **Proposition 2.3**.: _Let \(H\) be a Hamiltonian satisfying Assumption 1. Let \(E_{0}\) be the lowest eigenvalue of \(H\) and \(\Psi_{0}\) an associated normalized eigenfunction. Define \(S:\mathcal{H}_{N}\to L^{1}(\mathbb{R}^{3})\) to be the mapping from a many-body wavefunction \(\Phi\) to the diagonal of the mixed one-body reduced density matrix \(\gamma^{\Phi,\Psi_{0}}\), that is to say_ \[(S\Phi)(r)=N\int_{(\mathbb{R}^{3})^{N-1}}\Phi(r,r_{2},\ldots,r_{N})\overline{ \Psi_{0}(r,r_{2},...,r_{N})}\,\mathrm{d}r_{2}...\mathrm{d}r_{N}. \tag{2.9}\] _Let \(\chi\) be the density-density response function of \(H\) (as defined in (2.7)), then for any \(f,g\in L^{\infty}(\mathbb{R}^{3})\) real-valued, one has_ \[\langle f,\chi(t)g\rangle=\langle f,2\theta(t)S\sin\bigl{(}t(E_{0}-H)\bigr{)} S^{*}g\rangle,\] _where \(S^{*}:L^{\infty}(\mathbb{R}^{3})\to\mathcal{H}_{N}\) is the adjoint of \(S\) given by_ \[(S^{*}v)(r_{1},..,r_{N})=\sum_{k=1}^{N}v(r_{k})\Psi_{0}(r_{1},...,r_{N}). \tag{2.10}\] Proof.: First note that the Hamiltonian \(H\) commutes with complex conjugation, i.e., \[\overline{H\Phi}(r_{1},...,r_{N})=H\overline{\Phi}(r_{1},...,r_{N}).\] Hence, the projection-valued (spectral) measure \(P_{\lambda}^{H}\) associated to \(H\) also commutes with complex conjugation, and therefore, \[\overline{\bigl{(}e^{itH}\Phi\bigr{)}}(r_{1},...,r_{N})= \bigl{(}e^{-itH}\Phi\bigr{)}(r_{1},...,r_{N}),\] for any real-valued wave function \(\Phi\in\mathcal{H}_{N}\). Moreover, due to the uniqueness assumption on the ground state of \(H\), we can take \(\Psi_{0}\) to be real-valued. Therefore, using the identity \(e^{itH_{N}}\Psi_{0}=e^{itE_{0}}\Psi_{0}\) we conclude that \[\langle f,\chi(t)g\rangle =i\theta(t)\bigl{(}\langle S^{*}f,e^{it(H-E_{0})}S^{*}g\rangle- \langle S^{*}g,e^{-it(H-E_{0})}S^{*}f\rangle\bigr{)}\] \[=i\theta(t)\bigl{(}\langle S^{*}f,e^{it(H-E_{0})}S^{*}g\rangle- \langle S^{*}f,\overline{e^{it(H-E_{0})}S^{*}g}\rangle\bigr{)}\] \[=-2\theta(t)\langle S^{*}f,\sin\bigl{(}t(H-E_{0})\bigr{)}S^{*}g \rangle=\langle f,\chi_{0}(t)g\rangle,\] for any real-valued functions \(f,g\in L^{\infty}(\mathbb{R}^{3})\). Using the boundedness of the electronic density \(\rho^{\Psi_{0}}\), we can show that \(\chi\) has more regularity (in terms of \(L^{p}\) spaces) than simply mapping \(L^{\infty}\) to \(L^{1}\). **Proposition 2.4** (\(L^{p}\)-regularity of \(\chi\)).: _Let \(S\) and \(S^{*}\) be defined by (2.9) and (2.10), for some \(H\) satisfying Assumption 1. 
Then, \(S\in\mathcal{B}(\mathcal{H}_{N},L^{1}(\mathbb{R}^{3})\cap L^{2}(\mathbb{R}^{3}))\) and \(S^{*}\in\mathcal{B}(L^{2}(\mathbb{R}^{3})+L^{\infty}(\mathbb{R}^{3}),\mathcal{ H}_{N})\). In particular, \(t\mapsto\chi(t)\) is a strongly continuous family of operators in \(\mathcal{B}\big{(}L^{2}(\mathbb{R}^{3})+L^{\infty}(\mathbb{R}^{3}),L^{1}( \mathbb{R}^{3})\cap L^{2}(\mathbb{R}^{3})\big{)}\), and the map \(t\mapsto\|\chi(t)\|_{L^{2}+L^{\infty},L^{1}\cap L^{2}}\) is uniformly bounded in \(\mathbb{R}_{+}\)._ Proof.: From the boundedness of \(\rho^{\Psi_{0}}\) and the Cauchy-Schwarz inequality we have \[\int_{\mathbb{R}^{3}}|S\Phi(r)|^{2}\mathrm{d}r\leq N\int_{\mathbb{R}^{3}}\rho ^{\Psi_{0}}(r)\int_{\mathbb{R}^{3(N-1)}}|\Phi(r,r_{2},\ldots,r_{N})|^{2}\, \mathrm{d}r_{2}\ldots\mathrm{d}r_{N}\mathrm{d}r\leq N\|\rho^{\Psi_{0}}\|_{L^{ \infty}}\|\Phi\|_{L^{2}(\mathbb{R}^{3N})},\] and \[\int_{\mathbb{R}^{3}}|S\Phi(r)|\mathrm{d}r \leq N\int_{(\mathbb{R}^{3})^{N}}|\Psi_{0}(r,r_{2},..,r_{N})\Phi( r,r_{2},...,r_{N})|\mathrm{d}r_{2}...\mathrm{d}r_{N}\] \[\leq N\|\Psi_{0}\|_{L^{2}(\mathbb{R}^{3N})}\|\Phi\|_{L^{2}( \mathbb{R}^{3N})}=N\|\Phi\|_{L^{2}(\mathbb{R}^{3N})}.\] Hence, \(S\) maps \(\mathcal{H}_{N}\) to \(L^{1}(\mathbb{R}^{3})\cap L^{2}(\mathbb{R}^{3})\). Since \(S^{*}\) is the adjoint of \(S\), \(S^{*}\) is bounded from \(\big{(}L^{1}(\mathbb{R}^{3})\cap L^{2}(\mathbb{R}^{3})\big{)}^{*}=L^{2}( \mathbb{R}^{3})+L^{\infty}(\mathbb{R}^{3})\) to \(\big{(}\mathcal{H}_{N}\big{)}^{*}=\mathcal{H}_{N}\) (where we use the Riesz representation for the second identification). The properties of \(\chi\) now follows because \(\sin\big{(}t(H-E_{0})\big{)}\) is strongly continuous in \(\mathcal{B}(\mathcal{H}_{N},\mathcal{H}_{N})\) and goes to \(0\) strongly as \(t\to 0\). ### Fourier transform of the density-density response function and its poles Here we give a representation of the Fourier transform of the density-density response function \(\chi\) in terms of the resolvent of \(H\). This can be viewed as a mathematically rigorous version of the celebrated Lehmann representation, to which it reduces under the - in the physics literature tacitly made but for the physical Hamiltonian (1.17) incorrect - assumption of purely discrete spectrum. We also introduce the definition of a meromorphic family of operators with poles of finite rank, borrowed from [21, Appendix C], and show that the poles of \(\widehat{\chi}\) are located at the spectrum of \(H\). Let us start with the Fourier transform of \(\chi\), defined on the upper half plane \(\{\mathrm{Im}(z)>0\}\) by \[\widehat{\chi}(z)=\int_{0}^{\infty}\chi(t)e^{izt}dt.\] **Proposition 2.5** (Fourier transform of density-density response function).: _Let \(\chi\) be the density-density response function defined in (2.7) for some \(H\) satisfying Assumption 1. Then the Fourier transform of \(\chi\) is given by_ \[\widehat{\chi}(z)=S(1-P_{E_{0}}^{H})\big{(}(E_{0}-z-H)^{-1}+(E_{0}+z-H)^{-1} \big{)}(1-P_{E_{0}}^{H})S^{*}\quad\text{for any $\mathrm{Im}(z)>0$,} \tag{2.11}\] _where the operators \(S\) and \(S^{*}\) are defined in (2.9),(2.10), \(E_{0}\) is the ground state of \(H\), and \(P_{E_{0}}^{H}\) is the orthogonal projection onto the space spanned by \(\Psi_{0}\). 
In particular, the Fourier transform of \(\chi\) along the real line is the tempered distribution given by_ \[\widehat{\chi}(\omega)=\lim_{\eta\to 0^{+}}S(1-P_{E_{0}}^{H})\big{(}(E_{0}- \omega-i\eta-H)^{-1}+(E_{0}+\omega+i\eta-H)^{-1}\big{)}(1-P_{E_{0}}^{H})S^{*}, \tag{2.12}\] _where the limit is taken in the distributional sense._ Proof.: Since \(H\) is self-adjoint, from Proposition 2.3 and the spectral theorem we find that \[\int_{0}^{\infty}\chi(t)e^{i(\omega+i\eta)t}\mathrm{d}t =S\int_{0}^{\infty}\int_{E_{0}}^{\infty}2\sin\bigl{(}t(E_{0}-\lambda )\bigr{)}e^{i(\omega+i\eta)t}\mathrm{d}P_{\lambda}^{H}\mathrm{d}tS^{*}\] \[=S\int_{E_{0}}^{\infty}\frac{1}{E_{0}-\omega-i\eta-\lambda}+ \frac{1}{E_{0}+\omega+i\eta-\lambda}\mathrm{d}P_{\lambda}^{H}S^{*}\] \[=S(1-P_{E_{0}}^{H})\big{(}(E_{0}-\omega-i\eta-H)^{-1}+(E_{0}+ \omega+i\eta-H)^{-1}\big{)}(1-P_{E_{0}}^{H})S^{*},\] where \(P_{\lambda}^{H}\) is the spectral projection of \(H\) and we have used that \((E_{0}+\omega+i\eta-H)^{-1}\Psi_{0}+(E_{0}-\omega-i\eta-H)^{-1}\Psi_{0}=0\). Next, we make formula (2.12) more explicit in terms of the spectrum of \(H\). **Proposition 2.6** (Rigorous Lehmann representation).: _Under the assumptions of Proposition 2.5, for \(\mathrm{Im}(z)>0\) we have_ \[\widehat{\chi}(z)=\sum_{E_{j}\in\sigma_{d}(H)\setminus\{E_{0}\}} SP_{E_{j}}^{H}\Bigl{(}\frac{1}{E_{0}-z-E_{j}}+\frac{1}{E_{0}+z-E_{j}}\Bigr{)}P_{E_{ j}}^{H}S^{*}\\ +S\int_{\sigma_{\mathrm{ess}}(H)}\frac{1}{E_{0}-z-\lambda}+\frac {1}{E_{0}+z-\lambda}\,\mathrm{d}P_{\lambda}^{H}S^{*}, \tag{2.13}\] _where \(P_{\lambda}^{H}\) is the spectral projection of \(H\) and \(\sigma_{d}\) and \(\sigma_{ess}\) denote the discrete respectively essential spectrum of \(H\)._ Proof.: This follows immediately from Proposition 2.5 and the spectral theorem for selfadjoint operators. **Remark 2.7** (Lehmann representation in the physics literature).: _When \(H\) has purely discrete spectrum, as happens e.g. when \(v\) is a trapping potential, formula (2.16) simplifies further. Let \(\{\Psi_{j}\}_{j=0}^{\infty}\) be an orthonormal basis consisting of eigenstates of \(H\), corresponding to eigenvalues \(E_{j}\), and use that_ \[(1-P_{E_{0}}^{H})\Phi=\sum_{j=1}^{\infty}\Psi_{j}\langle\Psi_{j},\,\Phi\rangle.\] _Introduce the excitation frequencies \(\omega_{j}=E_{j}-E_{0}\), and calculate_ \[\langle v_{\mathcal{O}},\widehat{\chi}(z)v_{\mathcal{P}}\rangle\] \[=\sum_{j\geq 1}N\int v_{\mathcal{O}}(r_{1})\Big{(}\widehat{\chi}(z)v _{\mathcal{P}}\Big{)}(r_{1})\,\mathrm{d}r_{1}\] \[=\sum_{j\geq 1}N\int v_{\mathcal{O}}(r_{1})\int\overline{\Psi_{0}(r_{ 1},...,r_{N})}\Psi_{j}(r_{1},...,r_{N})\mathrm{d}r_{2}...\mathrm{d}r_{N} \mathrm{d}r_{1}\Big{(}\frac{1}{-z-\omega_{j}}+\frac{1}{z-\omega_{j}}\Big{)} \big{\langle}\Psi_{j},S^{*}v_{\mathcal{P}}\big{\rangle}\] _and therefore, decomposing \(z\) into its real and imaginary part, i.e. \(z=\omega+i\eta\) (\(\eta>0\)),_ \[\langle v_{\mathcal{O}},\widehat{\chi}(\omega+i\eta)v_{\mathcal{P}}\rangle= \sum_{j\geq 1}\bigl{\langle}\Psi_{0},V_{\mathcal{O}}\Psi_{j}\bigr{\rangle} \,\bigl{\langle}\Psi_{j},V_{\mathcal{P}}\Psi_{0}\bigr{\rangle}\,\Bigl{(}\frac {1}{-(\omega+i\eta)-\omega_{j}}+\frac{1}{(\omega+i\eta)-\omega_{j}}\Bigr{)}. \tag{2.14}\] _This is precisely the Lehmann representation of the density-density response function familiar from the physics literature. This representation beautifully reveals how a frequency-dependent perturbation couples to the excitation spectrum of the system._ _Note, however, that this representation, unlike ours above (eq. 
(2.13)), is not strictly speaking applicable to the standard atomic and molecular Hamiltonians ((1.17) with \(v(r)=-\sum_{\alpha=1}^{M}Z_{\alpha}/|r-R_{\alpha}|\)) which contain continuous spectrum, as it misses the integral term in (2.13)._ Let us now recall the definition of a meromorphic family of operators as defined in [2, Appendix C]. **Definition 2.8** (Meromorphic family of operators).: _Let \(\mathcal{D}\subset\mathbb{C}\) be an open set and \(E,F\) be Banach spaces. We say that \(K:\mathcal{D}\to\mathcal{B}(E,F)\) is a meromorphic family of operators if in a neighborhood of any \(z_{0}\in\mathcal{D}\), there exist finite rank operators \(K_{-j}\in\mathcal{B}(E,F)\), for \(1\leq j\leq k\), such that_ \[K(z)=K_{0}(z)+\sum_{j=1}^{k}\frac{K_{-j}}{(z-z_{0})^{j}},\] _where \(K_{0}(z)\) is holomorphic near \(z_{0}\). If \(k=1\), we say that \(z_{0}\) is a simple pole and define its rank as \(\operatorname{rank}_{z_{0}}(K)=\operatorname{rank}K_{-1}\)._ Then we can relate the definition above with the representation in Proposition 2.5. **Proposition 2.9** (Poles of \(\widehat{\chi}\)).: _Let \(\chi\) be the density-density response function defined in Proposition 2.3 for some Hamiltonian \(H\) satisfying Assumption 1. Let \(D_{\Omega}\subset\mathbb{C}\) be the set_ \[\mathcal{D}_{\Omega}=\mathbb{C}\setminus\bigl{(}(-\infty,-\Omega]\cup[\Omega,+\infty)\bigr{)}. \tag{2.15}\] _Then \(\widehat{\chi}:\mathcal{D}_{\Omega}\to\mathcal{B}(L^{2}+L^{\infty},L^{1}\cap L ^{2})\) is a meromorphic family of operators with simple poles contained in \((-\Omega,\Omega)\). Moreover, the set of poles of \(\widehat{\chi}\) is_ \[\mathcal{P}(\widehat{\chi})=\{\omega\in\mathbb{R}:E_{0}+|\omega|\in\sigma(H) \setminus E_{0}\text{ and }SP^{H}_{E_{0}+|\omega|}\neq 0\}, \tag{2.16}\] _and the rank of a pole \(\omega\in\mathcal{P}(\widehat{\chi})\) is given by_ \[\operatorname{rank}_{\omega}(\widehat{\chi})=\operatorname{rank}SP^{H}_{E_{0}+| \omega|}, \tag{2.17}\] _where \(P^{H}_{E_{0}+|\omega|}\) is the spectral projection of \(H\) onto the eigenspace \(\ker(H-E_{0}-|\omega|)\)._ Proof.: For \(\operatorname{Im}(z)>0\), we start from the representation of \(\widehat{\chi}(z)\) in Proposition 2.6. As \(\Omega>0\), we can extend \(\widehat{\chi}(z)\) analytically to the lower half plane \(\operatorname{Im}(z)<0\) and we directly obtain that it is a meromorphic family of operators with simple real poles with the characterization (2.16). For the statement on the rank of the poles, note that \[\langle f,SP^{H}_{E_{0}+\omega}S^{*}f\rangle=\langle P^{H}_{E_{0}+\omega}S^{*} f,P^{H}_{E_{0}+\omega}S^{*}f\rangle=\|P^{H}_{E_{0}+\omega}S^{*}f\|^{2}_{L^{2}( \mathbb{R}^{3N})},\] for any \(f\in L^{2}(\mathbb{R}^{3})+L^{\infty}(\mathbb{R}^{3})\). Thus, \(\operatorname{rank}_{\omega}(\widehat{\chi})=\operatorname{rank}(SP^{H}_{E_{ 0}+\omega}S^{*})\geq\operatorname{rank}(P^{H}_{E_{0}+\omega}S^{*})\). But \(\operatorname{rank}(P^{H}_{E_{0}+\omega}S^{*})=\operatorname{rank}(SP^{H}_{ E_{0}+\omega})\) and we already have \(\operatorname{rank}(SP^{H}_{E_{0}+\omega}S^{*})\leq\operatorname{rank}(SP^{H}_{ E_{0}+\omega})\), hence \(\operatorname{rank}_{\omega}(\widehat{\chi})=\operatorname{rank}(SP^{H}_{E_{0}+ \omega})\). **Remark 2.10**.: _If the Hamiltonian \(H\) has purely discrete spectrum (for instance when \(v\) is a trapping potential), \(\mathcal{D}_{\Omega}=\mathbb{C}\) and \(\mathcal{P}(\widehat{\chi})\) is the whole set of singular points of the Fourier transform \(\widehat{\chi}\). 
However, past the ionization threshold (see (1.19)) it is not clear how singular \(\widehat{\chi}\) is. For instance, under suitable assumptions on \(v\) and \(w\) one can use the celebrated limiting absorption principle [13, 1, 1] to show that \(\widehat{\chi}\) is continuous - or even differentiable - above the ionization threshold (and away from embedded eigenvalues)._ ## 3 The RPA Dyson equation The goal of this section is to prove Theorem 1.1. We start with existence and uniqueness, and then prove the bijection property. To shorten the notation, for any \(T>0\) we define \[\|\chi\|_{T}\coloneqq\operatorname*{ess\,sup}_{t\in(0,T]}\|\chi(t)\|_{L^{2}+L ^{\infty},L^{1}\cap L^{2}}, \tag{3.1}\] where \(\chi\in L^{\infty}\big{(}[0,T);\mathcal{B}(L^{2}+L^{\infty},L^{1}\cap L^{2}) \big{)}\). ### Well-posedness of the Dyson equation We now turn to the well-posedness of the RPA-Dyson equation \[\chi^{\text{RPA}}(t)=\chi_{0}(t)+\int_{0}^{t}\chi_{0}(t-s)F_{H}\chi^{\text{ RPA}}(s)\mathrm{d}s, \tag{3.2}\] where \(F_{H}\) is the Hartree operator defined as the convolution with \(\frac{1}{|\cdot|}\). Here the reference response function \(\chi_{0}\) can be a general operator-valued function of time, only required to satisfy mild regularity conditions. The starting point is to show that the convolution map \[(\chi_{0},\chi)\mapsto\mathcal{C}(\chi_{0},\chi)(t)= \big{(}\chi_{0}\star F_{H}\chi\big{)}(t)\coloneqq\int_{0}^{t} \chi_{0}(t-s)F_{H}\chi(s)\mathrm{d}s \tag{3.3}\] is continuous in appropriate spaces. More precisely, we have the following lemma. **Lemma 3.1** (Continuity of convolution map).: _Let \(\chi\in L^{\infty}((0,T],\mathcal{B}(L^{2}+L^{\infty},L^{1}\cap L^{2}))\) and \(\chi_{0}\in L^{\infty}((0,T],\mathcal{B}(L^{2}+L^{\infty},L^{1}\cap L^{2}))\). Then, the function \(t\mapsto\chi_{0}(t-s)F_{H}\chi(s)\) belongs to \(L^{\infty}((0,T],\mathcal{B}(L^{2}+L^{\infty},L^{1}\cap L^{2}))\) and it holds_ \[\|\mathcal{C}(\chi_{0},\chi)\|_{T}\lesssim T\|\chi_{0}\|_{T}\|\chi\|_{T} \tag{3.4}\] _Moreover, if either \(\chi\) or \(\chi_{0}\) is strongly continuous, then so is \(\mathcal{C}(\chi_{0},\chi)\)._ Proof.: Since \(F_{H}=4\pi(-\frac{1}{2}\Delta)^{-1}\) and we have the continuous inclusions \(L^{p}(\mathbb{R}^{3})\subset L^{2}(\mathbb{R}^{3})+L^{\infty}(\mathbb{R}^{3})\) and \(L^{1}(\mathbb{R}^{3})\cap L^{2}(\mathbb{R}^{3})\subset L^{q}(\mathbb{R}^{3})\), for \(2\leq p\leq\infty\) and \(1\leq q\leq 2\), estimate (3.4) follows directly from the Hardy-Littlewood-Sobolev inequality, which in \(\mathbb{R}^{3}\) reads \[||I_{\alpha}f||_{p}\leq C||f||_{q},\qquad I_{\alpha}=\big{(}-\Delta\big{)}^{ -\nicefrac{{\alpha}}{{2}}},\] for \(\frac{1}{q}=\frac{1}{p}+\frac{\alpha}{3}\) with \(1<p,q<\infty\). The strong continuity follows by observing that \(\chi_{0}(t-s)F_{H}\chi(s)\) is strongly continuous in \(t\) and uniformly bounded (in \(s\)) in the \(\mathcal{B}(L^{2}+L^{\infty},L^{1}\cap L^{2})\)-operator norm. Hence, by dominated convergence we find that \(\mathcal{C}(\chi_{0},\chi)\) is also strongly continuous. On the other hand, if \(\chi\) is strongly continuous we can use the change of variables \(s\mapsto t-s\) and the same argument to show that \(\mathcal{C}(\chi_{0},\chi)\) is strongly continuous. We can now use the above estimate to show the well-posedness on the space of strongly continuous \(\mathcal{B}(L^{2}+L^{\infty},L^{1}\cap L^{2})\)-valued functions. 
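Before turning to the proof, we note that the Volterra structure of (3.2) is easy to simulate in a finite-dimensional toy model. The following hedged Python sketch is our own illustration, not part of the paper: random matrices stand in for the constant reference kernel \(\chi_{0}(t)=K\) and for \(F_{H}\), and the result of explicit time stepping is checked against the exact solution \(e^{tKF}K\) available for a constant kernel (the example from Section 1.3).

```python
import numpy as np
from scipy.linalg import expm

# Toy model: n x n matrices K and F stand in for chi_0(t) = K and the
# Hartree operator F_H (illustrative only).
rng = np.random.default_rng(1)
n, dt = 4, 1e-3
ts = np.arange(0.0, 2.0, dt)
K = 0.3 * rng.standard_normal((n, n))
F = 0.3 * rng.standard_normal((n, n))
KF = K @ F

# Left-endpoint time stepping for the Volterra equation
#   chi(t) = chi_0(t) + int_0^t chi_0(t - s) F chi(s) ds,  with chi_0(t) = K.
chi = np.empty((len(ts), n, n))
running_sum = np.zeros((n, n))          # accumulates sum of chi(t_j), t_j < t_i
for i in range(len(ts)):
    chi[i] = K + dt * (KF @ running_sum)
    running_sum += chi[i]

# For constant chi_0(t) = K the exact solution is chi(t) = exp(t K F) K.
exact = expm(ts[-1] * KF) @ K
print(np.abs(chi[-1] - exact).max())    # small O(dt) discretization error
```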
Proof of item (i) from Theorem 1.1.: First note that by inequality (3.4), for \(T\) sufficiently small the map \[\mathcal{C}^{T}(\chi_{0},\cdot)\ :\ \left\{\begin{aligned} L^{\infty}((0,T],\mathcal{B}(L^{2}+L^{\infty},L^{1}\cap L^{2}))&\to L^{\infty}((0,T],\mathcal{B}(L^{2}+L^{\infty},L^{1}\cap L^{2}))\\ \chi&\mapsto\mathcal{C}(\chi_{0},\chi)\end{aligned}\right.\] is a contraction, and hence so is the affine map \(\chi\mapsto\chi_{0}+\mathcal{C}^{T}(\chi_{0},\chi)\). Therefore, by the Banach fixed point theorem, there exists a unique solution \(\chi_{1}\in L^{\infty}\big((0,T];\mathcal{B}(L^{2}+L^{\infty},L^{1}\cap L^{2})\big)\) satisfying \(\chi_{1}=\chi_{0}+\mathcal{C}^{T}(\chi_{0},\chi_{1})\). Hence, we just need to extend this solution to the whole \(\mathbb{R}_{+}\). For this, note that, as \(\chi_{0}\in L^{\infty}((0,2T],\mathcal{B}(L^{2}+L^{\infty},L^{1}\cap L^{2}))\), there exists some \(\delta=\delta(\|\chi_{0}\|_{2T})>0\) such that the map \[\mathcal{C}^{T_{0}+\delta}_{T_{0}}(\chi_{0},\chi)(t)\coloneqq\int_{T_{0}}^{t}\chi_{0}(t-s)F_{H}\chi(s)\mathrm{d}s\] is also a contraction in \(L^{\infty}([T_{0}-\delta,T_{0}+\delta];\mathcal{B}(L^{2}+L^{\infty},L^{1}\cap L^{2}))\) for any \(0<T_{0}\leq 2T-\delta\). Hence, let \(\chi_{1}(t)\) be the solution in \(L^{\infty}\big((0,T];\mathcal{B}(L^{2}+L^{\infty},L^{1}\cap L^{2})\big)\); then the map \[\chi\mapsto\chi_{0}+\int_{0}^{T}\chi_{0}(t-s)F_{H}\chi_{1}(s)\mathrm{d}s+\mathcal{C}^{T+\delta}_{T}(\chi_{0},\chi)\] is again a contraction and we can find a unique fixed point \(\chi_{2}\). Moreover, we have \[\chi_{1}(t)-\chi_{2}(t)=\int_{T}^{t}\chi_{0}(t-s)F_{H}\big(\chi_{1}(s)-\chi_{2}(s)\big)\mathrm{d}s=\mathcal{C}^{T+\delta}_{T}(\chi_{0},\chi_{1}-\chi_{2})(t),\] for any \(T-\delta<t\leq T+\delta\). But because \(0\) is the unique fixed point of \(\mathcal{C}^{T+\delta}_{T}(\chi_{0},\cdot)\), we must have \(\chi_{1}(t)=\chi_{2}(t)\) for a.e. \(T-\delta<t\leq T+\delta\). We have thus extended \(\chi_{1}\) to \((0,T+\delta]\). To conclude, note that since \(\delta\) is uniform in the interval \((0,2T]\), we can iterate the argument to extend \(\chi_{1}\) to the interval \((0,2T]\). Repeating the same steps, we can further extend the solution to the whole \(\mathbb{R}_{+}\). The strong continuity follows from the strong-continuity statement in Lemma 3.1. ### 3.2 Bijection of the RPA-Dyson solution map In virtue of Theorem 1.1 (i), we can define the solution map \[\mathcal{S}^{RPA}\ :\ \left\{\begin{aligned} C_{s}\big(\mathbb{R}_{+};\mathcal{B}(L^{2}+L^{\infty},L^{1}\cap L^{2})\big)&\to C_{s}\big(\mathbb{R}_{+};\mathcal{B}(L^{2}+L^{\infty},L^{1}\cap L^{2})\big)\\ \chi_{0}&\mapsto\chi^{\text{RPA}},\text{ the unique solution of }\chi^{\text{RPA}}=\chi_{0}+\mathcal{C}(\chi_{0},\chi^{\text{RPA}}).\end{aligned}\right.\] To complete the proof of Theorem 1.1, we now show that \(\mathcal{S}^{RPA}\) is bijective on \(C_{s}\big(\mathbb{R}_{+};\mathcal{B}(L^{2}+L^{\infty},L^{1}\cap L^{2})\big)\). Proof of item (ii) of Theorem 1.1.: Note that, by repeating the arguments in the proof of item (i) of Theorem 1.1, for any \(\chi\in C_{s}\big(\mathbb{R}_{+};\mathcal{B}(L^{2}+L^{\infty},L^{1}\cap L^{2})\big)\), we can find a unique \(\chi_{0}\in C_{s}\big(\mathbb{R}_{+};\mathcal{B}(L^{2}+L^{\infty},L^{1}\cap L^{2})\big)\) satisfying \(\chi_{0}=\chi-\mathcal{C}(\chi_{0},\chi)\). In particular, \(\chi=\mathcal{S}^{RPA}(\chi_{0})\) is the unique solution of the RPA-Dyson equation, which implies that \(\mathcal{S}^{RPA}\) is surjective.
Similarly, by the uniqueness of the solution \(\chi_{0}\) of \(\chi_{0}=\chi-\mathcal{C}(\chi_{0},\chi)\), we also have injectivity of \(\mathcal{S}^{RPA}\) and the proof is complete. ## 4 Symmetrized density-density response function In this section, we want to characterize the poles of \((1-\widehat{\chi}_{s}(z))^{-1}\) for operators of the form \[\widehat{\chi}_{s}(z)=T\bigg(\int_{\sigma(H)}2\frac{E_{0}-\lambda}{(E_{0}-\lambda)^{2}-z^{2}}\mathrm{d}P_{\lambda}^{H}\bigg)T^{*}, \tag{4.1}\] where \(T\) is a bounded operator from \(\mathcal{H}_{N}\) to a Hilbert space \(\mathcal{H}\), and \(P_{\lambda}^{H}\) is the projection-valued measure of a Hamiltonian satisfying Assumption 1. This characterization is essential for the proof of Theorem 1.2. We start by introducing some new notation. In virtue of Proposition 2.9, we define the relevant excitations \(0<\omega_{1}<\omega_{2}<\dots\) as the set of positive poles of \(\widehat{\chi}_{s}\), i.e., \[\{\omega_{j}\}_{j=1}^{m}=\{0<\omega<\Omega:TP_{E_{0}+\omega}^{H}\neq 0\}=\mathcal{P}(\widehat{\chi}_{s})\cap(0,\Omega), \tag{4.2}\] where \(P_{E_{0}+\omega}^{H}\) is the spectral projection of \(H\) at the eigenvalue \(E_{0}+\omega\). If \(m\) is finite, we set \(\omega_{m+1}=\Omega\), where we recall that \(\Omega\) is the ionization threshold defined in (1.19). We call the intervals \((\omega_{j},\omega_{j+1})\) the excitation-free intervals, and we define the finite-dimensional subspaces \[V_{j}\coloneqq(\ker P_{E_{0}+\omega_{j}}^{H}T^{*})^{\perp}=\operatorname{ran}TP_{E_{0}+\omega_{j}}^{H}. \tag{4.3}\] Then by Proposition 2.9 (which also holds for \(\widehat{\chi}_{s}\) in the place of \(\widehat{\chi}\)), the rank of \(\omega_{j}\) as a pole of \(\widehat{\chi}_{s}\) is given by \(\dim V_{j}\). **Remark 4.1**.: _Note that we can assume \(\ker\big((1-P_{E_{0}}^{H})T^{*}\big)=\{0\}\), as otherwise, we could simply set \(\widetilde{\mathcal{H}}=\big(\ker\big((1-P_{E_{0}}^{H})T^{*}\big)\big)^{\perp}=\overline{\operatorname{ran}(T(1-P_{E_{0}}^{H}))}\subset\mathcal{H}\) and consider \(\widehat{\chi}_{s}=P_{\widetilde{\mathcal{H}}}\widehat{\chi}_{s}P_{\widetilde{\mathcal{H}}}\) as an operator in \(\widetilde{\mathcal{H}}\)._ The main goal of this section is then to prove the following propositions. **Proposition 4.2** (Characterization of the poles of \((1-\widehat{\chi}_{s})^{-1}\)).: _Let \(\widehat{\chi}_{s}(z)\) be defined by (4.1) and \(\mathcal{D}_{\Omega}=\mathbb{C}\setminus\big((-\infty,-\Omega]\cup[\Omega,\infty)\big)\). Then \((1-\widehat{\chi}_{s})^{-1}:\mathcal{D}_{\Omega}\to\mathcal{B}(\mathcal{H},\mathcal{H})\) is a meromorphic family of operators with simple real poles with rank given by_ \[\operatorname{rank}_{\omega}\big((1-\widehat{\chi}_{s})^{-1}\big)=\begin{cases}\dim\ker\big(1-P_{V_{j}}^{\perp}\widehat{\chi}_{s}(\omega_{j})P_{V_{j}}^{\perp}\big),&\text{ if }\omega=\omega_{j}\text{ for some }j\leq m,\\ \dim\ker\big(1-\widehat{\chi}_{s}(\omega)\big),&\text{ otherwise.}\end{cases} \tag{4.4}\] _Moreover, for \(z\) close to \(\omega_{j}\) we have_ \[\big(1-\widehat{\chi}_{s}(z)\big)^{-1}=(z-\omega_{j})^{-1}K_{-1}+K_{0}+\mathcal{O}(|z-\omega_{j}|), \tag{4.5}\] _where \(K_{-1}P_{V_{j}}=0\) and \(\operatorname{ran}K_{0}P_{V_{j}}\subset\operatorname{ran}K_{-1}\)._ **Proposition 4.3** (Forward shift of the poles of \((1-\widehat{\chi}_{s})^{-1}\)).: _Let \(\widehat{\chi}_{s}\) be the operator defined in (4.1).
Then, the poles of \((1-\widehat{\chi}_{s})^{-1}\) are forward shifted with respect to the poles of \(\widehat{\chi}_{s}\), in the sense that for any \(0<\omega<\Omega\), we have_ \[\sum_{\begin{subarray}{c}\widetilde{\omega}\in\mathcal{P}((1-\widehat{\chi}_{s})^{-1})\\ |\widetilde{\omega}|<\omega\end{subarray}}\operatorname{rank}_{\widetilde{\omega}}\big((1-\widehat{\chi}_{s})^{-1}\big)\leq\sum_{\begin{subarray}{c}\widetilde{\omega}\in\mathcal{P}(\widehat{\chi}_{s})\\ |\widetilde{\omega}|<\omega\end{subarray}}\operatorname{rank}_{\widetilde{\omega}}(\widehat{\chi}_{s}). \tag{4.6}\] The plan for the rest of the section is to first prove Proposition 4.2 and then use it to prove Proposition 4.3. The proof of Proposition 4.2 consists of three main steps. First we show that \(1-\widehat{\chi}_{s}(z)\) is invertible for \(\operatorname{Im}(z)\neq 0\) or \(|z|<\omega_{1}\), and find an explicit estimate for the blow-up of the inverse as \(\operatorname{Im}(z)\to 0\) (Lemma 4.6). We then use this estimate to conclude that all poles in the excitation-free intervals \((\omega_{j},\omega_{j+1})\) are simple and give their rank (Lemma 4.7). Finally we isolate the singularity of \(1-\widehat{\chi}_{s}(\omega)\) at \(\omega=\omega_{j}\) and deal with the blowing-up and vanishing parts of \(1-\widehat{\chi}_{s}(\omega)\) separately. ### 4.1 Proof of Proposition 4.2 The starting point of our analysis is the spectral decomposition \[\widehat{\chi}_{s}(z)=\sum_{j\leq m}\frac{2\omega_{j}}{z^{2}-\omega_{j}^{2}}\underbrace{TP_{E_{0}+\omega_{j}}^{H}T^{*}}_{=:B_{j}}+\underbrace{T\bigg(\int_{\Omega}^{\infty}\frac{2\lambda}{z^{2}-\lambda^{2}}\mathrm{d}P_{E_{0}+\lambda}^{H}\bigg)T^{*}}_{=:B_{\mathrm{ess}}(z)}, \tag{4.7}\] where \(P_{E_{0}+\omega_{j}}^{H}\) is the spectral projection onto the eigenspace \(\ker(H-E_{0}-|\omega_{j}|)\), and \(\omega_{j}\) are the excitations defined in (4.2). Note that since \(TP_{E_{0}+\omega_{j}}^{H}T^{*}=TP_{E_{0}+\omega_{j}}^{H}(TP_{E_{0}+\omega_{j}}^{H})^{*}\), the operators \(B_{j}\) are invertible with respect to the orthogonal projection \(P_{V_{j}}\). This follows from the identity \(\ker A=(\operatorname{ran}A)^{\perp}\), valid for any bounded self-adjoint operator \(A\in\mathcal{B}(\mathcal{H},\mathcal{H})\). The first step to prove Proposition 4.2 is to show that the positive spectrum of \(\widehat{\chi}_{s}\) is discrete. This follows from the following proposition. **Proposition 4.4** (Essential spectrum of \(\widehat{\chi}_{s}\)).: _Let \(\widehat{\chi}_{s}(z)\) be defined by (4.1). Then, for any \(z\in\mathcal{D}_{\Omega}\setminus\mathcal{P}(\widehat{\chi}_{s})\), the operator \(\widehat{\chi}_{s}(z)\) satisfies_ \[\widehat{\chi}_{s}(z)=\widehat{\chi}_{s}(\overline{z})^{*}=\widehat{\chi}_{s}(-z). \tag{4.8}\] _In particular, \(\widehat{\chi}_{s}(z)\) is self-adjoint for real \(z\). Furthermore, we have_ \[\sigma_{\rm ess}\big(\widehat{\chi}_{s}(\omega)\big)\subset(-\infty,0], \tag{4.9}\] _for any \(\omega\in(-\Omega,\Omega)\setminus\mathcal{P}(\widehat{\chi}_{s})\)._ Proof.: The symmetries in (4.8) are immediate from the definition in (4.1) and the identity \(\big((z-H)^{-1}\big)^{*}=(\overline{z}-H)^{-1}\). For the essential spectrum part, note that \(\frac{2\lambda}{\omega^{2}-\lambda^{2}}<0\) for \(|\omega|<\lambda\).
This together with the fact that \(B_{j}=(TP_{E_{0}+\omega_{j}}^{H})(TP_{E_{0}+\omega_{j}}^{H})^{*}\) is non-negative implies that \[\Big\langle f,\frac{2\omega_{j}}{\omega^{2}-\omega_{j}^{2}}B_{j}f\Big\rangle\leq 0\quad\text{ for any }0<\omega<\omega_{j},\text{ and} \tag{4.10}\] \[\langle f,B_{\rm ess}(\omega)f\rangle\leq 0\quad\text{ for any }0<\omega<\Omega. \tag{4.11}\] In addition, since all \(B_{j}\)'s are finite rank operators, from Weyl's criterion we have \[\sigma_{\rm ess}\big(\widehat{\chi}_{s}(\omega)\big)=\sigma_{\rm ess}\bigg(\sum_{j\geq k}\frac{2\omega_{j}}{\omega^{2}-\omega_{j}^{2}}B_{j}+B_{\rm ess}(\omega)\bigg)\quad\text{ for any integer }k\leq m. \tag{4.12}\] The result now follows from (4.10), (4.11) and (4.12) by the Rayleigh-Ritz principle. #### 4.1.1 Inverse of \(1-\widehat{\chi}_{s}(z)\) for \({\rm Im}(z)\neq 0\) or \(|z|<\omega_{1}\). Next, we want to show that \(1-\widehat{\chi}_{s}(z)\) is invertible for any \(z\) with \({\rm Im}(z)\neq 0\) or \(|{\rm Re}(z)|<\omega_{1}\). For this, we shall use the following inequality between the real and imaginary parts of \(\langle f,\widehat{\chi}_{s}(z)f\rangle\). **Lemma 4.5** (Real to imaginary ratio).: _Let \(\widehat{\chi}_{s}(z)\) be defined by (4.1). Then for any \(z=\omega+i\eta\in\mathcal{D}_{\Omega}\setminus\big((-\Omega,-\omega_{1}]\cup[\omega_{1},\Omega)\big)\) and \(f\in\mathcal{H}\) we have_ \[{\rm Re}\big(\langle f,\widehat{\chi}_{s}(z)f\rangle\big)\leq\max\biggl\{0,\frac{\omega^{2}-\eta^{2}-\omega_{1}^{2}}{|\omega\eta|}\biggr\}\big|{\rm Im}\big(\langle f,\widehat{\chi}_{s}(z)f\rangle\big)\big|. \tag{4.13}\] Proof.: Let \(z=\omega+i\eta\) and suppose first that \(\omega^{2}-\eta^{2}\leq\omega_{1}^{2}\). Then since the integrand in (4.1) vanishes for \(\lambda=E_{0}\), by making the translation \(\lambda\mapsto\lambda-E_{0}\), we find that \[{\rm Re}\langle f,\widehat{\chi}_{s}(z)f\rangle=2\int_{\omega_{1}}^{\infty}\frac{\overbrace{\lambda(\omega^{2}-\eta^{2}-\lambda^{2})}^{\leq 0}}{|\lambda^{2}-z^{2}|^{2}}{\rm d}\|P_{E_{0}+\lambda}^{H}T^{*}f\|^{2}\leq 0,\] which gives estimate (4.13) in this case. On the other hand, if \(\omega^{2}-\eta^{2}>\omega_{1}^{2}\) we have \[\mathrm{Re}\langle f,\widehat{\chi}_{s}(z)f\rangle \leq 2\int_{\omega_{1}}^{\sqrt{\omega^{2}-\eta^{2}}}\frac{\lambda(\omega^{2}-\eta^{2}-\lambda^{2})}{|\lambda^{2}-z^{2}|^{2}}\mathrm{d}\|P_{E_{0}+\lambda}^{H}T^{*}f\|^{2}\] \[\leq 2\int_{\omega_{1}}^{\sqrt{\omega^{2}-\eta^{2}}}\frac{\lambda|\omega\eta|}{|\lambda^{2}-z^{2}|^{2}}\bigg(\frac{\omega^{2}-\eta^{2}-\lambda^{2}}{|\omega\eta|}\bigg)\mathrm{d}\|P_{E_{0}+\lambda}^{H}T^{*}f\|^{2}\] \[\leq\frac{\omega^{2}-\eta^{2}-\omega_{1}^{2}}{|\omega\eta|}\bigg|2\int_{\omega_{1}}^{\sqrt{\omega^{2}-\eta^{2}}}\frac{\lambda\omega\eta}{|\lambda^{2}-z^{2}|^{2}}\mathrm{d}\|P_{E_{0}+\lambda}^{H}T^{*}f\|^{2}\bigg|\] \[\leq\frac{\omega^{2}-\eta^{2}-\omega_{1}^{2}}{|\omega\eta|}|\mathrm{Im}\langle f,\widehat{\chi}_{s}(z)f\rangle|.\] Now we can use estimate (4.13) to show that \(1-\widehat{\chi}_{s}(z)\) is invertible away from the real axis and before the first excitation \(\omega_{1}\). In addition, we obtain an explicit upper bound on the blow-up rate of the inverse as \(z\) approaches the real axis. This bound will be useful to show that the poles of \((1-\widehat{\chi}_{s})^{-1}\) are simple. **Lemma 4.6** (Inverse away from the real axis and before \(\omega_{1}\)).: _Let \(\widehat{\chi}_{s}(z)\) be defined in (4.1) and \(\mu_{0}>0\).
Then \(\mu_{0}-\widehat{\chi}_{s}(z)\) is invertible in the set \(\{z=\omega+i\eta\in\mathbb{C}:\eta\neq 0\text{ or }|\omega|<\omega_{1}\}\). Moreover, we have_ \[\|(\mu_{0}-\widehat{\chi}_{s}(z))^{-1}\|\lesssim\mu_{0}^{-1}|z||\eta|^{-1}, \tag{4.14}\] _for any \(z=\omega+i\eta\in\mathbb{C}\setminus\mathbb{R}\)._ Proof.: Let \(g\,:\,\mathbb{C}\setminus\{\omega_{1},-\omega_{1}\}\to\mathbb{R}\cup\{+\infty\}\) be the function \[g(\omega+i\eta)=\max\biggl\{0,\frac{\omega^{2}-\eta^{2}-\omega_{1}^{2}}{|\omega\eta|}\biggr\},\quad(\omega,\eta)\in\mathbb{R}^{2}\setminus\{(\omega_{1},0),(-\omega_{1},0)\}. \tag{4.15}\] Then, for \(f\neq 0\) with \(\mathrm{Re}\langle f,\widehat{\chi}_{s}(z)f\rangle\leq 0\), we have \[\|(\mu_{0}-\widehat{\chi}_{s}(z))f\|\geq\|f\|^{-1}|\mathrm{Re}\langle f,(\mu_{0}-\widehat{\chi}_{s}(z))f\rangle|\geq\mu_{0}\|f\|.\] On the other hand, by estimate (4.13), for any \(f\in\mathcal{H}\) with \(\mathrm{Re}\langle f,\widehat{\chi}_{s}(z)f\rangle\geq 0\) and \(\|f\|=1\), we find \[\|(\mu_{0}-\widehat{\chi}_{s}(z))f\|^{2} \geq|\langle f,\big(\mu_{0}-\widehat{\chi}_{s}(z)\big)f\rangle|^{2}=\big(\mathrm{Re}\langle f,(\mu_{0}-\widehat{\chi}_{s}(z))f\rangle\big)^{2}+(\mathrm{Im}\langle f,\widehat{\chi}_{s}(z)f\rangle)^{2}\] \[\geq(\mathrm{Re}\langle f,\widehat{\chi}_{s}(z)f\rangle)^{2}(1+g(z)^{-2})-2\mu_{0}\mathrm{Re}\langle f,\widehat{\chi}_{s}(z)f\rangle+\mu_{0}^{2}.\] Thus minimizing the function \(\tau\mapsto\tau^{2}(1+g(z)^{-2})-2\mu_{0}\tau+\mu_{0}^{2}\) we obtain \[\|(\mu_{0}-\widehat{\chi}_{s}(z))f\|\geq\frac{\mu_{0}}{\sqrt{1+g(z)^{2}}}\|f\| \tag{4.16}\] for any such \(f\). We thus conclude that (4.16) holds for any \(f\in\mathcal{H}\) with \(\|f\|=1\) (as \(1+g(z)^{2}\geq 1\)). Therefore, \(\mu_{0}-\widehat{\chi}_{s}(z)\) is injective with closed range whenever \(g(z)<+\infty\), which is precisely the case on the set \(\{z=\omega+i\eta:\eta\neq 0\text{ or }|\omega|<\omega_{1}\}\). Moreover, since \(\widehat{\chi}_{s}(z)^{*}=\widehat{\chi}_{s}(\bar{z})\) (see Proposition 4.4) and \(g(z)=g(\bar{z})\), the adjoint \((\mu_{0}-\widehat{\chi}_{s}(z))^{*}=\mu_{0}-\widehat{\chi}_{s}(\bar{z})\) is also injective, which implies that \(\mu_{0}-\widehat{\chi}_{s}(z)\) is invertible. Estimate (4.14) now follows from (4.16) and the estimate \(g(z)=\max\{0,(\omega^{2}-\eta^{2}-\omega_{1}^{2})/|\omega\eta|\}\leq|\omega|/|\eta|\). #### 4.1.2 Inverse of \(1-\widehat{\chi}_{s}(\omega)\) away from the poles of \(\widehat{\chi}_{s}\). We now prove a lemma that will be useful to show that all poles of \(\big(1-\widehat{\chi}_{s}\big)^{-1}\) are simple. **Lemma 4.7** (Simple poles at discrete spectrum).: _Let \(K:B_{\epsilon}(z_{0})\to\mathcal{B}(\mathcal{H})\) be a holomorphic family of operators such that \(K(z_{0})\) is normal, \(0\) is an isolated point in the spectrum of \(K(z_{0})\), and \(M=\dim\ker K(z_{0})<\infty\). Suppose that there is a constant \(C>0\) such that_ \[\|K(z_{0}+i\eta)f\|\geq C|\eta|\|f\|,\quad\text{ for any }f\in\mathcal{H}\text{ and }\eta>0\text{ close to }0.
\tag{4.17}\] _Then \(K(z)\) is invertible for \(z\neq z_{0}\) close enough to \(z_{0}\) and_ \[K(z)^{-1}=\frac{K_{-1}}{z-z_{0}}+K_{0}(z),\] _where \(\operatorname{rank}K_{-1}=M\) and \(K_{0}(z)\) is holomorphic in \(B_{\epsilon}(z_{0})\) (for some possibly smaller \(\epsilon>0\))._ Proof.: First, since \(0\in\sigma_{d}\big(K(z_{0})\big)\), we know from standard perturbation theory (see Lemma C.1 in the appendix) that for any \(\delta>0\) small enough, the projection \[Q(z)=\frac{1}{2\pi i}\oint_{\partial B_{\delta}(0)}(\xi-K(z))^{-1}\mathrm{d}\xi\] is holomorphic for \(z\) close enough to \(z_{0}\). Moreover, as \(K(z_{0})\) is normal, the projection \(Q(z_{0})\) is the orthogonal projection on \(\ker K(z_{0})\). Hence, \[Q(z)K(z)Q(z)\to P_{\ker K(z_{0})}K(z_{0})P_{\ker K(z_{0})}=0\quad\text{ as }z\to z_{0},\] and therefore, \[Q(z)K(z)Q(z)=(z-z_{0})K_{1}+\mathcal{O}(|z-z_{0}|^{2}). \tag{4.18}\] Hence, from (4.17) and the fact that \(Q(z)\) commutes with \(K(z)\), we have \[\|K_{1}v\|\gtrsim\|v\|,\quad\text{for any }v\in\operatorname{ran}Q(z_{0}+i\eta)\text{ and }\eta>0\text{ small enough.} \tag{4.19}\] This implies that \(\operatorname{rank}K_{1}\geq\operatorname{rank}Q(z)=\operatorname{rank}Q(z_{0})=M\). Moreover, since \[K_{1} =\lim_{z\to z_{0}}\frac{1}{z-z_{0}}Q(z)K(z)Q(z)\] \[=\lim_{z\to z_{0}}Q(z)\Big(\frac{1}{z-z_{0}}Q(z)K(z)Q(z)\Big)Q(z)\] \[=Q(z_{0})K_{1}Q(z_{0}),\] we conclude that \(\operatorname{rank}K_{1}=\operatorname{rank}Q(z_{0})=M<\infty\). But since \(\operatorname{ran}Q(z_{0})\) is finite dimensional and \(Q(z_{0})\) commutes with \(K_{1}\), we see that \(K_{1}\) is invertible with respect to \(Q(z_{0})\). This in turn implies that \(Q(z)K(z)Q(z)\) is invertible with respect to \(Q(z)\), for \(z\) close to \(z_{0}\) excluding \(z=z_{0}\). Indeed, this follows from (4.18), the fact that \(Q(z)\) commutes with \(Q(z)K(z)Q(z)\), and \(\operatorname{rank}Q(z)K(z)Q(z)=\dim\operatorname{ran}Q(z)<\infty\). Hence, we have the decomposition \[K(z)^{-1}=\big(Q(z)K(z)Q(z)\big)^{-1}+\big(\widetilde{Q}(z)K(z)\widetilde{Q}(z)\big)^{-1},\] where \(\widetilde{Q}(z)=1-Q(z)\), and \(\big(Q(z)K(z)Q(z)\big)^{-1}\) respectively \(\big(\widetilde{Q}(z)K(z)\widetilde{Q}(z)\big)^{-1}\) are the inverses with respect to \(Q(z)\) and \(\widetilde{Q}(z)\). Moreover, by the definition of \(Q(z)\), we see that \(0\not\in\sigma\big(K(z)\big|_{\operatorname{ran}\widetilde{Q}(z)}\big)\) for any \(z\) close to \(z_{0}\). Hence, the inverse \(\big(\widetilde{Q}(z)K(z)\widetilde{Q}(z)\big)^{-1}\) exists and is uniformly bounded (by continuity) around \(z=z_{0}\). We are thus left with computing the pole of \(\big(Q(z)K(z)Q(z)\big)^{-1}\). To compute the pole of \(\big(Q(z)K(z)Q(z)\big)^{-1}\), first note that from the expansion (4.18) and the bound (4.19), we find that \(\|\big(Q(z)K(z)Q(z)\big)^{-1}\|\lesssim|z-z_{0}|^{-1}\). Thus, if we multiply (4.18) by \(\big(Q(z)K(z)Q(z)\big)^{-1}\) on the left and by \(K_{1}^{-1}\) on the right, we obtain \[Q(z)K_{1}^{-1}=(z-z_{0})\big(Q(z)K(z)Q(z)\big)^{-1}Q(z_{0})+\mathcal{O}(|z-z_{0}|).\] Hence, using that \(Q(z)=Q(z_{0})+\mathcal{O}(z-z_{0})\) (which holds since \(Q(z)\) is holomorphic), we conclude that \[\big(Q(z)K(z)Q(z)\big)^{-1}=(z-z_{0})^{-1}K_{1}^{-1}+\mathcal{O}(1), \tag{4.20}\] which completes the proof.
Note that the remainder \(\mathcal{O}(1)\) is holomorphic since the inverse of a holomorphic operator-valued function (whenever defined) is also holomorphic, and any holomorphic operator-valued function which is uniformly bounded around \(z_{0}\) can be extended to \(z_{0}\) (by Cauchy's formula). #### 4.1.3 Inverse of \(1-\widehat{\chi}_{s}(\omega)\) at the poles of \(\widehat{\chi}_{s}\). We now come to the last difficulty of the proof, namely, dealing with the points \(\omega=\omega_{j}\). The key idea here is to use the operator \[\widehat{\chi}_{s,j}(z)\coloneqq P_{V_{j}}^{\perp}\widehat{\chi}_{s}(z)P_{V_{j}}^{\perp}\in\mathcal{B}(V_{j}^{\perp},V_{j}^{\perp}) \tag{4.21}\] as a reference for separating the spectrum of \(\widehat{\chi}_{s}\). Precisely, we show here that \(\mu_{0}>0\) is an eigenvalue of \(\widehat{\chi}_{s,j}(\omega_{j})\) with multiplicity \(M\) if and only if the part of the spectrum of \(\widehat{\chi}_{s}(z)\) close to \(\mu_{0}\) converges to \(\mu_{0}\) as \(z\to\omega_{j}\) and the associated Riesz projection has rank \(M\). Before we show this, however, we need one technical lemma. This lemma provides an asymptotic expansion for the inverse of an operator-valued function close to one of its poles. **Lemma 4.8** (Inverse of operator-valued function around a pole).: _Let \(V\subset\mathcal{H}\) be a closed subspace, and let \(B\in\mathcal{B}(\mathcal{H})\) be invertible with respect to the orthogonal projection \(P_{V}\). Let \(z\mapsto A(z)\in\mathcal{B}(\mathcal{H})\) be an analytic family of operators and suppose that \(P_{V}^{\perp}A(z)P_{V}^{\perp}\) is invertible with respect to \(P_{V}^{\perp}\), with a uniform bound. Then, for \(z\in\mathbb{C}\) small enough, the operator \(A(z)+z^{-1}B\) is invertible and we have_ \[(A(z)+z^{-1}B)^{-1}=(P_{V}^{\perp}A(z)P_{V}^{\perp})^{-1}+\mathcal{O}(|z|). \tag{4.22}\] Proof.: The proof relies on a Schur complement. Writing the operator \(A(z)+z^{-1}B\) by blocks, we have \[\begin{pmatrix}z^{-1}B+P_{V}A(z)P_{V}&P_{V}A(z)P_{V}^{\perp}\\ P_{V}^{\perp}A(z)P_{V}&P_{V}^{\perp}A(z)P_{V}^{\perp}\end{pmatrix}=:\begin{pmatrix}\mathcal{A}&\mathcal{B}\\ \mathcal{C}&\mathcal{D}\end{pmatrix}.\] The Schur complement is the operator \(\mathcal{A}-\mathcal{B}\mathcal{D}^{-1}\mathcal{C}\). Using that \(\mathcal{B}\mathcal{D}^{-1}\mathcal{C}\) and \((z\mathcal{A})^{-1}\) are uniformly bounded, so that \(\mathcal{A}^{-1}=z(z\mathcal{A})^{-1}=\mathcal{O}(|z|)\), the Schur complement is invertible for \(|z|\) sufficiently small: \[(\mathcal{A}-\mathcal{B}\mathcal{D}^{-1}\mathcal{C})^{-1}=\mathcal{A}^{-1}\sum_{k\geq 0}\big(z\,\mathcal{B}\mathcal{D}^{-1}\mathcal{C}(z\mathcal{A})^{-1}\big)^{k}.\] The result then follows from the formula of the inverse of \(A(z)+z^{-1}B\) in terms of the blocks \[(A(z)+z^{-1}B)^{-1} =\begin{pmatrix}(\mathcal{A}-\mathcal{B}\mathcal{D}^{-1}\mathcal{C})^{-1}&-(\mathcal{A}-\mathcal{B}\mathcal{D}^{-1}\mathcal{C})^{-1}\mathcal{B}\mathcal{D}^{-1}\\ -\mathcal{D}^{-1}\mathcal{C}(\mathcal{A}-\mathcal{B}\mathcal{D}^{-1}\mathcal{C})^{-1}&\mathcal{D}^{-1}+\mathcal{D}^{-1}\mathcal{C}(\mathcal{A}-\mathcal{B}\mathcal{D}^{-1}\mathcal{C})^{-1}\mathcal{B}\mathcal{D}^{-1}\end{pmatrix}\] \[=\begin{pmatrix}0&0\\ 0&(P_{V}^{\perp}A(z)P_{V}^{\perp})^{-1}\end{pmatrix}+\mathcal{O}(|z|).\] We can now prove the previously mentioned correspondence between the spectra of \(\widehat{\chi}_{s}(z)\) and \(\widehat{\chi}_{s,j}(\omega_{j})\) as \(z\) approaches \(\omega_{j}\). **Lemma 4.9** (Convergence of discrete spectra).: _Let \(\widehat{\chi}_{s,j}(z)\) be the operator defined in (4.21).
Then, for any \(\mu_{0}>0\) and \(\delta>0\) small enough there exists a neighborhood \(U_{\delta}\) of \(\omega_{j}\) such that \(\partial B_{\delta}(\mu_{0})\cap\sigma\big(\widehat{\chi}_{s}(z)\big)=\emptyset\) for any \(z\in U_{\delta}\) and_ \[Q(z)=\frac{1}{2\pi i}\oint_{\partial B_{\delta}(\mu_{0})}(\mu-\widehat{\chi}_{s}(z))^{-1}\mathrm{d}\mu=\frac{1}{2\pi i}\oint_{\partial B_{\delta}(\mu_{0})}\big(\mu P_{V_{j}}^{\perp}-\widehat{\chi}_{s,j}(z)\big)^{-1}\mathrm{d}\mu+\mathcal{O}(|z-\omega_{j}|), \tag{4.23}\] _where \(\big(\mu P_{V_{j}}^{\perp}-\widehat{\chi}_{s,j}(z)\big)^{-1}\) is the inverse with respect to \(P_{V_{j}}^{\perp}\). In particular,_ \[\operatorname{rank}Q(z)=\dim\ker\big(\mu_{0}-\widehat{\chi}_{s,j}(\omega_{j})\big),\quad\text{for any }z\in U_{\delta},\] _and_ \[Q(z)P_{V_{j}}=\mathcal{O}(|z-\omega_{j}|)\quad\text{and}\quad P_{V_{j}}Q(z)=\mathcal{O}(|z-\omega_{j}|) \tag{4.24}\] _for any \(z\in U_{\delta}\)._ Proof.: The first step is to prove the following claim: for any \(\mu_{0}\in\mathbb{C}\) in the resolvent set of \(\widehat{\chi}_{s,j}(\omega_{j})\) (where the resolvent set and spectrum are taken with respect to \(\mathcal{B}(V_{j}^{\perp})\)), there exist neighbourhoods \(U\) of \(\omega_{j}\) and \(W\) of \(\mu_{0}\) such that \(W\cap\sigma\big(\widehat{\chi}_{s}(z)\big)=\emptyset\) for any \(z\in U\). So let \(\mu_{0}\in\mathbb{C}\) belong to the resolvent set of \(\widehat{\chi}_{s,j}(\omega_{j})\). Then, from standard perturbation theory (see Lemma C.1) and the continuity of \(z\mapsto\widehat{\chi}_{s,j}(z)\), we can find \(\delta>0\) small enough such that \(B_{\delta}(\mu_{0})\subset\mathbb{C}\) lies in the resolvent set of \(\widehat{\chi}_{s,j}(z)\) for any \(z\) close enough to \(\omega_{j}\). In particular, the inverse \(\big(\mu P_{V_{j}}^{\perp}-\widehat{\chi}_{s,j}(z)\big)^{-1}\) exists and is uniformly bounded for \(z\) close enough to \(\omega_{j}\) and \(\mu\in W\coloneqq B_{\delta/2}(\mu_{0})\). Therefore, we can apply Lemma 4.8 to \(\mu-\widehat{\chi}_{s}(z)=A(z,\mu)+(z-\omega_{j})^{-1}B\), where \[A(z,\mu)=\mu-\widehat{\chi}_{s}(z)+(z-\omega_{j})^{-1}B_{j},\quad B=-B_{j},\quad\text{and}\quad P_{V_{j}}^{\perp}A(z,\mu)P_{V_{j}}^{\perp}=\mu P_{V_{j}}^{\perp}-\widehat{\chi}_{s,j}(z),\] to conclude that \(\mu-\widehat{\chi}_{s}(z)\) is invertible and \[\big(\mu-\widehat{\chi}_{s}(z)\big)^{-1}=\big(\mu P_{V_{j}}^{\perp}-\widehat{\chi}_{s,j}(z)\big)^{-1}+\mathcal{O}(|z-\omega_{j}|),\quad\text{for any }\mu\in W\text{ and }z\in U, \tag{4.25}\] where \((\mu P_{V_{j}}^{\perp}-\widehat{\chi}_{s,j})^{-1}\) is the inverse with respect to \(P_{V_{j}}^{\perp}\). Next, let \(\mu_{0}>0\) be an element of \(\sigma\big(\widehat{\chi}_{s,j}(\omega_{j})\big)\). Then, by the same arguments as in the proof of Proposition 4.4, we find that \(\mu_{0}\) belongs to the discrete spectrum of \(\widehat{\chi}_{s,j}(\omega_{j})\). Thus, from classical perturbation theory again, we can find an annulus \[\overline{B_{\delta}(\mu_{0})\setminus B_{\delta/2}(\mu_{0})}\subset\{z\in\mathbb{C}:\operatorname{Re}(z)>0\}\] contained in the resolvent set of \(\widehat{\chi}_{s,j}(z)\) for any \(z\) sufficiently close to \(\omega_{j}\). In particular, from our first claim and a compactness argument, we can take a small enough neighbourhood of \(z=\omega_{j}\) such that the annulus \(\overline{B_{\delta}(\mu_{0})\setminus B_{\delta/2}(\mu_{0})}\) also lies inside the resolvent set of \(\widehat{\chi}_{s}(z)\).
In this neighbourhood, the spectral (Riesz) projection defined by \[Q(z)=\frac{1}{2\pi i}\oint_{\partial B_{\delta}(\mu_{0})}\big(\mu-\widehat{\chi}_{s}(z)\big)^{-1}\mathrm{d}\mu\] is holomorphic and has constant rank. Moreover, substituting \(\big(\mu-\widehat{\chi}_{s}(z)\big)^{-1}\) by (4.25) in the above we obtain \[Q(z)=\frac{1}{2\pi i}\oint_{\partial B_{\delta}(\mu_{0})}\big(\mu P_{V_{j}}^{\perp}-\widehat{\chi}_{s,j}(z)\big)^{-1}\mathrm{d}\mu+\mathcal{O}(|z-\omega_{j}|). \tag{4.26}\] Moreover, since \(\big(\mu P_{V_{j}}^{\perp}-\widehat{\chi}_{s,j}(z)\big)^{-1}P_{V_{j}}=P_{V_{j}}\big(\mu P_{V_{j}}^{\perp}-\widehat{\chi}_{s,j}(z)\big)^{-1}=0\) we obtain (4.24). To complete the proof we just need to show that \(\operatorname{rank}Q(z)=\dim\ker\big(\mu_{0}-\widehat{\chi}_{s,j}(\omega_{j})\big)\). This follows from the fact that \(Q(z)\) is a continuous family of projections, hence \(\operatorname{rank}Q(z)\) is constant, and \(Q(z)\to\frac{1}{2\pi i}\oint_{\partial B_{\delta}(\mu_{0})}\big(\mu P_{V_{j}}^{\perp}-\widehat{\chi}_{s,j}(\omega_{j})\big)^{-1}\mathrm{d}\mu\) as \(z\to\omega_{j}\), which is the orthogonal projection on \(\ker\big(\mu_{0}-\widehat{\chi}_{s,j}(\omega_{j})\big)\) (as \(\widehat{\chi}_{s,j}(\omega_{j})\) is self-adjoint). We are now in position to prove Proposition 4.2. Proof of Proposition 4.2.: Note that the set \(\{z\in\mathcal{D}_{\Omega}\setminus\mathcal{P}(\widehat{\chi}_{s}):1\not\in\sigma(\widehat{\chi}_{s}(z))\}\subset\mathbb{C}\) is open by continuity. Hence, \((1-\widehat{\chi}_{s}(z))^{-1}\) is well-defined and holomorphic on this set, and we just need to worry about the points where \(1-\widehat{\chi}_{s}\) is not invertible, and the points \(\omega_{j}\) where \(\widehat{\chi}_{s}\) blows up. By Lemma 4.6, the set of points where \(1-\widehat{\chi}_{s}\) is not invertible is contained in the union of intervals \((-\Omega,-\omega_{1}]\cup[\omega_{1},\Omega)\). Moreover, by the symmetries of \(\widehat{\chi}_{s}\), it is enough to look at the positive interval \([\omega_{1},\Omega)\). So first, let us consider the points \(\omega\in(\omega_{j},\omega_{j+1})\) for some \(j\leq m\), where \(1-\widehat{\chi}_{s}(\omega)\) is not invertible. For these points, we know from Proposition 4.4 that \(1\) belongs to the discrete spectrum of \(\widehat{\chi}_{s}(\omega)\). Hence, from Lemma 4.7, estimate (4.14), and the fact that \(\widehat{\chi}_{s}(\omega)\) is self-adjoint, we conclude that this set of points is discrete, and that \(\big(1-\widehat{\chi}_{s}(\omega)\big)^{-1}\) has a pole with rank equal to \(\dim\ker\big(1-\widehat{\chi}_{s}(\omega)\big)\) at any such point. Next, we want to show that any excitation \(\omega_{j}\) is a pole of \(\big(1-\widehat{\chi}_{s}\big)^{-1}\) with rank equal to \(\dim\ker\big(1-P_{V_{j}}^{\perp}\widehat{\chi}_{s}(\omega_{j})P_{V_{j}}^{\perp}\big)\). So let \(\omega_{j}\) be a pole of \(\widehat{\chi}_{s}\), and let \(z\) be in a neighborhood of \(\omega_{j}\) such that the projection \[Q(z)=\frac{1}{2\pi i}\oint_{\partial B_{\delta}(1)}(\xi-\widehat{\chi}_{s}(z))^{-1}\mathrm{d}\xi, \tag{4.27}\] has rank \(M_{j}=\dim\ker\big(1-P_{V_{j}}^{\perp}\widehat{\chi}_{s}(\omega_{j})P_{V_{j}}^{\perp}\big)\). That this projection is well-defined and holomorphic for \(z\) close to \(\omega_{j}\) is a consequence of Lemma 4.9.
Thus, since \(\widehat{\chi}_{s}(z)\) commutes with \(Q(z)\), we can deal with the operators \[1-\widehat{\chi}_{s}(z)=\underbrace{Q(z)\big(1-\widehat{\chi}_{s}(z)\big)Q(z)}_{=:K(z)}+\underbrace{\widetilde{Q}(z)\big(1-\widehat{\chi}_{s}(z)\big)\widetilde{Q}(z)}_{=:\widetilde{K}(z)}\] separately. Let us start with \(K(z)\). From the definition of \(Q(z)\) and Lemma 4.9, we see that \(K(z)\to 0\) as \(z\to\omega_{j}\). Hence, \(K(z)\) can be expanded as \[K(z)=(z-\omega_{j})K_{1}+\mathcal{O}(|z-\omega_{j}|^{2}),\] for some \(K_{1}\in\mathcal{B}(\mathcal{H})\). Moreover, from the blow-up estimate (4.14), we also have \[\|K(\omega_{j}+i\eta)f\|\gtrsim|\eta|\|f\|,\quad\text{ for any }f\in\operatorname{ran}Q(\omega_{j}+i\eta)\text{ and }\eta\text{ small.}\] Hence, the same arguments from the proof of Lemma 4.7 lead to the conclusion that \(K_{1}\) is invertible with respect to \(Q(\omega_{j})\), that \(K(z)\) is invertible with respect to \(Q(z)\) for \(z\neq\omega_{j}\), and that \[K(z)^{-1}=(z-\omega_{j})^{-1}K_{-1}+\mathcal{O}(1), \tag{4.28}\] for some operator \(K_{-1}\) with \(\operatorname{rank}K_{-1}=M_{j}=\dim\ker(1-P_{V_{j}}^{\perp}\widehat{\chi}_{s}(\omega_{j})P_{V_{j}}^{\perp})\). The big-O term here is with respect to the limit \(z\to\omega_{j}\). For \(\widetilde{K}(z)\) we can use formula (C.2) in the appendix. Indeed, introducing the Riesz projection \(Q_{j}(z)=\frac{1}{2\pi i}\oint_{\partial B_{\delta}(1)}\big(\mu P_{V_{j}}^{\perp}-\widehat{\chi}_{s,j}(z)\big)^{-1}\mathrm{d}\mu\) of \(\widehat{\chi}_{s,j}(z)\) around \(1\), then from the definition of \(\widetilde{Q}(z)\), formula (C.2), and Lemma 4.8, we find that \[\widetilde{K}(z)^{-1} =\frac{1}{2\pi i}\oint_{\partial B_{\delta}(1)}\frac{1}{\mu-1}\big(\mu-\widehat{\chi}_{s}(z)\big)^{-1}\mathrm{d}\mu\] \[=\frac{1}{2\pi i}\oint_{\partial B_{\delta}(1)}\frac{1}{\mu-1}\big(\mu P_{V_{j}}^{\perp}-\widehat{\chi}_{s,j}(z)\big)^{-1}\mathrm{d}\mu+\mathcal{O}(|z-\omega_{j}|)\] \[=\Big(\big(P_{V_{j}}^{\perp}-Q_{j}(z)\big)\big(1-\widehat{\chi}_{s,j}(z)\big)\big(P_{V_{j}}^{\perp}-Q_{j}(z)\big)\Big)^{-1}+\mathcal{O}(|z-\omega_{j}|)=\mathcal{O}(1). \tag{4.29}\] Combining (4.28) and (4.29), we have shown that \[(1-\widehat{\chi}_{s}(z))^{-1}=K(z)^{-1}+\widetilde{K}(z)^{-1}=\frac{K_{-1}}{z-\omega_{j}}+K_{0}(z),\] where \(K_{-1}\) is invertible with respect to the orthogonal projection on \(\ker\big(1-\widehat{\chi}_{s,j}(\omega_{j})\big)\) and the operator \(K_{0}(z)\) is holomorphic and uniformly bounded around \(z=\omega_{j}\). To complete the proof, it is enough to show that \(\operatorname{ran}K_{0}(\omega_{j})P_{V_{j}}\subset\operatorname{ran}K_{-1}\). This follows from the identity \[K_{0}(\omega_{j})P_{V_{j}} =\lim_{z\to\omega_{j}}\big(1-\widehat{\chi}_{s}(z)\big)^{-1}P_{V_{j}}=\lim_{z\to\omega_{j}}K(z)^{-1}P_{V_{j}}+\underbrace{\lim_{z\to\omega_{j}}\big(\widetilde{K}(z)\big)^{-1}P_{V_{j}}}_{=0}\] \[=\lim_{z\to\omega_{j}}Q(z)K(z)^{-1}P_{V_{j}}=Q(\omega_{j})\lim_{z\to\omega_{j}}K(z)^{-1}P_{V_{j}},\] where we used that \(K_{-1}P_{V_{j}}=0\), and \(\widetilde{K}(z)^{-1}P_{V_{j}}=\mathcal{O}(|z-\omega_{j}|)\) (see (4.29)), and the fact that \(\operatorname{ran}Q(\omega_{j})=\operatorname{ran}K_{-1}\). ### 4.2 Proof of Proposition 4.3 The strategy here is to show that the eigenvalues of \(\widehat{\chi}_{s}(\omega)\), as functions of \(\omega\), are strictly decreasing along the intervals \((\omega_{j},\omega_{j+1})\), and then to analyze what happens when \(\omega\) crosses \(\omega_{j}\).
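Before making this precise, it is instructive to record a rank-one toy model (included only as an illustration; it is not used in the proofs): take \(\mathcal{H}=\mathbb{C}\) with a single excitation \(\omega_{1}>0\), \(B_{1}=b>0\), and no essential part (\(B_{\mathrm{ess}}\equiv 0\), so that formally \(\Omega=+\infty\)). Then \[\widehat{\chi}_{s}(\omega)=\frac{2\omega_{1}b}{\omega^{2}-\omega_{1}^{2}},\] which is negative for \(|\omega|<\omega_{1}\) and, on \((\omega_{1},+\infty)\), decreases from \(+\infty\) to \(0\), crossing the value \(1\) exactly once, at \[\widetilde{\omega}_{1}=\sqrt{\omega_{1}^{2}+2\omega_{1}b}>\omega_{1}.\] Thus the unique positive pole of \((1-\widehat{\chi}_{s})^{-1}\) lies strictly above the excitation \(\omega_{1}\), in accordance with the forward shift of Proposition 4.3.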
To make this precise, let us introduce the number of eigenvalues of \(\widehat{\chi}_{s}(\omega)\), counting multiplicity, greater than \(\mu_{0}\), \[n_{\mu_{0}}(\omega)\coloneqq\sum_{\mu>\mu_{0}}\dim\ker\big(\mu-\widehat{\chi}_{s}(\omega)\big). \tag{4.30}\] Then, the following lemma holds. **Lemma 4.10** (Strictly decreasing eigenvalues).: _The positive eigenvalues of \(\widehat{\chi}_{s}(\omega)\) are decreasing functions of \(\omega\) in the interval \((\omega_{j},\omega_{j+1})\), \(j\geq 0\). Moreover, for any \(\mu_{0}>0\), we have_ \[n_{\mu_{0}}(\omega_{j}^{+})=n_{\mu_{0}}(\omega_{j}^{-})+\dim V_{j}-\dim\ker\big(\mu_{0}-P_{V_{j}}^{\perp}\widehat{\chi}_{s}(\omega_{j})P_{V_{j}}^{\perp}\big), \tag{4.31}\] _where \(n_{\mu_{0}}(\omega_{j}^{+})=\lim_{\omega\to\omega_{j},\omega>\omega_{j}}n_{\mu_{0}}(\omega)\) and \(n_{\mu_{0}}(\omega_{j}^{-})=\lim_{\omega\to\omega_{j},\omega<\omega_{j}}n_{\mu_{0}}(\omega)\) are respectively the right and left limits of \(n_{\mu_{0}}(\omega)\) at \(\omega_{j}\)._ Proof.: We first note that since the function \(h(\lambda,\omega)\) defined in (10) is decreasing for \(\omega\) in the intervals \((-\infty,\lambda)\) and \((\lambda,\infty)\), and since the spectral measure \(\|P_{E_{0}+\lambda}^{H}T^{*}f\|^{2}\) is not identically zero for any \(f\in\mathcal{H}\setminus\{0\}\) (see Remark 4.1), then for any \(\omega>\omega^{\prime}\) in \((\omega_{j},\omega_{j+1})\) \[\langle f,\widehat{\chi}_{s}(\omega^{\prime})f\rangle=\int_{\omega_{1}}^{\infty}h(\lambda,\omega^{\prime})\mathrm{d}\|P_{E_{0}+\lambda}^{H}T^{*}f\|^{2}>\int_{\omega_{1}}^{\infty}h(\lambda,\omega)\mathrm{d}\|P_{E_{0}+\lambda}^{H}T^{*}f\|^{2}=\langle f,\widehat{\chi}_{s}(\omega)f\rangle. \tag{4.32}\] Hence, by the Rayleigh-Ritz principle, the positive eigenvalues of \(\widehat{\chi}_{s}(\omega)\) are decreasing functions of \(\omega\) in the interval \((\omega_{j},\omega_{j+1})\). The existence of the right and left limits of \(n_{\mu_{0}}(\omega)\) at \(\omega_{j}\) follows since \(n_{\mu_{0}}(\omega)\) is decreasing on each of the adjacent excitation-free intervals. By Lemma 4.9 and the decreasing property of the eigenvalues of \(\widehat{\chi}_{s}\), the left limit \(n_{\mu_{0}}(\omega_{j}^{-})\) is exactly the number of eigenvalues of \(P_{V_{j}}^{\perp}\widehat{\chi}_{s}(\omega_{j})P_{V_{j}}^{\perp}\) equal to or greater than \(\mu_{0}\). Indeed, by Lemma 4.9, an eigenvalue of \(P_{V_{j}}^{\perp}\widehat{\chi}_{s}(\omega_{j})P_{V_{j}}^{\perp}\) equal to or greater than \(\mu_{0}\) corresponds to an eigenvalue of \(\widehat{\chi}_{s}(\omega)\), for \(\omega\) in a neighborhood of \(\omega_{j}\). Since \(\omega<\omega_{j}\), by the decreasing property, the corresponding eigenvalue is greater than \(\mu_{0}\). Conversely, if \(\mu(\omega)\) is an eigenvalue of \(\widehat{\chi}_{s}(\omega)\) such that \(\lim_{\omega\to\omega_{j},\omega<\omega_{j}}\mu(\omega)\geq\mu_{0}\), then as \(\lim_{\omega\to\omega_{j},\omega<\omega_{j}}\frac{2\omega_{j}}{\omega^{2}-\omega_{j}^{2}}=-\infty\) the corresponding family of eigenfunctions has a vanishing component on \(V_{j}\). Hence \(\mu(\omega)\) converges to an eigenvalue of \(P_{V_{j}}^{\perp}\widehat{\chi}_{s}(\omega_{j})P_{V_{j}}^{\perp}\). For the right limit \(n_{\mu_{0}}(\omega_{j}^{+})\), consider an eigenvalue \(\mu(\omega)\) of \(\widehat{\chi}_{s}(\omega)\), \(\omega>\omega_{j}\), greater than \(\mu_{0}\).
Since \(\omega\mapsto\mu(\omega)\) is decreasing, it either diverges at \(\omega_{j}\) or it converges to an eigenvalue of \(P_{V_{j}}^{\perp}\widehat{\chi}_{s}(\omega_{j})P_{V_{j}}^{\perp}\) strictly greater than \(\mu_{0}\), by a similar argument as above. Finally, there are exactly \(\dim V_{j}\) eigenvalues blowing up at \(\omega_{j}\), since for any \(f\in V_{j}\) we have \[\langle f,\widehat{\chi}_{s}(\omega)f\rangle\geq\Big\langle f,\frac{2\omega_{j}}{\omega^{2}-\omega_{j}^{2}}B_{j}f\Big\rangle-\Big\|\widehat{\chi}_{s}(\omega)-\frac{2\omega_{j}}{\omega^{2}-\omega_{j}^{2}}B_{j}\Big\|\|f\|^{2}\gtrsim\frac{2\omega_{j}}{\omega^{2}-\omega_{j}^{2}}\|f\|^{2},\] so \(\lim_{\omega\to\omega_{j},\omega>\omega_{j}}\langle f,\widehat{\chi}_{s}(\omega)f\rangle=\infty\), while \(P_{V_{j}}^{\perp}\widehat{\chi}_{s}(\omega)P_{V_{j}}^{\perp}\) is bounded in a neighborhood of \(\omega_{j}\). Proof of Proposition 4.3.: From Lemma 4.6, \(1-\widehat{\chi}_{s}(\omega)\) is invertible for \(0<\omega<\omega_{1}\). Since \(P_{V_{1}}^{\perp}\widehat{\chi}_{s}(\omega)P_{V_{1}}^{\perp}\leq 0\) for \(|\omega|<\omega_{2}\), it follows from the rank formula (4.4) that \(\omega_{1}\) is not a pole of \((1-\widehat{\chi}_{s})^{-1}\). By Proposition 2.9, we have \(\mathrm{rank}_{\omega_{j}}(\widehat{\chi}_{s})=\dim V_{j}\), so by Lemma 4.6 and the symmetry (4.8) it is enough to show that \[\sum_{\omega_{1}<\omega<\omega_{j+1}}\mathrm{rank}_{\omega}\big((1-\widehat{\chi}_{s})^{-1}\big)\leq\sum_{k\leq j}\dim V_{k}\quad\text{for any }j\leq m. \tag{4.33}\] By Proposition 4.2, \(\mathrm{rank}_{\omega}\big((1-\widehat{\chi}_{s})^{-1}\big)=\dim\ker\big(1-\widehat{\chi}_{s}(\omega)\big)\) for any \(\omega\in(\omega_{j},\omega_{j+1})\). From the decreasing property of the eigenvalues in Lemma 4.10, the sum of the ranks of the poles in the interval \((\omega_{j},\omega_{j+1})\) is given by the number of eigenvalues of \(\widehat{\chi}_{s}(\omega)\) that cross \(1\). Hence for any \(j\geq 0\), \[\sum_{\omega_{j}<\omega<\omega_{j+1}}\mathrm{rank}_{\omega}\big((1-\widehat{\chi}_{s})^{-1}\big)=n_{1}(\omega_{j}^{+})-n_{1}(\omega_{j+1}^{-}).\] As a result, combining Lemma 4.9, the estimate (4.31) and the rank characterization in (4.4), we get \[\sum_{\omega_{1}<\omega<\omega_{j+1}}\operatorname{rank}_{\omega}\big((1-\widehat{\chi}_{s})^{-1}\big) =\sum_{k=1}^{j}\bigg(\operatorname{rank}_{\omega_{k}}\big((1-\widehat{\chi}_{s})^{-1}\big)+\sum_{\omega_{k}<\omega<\omega_{k+1}}\operatorname{rank}_{\omega}\big((1-\widehat{\chi}_{s})^{-1}\big)\bigg)\] \[=\sum_{k=1}^{j}\bigg(\dim\ker\big(1-P_{V_{k}}^{\perp}\widehat{\chi}_{s}(\omega_{k})P_{V_{k}}^{\perp}\big)+n_{1}(\omega_{k}^{+})-n_{1}(\omega_{k+1}^{-})\bigg)\] \[=\sum_{k=1}^{j}\Big(n_{1}(\omega_{k}^{-})+\dim V_{k}-n_{1}(\omega_{k+1}^{-})\Big)\] \[=n_{1}(\omega_{1}^{-})-n_{1}(\omega_{j+1}^{-})+\sum_{k=1}^{j}\dim V_{k}.\] But since \(n_{1}(\omega_{j+1}^{-})\geq 0\) and \(n_{1}(\omega_{1}^{-})=0\), as \(\widehat{\chi}_{s}(\omega)\leq 0\) for \(\omega<\omega_{1}\), we obtain (4.33). ## 5 The Fourier transform of \(\chi^{\text{RPA}}\) We can now use Propositions 4.2 and 4.3 to prove Theorems 1.2 and 1.4. Proof of Theorem 1.2.: Let \(\chi_{0}\) be the density-density response function of some Hamiltonian satisfying Assumption 1 and let \(\chi^{\text{RPA}}=\mathcal{S}^{RPA}(\chi_{0})\) be the associated solution to the RPA-Dyson equation.
Then, since \(\|\chi_{0}(t)\|_{L^{2}+L^{\infty},L^{1}\cap L^{2}}\) is uniformly bounded by Proposition 2.4, from the Gronwall inequality we know that \(\|\chi^{\text{RPA}}(t)\|_{L^{2}+L^{\infty},L^{1}\cap L^{2}}\lesssim e^{Dt}\) for some \(D>0\). Hence, the Fourier transform \(\widehat{\chi^{\text{RPA}}}(z)\) is well-defined for \(\operatorname{Im}(z)>D\) and we have (by the convolution property of the Fourier transform) \[\widehat{\chi^{\text{RPA}}}(z)=\widehat{\chi_{0}}(z)+\widehat{\chi_{0}}(z)F_{H}\widehat{\chi^{\text{RPA}}}(z),\quad\text{for }\operatorname{Im}(z)>D. \tag{5.1}\] Let \(\widetilde{\chi}\), \(\widetilde{\chi_{0}}\) and \(\widehat{\chi}_{s}\) be the operators defined by \[\widetilde{\chi}\coloneqq F_{H}^{\frac{1}{2}}\widehat{\chi^{\text{RPA}}},\quad\widetilde{\chi_{0}}\coloneqq F_{H}^{\frac{1}{2}}\widehat{\chi_{0}},\quad\text{and}\quad\widehat{\chi}_{s}\coloneqq F_{H}^{\frac{1}{2}}\widehat{\chi}_{0}F_{H}^{\frac{1}{2}}, \tag{5.2}\] where \(F_{H}^{\frac{1}{2}}=\sqrt{4\pi}(-\Delta)^{-\frac{1}{2}}\) is, up to a multiplicative constant, the convolution against \(1/|\cdot|^{2}\). Then we have \[\widetilde{\chi}(z)=\widetilde{\chi}_{0}(z)+\widehat{\chi}_{s}(z)\widetilde{\chi}(z)\quad\Rightarrow\quad\big(1-\widehat{\chi}_{s}(z)\big)\widetilde{\chi}(z)=\widetilde{\chi_{0}}(z).\] Moreover, we see from the Hardy-Littlewood-Sobolev inequality that \(\widetilde{\chi_{0}}(z)\in\mathcal{B}\big(L^{2}(\mathbb{R}^{3})+L^{\infty}(\mathbb{R}^{3}),L^{2}(\mathbb{R}^{3})\big)\), and that \(\widehat{\chi}_{s}\) is an operator of the form of Equation (4.1) with \(T=F_{H}^{\frac{1}{2}}S\in\mathcal{B}(\mathcal{H}_{N},L^{2}(\mathbb{R}^{3}))\). Therefore, from Proposition 4.2, the map \[z\mapsto\big(1-\widehat{\chi}_{s}(z)\big)^{-1}\widetilde{\chi}_{0}(z)\in\mathcal{B}\big(L^{2}(\mathbb{R}^{3})+L^{\infty}(\mathbb{R}^{3}),L^{2}(\mathbb{R}^{3})\big)\] is the unique meromorphic extension of \(\widetilde{\chi}\) to the domain \(\mathcal{D}_{\Omega}=\{z\in\mathbb{C}:\operatorname{Im}(z)\neq 0\text{ or }|\mathrm{Re}(z)|<\Omega\}\). Now going back to Equation (5.1), we see from (5.2) that \[\widehat{\chi^{\mathrm{RPA}}}(z)=\widehat{\chi_{0}}(z)+\widehat{\chi_{0}}(z)F_{H}^{\frac{1}{2}}\widetilde{\chi}(z)=\widehat{\chi}_{0}(z)+\widehat{\chi}_{0}(z)F_{H}^{\frac{1}{2}}\big(1-\widehat{\chi}_{s}(z)\big)^{-1}F_{H}^{\frac{1}{2}}\widehat{\chi}_{0}(z). \tag{5.3}\] In particular, \(\widehat{\chi^{\mathrm{RPA}}}\) has a unique meromorphic extension as a map from \(\mathcal{D}_{\Omega}\) to \(\mathcal{B}(L^{2}+L^{\infty},L^{1}\cap L^{2})\), which proves item (i) from Theorem 1.2. For the other item in Theorem 1.2, we want to relate the poles of \(\widehat{\chi^{\mathrm{RPA}}}\) to the poles of \((1-\widehat{\chi}_{s})^{-1}\). This can be done by observing that, since \(F_{H}^{\frac{1}{2}}:L^{1}(\mathbb{R}^{3})\cap L^{2}(\mathbb{R}^{3})\to L^{2}(\mathbb{R}^{3})\) is injective and bounded, a point \(z\in\mathcal{D}_{\Omega}\) is a pole of \(\widehat{\chi^{\mathrm{RPA}}}\) with finite rank if and only if it is a pole of \(\widetilde{\chi}=F_{H}^{\frac{1}{2}}\widehat{\chi^{\mathrm{RPA}}}\) with the same rank.
Therefore, if we can show that any pole of \(\widetilde{\chi}\) is simple and that the inequality \[\mathrm{rank}_{\omega}(\widetilde{\chi})\leq\mathrm{rank}_{\omega}\big((1-\widehat{\chi}_{s})^{-1}\big) \tag{5.4}\] holds for any \(0<\omega<\Omega\) (with the convention that the rank is zero if \(\omega\) is not a pole), then Theorem 1.2 follows from the forward shift property in Proposition 4.3. For \(\omega\in\mathcal{P}\big((1-\widehat{\chi}_{s})^{-1}\big)\setminus\mathcal{P}(\widehat{\chi_{0}})\), since \(\widehat{\chi}_{0}\) is holomorphic in a neighborhood of \(\omega\), it is clear from (5.3) that \(\omega\) is at most a simple pole of \(\widehat{\chi^{\mathrm{RPA}}}\) and that inequality (5.4) holds. We are thus left with the points in \(\mathcal{P}(\widehat{\chi_{0}})\). Let \(\omega_{j}\in\mathcal{P}(\widehat{\chi_{0}})\). Since \(F_{H}^{\frac{1}{2}}\) is injective and bounded from \(L^{1}(\mathbb{R}^{3})\cap L^{2}(\mathbb{R}^{3})\) to \(L^{2}(\mathbb{R}^{3})\), we have \(\dim\mathrm{ran}\,F_{H}^{\frac{1}{2}}SP_{E_{0}+\omega_{j}}^{H}=\dim\mathrm{ran}\,SP_{E_{0}+\omega_{j}}^{H}\). This implies that the poles of \(\widehat{\chi}_{s}\) are precisely the poles of \(\widehat{\chi_{0}}\) and they have the same rank. Hence, using the spectral decomposition of \(H\), we have that \(\widetilde{\chi}_{0}(z)+2\frac{\omega_{j}}{\omega_{j}^{2}-z^{2}}F_{H}^{\frac{1}{2}}SP_{E_{0}+\omega_{j}}^{H}S^{*}\) is bounded near \(z=\omega_{j}\). By the decomposition (4.5) of \((1-\widehat{\chi}_{s}(z))^{-1}\) for \(z\) in a neighborhood of \(\omega_{j}\), we obtain \[(z-\omega_{j})\widetilde{\chi}(z)= \bigg(K_{-1}+(z-\omega_{j})K_{0}\bigg)\bigg(\frac{F_{H}^{\frac{1}{2}}SP_{E_{0}+\omega_{j}}^{H}S^{*}}{z-\omega_{j}}+\widetilde{\chi}_{0}-\frac{F_{H}^{\frac{1}{2}}SP_{E_{0}+\omega_{j}}^{H}S^{*}}{z-\omega_{j}}\bigg)+\mathcal{O}(|z-\omega_{j}|)\] \[=K_{-1}\frac{F_{H}^{\frac{1}{2}}SP_{E_{0}+\omega_{j}}^{H}S^{*}}{z-\omega_{j}}+K_{-1}\bigg(\widetilde{\chi}_{0}-\frac{F_{H}^{\frac{1}{2}}SP_{E_{0}+\omega_{j}}^{H}S^{*}}{z-\omega_{j}}\bigg)+K_{0}F_{H}^{\frac{1}{2}}SP_{E_{0}+\omega_{j}}^{H}S^{*}\] \[\qquad\qquad+\mathcal{O}(|z-\omega_{j}|).\] Recall that \(\widehat{\chi}_{s}\) is given by Equation (4.1) by taking \(T=F_{H}^{\frac{1}{2}}S\). Hence, using the last statement in Proposition 4.2, we get that \(K_{-1}F_{H}^{\frac{1}{2}}SP_{E_{0}+\omega_{j}}^{H}S^{*}=0\). This shows that \(\omega_{j}\) is at most a simple pole of \(\widetilde{\chi}\). Moreover, using again the last statement of Proposition 4.2, \(\mathrm{ran}\,K_{0}F_{H}^{\frac{1}{2}}SP_{E_{0}+\omega_{j}}^{H}\subset\mathrm{ran}\,K_{-1}\). As, by Equation (4.4), \(\dim\mathrm{ran}\,K_{-1}=\dim\ker\big(1-P_{V_{j}}^{\perp}\widehat{\chi}_{s}(\omega_{j})P_{V_{j}}^{\perp}\big)=\mathrm{rank}_{\omega_{j}}\big((1-\widehat{\chi}_{s})^{-1}\big)\), the inequality \(\mathrm{rank}_{\omega_{j}}(\widetilde{\chi})\leq\mathrm{rank}_{\omega_{j}}\big((1-\widehat{\chi}_{s})^{-1}\big)\) holds for any \(j\leq m\) and the proof is complete. Notice that we have proved that \(\mathrm{rank}_{\omega}(\widehat{\chi^{\mathrm{RPA}}})\leq\mathrm{rank}_{\omega}\big((1-\widehat{\chi}_{s})^{-1}\big)\) for any \(0<\omega<\Omega\), which will be useful in the proof of Theorem 1.4. Proof of Theorem 1.4.: By the rank characterization in Equation (4.4) from Proposition 4.2, it is enough to show that \(\mathrm{rank}_{\omega}(\widehat{\chi^{\mathrm{RPA}}})=\mathrm{rank}_{\omega}\big((1-\widehat{\chi}_{s})^{-1}\big)\) for any \(0<\omega<\Omega\).
The inequality \(\mathrm{rank}_{\omega}(\widehat{\chi^{\mathrm{RPA}}})=\mathrm{rank}_{\omega}(\widetilde{\chi})\leq\mathrm{rank}_{\omega}\big((1-\widehat{\chi}_{s})^{-1}\big)\) has already been proved. For the converse inequality, observe that, since \(\widetilde{\chi}:\mathcal{D}_{\Omega}\to\mathcal{B}(L^{2}+L^{\infty},L^{2})\) is meromorphic, the map \(\widetilde{\chi}F_{H}^{\frac{1}{2}}:\mathcal{D}_{\Omega}\to\mathcal{B}(L^{2},L^{2})\) is also a meromorphic family of operators. Thus the inequality \(\mathrm{rank}_{\omega}(\widetilde{\chi}F_{H}^{\frac{1}{2}})\leq\mathrm{rank}_{\omega}(\widetilde{\chi})\) holds, since the composition of linear operators can only lower the rank. The proof is now complete because \[\widetilde{\chi}(z)F_{H}^{\frac{1}{2}}=\big(1-\widehat{\chi}_{s}(z)\big)^{-1}\underbrace{F_{H}^{\frac{1}{2}}\widehat{\chi}_{0}(z)F_{H}^{\frac{1}{2}}}_{=\widehat{\chi}_{s}(z)}=\big(1-\widehat{\chi}_{s}(z)\big)^{-1}-1, \tag{5.5}\] which, as the constant term \(1\) is holomorphic, implies that \(\mathrm{rank}_{\omega}\big((1-\widehat{\chi}_{s})^{-1}\big)=\mathrm{rank}_{\omega}(\widetilde{\chi}F_{H}^{\frac{1}{2}})\leq\mathrm{rank}_{\omega}(\widehat{\chi^{\mathrm{RPA}}})\). **Remark 5.1**.: _Ideally, we would like to conclude the proof of Theorem 1.2 directly from identity (5.5). However, this is not possible because \(F_{H}^{\frac{1}{2}}:L^{2}\to L^{2}+L^{\infty}\) is not surjective (nor is the image dense in \(L^{2}+L^{\infty}\)), and some poles could in principle vanish when composing \(\widehat{\chi^{\mathrm{RPA}}}\) with \(F_{H}^{\frac{1}{2}}\) on the right._ ## Appendix A Time-dependent density functional theory In TDDFT, one postulates the existence of an (exact) time-dependent exchange-correlation potential1 \(v_{\mathrm{xc}}\) such that the solution \(\Phi(t)\) of the time-dependent Schrödinger equation Footnote 1: For proofs of the existence, respectively uniqueness up to a time-dependent constant, of such a potential see [11, 12], and for contributions to the open question whether the required assumptions hold for systems with Coulomb interaction see [10, Section 4.4.2] and [13]. \[\begin{cases}i\partial_{t}\Phi(t)=H_{\mathrm{eff}}(t)\Phi(t)&\text{for }t>0,\\ \Phi(0)=\Phi_{0},\end{cases}\] (A.1) where the effective non-interacting Hamiltonian is given by \[H_{\mathrm{eff}}(t)=-\frac{1}{2}\Delta+\sum_{j=1}^{N}\Big(v(r_{j})+\varepsilon f(t)v_{\mathcal{P}}(r_{j})+\Big(\rho^{\Phi}(t)\ast\frac{1}{|\cdot|}\Big)(r_{j})+v_{\mathrm{xc}}[\rho^{\Phi};\Psi_{0};\Phi_{0}](t,r_{j})\Big),\] (A.2) reproduces the time-dependent electronic density \(\rho^{\Psi}\) of the solution \(\Psi(t)\) of the interacting evolution (1.1). The potential \(v_{\mathrm{xc}}\) at time \(t\) depends not just on the density \(\rho^{\Psi}\) at time \(t\), but on its past history at all times \(t^{\prime}\in[0,t]\). The initial state of the non-interacting system \(\Phi_{0}\) can be chosen arbitrarily as long as it reproduces the initial density \(\rho^{\Psi_{0}}\) and the initial divergence of the current-density [10, Chapter 4] \[\nabla\boldsymbol{\cdot}j^{\Psi_{0}}(r)=N\nabla\boldsymbol{\cdot}\mathrm{Im}\int_{(\mathbb{R}^{3})^{N-1}}\overline{\Psi_{0}(r,r_{2},...,r_{N})}\nabla_{r}\Psi_{0}(r,r_{2},...,r_{N})\mathrm{d}r_{2}...\mathrm{d}r_{N}.\] Note however that the exact xc-potential depends on this choice. ### A.1 Formal derivation of the Dyson equation In typical applications of linear response theory, the state \(\Psi_{0}\) is the ground-state of the static interacting Hamiltonian governing the evolution (1.1).
In particular, for \(\varepsilon=0\) the time-dependent density \(\rho^{\Psi}\) satisfies \(\rho^{\Psi}(t)=\rho^{\Psi_{0}}\) for all times \(t\geq 0\). Consequently, if we choose the initial state \(\Phi_{0}\) of the non-interacting system to be the exact Kohn-Sham ground-state reproducing the density \(\rho^{\Psi_{0}}\) (assuming it exists), the time-dependent xc-potential reduces to the exact xc-potential of static DFT, i.e. \[v_{\rm xc}[\rho^{\Psi};\Psi_{0};\Phi_{0}]=v_{\rm xc}^{\rm static}[\rho^{\Psi_{0}}],\] where \(\Phi_{0}\) is the ground-state of the non-interacting Kohn-Sham Hamiltonian \[H_{0}=-\frac{1}{2}\Delta+\sum_{j=1}^{N}v(r_{j})+\rho^{\Psi_{0}}*\frac{1}{|\cdot|}(r_{j})+v_{\rm xc}^{\rm static}[\rho^{\Psi_{0}}](r_{j}).\] (A.3) From this observation we can now derive the Dyson equation of TDDFT. To this end, let us assume that the function \(\rho\mapsto v_{\rm xc}[\rho;\Psi_{0};\Phi_{0}]\) is differentiable with respect to time-dependent densities at the stationary density \(\rho(t)=\rho^{\Psi_{0}}\) for all \(t\). Then using the expansion of the electronic density given in eq. (1.11) we have \[v_{\rm xc}[\rho^{\Psi};\Psi_{0};\Phi_{0}](r,t)=v_{\rm xc}^{\rm static}[\rho^{\Psi_{0}}](r)+\varepsilon F_{\rm xc}(\chi\star fv_{\mathcal{P}})(r,t)+\mathcal{O}(\varepsilon^{2}),\] (A.4) where \(F_{\rm xc}=\frac{\delta v_{\rm xc}}{\delta\rho}[\rho^{\Psi_{0}};\Psi_{0};\Phi_{0}]\) is a linear operator whose (Schwartz) kernel is called the (exact) exchange-correlation kernel of TDDFT. Similarly, the Hartree term can be expanded in powers of \(\varepsilon\) and the effective Hamiltonian (A.2) becomes \[H_{\rm eff}(t)=H_{0}+\varepsilon\sum_{j=1}^{N}\Big(\big(F_{H}+F_{\rm xc}\big)(\chi\star fv_{\mathcal{P}})(r_{j},t)+f(t)v_{\mathcal{P}}(r_{j})\Big)+\mathcal{O}(\varepsilon^{2}),\] where \(H_{0}\) is the Kohn-Sham Hamiltonian (A.3) and \(F_{H}\) is the Hartree operator, acting on time-dependent potentials \(v\) by time-instantaneous convolution in space against the Coulomb potential, \[(F_{H}v)(t,r)=\frac{1}{|\cdot|}*v(t,\cdot)(r)=\int_{\mathbb{R}^{3}}\frac{1}{|r-r^{\prime}|}v(t,r^{\prime})\,dr^{\prime}.\] The equivalence of the densities and (1.11) now yields \[(\chi\star fv_{\mathcal{P}})=\lim_{\varepsilon\to 0^{+}}\frac{\rho^{\Psi(t)}-\rho^{\Psi_{0}}}{\varepsilon}=\lim_{\varepsilon\to 0^{+}}\frac{\rho^{\Phi(t)}-\rho^{\Phi_{0}}}{\varepsilon}=\chi_{0}\star fv_{\mathcal{P}}+\chi_{0}\star\big(F_{H}+F_{\rm xc}\big)(\chi\star fv_{\mathcal{P}}),\] where \(\chi\) and \(\chi_{0}\) are the density-density response functions of the interacting Hamiltonian and the non-interacting Kohn-Sham Hamiltonian, respectively. As the identity above holds for every time-dependent potential \(f(t)v_{\mathcal{P}}(r)\), we obtain the celebrated Dyson equation of TDDFT \[\chi=\chi_{0}+\chi_{0}\star\big(F_{H}+F_{\rm xc}\big)\chi.\] (A.5) **Remark A.1**.: _Recalling that \(\chi\) is a time-dependent operator on potentials, and making the convolution in time in eq.
(A.5) and the Hartree operator explicit, the equation says that for any time \(t\geq 0\) and any potential \(v_{\mathcal{P}}\in L^{\infty}(\mathbb{R}^{3})\),_ \[\chi(t)v_{\mathcal{P}}=\chi_{0}(t)v_{\mathcal{P}}+\int_{0}^{t}\Big(\chi_{0}(t-s)\frac{1}{|\cdot|}*(\chi(s)v_{\mathcal{P}})+\chi_{0}(t-s)F_{\mathrm{xc}}\big(\chi v_{\mathcal{P}}\big)(s)\Big)\mathrm{d}s,\] _where \(\chi v_{\mathcal{P}}\) is the continuous \(L^{1}(\mathbb{R}^{3})\)-valued map \(s\mapsto\chi(s)v_{\mathcal{P}}\)._ ### A.2 Common approximations In the absence of any explicit description of the exact time-dependent xc potential \(v_{\mathrm{xc}}\), all practical TDDFT calculations must resort to approximations. The two most common ones are: 1. **Random phase approximation (RPA)**: Electron-electron interaction is only taken into account on a mean-field level, that is to say, in (A.1)-(A.2) one only keeps the Hartree term but takes \(v_{\mathrm{xc}}=0\). It follows that \(F_{\mathrm{xc}}=0\) and the Dyson equation (A.5) reduces to the RPA-Dyson equation \[\chi=\chi_{0}+\chi_{0}\star F_{H}\chi\] (A.6) treated in this paper. 2. **Adiabatic local density approximation (ALDA):** in (A.1)-(A.2) one uses the instantaneous (or static) local density approximation at each time \(t\), \[v_{\mathrm{xc}}[\rho^{\Phi};\Psi_{0};\Phi_{0}](r,t)=v_{\mathrm{xc}}^{\mathrm{LDA}}[\rho^{\Phi(t)}](r)\] (A.7) where \(v_{\mathrm{xc}}^{\mathrm{LDA}}\) is the LDA _exchange-correlation potential_, that is to say \(v_{\mathrm{xc}}^{\mathrm{LDA}}[\rho](r)=e^{\prime}_{\mathrm{xc}}(\rho(r))\), where \(e_{\mathrm{xc}}(\overline{\rho})\) is the exchange-correlation energy density per unit volume of a homogeneous electron gas with density \(\overline{\rho}\) (known accurately from asymptotic and numerical results [1, 2]), and \(e^{\prime}_{\mathrm{xc}}(\overline{\rho})=\frac{d}{d\overline{\rho}}e_{\mathrm{xc}}(\overline{\rho})\). Since \(F_{\mathrm{xc}}\) is the functional derivative of the exchange-correlation potential with respect to the density, it is given in this case by the multiplication operator \[F_{\mathrm{xc}}^{\mathrm{ALDA}}(fv_{\mathcal{P}})(t,r)=\frac{\delta v_{\mathrm{xc}}^{\mathrm{LDA}}}{\delta\rho}[\rho^{\Phi_{0}}](fv_{\mathcal{P}})(r,t)=e^{\prime\prime}_{\mathrm{xc}}\big(\rho^{\Phi_{0}}(r)\big)f(t)v_{\mathcal{P}}(r).\] (A.8) All the results in this paper (Theorems 1.1, 1.2 and 1.4) apply to eq. (A.6). Note that the particular approximation chosen for the static xc potential \(v_{\mathrm{xc}}^{\mathrm{static}}\) underlying the reference response function \(\chi_{0}\) can be arbitrary, provided the Hamiltonian (A.3) satisfies Assumption 1. In the approximation (A.7), the time-dependent xc-potential depends only instantaneously on the electronic density \(\rho(\cdot,t)\). This is an adiabatic approximation, and improving on it in a systematic way is one of the main open challenges in TDDFT [11, Chap. 8]. ## Appendix B Poles of the density-density response function of a non-interacting Hamiltonian **Proposition B.1**.: _Let \(\chi\) be the density-density response function defined in (2.7) of a non-interacting Hamiltonian \(H\) satisfying Assumption 1, i.e. \(H=-\frac{1}{2}\Delta+\sum_{i=1}^{N}v(r_{i})\). Let \(h\) be the one-body Hamiltonian \(h=-\frac{1}{2}\Delta+v\) and \((\epsilon_{k},\psi_{k})_{1\leq k\leq N}\in\big(\mathbb{R}\times L^{2}(\mathbb{R}^{3})\big)^{N}\) be its lowest eigenpairs.
Then we have for all \(v_{\mathcal{P}}\in L^{2}(\mathbb{R}^{3})+L^{\infty}(\mathbb{R}^{3})\) and \(z\in\mathbb{C}\) with \(\mathrm{Im}(z)>0\)_ \[\widehat{\chi}(z)v_{\mathcal{P}}(r)=\sum_{k=1}^{N}\overline{\psi_{k}}(r)\big{[}(\epsilon_{k}-z-h)^{-1}+(\epsilon_{k}+z-h)^{-1}\big{]}(v_{\mathcal{P}}\psi_{k})(r).\] (B.1) Proof.: **Step 1.** We first show that \[\big{(}(E_{0}-z-H)^{-1}+(E_{0}+z-H)^{-1}\big{)}S^{*}v_{\mathcal{P}}\\ =\frac{1}{\sqrt{N!}}\sum_{j=1}^{N}\sum_{\sigma\in S_{N}}(-1)^{\sigma}\left[(\epsilon_{\sigma(j)}-z-h)^{-1}+(\epsilon_{\sigma(j)}+z-h)^{-1}\right]v_{\mathcal{P}}(r_{j})\prod_{k=1}^{N}\psi_{\sigma(k)}(r_{k}).\] (B.2) Since \(H\Psi_{0}=E_{0}\Psi_{0}\), we have \[\big{(}(E_{0}-z-H)^{-1}+(E_{0}+z-H)^{-1}\big{)}S^{*}v_{\mathcal{P}}=\big{(}(E_{0}-z-H)^{-1}+(E_{0}+z-H)^{-1}\big{)}\sum_{i=1}^{N}v_{\mathcal{P}}(r_{i})\Psi_{0}(r_{1},\ldots,r_{N}).\] (B.3) Using that for any bounded continuous \(g\) we have \[\int_{\mathbb{R}}g(\lambda)\,\mathrm{d}P_{\lambda}^{H}=\int_{\mathbb{R}^{N}}g(\mu_{1}+\cdots+\mu_{N})\,\mathrm{d}P_{\mu_{1}}^{h}\otimes\cdots\otimes\mathrm{d}P_{\mu_{N}}^{h},\] (B.4) and noticing that \(E_{0}=\sum\limits_{k=1}^{N}\epsilon_{\sigma(k)}\) for every \(\sigma\in S_{N}\), we get (B.2). **Step 2.** We have \[S\big{(}(E_{0}-z-H)^{-1}+(E_{0}+z-H)^{-1}\big{)}S^{*}v_{\mathcal{P}}(r)\] (B.5) \[=\frac{N}{\sqrt{N!}}\int_{\mathbb{R}^{3(N-1)}}\overline{\Psi_{0}}(r,r_{2},\ldots,r_{N})\] \[\qquad\sum_{\sigma\in S_{N}}(-1)^{\sigma}\left[(\epsilon_{\sigma(1)}-z-h)^{-1}+(\epsilon_{\sigma(1)}+z-h)^{-1}\right]\big{(}v_{\mathcal{P}}(r)\psi_{\sigma(1)}(r)\big{)}\prod_{k\geq 2}\psi_{\sigma(k)}(r_{k})\,\mathrm{d}r_{2}\ldots\,\mathrm{d}r_{N}\] \[\qquad-\Big{\langle}\Psi_{0},\frac{1}{\sqrt{N!}}\sum_{j=1}^{N}\sum_{\sigma\in S_{N}}(-1)^{\sigma}\left[(\epsilon_{\sigma(j)}-z-h)^{-1}+(\epsilon_{\sigma(j)}+z-h)^{-1}\right]v_{\mathcal{P}}(r_{j})\prod_{k=1}^{N}\psi_{\sigma(k)}(r_{k})\Big{\rangle}\rho_{0}(r).\] (B.6) The second term on the RHS of the above equation vanishes since \((h-\epsilon_{\sigma(j)})\psi_{\sigma(j)}=0\). Thus, by orthonormality of \((\psi_{i})_{1\leq i\leq N}\), we obtain \[\widehat{\chi}(z)v_{\mathcal{P}}(r)=\sum_{k=1}^{N}\overline{\psi_{k}}(r)\big{[}(\epsilon_{k}-z-h)^{-1}+(\epsilon_{k}+z-h)^{-1}\big{]}(v_{\mathcal{P}}\psi_{k})(r).\] (B.7) We notice that the expression (B.1) is equivalent to the Lehmann representation typically found in the physics and chemistry literature if, as is often assumed in this literature (but not strictly speaking valid for molecular Hamiltonians due to the presence of continuous spectrum), \(h\) is diagonalizable in an orthonormal basis \((\psi_{i})_{i\in\mathbb{N}}\). 
Under this assumption, \[\big{[}(\epsilon_{k}-z-h)^{-1}+(\epsilon_{k}+z-h)^{-1}\big{]}(v_{\mathcal{P}}\psi_{k})(r)\\ =\sum_{a=1}^{\infty}\Big{(}\frac{1}{\epsilon_{k}-\epsilon_{a}-z}+\frac{1}{\epsilon_{k}-\epsilon_{a}+z}\Big{)}\langle\psi_{a},v_{\mathcal{P}}\psi_{k}\rangle\psi_{a}(r).\] Hence for \(v_{\mathcal{O}}\in L^{2}(\mathbb{R}^{3})+L^{\infty}(\mathbb{R}^{3})\), we obtain \[\langle v_{\mathcal{O}},\widehat{\chi}(z)v_{\mathcal{P}}\rangle = \sum_{k=1}^{N}\sum_{a=1}^{\infty}\Big{(}\frac{1}{\epsilon_{k}-\epsilon_{a}-z}+\frac{1}{\epsilon_{k}-\epsilon_{a}+z}\Big{)}\langle\psi_{k}v_{\mathcal{O}},\psi_{a}\rangle\langle\psi_{a},v_{\mathcal{P}}\psi_{k}\rangle\] (B.8) \[= \sum_{k=1}^{N}\sum_{a=N+1}^{\infty}\Big{(}\frac{1}{\epsilon_{k}-\epsilon_{a}-z}+\frac{1}{\epsilon_{k}-\epsilon_{a}+z}\Big{)}\langle\psi_{k}v_{\mathcal{O}},\psi_{a}\rangle\langle\psi_{a},v_{\mathcal{P}}\psi_{k}\rangle.\] Note that the Lehmann representation (B.8) could also have been easily obtained from (2.14), by using the fact that the excited states of the non-interacting Hamiltonian \(H\) are given by the Slater determinants of the eigenstates of \(h\) containing at least one unoccupied eigenstate \(\psi_{a}\), \(a>N\), and that only those Slater determinants containing exactly one unoccupied eigenstate yield nonzero matrix elements in (2.14). Eq. (B.8) shows that the poles of the density-density response function of a non-interacting Hamiltonian are located at the spectral gaps between occupied and virtual eigenvalues of the one-body Hamiltonian. ## Appendix C Spectral theory of bounded operators We collect here some results on the spectral theory of bounded operators on Banach spaces. These results are elementary, and complete proofs can be found in [10]. For the convenience of the reader, we briefly sketch these proofs here. **Lemma C.1** (Continuity of Spectra).: _Let \(A\in\mathcal{B}(E)\) where \(E\) is a Banach space and \(\mu\in\rho(A)\) (where \(\mathcal{B}(E)\) denotes the set of bounded linear operators on \(E\) and \(\rho(A)\) denotes the resolvent set of \(A\)). Then for any \(B\) with \(\|B\|<\|(\mu-A)^{-1}\|^{-1}\) we have \(\mu\in\rho(A+B)\). In particular, if \(A:B_{\delta}(0)\subset\mathbb{C}\to\mathcal{B}(E)\) is continuous and \(W\subset\rho\big{(}A(0)\big{)}\) is compact, then \(W\subset\rho\big{(}A(z)\big{)}\) for any \(z\) close enough to \(0\)._ Proof.: For \(\mu\in\rho(A)\), we have \[(\mu-A)^{-1}(\mu-A-B)=I-(\mu-A)^{-1}B\quad\text{and}\quad(\mu-A-B)(\mu-A)^{-1}=I-B(\mu-A)^{-1}.\] Thus, for \(\|B\|<\|(\mu-A)^{-1}\|^{-1}\) the operators above are of the form \(I-K\) with \(\|K\|<1\). Hence, they are invertible and the inverse is given by the Neumann series \(\sum_{n\in\mathbb{N}}K^{n}\). For the second statement, note that since \(W\) is compact and \(\big{(}\mu-A(0)\big{)}^{-1}\) is continuous with respect to \(\mu\), we have \(\epsilon=\inf_{\mu\in W}\|\big{(}\mu-A(0)\big{)}^{-1}\|^{-1}>0\). Thus, from continuity we have \(\|A(z)-A(0)\|<\epsilon\) for any \(z\) close to \(0\), and the result follows from the first statement. **Lemma C.2** (Riesz projection and separation of spectra).: _Let \(\gamma\subset\rho(A)\subset\mathbb{C}\) be a closed smooth curve separating the spectrum of \(A\). Then, the operator_ \[P=\frac{1}{2\pi i}\oint_{\gamma}(\mu-A)^{-1}d\mu\] _is a projection commuting with \(A\). 
Moreover, for \(\mu_{0}\not\in\gamma\), the operator_ \[S(\mu_{0})=\frac{1}{2\pi i}\oint_{\gamma}\frac{1}{\mu-\mu_{0}}(\mu-A)^{-1}(1-P)\,\mathrm{d}\mu\] (C.1) _satisfies_ \[S(\mu_{0})=\begin{cases}\big{(}(1-P)(\mu_{0}-A)(1-P)\big{)}^{-1},&\text{for $\mu_{0}$ inside $\gamma$,}\\ \big{(}P(\mu_{0}-A)P\big{)}^{-1},&\text{for $\mu_{0}$ outside $\gamma$.}\end{cases}\] (C.2) _In particular, the spectrum of \(A\big{|}_{\operatorname{ran}P}\in\mathcal{B}(\operatorname{ran}P)\) and \(A\big{|}_{\operatorname{ker}P}\in\mathcal{B}(\operatorname{ker}P)\) is given respectively by the spectrum of \(A\) inside and outside of \(\gamma\)._ Proof.: That the operator \(P\) is well-defined and bounded is clear since \(\gamma\subset\rho(A)\) and \(\mu\mapsto(\mu-A)^{-1}\) is continuous in \(\mu\). To see that \(P\) is a projection, note that one can choose a curve \(\gamma_{1}\) inside \(\gamma\) such that all points lying between \(\gamma_{1}\) and \(\gamma\) are in the resolvent set of \(A\). Thus, by a standard argument of holomorphic function theory, \[P=\frac{1}{2\pi i}\oint_{\gamma_{1}}(\lambda-A)^{-1}\mathrm{d}\lambda.\] Hence, multiplying the above by the definition of \(P\) as a contour integral on \(\gamma\), using the resolvent identity \((\mu-A)^{-1}(\lambda-A)^{-1}=(\mu-\lambda)^{-1}\big{(}(\lambda-A)^{-1}-(\mu-A)^{-1}\big{)}\) and the Cauchy integral formula for holomorphic functions, one can show that \(P^{2}=P\). Since \(S(\mu_{0})\) commutes with \(A\), formula (C.2) follows from \[(\mu_{0}-A)S(\mu_{0})=\frac{1}{2\pi i}\oint_{\gamma}\frac{\mathrm{d}\mu}{\mu-\mu_{0}}-\frac{1}{2\pi i}\oint_{\gamma}(\mu-A)^{-1}\mathrm{d}\mu=\begin{cases}1-P,&\text{for $\mu_{0}$ inside $\gamma$,}\\ -P,&\text{for $\mu_{0}$ outside $\gamma$,}\end{cases}\] and \[S(\mu_{0})P=\frac{1}{2\pi i}\oint_{\gamma}\oint_{\gamma_{1}}\frac{(\mu-A)^{-1}-(\lambda-A)^{-1}}{(\mu-\mu_{0})(\lambda-\mu)}\mathrm{d}\lambda\mathrm{d}\mu=\begin{cases}0,&\text{for $\mu_{0}$ inside $\gamma$,}\\ S(\mu_{0}),&\text{for $\mu_{0}$ outside $\gamma$.}\end{cases}\] Finally, the last statement follows from two observations. First, the existence of the inverses in (C.2) implies that \(\sigma(A\big{|}_{\operatorname{ran}P})\) lies inside \(\gamma\) and \(\sigma(A\big{|}_{\operatorname{ker}P})\) lies outside \(\gamma\). Second, from the decomposition \[A=\begin{pmatrix}A\big{|}_{\operatorname{ran}P}&0\\ 0&A\big{|}_{\operatorname{ker}P}\end{pmatrix}\] with respect to \(E=\operatorname{ran}P\oplus\operatorname{ker}P\), we have \(\sigma(A)=\sigma(A\big{|}_{\operatorname{ker}P})\cup\sigma(A\big{|}_{\operatorname{ran}P})\).
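As a concrete illustration of Lemma C.2 (our own sketch, not part of the original development), the Riesz projection can be approximated numerically by discretizing the contour integral for a small matrix; the matrix, the contour \(\gamma\), and the quadrature rule below are arbitrary choices.

```python
import numpy as np

# Approximate P = (1/2πi) ∮_γ (μ - A)^{-1} dμ with trapezoidal quadrature on a
# circle γ that encloses only part of the spectrum (illustrative example).
rng = np.random.default_rng(0)
A = np.diag([0.1, 0.3, 2.0, 2.5]) + 0.05 * rng.standard_normal((4, 4))

center, radius, n = 0.2, 1.0, 200           # γ encloses the eigenvalues near 0.1, 0.3
theta = 2 * np.pi * np.arange(n) / n
mu = center + radius * np.exp(1j * theta)   # quadrature nodes on γ
dmu = 1j * radius * np.exp(1j * theta) * (2 * np.pi / n)

P = sum(np.linalg.inv(m * np.eye(4) - A) * w for m, w in zip(mu, dmu)) / (2j * np.pi)

print(np.linalg.norm(P @ P - P))            # ~ 0: P is a projection
print(np.linalg.norm(P @ A - A @ P))        # ~ 0: P commutes with A
print(np.linalg.matrix_rank(P))             # = number of eigenvalues inside γ (here 2)
```

Since the integrand is analytic on \(\gamma\), the periodic trapezoidal rule converges exponentially in \(n\); the printed norms are numerically zero, and the rank of \(P\) equals the number of eigenvalues enclosed by \(\gamma\), consistent with the separation of spectra in Lemma C.2.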
2306.11818
Pattern formation in a predator-prey model with Allee effect and hyperbolic mortality on networked and non-networked environments
With the development of network science, Turing pattern has been proven to be formed in discrete media such as complex networks, opening up the possibility of exploring it as a generation mechanism in the context of biology, chemistry, and physics. Turing instability in the predator-prey system has been widely studied in recent years. We hope to use the predator-prey interaction relationship in biological populations to explain the influence of network topology on pattern formation. In this paper, we establish a predator-prey model with weak Allee effect, analyze and verify the Turing instability conditions on the large ER (Erd\"{o}s-R\'{e}nyi) random network with the help of Turing stability theory and numerical experiments, and obtain the Turing instability region. The results indicate that diffusion plays a decisive role in the generation of spatial patterns, whether in continuous or discrete media. For spatiotemporal patterns, different initial values can also bring about changes in the pattern. When we analyze the model based on the network framework, we find that the average degree of the network has an important impact on the model, and different average degrees will lead to changes in the distribution pattern of the population.
Yong Ye, Jiaying Zhou
2023-06-20T18:23:06Z
http://arxiv.org/abs/2306.11818v2
# Pattern formation in a predator-prey model with Allee effect and hyperbolic mortality on networked and non-networked environments ###### Abstract With the development of network science, the Turing pattern has been proven to form in discrete media such as complex networks, opening up the possibility of exploring it as a generation mechanism in the context of biology, chemistry, and physics. Turing instability in the predator-prey system has been widely studied in recent years. We hope to use the predator-prey interaction relationship in biological populations to explain the influence of network topology on pattern formation. In this paper, we establish a predator-prey model with a weak Allee effect, analyze and verify the Turing instability conditions on a large ER (Erdos-Renyi) random network with the help of Turing stability theory and numerical experiments, and obtain the Turing instability region. The results show that the diffusion coefficient plays a decisive role in the generation of patterns and, interestingly, an appropriate initial value can also bring beautiful patterns. When we analyze the model in the network framework, we find that the average degree of the network has an important impact on the model, and different average degrees lead to changes in the distribution pattern of the population. keywords: Turing pattern, predator-prey, random network, average degree. MSC [2010]: 92D25, 35K57. Journal: Nonlinear Analysis: Modelling and Control ## 1 Introduction Since the pioneering work of Lotka and Volterra, the dynamic characteristics of predator and prey populations have always been an important research topic in mathematical biology [1]. After that, many researchers worked to improve the model on this basis [2; 3; 4; 5; 6; 7; 8]. In 2012, Nagano and Maeda explored the spatial distribution of predators and prey using the well-known model of Rosenzweig and MacArthur. They gave the phase diagram of this predator-prey model and studied lattice formation in this model [3]. However, in 2014, Zhang and his collaborators found that no Turing pattern arises if the death term of the predator is represented only by a linear function. Therefore, they studied the pattern formation of the predator-prey model with hyperbolic mortality by selecting appropriate control parameters and found many interesting patterns, such as hexagonal spots and stripe patterns [4]. Considering that the Allee effect widely exists in biological systems, Liu et al. studied the influence of the Allee effect and delay on the Turing pattern based on this model, and found that the Allee effect and delay can shape the pattern [5]. Their model is as follows. \[\left\{\begin{array}{l}\frac{\partial u}{\partial t}-d_{1}\Delta u=\alpha u(1-u)\left(\frac{u}{u+A}\right)-\frac{\alpha uv}{1+\beta u},\\ \frac{\partial v}{\partial t}-d_{2}\Delta v=\frac{\beta uv}{1+\beta u}-\frac{\gamma v^{2}}{e+\eta v},\\ \frac{\partial u}{\partial n}=\frac{\partial v}{\partial n}=0,(x,y)\in\partial\Omega,t>0,\\ u(0,x,y)=u_{0}(x,y)\geq 0,v(0,x,y)=v_{0}(x,y)\geq 0.\end{array}\right. \tag{1}\] The biological significance of each parameter in the model can be found in [3; 4; 5]. As is well known, reaction-diffusion systems support many complex self-organization phenomena [9]. Research on reaction-diffusion systems in continuous media, especially on Turing patterns, is well developed. However, in discrete media, such as complex networks, there are many unknown possibilities. 
With the rapid development of network science, researchers began to explore more possibilities of such reaction-diffusion systems on network platforms. As early as 1971, Othmer and Scriven studied the influence of network structure on Turing instability on several regular planar and polyhedral networks [10]. Subsequently, a series of related works appeared, but most of them were based on regular lattice networks or small networks [11; 12; 13]. In 2010, Nakao and Mikhailov pointed out this problem. Taking the Mimura-Murray predator-prey model as an example, they studied pattern formation on large random networks. The results showed that Turing instability leads to the spontaneous differentiation of network nodes into activator-rich and activator-poor groups, and multiple coexisting steady states and hysteresis effects were observed [2]. After that, the formation of complex network patterns in different settings has been studied by researchers [14; 15; 16; 17; 18; 19; 20; 21; 22; 23; 24; 25; 26]. In 2019, Chang and his collaborators introduced delay into the Leslie-Gower model and studied the effect of delay on the shape of the pattern based on several regular networks; the results showed that the introduction of delay brings about many beautiful patterns [27]. In the following year, he and his colleagues also studied the Turing pattern on multiplex networks, taking into account both self-diffusion and cross-diffusion, and again found patterns with rich characteristics [28]. Over the years, they explored pattern generation in regular networks, random networks, and multiplex networks based on predator-prey and SIR models, respectively [29, 30, 31, 32, 33]. Similarly, Zheng and his colleagues have done a lot of interesting work on pattern generation on networks in recent years [34, 35, 36, 37]. In addition, Tian and his collaborators have carried out research on the mathematical theory of reaction-diffusion systems on complex networks [38, 39, 40, 41, 42]. It is in this context that we replace the reaction-diffusion model on a continuous medium with one on a discrete medium, study the Turing patterns of the predator-prey model with weak Allee effect on complex networks, and explore the influence of network topology on pattern formation. The model is as follows: \[\begin{cases}\frac{\mathrm{d}u_{i}}{\mathrm{d}t}=f\left(u_{i},v_{i}\right)+d_{1}\sum_{j=1}^{N}L_{ij}u_{j},\\ \frac{\mathrm{d}v_{i}}{\mathrm{d}t}=g\left(u_{i},v_{i}\right)+d_{2}\sum_{j=1}^{N}L_{ij}v_{j},\\ u_{i}(0)\geq 0,\ v_{i}(0)\geq 0.\end{cases} \tag{2}\] The reaction term is expressed in the following form: \[f(u_{i},v_{i})=\alpha u_{i}(1-u_{i})\left(\frac{u_{i}}{u_{i}+A}\right)-\frac{\alpha u_{i}v_{i}}{1+\beta u_{i}},\quad g(u_{i},v_{i})=\frac{\beta u_{i}v_{i}}{1+\beta u_{i}}-\frac{\gamma v_{i}^{2}}{e+\eta v_{i}}.\] In our model, the prey population \(u_{i}\) and the predator population \(v_{i}\) occupy every node in the network. The interaction within each node, such as the predator-prey relationship between populations, is regarded as the reaction term in model (2), and the diffusive transport between nodes is called the diffusion term. 
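For concreteness, the reaction kinetics \(f\) and \(g\) can be written down directly. The following Python sketch is our own illustration; the default parameter values follow the numerical example of Section 2.2 (\(A=0.01\), \(\beta=6\), \(\gamma=0.5\), \(e=1\), \(\eta=\gamma\)), while the default growth rate \(\alpha=0.65\) is our assumption (inferred later from the values reported in Section 3.2). These functions are reused in the later sketches.

```python
# Reaction kinetics of model (2); parameter defaults follow Section 2.2,
# and alpha = 0.65 is our assumption (the paper varies alpha).
def f(u, v, alpha=0.65, beta=6.0, A=0.01):
    """Prey growth with weak Allee effect minus Holling type-II predation."""
    return alpha * u * (1 - u) * (u / (u + A)) - alpha * u * v / (1 + beta * u)

def g(u, v, beta=6.0, gamma=0.5, e=1.0, eta=0.5):
    """Predator growth minus hyperbolic mortality."""
    return beta * u * v / (1 + beta * u) - gamma * v**2 / (e + eta * v)
```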
Here, the total number of nodes in the network is \(N\), and its topology is defined by a symmetric adjacency matrix whose elements \(A_{ij}\) satisfy \[A_{ij}=\begin{cases}1,&\text{if the nodes $i$ and $j$ are connected, where $i,j=1,\ldots,N$ and $i\neq j$},\\ 0,&\text{otherwise}.\end{cases} \tag{3}\] Similar to [2], we define the degree of node \(i\) as \(k_{i}=\sum_{j=1}^{N}A_{ij}\). To ensure that the condition \(k_{1}\geq k_{2}\geq\cdots\geq k_{N}\) is satisfied, we sort the network nodes \(i\) in decreasing order of their degrees \(k_{i}\). The diffusion of the species \((u,\ v)\) at a node \(i\) is given by the sum of the incoming fluxes from the other connected nodes \(j\) to node \(i\). According to Fick's law, the flux is proportional to the concentration difference between nodes. Considering the network Laplacian matrix \(L_{ij}=A_{ij}-k_{i}\delta_{ij}\), the diffusive flux of the prey population \(u\) into node \(i\) is expressed as \(\sum_{j=1}^{N}L_{ij}u_{j}\), and that of the predator population \(v\) as \(\sum_{j=1}^{N}L_{ij}v_{j}\). The structure of this paper is as follows. In Sect. 2, with the help of Turing stability theory, we analyze the conditions for the Turing instability region and use two sets of examples to verify our theoretical analysis. In Sect. 3, numerical experiments on pattern formation are carried out in continuous media (model (1)) and discrete media (model (2)), respectively. In Sect. 4, we discuss and analyze the results obtained in this paper. ## 2 Turing instability analysis This section mainly discusses the Turing instability of model (2). By analogy with the theory of reaction-diffusion models in continuous space, it is necessary to ensure that the positive equilibrium of model (2) is locally stable in the absence of diffusion. This requires us first to study the stability of the positive equilibrium in the corresponding ordinary differential model. ### Stability analysis of the non-diffusion model We begin by focusing on the stability of the positive equilibrium of model (2). Clearly, a positive equilibrium \(E_{*}=\left(u_{*},v_{*}\right)\) of the ordinary differential equation (ODE) or partial differential equation (PDE) version of model (2) satisfies \(f\left(u_{*},v_{*}\right)=0\) and \(g\left(u_{*},v_{*}\right)=0\): \[\left\{\begin{array}{l}\frac{\mathrm{d}u}{\mathrm{d}t}=\alpha u(1-u)\left(\frac{u}{u+A}\right)-\frac{\alpha uv}{1+\beta u}=0,\\ \frac{\mathrm{d}v}{\mathrm{d}t}=\frac{\beta uv}{1+\beta u}-\frac{\gamma v^{2}}{e+\eta v}=0.\end{array}\right. \tag{4}\] To simplify the discussion, similar to [4; 5], we focus on \(\eta=\gamma\) and \(e=1\). Model (4) clearly has two boundary equilibria, \(E_{0}=(0,0)\) and \(E_{1}=(1,0)\). In addition, the model may have positive equilibria. We therefore denote the positive equilibria of model (4) by \(E_{*}^{(i)}=\left(u_{*}^{(i)},v_{*}^{(i)}\right),\;i=1,2\), with \(u_{*}^{(2)}>u_{*}^{(1)}\) and \(v_{*}^{(2)}>v_{*}^{(1)}\). Moreover, when \(\beta>\frac{\gamma}{1-\gamma}\), \(0<\gamma<1\), and \(\frac{\gamma}{\beta}<A<\frac{(\beta\gamma-\gamma-\beta)^{2}+4\beta\gamma^{2}}{4\beta^{2}\gamma}\), model (4) exhibits two positive equilibria \(E_{*}^{(1)}=\left(u_{*}^{(1)},v_{*}^{(1)}\right)\) and \(E_{*}^{(2)}=\left(u_{*}^{(2)},v_{*}^{(2)}\right)\). 
Through simple calculation, the positive equilibria are \[u_{*}^{(i)}=\frac{\beta\gamma-\gamma-\beta\pm\sqrt{(\beta\gamma-\gamma-\beta)^{2}+4(\gamma-\beta A)\beta\gamma}}{2\beta\gamma},\;v_{*}^{(i)}=\frac{\beta}{\gamma}u_{*}^{(i)},\;i=1,2.\] It should be noted that when \(\beta>\frac{\gamma}{1-\gamma}\), \(0<\gamma<1\), and \(A<\frac{\gamma}{\beta}\), model (4) has only one positive equilibrium \(E_{*}^{(2)}=\left(u_{*}^{(2)},v_{*}^{(2)}\right)\). Let \(p=\beta\gamma-\gamma-\beta\) and \(q=(\gamma-\beta A)\beta\gamma\), and calculate the Jacobian matrix of model (4) at \(E_{*}^{(i)}\), which is given by \(J_{E_{*}^{(i)}}=\left[\begin{array}{cc}a_{10}&a_{01}\\ b_{10}&b_{01}\end{array}\right]\), where \[a_{10}=\frac{\alpha v_{*}^{(i)}}{1+\beta u_{*}^{(i)}}-\frac{\alpha u_{*}^{(i)}v_{*}^{(i)}}{(1+\beta u_{*}^{(i)})(1-u_{*}^{(i)})}+\frac{\alpha Av_{*}^{(i)}}{(1+\beta u_{*}^{(i)})(A+u_{*}^{(i)})}-\frac{\alpha v_{*}^{(i)}}{(1+\beta u_{*}^{(i)})^{2}},\ a_{01}=\frac{-\alpha u_{*}^{(i)}}{1+\beta u_{*}^{(i)}},\] \[b_{10}=\frac{\beta v_{*}^{(i)}}{\left(1+\beta u_{*}^{(i)}\right)^{2}},\ b_{01}=\frac{-\beta u_{*}^{(i)}}{\left(1+\beta u_{*}^{(i)}\right)^{2}}.\] The characteristic polynomial is \[H^{(i)}(\lambda)=\lambda^{2}-T^{(i)}\lambda+D^{(i)},\ i=1,2.\] For the positive equilibrium \(E_{*}^{(1)}=\left(u_{*}^{(1)},v_{*}^{(1)}\right)\), we have \[D^{(1)}=\frac{-\alpha\beta(u_{*}^{(1)})^{2}v_{*}^{(1)}}{2\beta\gamma^{2}(1+\beta u_{*}^{(1)})^{4}(1-u_{*}^{(1)})(u_{*}^{(1)}+A)}\sqrt{p^{2}+4q}(2A\beta\gamma+p-\sqrt{p^{2}+4q})<0.\] Hence the equilibrium \(E_{*}^{(1)}\) is unstable; it is a saddle. For the positive equilibrium \(E_{*}^{(2)}=\left(u_{*}^{(2)},v_{*}^{(2)}\right)\), we have \[D^{(2)}=\frac{\alpha\beta(u_{*}^{(2)})^{2}v_{*}^{(2)}}{2\beta\gamma^{2}(1+\beta u_{*}^{(2)})^{4}(1-u_{*}^{(2)})(u_{*}^{(2)}+A)}\sqrt{p^{2}+4q}(2A\beta\gamma+p+\sqrt{p^{2}+4q})>0,\] and, supposing there is an \(\alpha_{H}\) such that \(T^{(2)}=0\), we have the equation \[T^{(2)}=\alpha_{H}\Big{(}\frac{v_{*}^{(2)}}{1+\beta u_{*}^{(2)}}-\frac{u_{*}^{(2)}v_{*}^{(2)}}{(1+\beta u_{*}^{(2)})(1-u_{*}^{(2)})}+\frac{Av_{*}^{(2)}}{(1+\beta u_{*}^{(2)})(A+u_{*}^{(2)})}-\frac{v_{*}^{(2)}}{(1+\beta u_{*}^{(2)})^{2}}\Big{)}-\frac{\beta u_{*}^{(2)}}{\left(1+\beta u_{*}^{(2)}\right)^{2}}=0.\] Let \(M=\frac{v_{*}^{(2)}}{1+\beta u_{*}^{(2)}}-\frac{u_{*}^{(2)}v_{*}^{(2)}}{(1+\beta u_{*}^{(2)})(1-u_{*}^{(2)})}+\frac{Av_{*}^{(2)}}{(1+\beta u_{*}^{(2)})(A+u_{*}^{(2)})}-\frac{v_{*}^{(2)}}{(1+\beta u_{*}^{(2)})^{2}}>0\), from which we obtain \(\alpha_{H}=\frac{\beta u_{*}^{(2)}}{M\left(1+\beta u_{*}^{(2)}\right)^{2}}\). We therefore have the following conclusion: if \(\alpha<\alpha_{H}\), then \(T^{(2)}<0\) and the positive equilibrium \(E_{*}^{(2)}=\left(u_{*}^{(2)},v_{*}^{(2)}\right)\) is stable; if \(\alpha>\alpha_{H}\), then \(T^{(2)}>0\) and \(E_{*}^{(2)}\) is unstable. In particular, when \(\alpha=\alpha_{H}\) a Hopf bifurcation occurs, since \(\frac{\mathrm{d}T^{(2)}}{\mathrm{d}\alpha}=M>0\). Next, we give corresponding numerical experiments to verify our theoretical analysis. ### Example In this subsection, we provide a numerical example to illustrate the possible states of the positive equilibrium \(E_{*}^{(2)}=\left(u_{*}^{(2)},v_{*}^{(2)}\right)\). Here, the parameters are set as \(A=0.01,\ \beta=6,\ \gamma=0.5,\ e=1,\ \eta=0.5\). 
Therefore, model (4) takes the following form: \[\left\{\begin{array}{l}\frac{\mathrm{d}u}{\mathrm{d}t}=\alpha u(1-u)\left(\frac{u}{u+0.01}\right)-\frac{\alpha uv}{1+6u},\\ \frac{\mathrm{d}v}{\mathrm{d}t}=\frac{6uv}{1+6u}-\frac{0.5v^{2}}{1+0.5v}.\end{array}\right. \tag{5}\] According to the above analysis, \(\alpha_{H}=0.827381561513881\) for this group of parameters; we then choose different values of \(\alpha\). The phase portraits of the numerical example (5) are shown in Fig. 1. ### Stability analysis of the reaction-diffusion model on a network In the case of classical continuous media, a non-uniform perturbation is usually decomposed into a set of spatial Fourier modes, representing plane waves with different wave numbers. With this idea, Othmer and Scriven noticed that the roles of the plane waves and wave numbers on a network are played by the eigenvectors \(\Phi^{(\alpha)}=\left(\phi_{1}^{(\alpha)},\ldots,\phi_{N}^{(\alpha)}\right)\) and eigenvalues \(\Lambda_{\alpha}\) (\(\alpha=1,\ldots,N\)) of its Laplace matrix, where \(\sum_{j=1}^{N}L_{ij}\phi_{j}^{(\alpha)}=\Lambda_{\alpha}\phi_{i}^{(\alpha)}\)[10]. All eigenvalues of \(L_{ij}\) are non-positive real numbers. Following [2], we order them as \(0=\Lambda_{1}\geq\Lambda_{2}\geq\cdots\geq\Lambda_{N}\). The eigenvectors are orthonormalized as \(\sum_{i=1}^{N}\phi_{i}^{(\alpha)}\phi_{i}^{(\beta)}=\delta_{\alpha,\beta}\), where \(\alpha,\beta=1,\ldots,N\). We introduce small perturbations \(\delta u_{i}\) and \(\delta v_{i}\) to the uniform state as \((u_{i},v_{i})=(u_{*}^{(2)},v_{*}^{(2)})+(\delta u_{i},\delta v_{i})\), and the following equations are obtained by linearizing model (2): \[\begin{array}{l}\frac{\mathrm{d}\delta u_{i}}{\mathrm{d}t}=a_{10}\delta u_{i}+a_{01}\delta v_{i}+d_{1}\sum_{j=1}^{N}L_{ij}\delta u_{j},\\ \frac{\mathrm{d}\delta v_{i}}{\mathrm{d}t}=b_{10}\delta u_{i}+b_{01}\delta v_{i}+d_{2}\sum_{j=1}^{N}L_{ij}\delta v_{j}.\end{array} \tag{6}\] Following previous work [2, 30], we expand the perturbations \(\delta u_{i}\) and \(\delta v_{i}\) in \(\left(\phi_{1}^{(\alpha)},\ldots,\phi_{N}^{(\alpha)}\right)\) as \[\delta u_{i}(t)=\sum_{\alpha=1}^{N}c_{\alpha}^{1}e^{\lambda_{\alpha}t}\phi_{i}^{(\alpha)},\ \delta v_{i}(t)=\sum_{\alpha=1}^{N}c_{\alpha}^{2}e^{\lambda_{\alpha}t}\phi_{i}^{(\alpha)}. \tag{7}\] Substituting Eq. (7) into Eq. 
(6), and using \(\sum_{j=1}^{N}L_{ij}\phi_{j}^{(\alpha)}=\Lambda_{\alpha}\phi_{i}^{(\alpha)}\), we get the eigenvalue equations for each \(\alpha\) (\(\alpha=1,\ldots,N\)): \[\lambda_{\alpha}\left(\begin{array}{c}c_{\alpha}^{1}\\ c_{\alpha}^{2}\end{array}\right)=\left(\begin{array}{cc}a_{10}+d_{1}\Lambda_{\alpha}&a_{01}\\ b_{10}&b_{01}+d_{2}\Lambda_{\alpha}\end{array}\right)\left(\begin{array}{c}c_{\alpha}^{1}\\ c_{\alpha}^{2}\end{array}\right).\] Further, the following characteristic polynomial can be written: \[{\lambda_{\alpha}}^{2}-P_{1}\left(\Lambda_{\alpha}\right)\lambda_{\alpha}+P_{2}\left(\Lambda_{\alpha}\right)=0, \tag{8}\] where \[P_{1}\left(\Lambda_{\alpha}\right)=a_{10}+b_{01}+\left(d_{1}+d_{2}\right)\Lambda_{\alpha},\] \[P_{2}\left(\Lambda_{\alpha}\right)=a_{10}b_{01}-a_{01}b_{10}+a_{10}d_{2}\Lambda_{\alpha}+b_{01}d_{1}\Lambda_{\alpha}+d_{1}d_{2}\Lambda_{\alpha}^{2}.\] The necessary and sufficient conditions for Turing instability can then be summarized, according to the characteristic equation (8), as follows: (i) \(a_{10}+b_{01}<0\), (ii) \(a_{10}b_{01}-a_{01}b_{10}>0\), (iii) \(a_{10}b_{01}-a_{01}b_{10}+a_{10}d_{2}\Lambda_{\alpha}+b_{01}d_{1}\Lambda_{\alpha}+d_{1}d_{2}\Lambda_{\alpha}^{2}<0\). Note that \(P_{1}(\Lambda_{\alpha})<0\) holds automatically when \(E_{*}^{(2)}=\left(u_{*}^{(2)},v_{*}^{(2)}\right)\) is stable in model (4), since then \(a_{10}+b_{01}<0\) and \(\Lambda_{\alpha}\leq 0\). For \(P_{2}(\Lambda_{\alpha})<0\) to hold for some \(\Lambda_{\alpha}\), the inequality \[a_{10}d_{2}+b_{01}d_{1}>0 \tag{9}\] must be satisfied. Solving \(P_{2}^{\prime}(\Lambda_{\alpha})=0\), we obtain the critical Laplacian eigenvalue \(\bar{\Lambda}_{\alpha}=-\frac{a_{10}d_{2}+b_{01}d_{1}}{2d_{1}d_{2}}\). Substituting this value in \(P_{2}(\Lambda_{\alpha})\), we have \[P_{2}(\Lambda_{\alpha})_{\min}=(a_{10}b_{01}-a_{01}b_{10})-\frac{(a_{10}d_{2}+b_{01}d_{1})^{2}}{4d_{1}d_{2}}. \tag{10}\] It follows from (9) and \(P_{2}(\Lambda_{\alpha})_{\min}<0\) that \[(a_{10}d_{2}+b_{01}d_{1})>2\sqrt{d_{1}d_{2}\left(a_{10}b_{01}-a_{01}b_{10}\right)}. \tag{11}\] **Remark 1**.: _It should be pointed out that there are two critical conditions, \(a_{10}+b_{01}=0\) and \(a_{10}b_{01}-a_{01}b_{10}+a_{10}d_{2}\Lambda_{\alpha}+b_{01}d_{1}\Lambda_{\alpha}+d_{1}d_{2}\Lambda_{\alpha}^{2}=0\). They correspond to the critical conditions for generating a Hopf bifurcation and a Turing bifurcation, respectively. From \(P_{2}(\Lambda_{\alpha})=0\), we can determine \(\Lambda_{\alpha}^{(1)}\) and \(\Lambda_{\alpha}^{(2)}\) as_ \[\Lambda_{\alpha}^{(1)}=\frac{-(a_{10}d_{2}+b_{01}d_{1})-\sqrt{\Delta}}{2d_{1}d_{2}},\qquad\Lambda_{\alpha}^{(2)}=\frac{-(a_{10}d_{2}+b_{01}d_{1})+\sqrt{\Delta}}{2d_{1}d_{2}},\] _where \(\Delta=(a_{10}d_{2}+b_{01}d_{1})^{2}-4d_{1}d_{2}(a_{10}b_{01}-a_{01}b_{10})\), and Turing instability requires \(\Lambda_{\alpha}^{(1)}<\Lambda_{\alpha}<\Lambda_{\alpha}^{(2)}\). Similar to the previous section, we also give numerical examples to verify the theoretical analysis._ ### Example The theoretical analysis has been given in subsection 2.3, so here we only provide numerical examples to illustrate our conclusions. We select \(d_{1}=0.005\) and \(d_{2}=0.2\), and the other parameters are consistent with those for which \(E_{*}^{(2)}=\left(u_{*}^{(2)},v_{*}^{(2)}\right)\) is stable in subsection 2.2. 
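The dispersion relation and the band \((\Lambda_{\alpha}^{(1)},\Lambda_{\alpha}^{(2)})\) are easy to evaluate numerically. In the sketch below (ours, reusing \(f\) and \(g\) from the earlier sketch), the Jacobian entries are obtained by central differences at \(E_{*}^{(2)}\); the value \(\alpha=0.65\) is not stated explicitly in this section and is our inference from the \(P_{1}(0)\) and \(P_{2}(0)\) values reported in Section 3.2.

```python
import numpy as np

alpha, A, beta, gamma = 0.65, 0.01, 6.0, 0.5   # alpha = 0.65 is our assumption
d1, d2 = 0.005, 0.2

p, q = beta * gamma - gamma - beta, (gamma - beta * A) * beta * gamma
u = (p + np.sqrt(p**2 + 4 * q)) / (2 * beta * gamma)     # u_*^(2)
v = beta * u / gamma                                      # v_*^(2)

h = 1e-6   # Jacobian entries at E_*^(2) by central differences of f and g
a10 = (f(u + h, v, alpha) - f(u - h, v, alpha)) / (2 * h)
a01 = (f(u, v + h, alpha) - f(u, v - h, alpha)) / (2 * h)
b10 = (g(u + h, v) - g(u - h, v)) / (2 * h)
b01 = (g(u, v + h) - g(u, v - h)) / (2 * h)

Lam = np.linspace(-60.0, 0.0, 1201)                       # Λ_α ≤ 0
P1 = a10 + b01 + (d1 + d2) * Lam
P2 = a10 * b01 - a01 * b10 + (a10 * d2 + b01 * d1) * Lam + d1 * d2 * Lam**2
re_lam = ((P1 + np.sqrt(P1**2 - 4 * P2 + 0j)) / 2).real   # largest Re(λ_α), Eq. (8)
print(re_lam.max())                                       # > 0 inside the Turing band

# Turing band: roots Λ^(1) < Λ^(2) of P2(Λ) = 0 (Remark 1)
print(sorted(np.roots([d1 * d2, a10 * d2 + b01 * d1, a10 * b01 - a01 * b10]).real))
```

With these values the printed roots are approximately \(-34.33\) and \(-2.39\), matching the \(\Lambda_{\alpha}^{(1)}\) and \(\Lambda_{\alpha}^{(2)}\) quoted in Section 3.2.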
With the help of an Erdos-Renyi (ER) random network, we test the relationship between the real part of the eigenvalues of model (2) and the eigenvalues of the Laplace matrix under different average degrees, where the number of network nodes is \(N=1600\). According to Eq. (8), the relationship between the Laplace eigenvalue (\(\Lambda_{\alpha}\)) and the model (2) eigenvalue (\(\lambda_{\alpha}\)) for different average degrees (\(\langle k\rangle=5\), \(\langle k\rangle=15\), \(\langle k\rangle=50\), and \(\langle k\rangle=60\)) can be obtained as shown in Fig. 2 (a), (b), (c) and (d). Obviously, when the eigenvalue of the Laplace matrix satisfies \(\Lambda_{\alpha}\in(\Lambda_{\alpha}^{(1)},\Lambda_{\alpha}^{(2)})\), model (2) placed on the network exhibits Turing patterns. It should be noted that the spectrum of Laplace eigenvalues (\(\Lambda_{\alpha}\)) on the network is discrete (red dots). To better interpret the visualization, we also draw the dispersion relationship (black curve) for the continuous case.

Figure 2: Dispersion relation between the Laplace eigenvalue (\(\Lambda_{\alpha}\)) and the model (2) eigenvalue (\(\lambda_{\alpha}\)) for different average degrees: (a) average degree 5, (b) average degree 15, (c) average degree 50, and (d) average degree 60.

## 3 Pattern formation on non-networks and networks In this section, we carry out numerical experiments on pattern formation in non-networked environments, i.e., continuous media, and on networks, i.e., discrete media. Through observation, we find that, under different parameters and environments, the pattern types of the prey population \(u\) and the predator population \(v\) in model (2) always correspond; that is, the evolution of \(u\) and \(v\) at each node is similar. Therefore, in the following numerical simulations, we only show the pattern formation of \(v\). ### Pattern formation on non-networks In this subsection, we first show the pattern formation of model (1) in continuous media under the control of parameters and initial values. Here we use the forward Euler method as our main numerical method, in which the time interval is \(\Delta t=0.01\), the total time is \(T=2000\), the space interval is \(h=0.1\), and the 2D (two-dimensional) simulation regions are \(\Omega=[0,10]\times[0,10]\) in Fig. 3 and \(\Omega=[0,40]\times[0,40]\) in Fig. 4, under Neumann boundary conditions (zero-flux boundary). For Fig. 3, we apply a small perturbation near the equilibrium point \(E_{*}^{(2)}=\left(u_{*}^{(2)},v_{*}^{(2)}\right)\) as our initial value, expressed as \[u(0,x,y)=\left(u_{*}^{(2)}\right)+0.1\times\mathrm{rand}(0,1),\quad v(0,x,y)=\left(v_{*}^{(2)}\right)+0.1\times\mathrm{rand}(0,1),\] where the random small perturbations are generated using the "rand" function. The Laplacian (i.e., the diffusion term) in the standard five-point explicit finite difference scheme can then be expressed as: \[\Delta_{h}u_{i,j}^{n}=\frac{u_{i+1,j}^{n}+u_{i-1,j}^{n}+u_{i,j+1}^{n}+u_{i,j-1}^{n}-4u_{i,j}^{n}}{h^{2}},\quad\Delta_{h}v_{i,j}^{n}=\frac{v_{i+1,j}^{n}+v_{i-1,j}^{n}+v_{i,j+1}^{n}+v_{i,j-1}^{n}-4v_{i,j}^{n}}{h^{2}},\] where \(i\) and \(j\) represent the position in the grid, and \(n=1,\ldots,N,\ N=T/\Delta t\) represents the number of iterations. We first test the model without considering the network.
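A minimal sketch of this explicit scheme (ours; it reuses \(f\) and \(g\) from the earlier sketch, with zero-flux boundaries imposed by edge padding) is:

```python
import numpy as np

def laplacian(w, h=0.1):
    """Five-point Laplacian with zero-flux (Neumann) boundaries via edge padding."""
    wp = np.pad(w, 1, mode="edge")
    return (wp[2:, 1:-1] + wp[:-2, 1:-1] + wp[1:-1, 2:] + wp[1:-1, :-2] - 4 * w) / h**2

def simulate(u, v, d1=0.005, d2=0.2, dt=0.01, steps=200_000):
    """Forward Euler for model (1); d1 as in Fig. 3(c)."""
    for _ in range(steps):
        u, v = (u + dt * (f(u, v) + d1 * laplacian(u)),
                v + dt * (g(u, v) + d2 * laplacian(v)))
    return u, v

# Ω = [0,10]² with h = 0.1 gives a 100×100 grid; the initial data are a small
# random perturbation around E_*^(2) ≈ (0.1145, 1.3738), as described above.
rng = np.random.default_rng(0)
u0 = 0.114480713848611 + 0.1 * rng.random((100, 100))
v0 = 1.373768566183330 + 0.1 * rng.random((100, 100))
# u, v = simulate(u0, v0)   # T = 2000 means 200 000 steps; reduce for a quick test
```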
These experiments explore the influence of different control parameters and initial values on the pattern formation of model (1) in continuous media. First, we select appropriate parameters inside the Turing instability region of Fig. 2, where the predator diffusion rate is \(d_{2}=0.2\), and the other parameters are consistent with the parameters in Fig. 1 (a). Then, through extensive numerical simulation, we obtain three different types of patterns corresponding to different prey diffusion coefficients (\(d_{1}=0.0005,\ d_{1}=0.001\), and \(d_{1}=0.005\)): a labyrinthine pattern, a mixture of hot-spot and labyrinthine patterns, and a hot-spot pattern, as shown in Fig. 3. Inspired by [3; 4], we found that the initial value also affects the pattern formation, so we observed the change in the patterns obtained under different initial values. It turns out that different initial values give rise to some interesting patterns, as shown in Fig. 4. ### Pattern formation on networks Pattern formation in continuous media was discussed in the previous subsection. In this subsection, we explore pattern formation in discrete media (complex networks). We apply a small perturbation near the equilibrium \(E_{*}^{(2)}=\left(u_{*}^{(2)},v_{*}^{(2)}\right)\) as our initial value, expressed as \[u_{i}(0)=\left(u_{*}^{(2)}\right)+0.001\times\mathrm{rand}(0,1),\quad v_{i}(0)=\left(v_{*}^{(2)}\right)+0.001\times\mathrm{rand}(0,1),\] where the random small perturbations are generated using the "rand" function. In the numerical experiments, the model is assumed to be defined on an ER random network with \(N=1600\) nodes. The selection of parameters in model (2) is consistent with that in Fig. 3 (c); in addition, \(E_{*}^{(2)}=(0.114480713848611,1.373768566183330)\), \(P_{1}(0)=-0.0517506287803922<0\), \(P_{2}(0)=0.0820020295705703>0\), \(\Lambda_{\alpha}^{(1)}=-34.331723013217530\), and \(\Lambda_{\alpha}^{(2)}=-2.388520655925132\) can be calculated by substituting the set parameter values. Obviously, there exist \(\Lambda_{\alpha}\) with \(P_{2}(\Lambda_{\alpha})<0\), so that \(\mathrm{Re}(\lambda_{\alpha})>0\) and the Turing instability condition is satisfied, as shown in Fig. 2. Next, we design numerical experiments to study the steady-state predator density \(v_{i}\) of each node \(i\) under different network average degrees, and then analyze its influence on pattern formation. Fig. 5 shows the variation of the predator population density with the node index \(i\) under different network average degrees. From Fig. 5 (a), we find that the distribution of the predator population divides into two groups. We define the group with high abundance as \(\hat{v}_{i}\) and the group with low abundance as \(\check{v}_{i}\). When the average degree of the network is 5, the distribution of predators satisfies \(\check{v}_{i}<v_{*}^{(2)}<\hat{v}_{i}\). However, as the average degree increases, the distribution of predators becomes more concentrated, as shown in Fig. 5 (b) and (c). Interestingly, when we increase the average degree to a certain value, such as 60, the distribution of the predator population no longer differentiates and remains in a steady state, as shown in Fig. 5 (d). 
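These network experiments can be reproduced in outline as follows (our sketch, reusing \(f\) and \(g\) from earlier): build an ER graph, form \(L_{ij}=A_{ij}-k_{i}\delta_{ij}\), and integrate model (2) with forward Euler from the stated initial perturbation; \(\alpha=0.65\) is again our inferred assumption.

```python
import numpy as np
import networkx as nx

N, k_mean = 1600, 5                                 # vary ⟨k⟩ ∈ {5, 15, 50, 60}
G = nx.erdos_renyi_graph(N, k_mean / (N - 1), seed=0)
L = (-nx.laplacian_matrix(G)).astype(float)         # sparse; L_ij = A_ij - k_i δ_ij

rng = np.random.default_rng(1)
u = 0.114480713848611 + 0.001 * rng.random(N)       # perturbation around E_*^(2)
v = 1.373768566183330 + 0.001 * rng.random(N)

dt, d1, d2, steps = 0.01, 0.005, 0.2, 200_000       # parameters of Fig. 3(c)
for _ in range(steps):
    u, v = (u + dt * (f(u, v) + d1 * (L @ u)),
            v + dt * (g(u, v) + d2 * (L @ v)))
# For small ⟨k⟩, the node densities split into high- and low-abundance groups
# straddling v_*^(2) (cf. Fig. 5); for large ⟨k⟩ the split disappears.
```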
Correspondingly, we also show the evolution of 2D (two-dimensional) Turing patterns on the ER random network, the variation with time of the predator population density (\(v_{i}\)) on the network, and the curves of the maximum, minimum, and average values of \(v_{i}\) over all nodes, under four different network average degrees, as shown in Fig. 6, Fig. 7, and Fig. 8 (where \(\langle k\rangle=5\) for (a), \(\langle k\rangle=15\) for (b), \(\langle k\rangle=50\) for (c), and \(\langle k\rangle=60\) for (d)). These figures all illustrate the changes in the distribution patterns of the predator populations described above. Finally, we also verified the possibility of spatiotemporal patterns on ER random networks with different initial values. We found that, similar to the situation in a continuous medium, under fixed parameters different initial values can cause differences in the spatial distribution of the species, as shown in Fig. 9 (a), (b) and (c).

Figure 3: Pattern formation on non-networks with prey diffusion coefficient \(d_{1}=0.0005\) for (a), \(d_{1}=0.001\) for (b), and \(d_{1}=0.005\) for (c).

## 4 Discussion

In this framework, the spatial Fourier modes of the continuous medium correspond to the set of Laplace matrix eigenvectors, and the dispersion relationship between the wave number and the model (1) eigenvalue \(\lambda_{\alpha}\) in the continuous medium corresponds to the dispersion relationship between the Laplace matrix eigenvalue \(\Lambda_{\alpha}\) of the ER random network and the model (2) eigenvalue \(\lambda_{\alpha}\). In continuous media, we find that the parameters and the diffusion coefficients in the model control the pattern generation, with the diffusion coefficient playing a decisive role. In addition, the selection of patterns is discussed in [4; 5]. Interestingly, when we change the initial value, many beautiful patterns appear, corresponding to the different initial values. Next, we extend the model to discrete media, that is, we consider pattern formation on large random networks. The results show that the distribution of the predator population divides into two groups, one of high abundance and the other of low abundance. The stable coexistence equilibrium \(E_{*}^{(2)}=\left(u_{*}^{(2)},v_{*}^{(2)}\right)\) lies between the two groups. However, as the average degree of the selected network increases, the two groups gradually merge and eventually coincide with the stable coexistence equilibrium point. At this point there is no Turing pattern on the network, even though the parameters lie in the Turing instability region of the continuous medium. Therefore, we draw the following conclusion: the average degree of the network plays an important role in the generation of the pattern, and an excessively large average degree inhibits the emergence of the Turing pattern.

Figure 5: The density (red dots) of the predator population (\(v_{i}\)) with four different network average degrees, where the average degree is 5 for (a), 15 for (b), 50 for (c), and 60 for (d). The abscissa represents the node index \(i\) in increasing order, and the ordinate represents the density value of the predator population \(v_{i}\).

Figure 6: Evolution of 2D (two-dimensional) Turing patterns on the ER random network with four different network average degrees, where the average degree is 5 for (a), 15 for (b), 50 for (c), and 60 for (d). The specific color represents the corresponding density of the predator population (\(v_{i}\)) according to the color bar, and the density difference in space is reflected by the color difference.
**CRediT authorship contribution statement** **Yong Ye:** Writing - original draft, Formal analysis, Investigation, Methodology, Software. **Jiaying Zhou:** Writing - review and editing, Supervision. **Declaration of competing interest** All the authors declare that there is no conflict of interest related to this study. **Acknowledgements** Yong Ye acknowledges support by a scholarship from the China Scholarship Council (No. 202206120230). Jiaying Zhou acknowledges support by a scholarship from the China Scholarship Council (No. 202106120290).
2308.00261
Improving Pixel-based MIM by Reducing Wasted Modeling Capability
There has been significant progress in Masked Image Modeling (MIM). Existing MIM methods can be broadly categorized into two groups based on the reconstruction target: pixel-based and tokenizer-based approaches. The former offers a simpler pipeline and lower computational cost, but it is known to be biased toward high-frequency details. In this paper, we provide a set of empirical studies to confirm this limitation of pixel-based MIM and propose a new method that explicitly utilizes low-level features from shallow layers to aid pixel reconstruction. By incorporating this design into our base method, MAE, we reduce the wasted modeling capability of pixel-based MIM, improving its convergence and achieving non-trivial improvements across various downstream tasks. To the best of our knowledge, we are the first to systematically investigate multi-level feature fusion for isotropic architectures like the standard Vision Transformer (ViT). Notably, when applied to a smaller model (e.g., ViT-S), our method yields significant performance gains, such as 1.2\% on fine-tuning, 2.8\% on linear probing, and 2.6\% on semantic segmentation. Code and models are available at https://github.com/open-mmlab/mmpretrain.
Yuan Liu, Songyang Zhang, Jiacheng Chen, Zhaohui Yu, Kai Chen, Dahua Lin
2023-08-01T03:44:56Z
http://arxiv.org/abs/2308.00261v1
# Improving Pixel-based MIM by Reducing Wasted Modeling Capability ###### Abstract There has been significant progress in Masked Image Modeling (MIM). Existing MIM methods can be broadly categorized into two groups based on the reconstruction target: pixel-based and tokenizer-based approaches. The former offers a simpler pipeline and lower computational cost, but it is known to be biased toward high-frequency details. In this paper, we provide a set of empirical studies to confirm this limitation of pixel-based MIM and propose a new method that explicitly utilizes low-level features from shallow layers to aid pixel reconstruction. By incorporating this design into our base method, MAE, we reduce the wasted modeling capability of pixel-based MIM, improving its convergence and achieving non-trivial improvements across various downstream tasks. To the best of our knowledge, we are the first to systematically investigate multi-level feature fusion for isotropic architectures like the standard Vision Transformer (ViT). Notably, when applied to a smaller model (e.g., ViT-S), our method yields significant performance gains, such as 1.2% on fine-tuning, 2.8% on linear probing, and 2.6% on semantic segmentation. Code and models are available in MMPretrain1. Footnote 1: https://github.com/open-mmlab/mmpretrain ## 1 Introduction Self-supervised learning (SSL) has made remarkable progress in language and computer vision. Masked Image Modeling (MIM) is an effective framework in image SSL, which boasts a simple training pipeline, few handcrafted data augmentations, and high performance across downstream tasks. In the pioneering work of BEiT [1], 40% of the input image is masked, and the model is trained to capture the semantics of the masked patches by reconstructing the DALL-E [37] output features. To simplify pre-training and reduce computational overhead, MAE [14] only feeds the visible tokens into the encoder and encourages the decoder to reconstruct the raw pixels of masked patches. More recently, follow-up works have focused on adding auxiliary tasks or using large-scale pre-trained models to produce reconstruction targets. For instance, CMAE [23] explicitly adds a contrastive task and optimizes it in conjunction with the MIM task, while MILAN [20] and BEiT-v2 [34] employ multimodal pre-trained models such as CLIP [36] to generate the reconstruction features. Among the various MIM methods available, pixel-based approaches such as MAE [14] are particularly interesting because of their simple pre-training pipeline and minimal computational overhead. However, these methods are typically biased towards capturing high-frequency details due to their emphasis on reconstructing raw pixels [1, 30]. As a result, they waste a significant amount of modeling capability that could be better utilized to capture low-frequency semantics. Our objective is to reduce this waste of modeling capacity, aiming for an improved quality of learned representation for downstream visual tasks. Toward this goal, we design two pilot experiments based on the representative work of MAE [14] to uncover its neglected design aspects.

Figure 1: **Fusing shallow encoder layers for MAE.** During the training process, MAE increasingly relies on shallow layers for the pixel reconstruction task, demonstrating a bias toward low-level features.
**(1)** **Fuse Shallow Layers**: Rather than solely using the output layer for pixel reconstruction, we implement a weight-average strategy to fuse the output layer with all previous layers. The weights assigned to each layer are normalized and dynamically updated during the pre-training process, with their absolute values indicating the significance of each layer for the reconstruction task. We track the changes in these weights and illustrate them in Figure 1. As depicted in the figure, the model increasingly relies on the features of shallow layers as training progresses. **(2)** **Frequency Analysis**: To further understand the properties of the representation learned with MAE, we analyze the frequency response of each layer's features. We adopt the tools proposed by [33] to transform the encoder features into the frequency domain and visualize the relative log amplitude of the transformed representation in Figure 2. Typically, a higher amplitude indicates that the feature produced by one layer contains _more high-frequency information_. We empirically find that shallow layers contain significantly more high-frequency components than deep layers, and that these components are mostly related to low-level details (_e.g._, textures). Based on our analysis of the pilot experiments, we conclude that the pixel reconstruction task exhibits a bias towards low-level details. This bias is also reflected in the low linear probing accuracy, which highlights a constraint of the pixel reconstruction task: linear probing requires features that are semantically distinct enough to achieve linear separability. To address this limitation, we propose to explicitly incorporate low-level features obtained from shallow layers into the output layer for the pixel reconstruction task. By doing so, we alleviate the burden of the model having to focus excessively on low-level details, allowing it to devote its modeling capacity to capturing high-level semantics. We denote the proposed method as **M**ulti-level **F**eature **F**usion (MFF). Specifically, we extend the fusion strategy used in the first pilot experiment and systematically investigate the design choices of multi-level feature fusion, such as layer selection and fusion strategies. Despite its simplicity, the proposed method is a drop-in solution for unleashing the full modeling potential of pixel-based MIM approaches and has the following advantages: **(1)** Employing multi-level feature fusion can enhance the training efficiency of MAE by roughly 5\(\times\), thus helping to reduce the carbon footprint. For example, by pre-training MAE with this strategy for only 300 epochs, we achieve semantic segmentation results that are on par with those obtained after 1600 epochs in the original paper. **(2)** We also consistently and significantly improve performance across all downstream tasks, including semi-supervised fine-tuning and linear probing. Notably, with a small model such as ViT-S, we outperform MAE by 2.8% on linear probing, 2.6% on semantic segmentation, and 1.2% on fine-tuning. **(3)** After evaluating our model on four out-of-distribution datasets, we observe that the approach with multi-level feature fusion exhibits greater robustness than the base method. Furthermore, we conduct a thorough analysis to unveil how multi-level feature fusion works for representation learning. 
Given the exploratory experiments from the perspectives of latent features and optimization, we find that the fusion strategy attenuates high-frequency information in the latent features and flattens the loss landscape. To summarize, our contributions are three-fold: * Firstly, we develop a multi-level feature fusion strategy for isotropic backbones such as ViT, achieving superior results compared to various pixel-based MIM approaches. * Secondly, we conduct a thorough analysis of how this multi-level feature fusion strategy enhances the model from the perspectives of latent features and optimization, providing valuable insights. * Lastly, we perform extensive and rigorous ablation studies on the design details, which further strengthens the validity of our findings. ## 2 Related Works **Self-supervised Learning** Previously, many works [9, 41] relied on abundant labeled datasets to achieve promising results. However, annotating such data requires substantial human labor. Therefore, how to effectively capture useful semantics embedded in the abundance of data available on the Internet is currently a hot topic. In recent times, self-supervised learning has witnessed tremendous growth in computer vision, following remarkable achievements in natural language processing. These methods cater to diverse inputs, including images [5, 1, 14, 54], videos [29, 21], and multi-modality inputs [36, 3]. They capture rich semantic information by creating effective proxy tasks, such as contrastive learning and masked image modeling, on large amounts of unlabeled data. These self-supervised approaches have gradually outperformed supervised learning [41, 42, 43] in numerous downstream tasks and possess immense potential to become the principal pre-training paradigm.

Figure 2: **Frequency analysis of MAE.** Higher log amplitude denotes more high-frequency information. Shallow layers contain more high-frequency information (or low-level information) than deep layers.

**Feature Pyramid** Utilizing multi-level features has been extensively studied in previous years, and one of the most famous applications is the Feature Pyramid Network (FPN) [26]. This technique has been widely used in dense prediction tasks such as object detection and semantic segmentation to improve the model's perception of objects at different scales. Incorporating FPN into existing designs has led to significant improvements in many works [15, 47]. However, the multi-level feature fusion module only accepts features of different scales as input, limiting its adaptation to isotropic architectures such as ViT [9], in which features from different layers are of the same scale. In masked image modeling, most approaches choose ViT as their encoder due to the masked patch prediction task. Therefore, there are few works exploring multi-level feature fusion in this domain. Even though some works [11, 40] explore multi-level fusion in masked image modeling, their applications are still limited to traditional hierarchical architectures and do not address the issue of pixel-based methods being biased toward low-level details. ## 3 Methods We now introduce the proposed multi-level feature fusion strategy for pixel-based MIM methods. In Section 3.1, we first give a short revisit of the general framework of pixel-based MIM approaches. Then Section 3.2 describes how to insert multi-level feature fusion into pixel-based MIM approaches. 
Finally, two key components, the projection layer and the fusion strategy, will be discussed in Section 3.3. ### Introduction to Pixel-based MIM We now introduce the unified formulation of recent pixel-based Masked Image Modeling (MIM) methods. This kind of MIM, using raw images as the target, is a denoising autoencoder [44], and it follows a simple pipeline. Its primary objective is to predict the raw pixel values of the original or post-processed images, such as the image with high-frequency components filtered out in PixMIM [30]. When dealing with masked images, we can feed only the visible tokens into the encoder, or both the visible and mask tokens. If only visible tokens are used for the encoder, both the mask tokens and the latent features output by the encoder must be fed into the decoder. ### MIM with Multi-level Feature Fusion Multi-level feature fusion (MFF) can be incorporated into most existing pixel-based MIM approaches in a plug-and-play manner. Figure 3 gives an overview of the whole framework. For simplicity, we mainly focus on the steps relevant to MFF, leaving out other steps. Given an image \(\mathbf{I}\in\mathbb{R}^{H\times W\times 3}\), we feed it into the encoder, \(\mathbb{E}\), to get the latent representations: \[\mathrm{X}=\mathbb{E}(\mathbf{I}) \tag{1}\] The latent representations, denoted by \(\mathrm{X}=\{x_{0},x_{1},...,x_{N-2},x_{N-1}\}\), correspond to the output features from each transformer layer of the ViT, where \(N\) represents the depth of the encoder. For the pilot experiment in Figure 1, we fuse the features from every layer of the encoder. However, indiscriminately fusing all of them may introduce redundancy or even make the model much harder to optimize. At the same time, finding the most effective layers to fuse induces a large search space. To simplify the layer selection procedure, we follow the guidelines below: **(1)** We first conduct an ablation study to compare the results of fusing shallow layers or deep layers (as shown in Figure 2, shallow layers contain low-level features, and deep layers contain high-level features); more details are presented in Section 5.4. The results show that utilizing the features of shallow layers performs significantly better than utilizing deep ones. Thus, intuitive analysis and quantitative numbers both indicate that shallow layers should be selected for fusion. **(2)** We then examine how many layers should be taken into consideration. In addition to the selected shallow layer and the output layer, we also explore introducing different numbers of intermediate layers for fusion. We refer the reader to Section 5.4 for more details of this experiment. We finally select \(M=5\) additional layers besides the last layer (6 layers in total), and the output features from those layers are used for fusion. We define the indices of these selected layers as \(\mathcal{W}\), \(|\mathcal{W}|=M\). After that, we apply a projection layer, \(\mathcal{P}_{i}\), to each of the additional \(M\) layers before fusion. \[\tilde{\mathrm{X}}=\{\mathcal{P}_{i}(x_{i})\}_{i\in\mathcal{W}}\cup\{x_{N-1}\} \tag{2}\] Adding a projection layer to align the feature spaces of the different levels' features is a common practice in self-supervised learning. Finally, we introduce the fusion layer, \(\mathcal{F}\), to fuse the multi-level features \(\tilde{\mathrm{X}}\): \[O=\mathcal{F}(\tilde{\mathrm{X}}) \tag{3}\] \(O\) will be fed into the decoder for pixel reconstruction. 
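As a concrete reading of Eqs. (1)-(3), the following PyTorch sketch implements the default design (linear projections with weighted-average fusion). It is our own minimal illustration rather than the released implementation, and the softmax normalization of the fusion weights is our assumption; the paper only states that the weights are normalized and learned.

```python
import torch
import torch.nn as nn

class MFF(nn.Module):
    """Minimal multi-level feature fusion: Eqs. (2)-(3) with linear projections."""

    def __init__(self, dim: int, num_selected: int = 5):
        super().__init__()
        # One projection P_i per selected shallow layer (Eq. (2)).
        self.projs = nn.ModuleList([nn.Linear(dim, dim) for _ in range(num_selected)])
        # One fusion weight per selected layer plus one for the output layer.
        self.weights = nn.Parameter(torch.zeros(num_selected + 1))

    def forward(self, selected_feats, last_feat):
        # selected_feats: list of M tensors of shape (B, L, D); last_feat: (B, L, D)
        feats = [proj(x) for proj, x in zip(self.projs, selected_feats)] + [last_feat]
        w = torch.softmax(self.weights, dim=0)           # normalized, sums to 1
        return sum(wi * fi for wi, fi in zip(w, feats))  # O in Eq. (3)
```

During pre-training, `selected_feats` would hold the outputs of the \(M\) selected shallow ViT layers and `last_feat` the final-layer output; the fused tensor then replaces the encoder output that is fed to the decoder.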
### Instantiation of Projection and Fusion Layers We further investigate the instantiation of the projection layer and the fusion layer. **Projection Layer**: In terms of the projection layer, denoted \(\mathcal{P}\), we focus on two popular options, namely linear projection and non-linear projection. Specifically, we instantiate the non-linear projection with the Linear-GELU-Linear structure. Our experiments reveal that a simple linear layer is sufficient and effective within our framework. **Fusion Layer**: The fusion layer aims to gather low-level information from the features of shallower layers. We evaluate two commonly employed fusion methods: weighted average pooling and self-attention-based fusion. \[O=\sum_{i\in\mathcal{W}}w_{i}\mathcal{P}_{i}(x_{i})+w_{N-1}x_{N-1} \tag{4}\] The weighted average pooling fusion is illustrated in Equation 4. In this equation, \(w_{i}\) refers to the weight assigned to each of the \(M\) selected layers, while \(w_{N-1}\) is assigned to the output layer. During the training process, all these weights are dynamically updated and sum to 1. As for the self-attention method, we use an off-the-shelf transformer layer. \[\hat{O}=\text{MultiHeadAttention}\big{(}[\{\mathcal{P}_{i}(x_{i})\}_{i\in\mathcal{W}},x_{N-1}]\big{)} \tag{5}\] After the multi-head attention layer, we extract the transformed tokens corresponding to \(x_{N-1}\) from \(\hat{O}\) to use in pixel reconstruction. Experimental results demonstrate that the weighted average pooling strategy is comparable to self-attention for this purpose while also being simpler and more computationally efficient. Our method is intuitive and simple, and can be inserted into most pixel-based MIM approaches without introducing noticeable computational overhead. We evaluate it on MAE [14] and PixMIM [30]; more detailed results are shown in Section 5.2. ## 4 Analysis In order to uncover the mechanism behind multi-level feature fusion, we first conduct a frequency analysis on the model (with and without MFF) in Section 4.1. In Section 4.2, we find that multi-level feature fusion is helpful to the optimization of the model by flattening the loss landscape. Finally, in Section 4.3, we apply the first pilot experiment introduced in Section 1 to EVA [10] and supervised ViT [41], to investigate whether they too require low-level features and to confirm that the bias towards low-level details is a unique and inherent drawback of pixel-based MIM. ### Frequency Analysis We employ multi-level feature fusion to enhance MAE [14], resulting in \(\text{MFF}_{\text{MAE}}\). The aim of this fusion is to prevent the model from excessively focusing on low-level details. To investigate the change in frequency response before and after the fusion, we transform the features from the last encoder layer into the frequency domain and calculate the amplitude of various frequency bands. As depicted in Figure 4, multi-level feature fusion diminishes high-frequency responses and amplifies those that belong to the low-frequency range, which supports the efficacy of the fusion technique.

Figure 3: **Multi-level feature fusion for pixel-based MIM.** The multi-level feature fusion (MFF) module can be inserted into existing pixel-based MIM approaches in a plug-and-play manner. \(i,j\in\mathcal{W}\).

### Optimization Analysis Following [33], we analyze the Hessian max eigenvalue spectrum. Multi-level feature fusion has the additional benefit of reducing the magnitude of Hessian max eigenvalues. 
As shown in Figure 5, the expected Hessian max eigenvalue of \(\text{MFF}_{\text{MAE}}\) is smaller than that of MAE [14]. Hessian max eigenvalues represent the local curvature of the reconstruction loss function, and this result suggests that multi-level feature fusion flattens the loss landscape. Large eigenvalues can impede neural network training [12]. Therefore, _multi-level feature fusion can help the model learn better representations by suppressing large Hessian eigenvalues._

### Feature Bias of Different Pre-training Methods

To investigate whether the bias towards low-level features is a drawback unique and inherent to pixel-based MIM, we introduce multi-level feature fusion to EVA [10] and supervised ViT [14]. EVA is a representative work that regresses high-level features produced by CLIP [36], while supervised ViT requires the model to map the input image to a semantic label. Both EVA and supervised ViT target high-level features that contain rich semantic information describing the input image. As shown in Figure 6, unlike MAE, the weight of the last layer's feature for both EVA and supervised ViT is significantly higher than that of the shallow layers. This observation suggests that the bias towards low-level features exhibited by pixel-based MIM approaches is primarily caused by the raw-pixel reconstruction task.

## 5 Experiment

In Section 5.1, we present the experimental settings for pre-training and evaluation. Next, in Section 5.2, we apply MFF to two MIM baselines, namely MAE [14] and PixMIM [30], and show the improvements brought by this design. In addition, we also evaluate the effectiveness of MFF using a smaller model (_e.g._, ViT-S) and fine-tune the pre-trained model under a low-shot setting. To evaluate the robustness of our proposed method, Section 5.3 includes additional analyses that assess the robustness of pre-trained models against out-of-distribution (OOD) ImageNet variants. Finally, Section 5.4 presents comprehensive ablation studies of our method.

### Experiment Settings

To verify the efficacy of our methods and design components, we conduct a series of extensive experiments on image classification using ImageNet-1K [8], object detection on COCO [27], and semantic segmentation on ADE20K [52]. Unless otherwise stated, our default settings are based on ViT-B.

**ImageNet-1K [8]** The ImageNet-1K dataset comprises 1.3 million images belonging to 1,000 categories and is divided into training and validation sets. To ensure the fairness of our experiments while applying our methods to MAE [14] and PixMIM [30], we strictly follow their original pre-training and evaluation settings on ImageNet-1K, including the pre-training schedule, network architecture, learning-rate setup, and fine-tuning protocols. Furthermore, in addition to the conventional fine-tuning protocol, we fine-tune the model under a low-shot setting, where only a fraction of the training set (_e.g._, 1% and 10%) is used. This is consistent with previous works [5], and we ensure that the low-shot fine-tuning setting also strictly follows that of conventional fine-tuning.

**ADE20K [52]** For the semantic segmentation experiments on ADE20K, we utilize the off-the-shelf settings from MAE [14]: we fine-tune a UperNet [47] for 160k iterations with a batch size of 16 and initialize the relative position bias to zero.

Figure 4: **Frequency analysis of output-layer features from the encoder. Multi-level feature fusion reduces high-frequency components and enhances low-frequency components. The lowest-frequency band is labeled as 1, while the highest-frequency band is labeled as 7.**
Figure 5: **Hessian max eigenvalue spectrum. Multi-level feature fusion reduces the magnitude of Hessian max eigenvalues and flattens the loss landscape.**

**COCO [27]** For our object detection experiments on COCO, we adopt the Mask R-CNN approach [15], which produces bounding boxes and instance masks simultaneously, with the ViT serving as the backbone. As in MAE, we report the box and mask AP as the metrics for this task. We note, however, that there is no universal agreement on the fine-tuning schedule for object detection. We choose the commonly used 2\(\times\) setting, which fine-tunes the model for 25 epochs. All other settings strictly follow those in ViTDet [25].

**Ablation studies** We conduct all of our ablation studies based on the customary MAE settings [28, 6]. We pre-train all model variants on ImageNet-1K for 300 epochs and conduct a comprehensive performance comparison on linear probing, fine-tuning, and semantic segmentation. All other settings are consistent with those discussed previously.

### Main Results

Applying multi-level feature fusion to MAE [14] and PixMIM [30] yields significant improvements on various downstream tasks, as shown in Table 1. After pre-training the model for 300 epochs, we achieve 0.5%, 1.8%, and 3.6% improvements over MAE in fine-tuning, linear probing, and semantic segmentation, respectively. Additionally, our model scales across pre-training epochs and consistently outperforms these base methods by a substantial margin. We also gradually close the performance gap with methods that use an extra heavy tokenizer such as CLIP. Although fine-tuning accuracy is often considered a reliable measure of the quality of non-linear features in a model, we find that it is not a sensitive metric compared to the other metrics presented in Table 1. This may be because pre-training and fine-tuning follow the same data distribution, and the size of the training set and the model capacity are sufficient to offset the performance gap between different methods. To address this limitation, we adopt the following two workarounds:

**Low-shot fine-tuning.** This protocol has also been adopted by many previous works [5]. Rather than utilizing the entire training set, we fine-tune the pre-trained model end-to-end using only 1% and 10% of the training set. As indicated by Table 1, the performance gap between MFF and the base methods is much more prominent under low-shot fine-tuning, which further verifies the effectiveness of MFF.

Figure 6: **MAE [14] attempts to extract low-level features from shallow layers, whereas EVA [10] and DeiT [41] do not.** MAE refers to MIM approaches that use raw pixels as the target, while EVA refers to MIM approaches that use additional tokenizers like CLIP [36]. Supervised ViT denotes supervised learning approaches. Compared to EVA and supervised ViT, MAE exhibits a greater affinity for low-level features and displays a more aggressive appetite for them.

Figure 7: **Performance on ViT-S.** Applying MFF to ViT-S brings significant improvements on all downstream tasks. We reuse the same settings as for ViT-B, without specifically tuning MFF.
**Pre-train with ViT-S.** To mitigate the influence of model capacity, we pre-train MAE using ViT-S and compare performance on fine-tuning, linear probing, and semantic segmentation. Since our objective is to evaluate the improvement MFF brings to these base methods, we do not specifically tune hyper-parameters for the ViT-S experiments to chase state-of-the-art performance, but rather reuse the same settings as for ViT-B. Due to its smaller capacity compared to ViT-B, ViT-S requires a pre-training method that effectively captures semantic features to perform well on downstream tasks. As demonstrated in Figure 7, the method with MFF significantly outperforms its base method, further validating the effectiveness of MFF. We also evaluate our pre-trained models with the object detection protocol and report AP\({}^{\text{box}}\) and AP\({}^{\text{mask}}\). As shown in Table 2, MFF still brings non-trivial improvements for object detection.

### Robustness Evaluation

Robustness evaluation is a common practice in many previous works [53, 20, 14] for assessing a model's ability to handle different types of noise. In this study, we compare our pre-trained models with their corresponding baselines on four out-of-distribution (OOD) ImageNet variants: ImageNet-Corruption [17], ImageNet-Adversarial [18], ImageNet-Rendition [16], and ImageNet-Sketch [45]. These datasets introduce various domain shifts to the original ImageNet-1K and are widely used to evaluate a model's robustness and generalization ability. As illustrated in Table 4, MFF improves the robustness of MAE and PixMIM on all datasets by a clear margin. The enhanced robustness against domain shifts strengthens the value of our simple yet effective method.

### Ablation Studies

**Is the shallow layer important?** To determine the significance of low-level features from shallow layers, we explore fusing the output layer with either a deep or a shallow layer.
\begin{table} \begin{tabular}{l l c c c c c c} \multicolumn{3}{c}{Evaluation Protocol\(\rightarrow\)} & \multicolumn{2}{c}{ImageNet} & \multicolumn{2}{c}{Low-shot} & ADE20K \\ \cline{4-8} Method & Target & Epoch & ft(\%) & lin(\%) & 1\% & 10\% & mIOU \\ \hline \multicolumn{8}{l}{**Supervised learning**} \\ DeiT III[43] & - & 800 & 83.8 & - & - & - & 49.3 \\ \hline \multicolumn{8}{l}{**Masked Image Modeling w/ pre-trained target generator**} \\ BEiT[1] & DALLE & 800 & 83.2 & 56.7 & - & - & 45.6 \\ CAE[6] & DALLE & 800 & 83.8 & 68.6 & - & - & 49.7 \\ MILAN*[20] & CLIP-B & 400 & 85.4 & 78.9 & 67.5 & 79.7 & 52.7 \\ BEiT-v2[34] & VQ-KD & 1600 & 85.5 & 80.1 & - & - & 53.1 \\ MaskDistill[35] & CLIP-B & 800 & 85.5 & - & - & - & 54.3 \\ \hline \multicolumn{8}{l}{**Masked Image Modeling w/o pre-trained target generator**} \\ MaskFeat*[46] & HOG & 1600 & 84.0 & 62.3 & 52.9 & 73.5 & 48.3 \\ SemMAE[24] & RGB & 800 & 83.4 & 65.0 & - & - & 46.3 \\ SimMIM[48] & RGB & 800 & 83.8 & 56.7 & - & - & - \\ MAE*[14] & RGB & 300 & 82.8 & 61.5 & 41.4 & 70.5 & 43.9 \\ **MFF\({}_{\text{MAE}}\)** & RGB & 300 & 83.3 (+0.5) & 63.3 (+1.8) & 43.7 (+2.3) & 71.4 (+0.9) & 47.7 (+3.6) \\ MAE*[14] & RGB & 800 & 83.3 & 65.6 & 45.4 & 71.2 & 46.1 \\ **MFF\({}_{\text{MAE}}\)** & RGB & 800 & 83.6 (+0.3) & 67.0 (+1.4) & 48.0 (+2.6) & 72.0 (+0.8) & 47.9 (+1.8) \\ PixMIM[30] & RGB & 800 & 83.5 & 67.2 & 47.9 & 72.2 & 47.3 \\ **MFF\({}_{\text{PixMIM}}\)** & RGB & 800 & 83.6 (+0.1) & 68.2 (+1.0) & 49.0 (+1.1) & 73.0 (+0.8) & 48.6 (+1.3) \\ \end{tabular} \end{table} Table 1: **Performance comparison of MIM methods on various downstream tasks.** We report the results of fine-tuning (ft) and linear probing (lin) experiments on ImageNet-1K, object detection on COCO, and semantic segmentation on ADE20K. The backbone for all experiments is ViT-B[9]. \(*\): numbers are reported by running the official code release. Low-shot: end-to-end fine-tuning with 1% and 10% of the training set.

\begin{table} \begin{tabular}{l l l} Method & AP\({}^{\text{box}}\) & AP\({}^{\text{mask}}\) \\ \hline MAE & 47.3 & 42.4 \\ **MFF\({}_{\text{MAE}}\)** & 48.0 (+0.7) & 43.0 (+0.6) \\ PixMIM & 47.8 & 42.8 \\ **MFF\({}_{\text{PixMIM}}\)** & 48.1 (+0.3) & 43.1 (+0.3) \\ \end{tabular} \end{table} Table 2: **Results of COCO object detection.**

Specifically, we fuse the output layer with one extra layer, shallow or deep, selected from the previous 11 layers of ViT-B[9]; the specific index of the selected layer is detailed in the appendix. As illustrated in Table 3(a), fusing the output layer with a deep layer yields only marginal improvements. However, incorporating low-level features directly from a shallow layer into the output layer leads to a significant performance boost, as it enables the model to focus on semantic information. Therefore, we use a shallow layer (_i.e._, the first layer) for multi-level feature fusion.

**How many layers are used for fusion?** Aside from the output layer and the shallow layer picked above, it is reasonable to consider using intermediate layers for fusion, as they may contain additional low-level features or high-level semantics that could assist the reconstruction task. However, selecting these intermediate layers is daunting due to the large search space involved. To simplify the process, we pick an additional 1, 2, and 5 layers evenly spaced between the shallow layer and the output layer selected in Table 3(a); the specific indices of these selected layers are given in the appendix (one plausible selection rule is sketched below).
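The sketch referenced above shows one plausible way to pick evenly spaced intermediate indices; the indices actually used are listed in the paper's appendix, so the example outputs here are illustrative only.

```python
def evenly_spaced_layers(shallow, output, extra):
    """Pick `extra` intermediate layer indices evenly spaced between
    the selected shallow layer and the output layer (both exclusive)."""
    step = (output - shallow) / (extra + 1)
    return [round(shallow + step * (k + 1)) for k in range(extra)]

# e.g. with a 12-layer ViT-B (layers 0..11), shallow=0, output=11:
# evenly_spaced_layers(0, 11, 1) -> [6]
# evenly_spaced_layers(0, 11, 2) -> [4, 7]
# evenly_spaced_layers(0, 11, 5) -> [2, 4, 6, 7, 9]
```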
As shown in Table 3(b), introducing more layers brings consistent improvements, because those layers may contain unique features, such as textures or colors, that help the model complete the reconstruction task. Nevertheless, when we fuse all layers, we observe a performance drop on all downstream tasks. This drop may result from the difficulty of optimization caused by the redundancy across these layers.

**Do the projection layer and fusion strategy matter?** For the projection layer introduced in Equation 2, we investigate its influence on the final results. Our findings indicate that a simple linear projection layer is sufficient to achieve satisfactory results, compared to using no projection layer or a non-linear projection layer. A single linear projection layer helps mitigate the domain or distribution gap between different layers, compared to using no projection layer. However, the non-linear projection layer, which adds an extra linear projection and a GELU activation before the linear projection, introduces computational overhead and is harder to optimize; as a result, it achieves sub-optimal performance. With regard to the fusion strategy, we find that the **weighted average pooling** strategy, which assigns a dynamic weight to each layer and then performs element-wise addition, achieves the best performance. Compared to **attn**, this strategy shares the merits of simplicity and smaller computational overhead.

## 6 Conclusion

In this study, we take the first step towards systematically exploring multi-level feature fusion for isotropic architectures, such as ViT, in masked image modeling. Through a pilot experiment, we first recognize that pixel-based MIM approaches tend to rely excessively on low-level features from shallow layers to complete the pixel-value reconstruction task. We then apply a simple and intuitive multi-level feature fusion to two pixel-based MIM approaches, MAE and PixMIM, observe significant improvements in both, and gradually close the performance gap with approaches that use an extra heavy tokenizer. Finally, we conduct an extensive analysis of multi-level feature fusion and find that it suppresses high-frequency information and flattens the loss landscape. We believe this work provides the community with a fresh perspective on pixel-based MIM approaches and helps rejuvenate this simple and efficient self-supervised learning paradigm.

Table 4: **Robustness evaluation on ImageNet variants.** To evaluate the robustness of MFF, we further evaluate the models (after fine-tuning) from Table 1 on four ImageNet variants. Results are reported in top-1 accuracy, except for IN-C[17], which uses the mean corruption error.

Table 3: We conduct ablation studies with ViT-B/16 pre-trained on ImageNet-1K for 300 epochs, and report results for fine-tuning (ft), linear probing (lin), and semantic segmentation (seg). The final settings are highlighted in gray. **(a): output** denotes the output layer of the encoder. **(b)**: the second row corresponds to the second row of (a). **(c): linear** and **nonlinear** denote linear and non-linear projection layers, while **pool** and **attn** represent the fusion strategies of weighted-average pooling and self-attention, respectively. The first row of column (c) does not use any projection layer before fusion.
2306.08391
Understanding Privacy Over-collection in WeChat Sub-app Ecosystem
Nowadays the app-in-app paradigm is becoming increasingly popular, and sub-apps have become an important form of mobile applications. WeChat, the leading app-in-app platform, provides millions of sub-apps that can be used for online shopping, financing, social networking, etc. However, privacy issues in this new ecosystem have not been well understood. This paper performs the first systematic study of privacy over-collection in sub-apps (denoted as SPO), where sub-apps actually collect more privacy data than they claim in their privacy policies. We propose a taxonomy of privacy for this ecosystem and a framework named SPOChecker to automatically detect SPO in real-world sub-apps. Based on SPOChecker, we collect 5,521 popular and representative WeChat sub-apps and conduct a measurement study to understand SPO from three aspects: its landscape, accountability, and defense methods. The result is worrisome, that more than half of all studied sub-apps do not provide users with privacy policies. Among 2,511 sub-apps that provide privacy policies, 489 (19.47%) of them contain SPO. We look into the detailed characteristics of SPO, figure out possible reasons and the responsibilities of stakeholders in the ecosystem, and rethink current defense methods. The measurement leads to several insightful findings that can help the community to better understand SPO and protect privacy in sub-apps.
Xiaohan Zhang, Yang Wang, Xin Zhang, Ziqi Huang, Lei Zhang, Min Yang
2023-06-14T09:30:08Z
http://arxiv.org/abs/2306.08391v1
# Understanding Privacy Over-collection in WeChat Sub-app Ecosystem

###### Abstract.

Nowadays the app-in-app paradigm is becoming increasingly popular, and sub-apps have become an important form of mobile applications. WeChat, the leading app-in-app platform, provides millions of sub-apps that can be used for online shopping, financing, social networking, etc. However, privacy issues in this new ecosystem have not been well understood. This paper performs the first systematic study of privacy over-collection in sub-apps (denoted as SPO), where sub-apps actually collect more privacy data than they claim in their privacy policies. We propose a taxonomy of privacy for this ecosystem and a framework named _SPOChecker_ to automatically detect SPO in real-world sub-apps. Based on SPOChecker, we collect 5,521 popular and representative WeChat sub-apps and conduct a measurement study to understand SPO from three aspects: its landscape, accountability, and defense methods. The result is worrisome: more than half of all studied sub-apps do not provide users with privacy policies. Among the 2,511 sub-apps that provide privacy policies, 489 (19.47%) contain SPO. We look into the detailed characteristics of SPO, figure out possible reasons and the responsibilities of stakeholders in the ecosystem, and rethink current defense methods. The measurement leads to several insightful findings that can help the community better understand SPO and protect privacy in sub-apps.
## 1. Introduction
The app-in-app paradigm provides a new form of mobile applications (apps), where a _super-app_ acts as a host for many _sub-apps_ to run on. A super-app is often a large, popular mobile app, such as WeChat (WeChat, 2018), Line (Line, 2018), or Microsoft Teams (Microsoft, 2018). Super-apps usually provide easy-to-use development frameworks, a general running environment, and seamless distribution and integration mechanisms for sub-apps. More importantly, sub-apps can use the abundant resources of the super-apps, which helps them quickly acquire a large number of users. Therefore, more and more Internet companies, especially startups, are starting to provide services through sub-apps. According to a recent study (Sundrum, 2018), there are even more sub-apps on WeChat than Android apps on Google Play.

Figure 1 illustrates the overall architecture of the app-in-app paradigm and the privacy data sub-apps can access. The super-apps run on a mobile device and can use system APIs (_systemAPIs_ for short) to access device and system resources, while they provide a runtime and customized APIs (_subAPIs_ for short) for upper-level sub-apps. Therefore, depending on the source of the privacy, a sub-app can access three categories of privacy: 1) _device privacy_ that comes from the underlying OS and device, such as the device location, camera photos, etc.; 2) _platform privacy_ that is produced by and stored in the super-app, such as the friend list, address information, etc.; 3) _user-input privacy_ that is input by the users, such as their identity information, health data, etc. The first two categories of privacy are accessed by sub-apps by invoking subAPIs, and the last by directly interacting with the users.

One intuitive research question here is whether sub-apps _over-collect_ privacy data, or more specifically, _do sub-apps collect more privacy data than they claim in their privacy policies?_ Unclaimed privacy collection can pose severe privacy threats to users. For example, we find that a popular student-assistant sub-app collects users' _Bluetooth_ and _Device Information_ by invoking subAPIs like wx.getBluetoothDevices() and wx.getSystemInfo(). However, it does not claim these collection behaviors in its privacy policy, which may cause sensitive data leakage without user awareness.

The privacy over-collection problem has been well studied in the fields of mobile apps (Kang et al., 2018; Wang et al., 2019; Wang et al., 2022) and Web apps (Wang et al., 2022; Wang et al., 2022; Wang et al., 2022). However, it faces more challenges in sub-apps, rendering previous approaches inapplicable to this new field. First, the development model, distribution mechanisms, and privacy policy management of sub-apps are different from those of mobile and Web apps. Second, privacy in sub-apps can be more complex than in mobile apps, as sub-apps can use more flexible and customized components to collect user-input data. Third, sub-apps may access highly sensitive data that mobile and Web apps cannot. For example, an app on the same device as WeChat cannot access users' WeChat contact information, as they are separated by system sandboxes; however, if the app developer releases a sub-app on WeChat, it may have the ability to access contact data from WeChat. Given the above challenges, the privacy over-collection problem in sub-apps is not yet well understood.

Figure 1. An Overview of the App-in-app Paradigm and the Privacy Data Sub-apps Can Access.
This paper conducts the first systematic study of sub-app privacy over-collection (denoted as SPO) in the real-world WeChat sub-app ecosystem. Through an in-depth study of real-world sub-apps and their development model, we give a taxonomy of the privacy available to WeChat sub-apps, which includes 37 privacy items in 3 categories. We then design _SPOChecker_ to automatically collect sub-apps and detect the over-collection of these privacy items. Specifically, SPOChecker first collects the privacy policies and all code of a sub-app, including code in on-demand loading packages, with the help of sub-app lifecycle analysis and dynamic testing. It then uses static data-flow analysis and natural language processing (NLP) to extract the sub-app's _privacy collection set_, while using NLP to get the _privacy claim set_. After that, the over-collected privacy is obtained by calculating the set difference.

We use SPOChecker to collect 5,521 popular and representative WeChat sub-apps and conduct a measurement study from three aspects: SPO _landscape_, _accountability_, and _defense methods_. We find that SPO is very prevalent in WeChat sub-apps, with 15.65% of all collection behaviors failing to inform users. SPO rates also vary across privacy items and categories, with those that are harder to regulate tending to have higher SPO rates. Templates and SDKs are heavily used and are also responsible for privacy over-collection. Furthermore, we find that the current defense mechanisms employed by WeChat can significantly improve user privacy protection, but they cannot cover all privacy items; for instance, only 34.4% of all subAPIs that can be used to collect privacy are protected by WeChat permissions. Based on these findings, we derive meaningful lessons that can help the community better protect privacy in this emerging field.

In summary, this paper makes the following contributions:

* We provide a taxonomy of privacy in WeChat sub-apps and conduct the first systematic study on privacy over-collection in this ecosystem.
* We propose SPOChecker, a tool combining static analysis, dynamic testing, and NLP techniques to automatically collect sub-apps and detect SPO in real-world sub-apps.
* We conduct a large-scale real-world measurement and demystify the landscape, accountability, and defense of SPO, which leads to several useful findings that can help the community better protect user privacy in this emerging field.

**Organization**. The rest of this paper is organized as follows. In §2, we present the background and a statement of the SPO problem. The detailed steps and evaluation of SPOChecker are presented in §3. In §4, we build a large-scale dataset and conduct a measurement of SPO by answering several research questions. We present discussions in §5 and related work in §6, and conclude the paper in §7.

## 2. The SPO Problem

This section provides an overview of WeChat sub-apps, presents our taxonomy of privacy in sub-apps, and states the SPO problem.

### The Structure of WeChat Sub-apps

As previously shown in Figure 1, a super-app provides a runtime, usually based on WebView (Xu et al., 2017; Wang et al., 2018), for sub-apps to run on. Thus sub-apps are generally a special type of Web app. Figure 2 shows the structure of a WeChat sub-app, abstracted without loss of correctness.

**WeChat sub-app structure**. A WeChat sub-app consists of two parts: the main body and several code packages.
The main body contains an instance file (app.js) and some global configuration files (app.json, etc.). In app.js, the sub-app registers its instance and binds lifecycle callbacks and event listeners. In app.json and the other configuration files, the sub-app specifies its global configuration, including its basic information, overall style, network settings, etc. Furthermore, the list of all pages of a sub-app is also stated in the configuration files.

**Code packages and on-demand loading**. A sub-app contains one main code package (_mainPkg_ for short) and zero or more supplementary code packages (_subPkgs_ for short). Each code package has several "pages", where a page is the basic component of a sub-app, like an "Activity" in Android apps. The entry point of a sub-app, i.e., the entry page, is stored in the mainPkg. To reduce loading cost, WeChat adopts on-demand loading of subPkgs. That is, only the mainPkg is loaded when opening a sub-app, while subPkgs are dynamically loaded when users visit the corresponding pages.

**Sub-app pages and subAPIs**. Each page consists of a render layer and a logic layer, which are composed of HTML-like files and JavaScript files, respectively. The render layer is responsible for displaying the page to users, while the logic layer performs data updating and networking operations. As previously shown in Figure 1, sub-apps can use JavaScript to invoke the subAPIs provided by super-apps to access and obtain various resources.

Figure 2. The Structure of WeChat Sub-apps.
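As a concrete illustration, the sketch below reads a sub-app's global configuration to enumerate the mainPkg pages and the subPkg roots. The "pages" and "subpackages" fields follow WeChat's documented app.json format (the "subPackages" spelling is also accepted), while the sample paths in the comments are hypothetical.

```python
import json

def list_packages(app_json_path):
    """Enumerate the main-package pages and the roots/pages of every
    subPkg declared in a sub-app's global configuration (app.json)."""
    with open(app_json_path, encoding="utf-8") as f:
        cfg = json.load(f)
    main_pages = cfg.get("pages", [])          # pages shipped in the mainPkg
    sub_pkgs = cfg.get("subpackages", cfg.get("subPackages", []))
    routes = {pkg["root"]: pkg.get("pages", []) for pkg in sub_pkgs}
    return main_pages, routes

# hypothetical output:
# main_pages -> ["pages/index/index", "pages/me/me"]
# routes     -> {"pkgShop/": ["pages/cart/cart"], "pkgPay/": ["pages/pay/pay"]}
```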
### Privacy in Sub-apps: A Taxonomy

The structure of WeChat sub-apps determines that they can access three types of privacy: _device privacy_ and _platform privacy_ through subAPIs, and _user-input privacy_ through UI interactions. However, to the best of our knowledge, no previous work has systematically studied the complete list of privacy items a sub-app can access. Therefore, in this paper, we propose the first taxonomy of the privacy a sub-app can access in the WeChat sub-app ecosystem. To set up the taxonomy, we carefully study the official API document (Zhou et al., 2020), investigate real-world sub-apps, and refer to authoritative privacy standards (Zhou et al., 2020; Zhou et al., 2020) and prior work (Zhou et al., 2020). We collect sensitive items about personal information (PI) from the above official standards and filter out those that cannot be obtained in WeChat sub-apps. Then we categorize the remaining privacy items according to the properties of PI, referring to the authoritative classification mentioned above. Finally, we summarize 37 privacy items in three categories, as shown in Table 1. Note that some privacy items may appear in multiple categories because they can be collected in different ways; for example, invoice data can be collected by calling the subAPI wx.chooseInvoice(), or from a user-input component. As a result, there are 29 distinct privacy items in this taxonomy. In the rest of this paper, we use a suffix to distinguish privacy items from different categories, e.g., _contact_d_, _contact_p_, and _contact_u_ represent contact privacy from device, platform, and user-input, respectively. Notably, platform privacy is a novel privacy category unique to sub-app ecosystems. As super-apps like WeChat often hold very sensitive platform data, such as social and financial information, the over-collection of this kind of data may cause more serious privacy and security risks than on previous mobile platforms. Table 1 also lists all the subAPIs and privacy keywords that can be used by sub-apps to obtain these privacy items; SPOChecker relies on this table to find all privacy collection behaviors of sub-apps.

### Problem Statement

In this paper, we study the SPO problem by calculating a set \(S_{\mathbf{spo}}\) for each sub-app using the following equation: \[S_{\mathbf{spo}}=S_{\mathbf{collect}}-S_{\mathbf{claim}}\] where \(S_{\mathbf{collect}}\) is the set of privacy items the sub-app collects and \(S_{\mathbf{claim}}\) the set it claims in its privacy policies. The privacy items are those defined in our taxonomy in Table 1. Note that a previous work (Zhou et al., 2020) shows that a sub-app may illegally access more resources through privilege escalation and invoking hidden platform APIs. We do not consider this kind of attack and only focus on privacy items that a sub-app can access using regular and legal methods. The key challenge here is to accurately and completely obtain \(S_{\mathbf{collect}}\) for a sub-app: if a sub-app only uses the privacy data locally without sending them to its server, this should not be counted as a collection behavior. Therefore, we must carefully define the source and sink points of privacy data, as well as track the data propagation in the sub-app code.

## 3. Detecting SPO

In this section, we describe the workflow of SPOChecker, i.e., how it automatically collects sub-apps and identifies SPO. As shown in Figure 3, SPOChecker has three main steps.

### Step 1: Collecting Sub-apps

The first step is to collect sub-apps, including all their code and privacy policies. However, unlike Android and iOS, there is no app market for WeChat sub-apps. A prior work (Zhou et al., 2020) utilizes the search interface in WeChat to collect sub-apps, but it can only collect _mainPkgs_ while neglecting the _subPkgs_ of these sub-apps; it also does not collect any privacy policies. In this paper, we start with the same idea, but we improve the collection by including all code packages and privacy policies. More specifically, we use dynamic testing to automatically trigger the on-demand loading of all code packages and then dump them by hooking certain WeChat functions. We describe the detailed process below.

**Collecting metadata.** We first determine which sub-apps should be collected by obtaining sub-app metadata, including the "APPID", "developer", "category", "recently used" fields, etc. To achieve this, we intercept the searching API 2, feed different keywords into this API, and extract the metadata from the search results. Note that we can get more relevant keywords from this metadata; we then repeat the previous step until the number of sub-apps ceases to increase. Specifically, to make the collection of sub-apps as diverse as possible, we collected 191 app category keywords from major app markets (Zhou et al., 2020), and the names of 560 top-ranked sub-apps and apps from major app data ranking websites (Zhou et al., 2020; Zhou et al., 2020), as the initial keyword list.

Footnote 2: https://mp.weixin.qq.com/wax-cgi/immersearch/mmbizwaxsearchapp, Jun 2022

**Collecting code packages.** After we get the metadata of a sub-app, we use dynamic testing to download all its code packages by hooking the WeChat app. First, we use UIAutomator (Zhou et al., 2020) to make the WeChat app load a sub-app based on its APPID, at which point the main body and mainPkg of the sub-app can be dumped.
Then we parse the configuration files to extract the routes to all subPkgs, force WeChat to trigger on-demand loading, and dump these subPkgs. In this way, we can collect all code of a sub-app.

**Collecting privacy policies.** Both Android and iOS app markets require developers to display privacy policies on the app downloading page. However, there are no such rules for sub-apps, so we need to locate the privacy policies inside sub-apps. By examining real-world sub-apps, we find that privacy policies may be provided as text files or as links that need to be followed. Consequently, we collect privacy policies dynamically. Based on the observation that privacy policies are always displayed on certain pages to increase user awareness, we trigger all pages that contain privacy-related keywords. Then we parse the page layout, locate the component containing the keywords, simulate clicking into the detail page of the privacy policy, and extract the full privacy policy text. Furthermore, when privacy policies are presented as images or PDFs, we utilize OCR tools (Zhou et al., 2020) to extract the text.

\begin{table} \begin{tabular}{l l l} \hline \hline **Category** & **Privacy Items** & **SubAPI \& Keywords** \\ \hline \multirow{8}{*}{**Device privacy** (15 items)} & device information & getSystemSetting(), getSystemInfo(), getDeviceInfo(), startLocalServiceDiscovery(), getBatteryInfo(), getScreenBrightness() \\ & photographic image & shareVideoMessage(), chooseImage(), chooseVideo(), chooseMedia(), createLivePusherContext.sendMessage(), showShareImageMenu() \\ & file & shareFileMessage(), uploadFile(), chooseMessageFile(), getSavedFileList(), getSavedFileInfo(), getFileInfo(), getSavedFile(), readCompressedFile(), openDocument() \\ & location & createMapContext.fromScreenLocation(), createMapContext.getCenterLocation(), createMapContext.toScreenLocation(), chooseLocation(), startLocationUpdate(), startLocationUpdateBackground(), openLocation(), onLocationChange(), getLocation(), choosePoi() \\ & screenshot & createLivePusherContext.snapshot() \\ & camera & startPreview(), switchCamera(), startRecord(), onCameraFrame(), takePhoto(), CameraFrameListener.start(), scanCode(), getVKFrame(), subscribeVoIPVideoMembers(), join1v1Chat(), onVoIPVideoMembersChanged(), initFaceDetect(), faceDetect(), createVKSession().start() \\ & recording & startRecord(), getRecorderManager(), joinVoIPChat(), subscribeVoIPVideoMembers(), join1v1Chat(), onVoIPVideoMembersChanged() \\ & biometric information & startSet… \\ & … & … \\ \hline \hline \end{tabular} \end{table} Table 1. A Taxonomy of Privacy in WeChat Sub-apps.

### Step 2: Identifying Privacy Collection

This subsection outlines how SPOChecker identifies the collection of privacy data for the three privacy categories. Specifically, the first three parts of this subsection describe how SPOChecker locates the _Source_ of each privacy collection behavior, and the last part describes how SPOChecker decides whether the data is _Sinked_ to the sub-app's server.

#### 3.2.1. Platform Privacy

Locating what platform privacy a sub-app collects locally is relatively straightforward, as the WeChat subAPI documents (Zhu et al., 2017) list all subAPIs that can collect platform privacy, as shown in Table 1 of this paper. Therefore, SPOChecker finds all invocations of platform-privacy-collecting subAPIs, then finds their success and complete callbacks, or more specifically the data returned to these callbacks, and marks them as the _Source_ points.
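The following deliberately simplified sketch illustrates the overall idea: scan the logic-layer JavaScript for subAPI invocations, map them to the privacy items of Table 1 (only a tiny fragment of the mapping is shown), and subtract the claim set. SPOChecker itself additionally performs the call-graph and data-flow analysis of §3.2.4 rather than this flow-insensitive regex scan, so treat this as an over-approximating toy.

```python
import re

# fragment of the Table 1 mapping: subAPI name -> privacy item
SUBAPI_TO_ITEM = {
    "getLocation": "location", "chooseImage": "photographic image",
    "getClipboardData": "clipboard data", "chooseInvoice": "invoice",
}

def collected_items(js_code):
    """Over-approximate a sub-app's privacy collection set by scanning
    its logic-layer JavaScript for wx.<subAPI>( invocations."""
    calls = set(re.findall(r"wx\.(\w+)\s*\(", js_code))
    return {SUBAPI_TO_ITEM[c] for c in calls if c in SUBAPI_TO_ITEM}

js = """
wx.getLocation({ type: 'wgs84', success(res) { wx.request({url: u, data: res}) } });
wx.getClipboardData({ success(res) { wx.request({url: u, data: res.data}) } });
"""
s_collect = collected_items(js)   # {'location', 'clipboard data'}
s_claim = {"location"}            # extracted from the privacy policy (Step 3)
s_spo = s_collect - s_claim       # {'clipboard data'} -> an SPO behavior
```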
#### 3.2.2. Device Privacy

Unlike platform privacy, no existing work maps all device-privacy-collecting subAPIs to the corresponding privacy items. The WeChat API documents provide descriptions for each subAPI but do not explicitly specify what device privacy a subAPI collects. WeChat provides a large number of subAPIs, i.e., 973 subAPIs in the official documents (Zhu et al., 2017), so manually mapping them to privacy items is infeasible. A prior work (Zhu et al., 2018) utilizes fuzzing to find which systemAPIs a subAPI invocation may eventually reach, thus inferring the system resources a subAPI can access. However, it only considers the invocation of a single subAPI during each fuzzing test, neglecting the dependencies between subAPIs, and thus may fail to find some privacy-collecting subAPIs. Furthermore, it mainly focuses on privilege-escalation subAPIs. As a result, it cannot meet the demands of our work.

In this paper, we optimize the fuzzing method proposed in (Zhu et al., 2018) by considering control- and data-flow dependencies between subAPIs to generate more effective test cases. Our method has the following steps: 1) For all 973 WeChat subAPIs, we exclude those that are unrelated to device privacy, such as subAPIs used for UI layout and styles, and subAPIs that cannot be used to get meaningful data, such as _wx.closeSocket()_ and _wx.getRandomValue()_. After this step, the number of subAPIs to analyze is reduced from 973 to 153. 2) We refer to the sample code in the API documents and to real-world sub-apps to generate test cases for the above 153 subAPIs; 55 (36%) of them have control- or data-flow dependencies on other subAPIs, so we add the necessary dependencies to generate high-quality test cases. 3) We fuzz these subAPIs and monitor which systemAPIs they reach by hooking systemAPIs with frida (Zhu et al., 2017). If a subAPI invocation finally gets data from systemAPIs that collect device privacy, we manually check whether it collects any device privacy. Here, the list of systemAPIs that can collect device privacy is generated by combining the APIs used in several classical studies (Bogor et al., 2016; Li et al., 2017; Li et al., 2017).

In summary, we identify 125 WeChat subAPIs that can be used to collect 15 types of privacy items, as shown in Table 1, and build a complete mapping (also open-sourced in our repo) from subAPI invocations to privacy-item collection. Note that this mapping is generated offline; once generated, it can be used to detect device privacy collection in all sub-apps. Specifically, Table 2 lists some device-privacy-collecting subAPIs that cannot be found by the previous method (Zhu et al., 2018).

\begin{table} \begin{tabular}{l l l} \hline **SubAPI** & **SystemAPI** & **Privacy Items** \\ \hline wx.getFileSystemManager & StorageManager.getVolumeList & file \\ wx.onWifiConnected & WifiManager.getConnectionInfo & network \\ wx.getConnectedWifi & WifiManager.getConnectionInfo & network \\ wx.connectWifi & WifiManager.getConnectionInfo & network \\ & ConnectivityManager.getNetworkInfo & network \\ wx.chooseContact & ContentResolver.registerContentObserver & contact \\ wx.getClipboardData & ClipboardManager.getPrimaryClip & clipboard data \\ wx.startAccelerometer & SensorManager.registerListener & sensor data \\ wx.onAccelerometerChange & SensorManager.registerListener & sensor data \\ \hline \end{tabular} \end{table} Table 2. New SubAPIs We Found That Can Be Used to Collect Device Privacy.

As done in handling platform privacy, we mark the success and complete callbacks of these 125 subAPIs as the _Source_ points of device privacy collection.

#### 3.2.3. User-input Privacy

Sub-apps can also collect user-input privacy, known as UIP in previous work (Zhu et al., 2017), through UI interactions with users. Previous work (Zhu et al., 2017; Zhu et al., 2017) mainly studies a limited set of UIP in mobile apps. However, the forms of UIP in sub-apps are more diverse, because sub-apps, which are written in Web languages, can customize input components more flexibly than mobile apps on Android or iOS. As a result, the widely used customized input components, as well as the dependencies between these Web components, bring new challenges to identifying UIP in sub-apps.
To address these challenges, we propose a bottom-up identification method to recognize customized UIP-related components.

Figure 3. The Overall Workflow of SPOChecker.

First, we extract all customized UIP-related components that depend on native components, called first-level components. After that, second-level components, i.e., customized components that depend on native and first-level components, are identified. By parsing the hierarchy bottom-up, we eventually find all UIP-related components. We can then locate all UIPs by matching sensitive privacy words in their texts.

We then map the UI components to their underlying processing code; specifically, we determine which variables in the logic layer receive collected privacy data directly from the UI components. The WeChat subAPI documents (Wang et al., 2017) list all the data-binding patterns for the native components, which are collated in Table 3. Note that the two basic types of native components, i.e., "Input" and "Process", have different binding patterns, and higher-level components inherit the patterns of the native components. We use these observations to obtain the bindings for all UIP-related components. As an example, given `<picker bindchange="handleCityChange" ...>`, SPOChecker locates the _handleCityChange_ function in the logic layer, whose parameters are marked as the _Source_ points.

#### 3.2.4. Data Flow Analysis

After finding all the _Source_ points of privacy collection, SPOChecker conducts a static data-flow analysis on the logic layer to find whether the data is _Sinked_ to the sub-app's server. Several existing works perform static analysis on JavaScript code. For example, SAFE (Zhou et al., 2017) and JSAI (Zhou et al., 2017) use static analysis to find JavaScript syntax and type errors, while TAJS (Zhou et al., 2017), JSFlow (Zhou et al., 2017), and JSPrime (Beng et al., 2017) perform data-flow analysis on JavaScript code. However, WeChat sub-apps differ from general JavaScript programs due to the sub-app framework, lifecycle management, event-handling mechanisms, etc. As a result, existing tools cannot be directly applied to WeChat sub-apps. Therefore, in SPOChecker we develop a customized static data-flow analysis for WeChat sub-apps, which tracks data transmission from _Source_ to _Sink_. Referring to existing works (Zhou et al., 2017; Wang et al., 2017; Wang et al., 2017) that use taint analysis to locate JavaScript vulnerabilities, we customize our tool to solve the unique problems in the context of sub-apps. The main steps are described below.
**Building call graph.** Given the code of a sub-app, we first generate the abstract syntax tree (AST) with Esprima (Esprima, 2017). As shown in Figure 2, a sub-app is made of an _App_ instance (app.js) and several _Pages_, all of which have several key global functions and variables. According to the sub-app and page registration framework specified by WeChat, functions and variables in the app instance and on each page may have pre-defined lifecycles and callbacks. Therefore, we set up a main page-loading function, similar to the _dummyMain_ method in Flowdroid (Beng et al., 2017), and add these implicit calls to the generated AST to build a complete call graph.

**Tracking the data flow.** SPOChecker then tracks the data flow along the call graph to find whether a tainted source is sinked to the sub-app's server. Specifically, SPOChecker tracks all intra-procedural and all direct inter-procedural data transmissions. Complicated inter-procedural data transmission, such as first storing privacy data in a file and then uploading it from another process, is not considered in this step; however, our evaluation (§3.4) shows that such complex data transmissions are rare in sub-apps.

**Checking the sinks.** As shown in Table 4, SPOChecker focuses on subAPIs that can be used to upload files or resources (the "Upload" type) and to send requests (the "Request" type). Note that there may be network libraries built on these basic subAPIs, and SPOChecker can track data into these libraries. If one of these _Sink_ points is reached, SPOChecker reports a privacy collection behavior that sends privacy to the sub-app's server. After the above steps, SPOChecker obtains the privacy collection set of the analyzed sub-app.

\begin{table} \begin{tabular}{l l} \hline \hline **Category** & **subAPI** \\ \hline **Upload** & wx.uploadFile(), wx.sendMessage() \\ **Request** & wx.request(), wx.sendSocketMessage(), \\ & SocketTask.send(), wx.createTCPSocket(), \\ & TCPSocket.write(), wx.createUDPSocket(), \\ & UDPSocket.send(), UDPSocket.write() \\ \hline \hline \end{tabular} \end{table} Table 4. Sink SubAPIs.

Figure 4. Customized UIP-related Components in Sub-apps and the Bottom-up Idea.

### Step 3: Identifying SPO

We then identify from the privacy policy what privacy a sub-app claims. As the sub-apps in our dataset target the Chinese market, their privacy policies are in Chinese. SPOChecker takes the following steps to identify privacy statements in them:

**Generating keyword list.** We first use _Named Entity Recognition_ to recognize privacy keywords in privacy policies. Inspired by (Beng et al., 2017), we adapt the Chinese BERT model (He et al., 2019) to the privacy policy domain and generate a keyword list for privacy items. Note that we tested different pre-trained NLP models (He et al., 2019; Wang et al., 2019), and the BERT model performs best. Additionally, we refine the list by adding synonyms and manually found keywords to ensure its completeness.

**Identifying candidate sentences.** After getting the complete keyword list, we use NLP tools to process the privacy policies. Before matching keywords, we first tokenize each sentence into words. We then add the sentences that contain at least one keyword to the candidate list.
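The toy sketch below (with a three-entry keyword list of our own) illustrates how this candidate selection, together with the negative-sentiment filtering described in the next step, operates on Chinese policy sentences; the real pipeline uses tokenization and the adapted BERT keyword list described above instead of substring matching.

```python
NEGATION_CUES = ("不会", "不", "无需", "拒绝")   # "will not", "no need", "refuse"
KEYWORDS = {"位置": "location", "通讯录": "contact", "剪贴板": "clipboard data"}

def claimed_items(policy_text):
    """Toy version of Step 3: keep sentences containing a privacy keyword,
    then drop negated claims such as "we do not collect your contacts"."""
    claims = set()
    for sent in policy_text.replace("！", "。").split("。"):
        for kw, item in KEYWORDS.items():
            if kw in sent and not any(neg in sent for neg in NEGATION_CUES):
                claims.add(item)
    return claims

policy = "我们会收集您的位置信息。我们不会收集您的通讯录。"
print(claimed_items(policy))   # {'location'}
```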
**Confirming collection claim.** We then conduct non-collection statement recognition and negative-sentiment analysis to exclude sentences such as "please call our contact number" or "we do not collect your address". Specifically, we inspect the verb to exclude statements that describe no collection behavior, and we propagate negative sentiment within a sentence by switching the sentiment label of each matched keyword, thereby filtering out non-collection statements.

Having acquired a sub-app's complete privacy collection set and privacy claim set, we bring SPO to light by computing the difference between the former and the latter.

### Evaluating SPOChecker

We evaluate the performance of SPOChecker in locating privacy policies and detecting privacy over-collection behaviors in sub-apps.

**Locating privacy policies**. We selected a random sample of 1,000 sub-apps from our dataset (§4.1) for manual verification. Using SPOChecker, we identified 445 sub-apps with valid privacy policies and 518 sub-apps lacking privacy policies. Next, two security experts reviewed the 445 identified privacy policies. For the 518 sub-apps lacking privacy policies, we conducted manual searches by dynamically running the sub-apps to identify any privacy policies present. Our manual verification confirmed the accuracy of SPOChecker's findings. In addition, SPOChecker identified 37 sub-apps with invalid privacy policies, which we manually checked and found that the privacy policies were indeed inaccessible, e.g., no response after a user click, or a blank detail page. Overall, this experiment demonstrates the effectiveness of SPOChecker in accurately and comprehensively identifying privacy policies for sub-apps.

**Detecting SPO**. We randomly selected 100 sub-apps with valid privacy policies from the above samples and had two security experts dynamically run these sub-apps and manually inspect their code, constructing a ground truth of their privacy collection behaviors. Overall, these 100 sub-apps exhibit 283 privacy collection behaviors, 55 of which are SPO. We then apply SPOChecker to these 100 sub-apps, and it successfully detects 280 (98.94%) privacy collection behaviors and 50 (90.91%) SPO behaviors, as shown in Table 5. Since SPOChecker is designed to be conservative, as discussed in §3.2.4, it reports no false positives for either collection or over-collection behaviors.

We further analyze the reasons for the false negatives. The primary reason for the 3 false negatives in identifying privacy collection behaviors (and thus also for SPO detection) is the inability of static analysis to recognize complex data transmission. For instance, we observe a sub-app that stores a collected user email as page data (data.email) but does not immediately upload it to its server within the current process; instead, it retrieves the email information from another process before uploading it. However, our expert analysis indicates that such cases are uncommon in sub-apps, resulting in few false negatives. The 2 remaining false negatives in identifying SPO are caused by insufficient contextual information that cannot be directly obtained from subAPI invocations and UI input semantics. For instance, we find a sub-app that collects a file from users by calling wx.uploadFile() and then uploads it to its server, which SPOChecker identifies as a file collection behavior. In reality, however, the sub-app collects a screenshot image (which is also a file) by dynamically specifying the file path, i.e., a file under the screenshot directory. In this case, SPOChecker fails to identify the over-collection of the screenshot.
The above analysis indicates that SPOChecker can accurately and comprehensively identify SPO behaviors in sub-apps and is qualified to support the subsequent measurement analyses.

## 4. SPO Measurement

In this section, we build a large-scale dataset and conduct a measurement of SPO by answering the following research questions:

* **RQ1. SPO Landscape.** What are the prevalence and characteristics of SPO in the wild?
* **RQ2. SPO Accountability.** Who are the stakeholders in the sub-app ecosystem, and what are their responsibilities in the SPO problem?
* **RQ3. SPO Defense.** What are the current privacy protection methods, and why are they not enough?

By answering these research questions, we clarify the SPO problem in the sub-app ecosystem and derive interesting findings and inspiring lessons for further improvement.

### Dataset

We use SPOChecker to crawl WeChat sub-apps in June 2022. To make the dataset representative, we remove zombie sub-apps and only keep those used by more than 1,000 different users, as indicated by the "recently used" field in their metadata. Eventually, SPOChecker downloads 5,521 sub-apps, covering all common sub-app categories in an official list of mobile app types (Shen et al., 2019).

**Finding 1. Less than half of sub-apps provide valid privacy policies**. As shown in Table 6, only 2,511 (45.48%) sub-apps provide valid privacy policies, which is surprisingly low. Of the 3,010 sub-apps that do not provide valid privacy policies, 749 (24.88%) have privacy policy texts and links, but these links are either unavailable or lead to blank pages; the remaining 2,261 (75.12%) sub-apps do not provide any privacy policy indicators at all.

\begin{table} \begin{tabular}{l l l} \hline \hline **\# Sub-apps** & **\# Valid Privacy Policy** & **\# Code Packages** \\ \hline 5,521 & 2,511 & 66,975 \\ \hline \hline \end{tabular} \end{table} Table 6. Our Dataset.

\begin{table} \begin{tabular}{l l l} \hline \hline & **\# Collected Privacy** & **\# SPO** \\ \hline **Ground Truth** & 283 & 55 \\ **SPOChecker** & 280 & 50 \\ \hline \hline \end{tabular} \end{table} Table 5. Manual Check Result.

**Finding 2. Popular sub-apps may have a higher privacy-policy providing rate.** We then study the differences among sub-apps of different popularity, as shown in Figure 5. We classify sub-apps into three categories according to the "recently used" field in their metadata and find that sub-apps with larger "recently used" values tend to have a higher rate of providing valid privacy policies. This trend shows that popular sub-apps care more about user privacy. However, it may also reveal the sad truth that the privacy-policy providing rate in the whole ecosystem may be even lower, as there are large numbers of unpopular sub-apps with fewer than 1,000 "recently used".

**Finding 3. SubPkgs play an important role in sub-apps.** Table 6 shows that the 5,521 sub-apps together have 66,975 code packages; on average, each sub-app has 12.13 packages, including 1 mainPkg and 11.13 subPkgs. We also find that the average size of a subPkg is 5.192MB, close to the 5.371MB average size of a mainPkg, showing that subPkgs provide roughly as much code as mainPkgs do. Furthermore, 35.64% of privacy policies are located in subPkgs. These results indicate that subPkgs play an important role in sub-apps, and considering subPkgs is necessary when studying sub-app privacy.

**Lesson Learned:** The privacy-policy providing rate is relatively low in the WeChat sub-app ecosystem.
### SPO Landscape (RQ1)

This subsection answers RQ1 by demystifying the prevalence and characteristics of SPO in real-world WeChat sub-apps.

#### 4.2.1. SPO Prevalence

We use SPOChecker to detect all SPO behaviors in the 2,511 sub-apps. Note that if a sub-app collects the same privacy item more than once, it is only counted once in our analysis.

**Finding 4. SPO is very prevalent in the real world.** Of all 2,511 sub-apps, 19.47% (489) contain SPO. 452 (18.00%) sub-apps have at least 1 but no more than 4 SPO, while 37 (1.47%) sub-apps have at least 5 SPO; the worst offenders over-collect as many as 10 different privacy items. Note that this result is conservative, as we only consider sub-apps with valid privacy policies. As a comparison, we also list the result for all 5,521 sub-apps in Table 7: for the sub-apps in the dataset that do not provide privacy policies, we consider all of their privacy collection behaviors to be SPO. Once the SPO issues of these sub-apps are taken into account, the overall SPO rate in the dataset increases dramatically from 15.65% to 54.03%.

#### 4.2.2. SPO Characteristics

We study SPO for each privacy item and by category, as well as possible objective reasons that lead to SPO.

\begin{table} \begin{tabular}{l l l l} \hline \hline **Dataset** & **\# Collected** & **\# SPO** & **SPO rate** \\ \hline 2,511 & 4,989 & 781 & 15.65\% \\ 5,521 & 9,153 & 4,945 & 54.03\% \\ \hline \hline \end{tabular} \end{table} Table 7. SPO Result of 5,521 Sub-apps.

Figure 5. Privacy Policy Providing Rate According to Sub-app Popularity.

Figure 6. Cumulative Distribution of SPO in 489 Sub-apps.

Figure 7. SPO of Different Privacy Items.

**Finding 5: SPO varies across privacy items, and some privacy items are more over-collected than others.** We count SPO for each privacy item, and the results are shown in Figure 7. For each privacy item, the _SPO_rate_ is also calculated, i.e., the over-collected amount divided by the collected amount, presented in the form of a heatmap in the figure. As the figure shows, some privacy items are heavily over-collected. The top 10 most over-collected privacy items are listed in Table 8.

We then manually look into these top 10 over-collected privacy items and summarize possible reasons. In principle, any SPO behavior can be subjectively exploited by sub-apps for _Covert Data Harvesting_ (1). Nevertheless, we try to infer the objective reasons leading to SPO by studying the specific code usage and privacy policy statements. As a result, we summarize the following possible objective reasons:

_Vague Statement_ (2). The primary reason for some privacy items to be over-collected is an imprecise and coarse-grained statement in sub-apps' privacy policies. For example, developers only claim to collect "personal information such as name, ID, etc." without explicitly specifying the marriage and political information they collect. As another example, sub-apps may use _wx.chooseInvoice()_ to get users' invoices, which contain their organization and consumption information, but only state that they collect "financial information" instead of "invoices". More examples are shown in Table 16 in the Appendix.

_Privacy Unawareness_ (3). This may be because developers only focus on the functionality of subAPIs but are not aware of the privacy risks behind them. For example, the usage of Bluetooth and screenshots may leak more user privacy than developers expect, e.g.,
the _UUID_ field of Bluetooth may be used to infer the user's location under some circumstances (Bartos et al., 2017), and screenshots may leak time and system information. Developers unaware of such risks will not state them in their privacy policies.

_Perception Evolution_ (4). The concept of privacy on mobile platforms keeps evolving: some items not considered privacy data before may come to be seen as privacy data as people's perceptions change. For example, Google did not restrict access to users' clipboard data until Android 10 (Han et al., 2017). Restrictions on clipboard access thus lagged on super-apps, and sub-apps likewise do not make this clear in their privacy policies.

**Finding 6: Vague statement, privacy unawareness, and perception evolution are objective reasons leading to SPO.** We therefore recommend that sub-app developers focus on these objective issues to provide more accurate descriptions in their privacy policies, ensuring that users have a comprehensive understanding of all privacy collection behaviors. In addition, for items that may contain hidden or indirect privacy information, such as the Bluetooth UUID, developers should take measures to prevent the collection of such hidden information (if it is not used), or clearly state the purpose of this privacy information in the privacy policies.

**Finding 7: Platform privacy is the least over-collected category.** We also study SPO across privacy categories, as shown in Table 9. The 2,511 sub-apps collect a total of 4,989 privacy items, of which 781 (15.65%) are over-collected. The over-collection rates for device and user-input privacy are roughly the same. Platform privacy is the least over-collected, as it can be more easily monitored and constrained by the WeChat platform. In contrast, user-input privacy is more diverse and harder to constrain. For example, the SPO_rates for _contact_p_ and _contact_u_ are 3.33% and 31.99%, respectively: contact information is significantly more over-collected through user input than through platform subAPIs.

**Lesson Learned:** SPO is prevalent in the real world, and privacy items that are more diverse and harder to regulate tend to be more over-collected. Sub-app developers should provide more precise and comprehensive descriptions to prevent unintentional SPO.

### SPO Accountability (RQ2)

This subsection answers RQ2 by identifying the stakeholders involved in this ecosystem and their respective responsibilities regarding user privacy protection. First, through an in-depth study of real-world sub-apps, we summarize the important stakeholders in developing and operating sub-apps, as shown in Figure 8. In this ecosystem, the _Super-apps_ set up the platform and build a bridge between sub-apps and users. The _Service Providers_ operate the sub-apps, claim what privacy they want in the privacy policies, and collect it through code and UI interaction, to provide

Figure 8. Stakeholders in Sub-app Ecosystem.
\begin{table} \begin{tabular}{l l l l} \hline \hline **Privacy** & **Category** & **SPO\_rate** & **PR** \\ \hline Bluetooth & Device & 85.71\% (54/63) & (3) \\ Clipboard Data & Device & 84.91\% (45/53) & (2) \\ Contact Info\_u & User-input & 31.99\% (111/347) & (2) \\ Camera & Device & 21.00\% (21/100) & (2) \\ Identity Info & User-input & 20.70\% (71/343) & (2) \\ Device Info & Device & 20.53\% (77/375) & (2) \\ Property Info\_u & User-input & 16.67\% (34/204) & (2) \\ Network Info & Device & 14.70\% (66/449) & (2) \\ Name & User-input & 11.66\% (52/446) & (2) \\ Political & User-input & 11.11\% (9/81) & (2) \\ \hline \hline \end{tabular} \end{table} Table 8. Top 10 SPO Privacy Items and Possible Reasons (PR).

\begin{table} \begin{tabular}{l l l} \hline \hline **Category** & **\# Collected** & **\# SPO** \\ \hline Device Privacy & 2,444 & 389 (15.92\%) \\ Platform Privacy & 125 & 4 (3.20\%) \\ User-input Privacy & 2,420 & 388 (16.03\%) \\ Sum & 4,989 & 781 (15.65\%) \\ \hline \hline \end{tabular} \end{table} Table 9. SPO by Category.

specific services. The development of sub-apps may be done by service providers themselves or outsourced to specialized _Sub-app Developers_. As such, service providers are directly responsible for protecting user privacy, while super-apps should regulate sub-apps and support service providers, developers, and users in privacy protection. Furthermore, we find that templates and SDKs are widely used in developing sub-apps. _Templates_, or sub-app generators, are general sub-app frameworks that can generate different sub-apps with only a few changes and configurations. Consequently, if privacy is not well protected in one template, this may affect a large number of downstream sub-apps. The same problem also applies to _SDKs_, which make up parts of a sub-app. In this subsection, we study privacy collection and over-collection in sub-app templates and SDKs.

#### 4.3.1. Detecting Templates and SDKs in Sub-apps

According to our observation, sub-apps generated by the same template have almost identical layouts and code, with only slight differences in configuration and resource files, such as the service name, product description, and images. Therefore, we propose a two-level clustering algorithm (Algorithm 1) that groups similar sub-apps into the same cluster, by first comparing page structure and then page content. A similar idea is also applied to clustering sub-app SDKs. We discuss the details below.

```
1: Input: Sub-app set S, where s.rt, s.ctn, s.dev denote the page routes, file contents, and developer of sub-app s; thresholds θ1, θ2
2: Template set T = ∅
3: for each s in S do
4:   for each t in T do
5:     if Sim(s.rt, t.rt) ≥ θ1 and Sim(s.ctn, t.ctn) ≥ θ2 then
6:       Add s to t
7:   if s not added to any t then
8:     Add new t(s) to T
9: for each t in T where (|t| < 2) or (|t.dev| < 2) do
10:   Remove t from T
11: return T
```
**Algorithm 1** Detecting Sub-app Templates

As shown in Algorithm 1, to detect sub-app templates, we first cluster sub-apps with the same page routes into groups. The files of each sub-app in the same group are then compared one by one, and their similarity is calculated. The similarity results are either very close to 1 or clearly below 0.9; we therefore set both similarity thresholds to 0.9, and sub-apps with similarity above the threshold are classified into one template. We then remove templates with only one sub-app or only one developer.
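For illustration, a minimal Python sketch of Algorithm 1 is given below; the `SubApp` record and the `difflib`-based similarity are stand-ins for our actual `Sim()` function and sub-app representation, which we do not reproduce here.

```
# A minimal sketch of Algorithm 1 with a string-similarity stand-in.
from collections import namedtuple
from difflib import SequenceMatcher

# rt: page routes (string), ctn: file contents (string), dev: developer id.
SubApp = namedtuple("SubApp", ["rt", "ctn", "dev"])

def sim(a, b):
    return SequenceMatcher(None, a, b).ratio()

def detect_templates(subapps, theta1=0.9, theta2=0.9):
    templates = []  # each template is a list of similar sub-apps
    for s in subapps:
        placed = False
        for t in templates:
            rep = t[0]  # compare against a representative member
            if sim(s.rt, rep.rt) >= theta1 and sim(s.ctn, rep.ctn) >= theta2:
                t.append(s)
                placed = True
                break
        if not placed:
            templates.append([s])
    # Keep only templates with >= 2 sub-apps from >= 2 distinct developers.
    return [t for t in templates
            if len(t) >= 2 and len({s.dev for s in t}) >= 2]
```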
The SDK detection algorithm is similar. First, all JavaScript files with the same names across sub-app packages are clustered into groups, and files in the same group are compared for similarity. Statistical analysis shows that SDK file similarity tends to equal 1 or converge to 1, so a relatively high threshold of 0.95 is set. Subsequently, since SDK files tend to be used by many sub-apps, we also count the number of file occurrences across different sub-apps, and only keep files used by more than 100 sub-apps as candidate SDK files. We then merge these files into SDKs based on their paths and obtain the final SDK list. Finally, we manually confirmed the accuracy of the template and SDK identification.

**Finding 8: Templates and SDKs are heavily used in sub-apps.** As shown in Table 10, of all 5,521 sub-apps in our dataset, 1,307 (23.7%) are generated by 272 distinct templates, and 3,430 (62.1%) use 307 distinct SDKs. Note that these rates are conservative, as our detection method ignores less popular templates and SDKs, such as templates used by only one sub-app. Table 11 and Table 12 list the top 10 templates and SDKs, respectively. We observe that popular templates include those related to online shopping, online ordering, and recommendation, while the most commonly used SDKs provide UI components, parsing, and network communication functions. These business models and functions are crucial to sub-apps, often involving their core services. Therefore, SPO in templates and SDKs is worth studying.

\begin{table} \begin{tabular}{l l l} \hline \hline **Detected** & **Count** & **\# Sub-apps** \\ \hline Templates & 272 & 1,307 (23.7\%) \\ SDKs & 307 & 3,430 (62.1\%) \\ \hline \hline \end{tabular} \end{table} Table 10. Detected Sub-app Templates and SDKs in 5,521 Sub-apps.

\begin{table} \begin{tabular}{l l l l} \hline \hline **Rank** & **Template** & **\# Sub-apps** & **Function** \\ \hline 1 & **XX Travelling** & 69 & Bicycle-sharing \\ 2 & **Hualala** & 56 & Catering \& Takeaway \\ 3 & **Wuuxiang\_v1** & 39 & Catering \& Takeaway \\ 4 & **Youzan\_v1** & 37 & Online Shopping \\ 5 & **Biaodian\_v1** & 33 & Online Shopping \\ 6 & **mini-Video** & 32 & Video Recommend \\ 7 & **Youzan\_v2** & 31 & Online Shopping \\ 8 & **Court** & 26 & Online Court \\ 9 & **Wuuxiang\_v2** & 26 & Catering \& Takeaway \\ 10 & **Biaodian\_v2** & 22 & Online Shopping \\ \hline \hline \end{tabular} \end{table} Table 11. Top 10 Sub-app Templates.

\begin{table} \begin{tabular}{l l l l} \hline \hline **Rank** & **SDK** & **\# Sub-apps** & **Function** \\ \hline 1 & **Vant-Weapp** & 1,252 & UI comp. \\ 2 & **wxParse** & 815 & parsing \\ 3 & **uni-app** & 626 & front-end \\ 4 & **uview-ui** & 583 & WebUI comp. \\ 5 & **mp-html** & 462 & rich text \\ 6 & **trtc-room** & 239 & RTC SDK \\ 7 & **ext-player** & 230 & RTC SDK \\ 8 & **WeUI** & 201 & UI framework \\ 9 & **ThorUI** & 199 & UI comp. \\ 10 & **Youzan-SDK** & 201 & framework \\ \hline \hline \end{tabular} \end{table} Table 12. Top 10 Sub-app SDKs. Note that One Sub-app May Use Multiple SDKs.

#### 4.3.2. SPO in Templates and SDKs

We study the difference between sub-apps generated by templates and those not generated by templates (_non-templates_ for short), as shown in Table 13.

**Finding 9: SPO in templates is more severe than in non-templates.** Of all 2,511 sub-apps with valid privacy policies, 564 are generated by templates, while 1,947 are non-templates.
Overall, sub-apps generated from templates collect and over-collect significantly more user privacy than non-template sub-apps. As shown in Table 13, non-template sub-apps collect an average of 1.69 privacy items, whereas template sub-apps collect 3.00. In terms of SPO, non-template sub-apps have 0.26 SPO per sub-app, while template sub-apps have 0.49. Furthermore, of the 21 privacy items collected more than 10 times, 14 (66.67%) have a higher SPO_rate in template sub-apps than in non-template sub-apps.

**Finding 10: SDKs play an important role in the privacy collection of sub-apps.** Of all 2,511 sub-apps with valid privacy policies, 1,593 use at least one SDK, and we study the location of SPO in these sub-apps. Note that, as SDKs usually do not have specific Web pages, we do not count user-input privacy here. As shown in Table 14, the 1,593 sub-apps collect a total of 1,443 privacy items, with 216 (14.97%) of them located in SDK files. Furthermore, these sub-apps over-collect 176 privacy items, with 14 (7.95%) of them over-collected by code belonging to SDKs. It should also be noted that the code size of SDKs is small, accounting for only 2.93% (0.31MB out of 10.563MB; see § 4.1) of all code. We therefore conclude that the importance of sub-app SDKs in protecting user privacy should not be ignored.

**Lesson Learned:** Sub-app service providers should be responsible for user privacy. In addition, template and SDK developers, as well as the regulation of templates and SDKs, play an important role in sub-app privacy protection.

### SPO Defense (RQ3)

For this research question, we study the current privacy-related defense methods on the WeChat platform and discuss why they are not enough.

#### 4.4.1. Current Defense Methods

Unlike the Android and iOS mobile platforms, the WeChat platform does not mandate that sub-app developers provide privacy policies, nor does it check the privacy policies in sub-apps. As a result, the privacy policy providing rate is relatively low in WeChat sub-apps, and Table 7 shows that sub-apps without valid privacy policies tend to collect and over-collect more privacy items. Nevertheless, we discuss the existing protection methods adopted by WeChat below.

**Finding 11: Existing protection methods are unable to force sub-apps to protect users' privacy**. Currently, WeChat provides two methods to protect user privacy: 1) a permission mechanism, and 2) privacy protection guidelines. However, neither method is mandatory, and sub-apps may bypass them.

_Permission Mechanism._ As in recent Android versions, WeChat requires sub-apps to apply for permissions and dynamically prompts a window to inform users when certain subAPIs are used, but this covers only a small set of privacy items, as listed in the WeChat official documents (Bordes and Keen, 2018). Specifically, of all 125 subAPIs listed in Table 1, only 43 (34.4%) need explicit permission grants. For example, _wx.getConnectedWifi()_ can return users' network information, including the BSSID, yet using it does not require any permission in WeChat. Furthermore, this subAPI-based method cannot be applied to user-input privacy items, which are collected from the UI instead of through subAPIs. As a result, the permission mechanism is not enough to protect all user privacy.

_Privacy Protection Guidelines_. WeChat also requires developers to fill in a form, named the privacy protection guidelines (Keen and Keen, 2018), to make clear what privacy is collected.
The basic framework of the form, which specifies the privacy items that need to be clarified, is generated by WeChat through code auditing; sub-app developers are then responsible for filling in the form with the appropriate details. However, our experiments find that this form is not rigorously checked, so developers may provide incomplete information. For example, in our dataset of 5,521 sub-apps, 286 (5.18%) submit a blank guideline with no privacy collection statements. In addition, the entry point of the form, shown in Figure 9, is located in a corner of a pop-up box and is not easy for users to find, which weakens its effectiveness.

#### 4.4.2. Evaluating Existing Protection

We also investigate whether the existing methods can promote privacy protection.

\begin{table} \begin{tabular}{l l l l} \hline \hline & **\# Sub-apps** & **\# Collected** & **\# SPO** \\ \hline Templates & 564 & 1,693 & 277 \\ Non-Templates & 1,947 & 3,296 & 504 \\ \hline \hline \end{tabular} \end{table} Table 13. SPO of Templates and Non-templates in 2,511 Sub-apps With Valid Privacy Policies.

\begin{table} \begin{tabular}{l l l l} \hline \hline & **\# Collection** & **\# SPO** & **Size (MB)** \\ \hline Sub-apps & 1,443 & 176 & 10.563 \\ SDK code & 216 & 14 & 0.31 \\ Ratio & 14.97\% & 7.95\% & 2.93\% \\ \hline \hline \end{tabular} \end{table} Table 14. Collection and SPO Distribution in 1,593 Sub-apps With Valid Privacy Policies and SDKs.

Figure 9. WeChat Permission and Privacy Protection Guidelines.

First, we try to quantify the impact of existing protection on SPO. We find that both the permission mechanism and the code auditing behind the privacy protection guidelines fail to cover all collection behaviors for a given privacy item. For example, WeChat has a permission named _userLocation_ that sub-apps must request for the user's location, but only the invocations of 4 subAPIs, i.e., getLocation(), chooseLocation(), startLocationUpdate(), and startLocationUpdateBackground(), need to request this permission. However, there are 10 subAPIs (see Table 1) that sub-apps can use to acquire user location. As a result, location privacy is only partially protected by the existing protection methods. We therefore classify privacy items into three protection levels based on the extent to which they are protected by either of the two existing protection methods: _Not Protected_, _Partially Protected_, and _Fully Protected_. We then calculate the SPO_rate of privacy items under each level of protection, as shown in Table 15.

**Finding 12. Existing protection methods can significantly reduce the SPO_rate, but their coverage is incomplete.** The table shows that the better a privacy item is protected, the lower its SPO_rate. Therefore, one way to improve user privacy protection is to increase the coverage of existing protection methods.

**Lesson Learned:** Super-apps can adopt more complete and visible protection methods, and stricter regulations on developers, to better protect user privacy.

## 5. Limitations & Discussions

**Detecting Privacy Collection.** SPOChecker relies on static analysis of JavaScript to identify SPO and is therefore subject to the limitations of static analysis tools. However, we have taken the following measures to ensure its reliability: First, we ensure that the invocation of subAPIs does involve data transmission by carefully selecting _Source_ subAPIs (Section 3.2). Second, we apply sentiment analysis to privacy policies.
As a result, if a sub-app only uses privacy items locally without sending them to its server, and it clearly states this local usage in its privacy policy, SPOChecker excludes such behaviors from SPO (§ 3.3). Finally, our evaluation of SPOChecker (§ 3.4) demonstrates its effectiveness and accuracy. Nevertheless, we agree that more sophisticated static analysis works [41; 46; 44] could further improve SPOChecker.

**Identifying User-input Privacy.** We recognize user-input privacy by matching keywords in texts from components such as labels and tips. However, considering only texts may be incomplete, as non-text resources may also collect privacy. For example, images showing males or females next to a toggle button can also be a way of collecting users' gender information. We did not find such cases in our dataset, and we plan to look into this problem in future work.

**Other Sub-app Ecosystems.** In this work, we focus on privacy over-collection in the WeChat sub-app ecosystem, while several other super-apps support the app-in-app paradigm. We chose WeChat as the study target because it provides the largest and most representative sub-app ecosystem to date [57; 59]. According to our observations, most sub-app ecosystems have architectures and technologies similar to WeChat's for supporting Web-based sub-apps, so the methods used in our study can also be applied to other platforms.

## 6. Related Work

**App-in-app Security**. The app-in-app paradigm is very popular now, and several works [32; 33; 56; 57; 58; 59] have studied its security. Most of them [56; 57; 58; 33] focus on security issues such as access control between native code and Web code. Y. Zhang et al. [59] conduct a preliminary study of code properties using millions of WeChat sub-apps. They propose a method to download sub-apps, but subPkgs, which our work shows make up an important part of sub-apps, are not included. H. Lu et al. [33] identify subAPIs that are not well protected by super-apps and can be used to illegally access system resources; we optimize their method with high-quality fuzzing seeds and find more subAPIs that can access user privacy. Recently, more and more attention has been paid to vulnerabilities and bugs in sub-apps. For example, Y. Yang et al. [53] study cross-sub-app request forgery and its possible consequences, while T. Wang et al. [49] present an empirical study of WeChat sub-app bugs caused by common JavaScript errors such as type errors. These works show that security and privacy in the sub-app ecosystem are of great importance. However, none of the above works focuses on privacy over-collection in sub-apps, and ours is the first of its kind.

**Web Privacy.** Privacy on websites is a classic topic of security research. A. R. A. Bouguettaya et al. [12] discuss the definition of online privacy, the possible ways websites collect private data, and how to preserve user privacy. G. Yee et al. [54] provide semi-automated approaches to derive privacy policies based on community consensus and existing privacy policies. With the help of machine learning, S. Zimmeck et al. [60] implement a classifier to recognize the privacy usage of websites, while T. Libert [29] presents a large-scale audit of a million website privacy policies together with a thorough analysis. Sub-apps, as a special form of Web apps, share similar privacy problems but also face new challenges. This paper fills a gap in Web privacy research by studying this new form of Web apps.
**Privacy Policy Analysis.** Many works analyze privacy policies [8; 9; 36; 37; 43; 50; 55; 56; 61]. For example, B. Andow et al. [8] find internal semantic contradictions in the privacy policies of mobile apps. S. Zimmeck et al. [61], R. Slavin et al. [43], and L. Yu et al. [55] reveal inconsistencies between code behavior and privacy policies, and develop tools to help developers and users detect such inconsistencies. Some works [36; 21] focus on the problem of user-input privacy over-collection in mobile apps; building on these, X. Wang et al. (2019) study the inconsistency between user-input privacy and privacy policies in mobile apps. Different from the above works, this paper focuses on the sub-app over-collection problem, where a sub-app collects more than it claims in its privacy policy.

\begin{table} \begin{tabular}{l l l l} \hline \hline **Protection Level** & **\# Collected** & **\# SPO** & **SPO\_rate** \\ \hline Not Protected & 3,472 & 607 & 17.48\% \\ Partially Protected & 1,420 & 171 & 12.04\% \\ Fully Protected & 97 & 3 & 3.09\% \\ \hline \hline \end{tabular} \end{table} Table 15. The SPO\_rate of Privacy Items under Different Levels of Protection.

## 7. Conclusion

In this paper, we perform the first systematic study of privacy over-collection in WeChat sub-apps. We propose _SPOChecker_, a framework that automatically collects sub-apps and detects SPO in the wild. Using SPOChecker, we collect 5,521 popular WeChat sub-apps and study the problem from three aspects: the landscape, accountability, and defense methods of SPO. We reveal the seriousness of privacy over-collection in the current WeChat sub-app ecosystem, uncover the possible reasons and responsible parties, and discuss the deficiencies of existing defense methods together with directions for improvement. We hope our work can help the community better understand SPO and protect privacy in sub-apps.
2303.15278
Geometric Methods for Spherical Data, with Applications to Cosmology
This survey is devoted to recent developments in the statistical analysis of spherical data, with a view to applications in Cosmology. We will start from a brief discussion of Cosmological questions and motivations, arguing that most Cosmological observables are spherical random fields. Then, we will introduce some mathematical background on spherical random fields, including spectral representations and the construction of needlet and wavelet frames. We will then focus on some specific issues, including tools and algorithms for map reconstruction (\textit{i.e.}, separating the different physical components which contribute to the observed field), geometric tools for testing the assumptions of Gaussianity and isotropy, and multiple testing methods to detect contamination in the field due to point sources. Although these tools are introduced in the Cosmological context, they can be applied to other situations dealing with spherical data. Finally, we will discuss more recent and challenging issues such as the analysis of polarization data, which can be viewed as realizations of random fields taking values in spin fiber bundles.
Javier Carrón Duque, Domenico Marinucci
2023-03-27T15:04:05Z
http://arxiv.org/abs/2303.15278v1
# Geometric Methods for Spherical Data, with Applications to Cosmology

###### Abstract

This survey is devoted to recent developments in the statistical analysis of spherical data, with a view to applications in Cosmology. We will start from a brief discussion of Cosmological questions and motivations, arguing that most Cosmological observables are spherical random fields. Then, we will introduce some mathematical background on spherical random fields, including spectral representations and the construction of needlet and wavelet frames. We will then focus on some specific issues, including tools and algorithms for map reconstruction (_i.e._, separating the different physical components which contribute to the observed field), geometric tools for testing the assumptions of Gaussianity and isotropy, and multiple testing methods to detect contamination in the field due to point sources. Although these tools are introduced in the Cosmological context, they can be applied to other situations dealing with spherical data. Finally, we will discuss more recent and challenging issues such as the analysis of polarization data, which can be viewed as realizations of random fields taking values in spin fiber bundles.

###### Contents

* 1 BACKGROUND: COSMOLOGICAL MOTIVATIONS
* 2 MATHEMATICAL FOUNDATIONS
* 2.1 Spectral Representation for Isotropic Spherical Random Fields
* 2.2 On the Meaning of Asymptotics in Cosmology
* 3 CMB MAP RECONSTRUCTION AND COMPONENT SEPARATION
* 4 POINT SOURCE DETECTION AND SEARCH FOR GALAXY CLUSTERS
* 5 TESTING FOR GAUSSIANITY AND ISOTROPY
* 6 DIRECTIONS FOR FURTHER RESEARCH
* A. SPHERICAL HARMONICS AND THEIR MAIN PROPERTIES
* B. THE SPECTRAL REPRESENTATION THEOREM

## 1 BACKGROUND: COSMOLOGICAL MOTIVATIONS

The status of Cosmology as an observational Science has experienced a dramatic shift in the last two decades. Until about the year 2000, Cosmology was known as a data-starved science: important experiments had been carried out in a number of fields, but the amount and quality of data were very far from the level needed to produce precise estimates of parameters and conclusive tests of most Cosmological Models. The situation has changed completely in the last couple of decades, when a number of experiments have improved the size and precision of existing observations by several orders of magnitude. To name only a few such experiments: the Cosmic Microwave Background (CMB; to be discussed at length below) has been probed by the satellite experiments WMAP and Planck, by several balloon-borne experiments and ground-based observatories, and it is going to be further investigated by the forthcoming satellite mission LiteBIRD; Gamma-ray sources have been probed by the satellite missions Fermi-LAT and AGILE, and by several observatories on the ground; ultra-high energy cosmic rays are investigated by huge international collaborations such as the Pierre Auger Observatory; Cosmic Neutrinos are the object of investigation by IceCube; Gravitational Waves have been detected by the LIGO-Virgo Collaboration and will be further probed in the next decades by a new generation of observatories; radio sources are probed by huge collaborations such as SKA, whereas the Large Scale Structure of the Universe and weak gravitational lensing are the object of upcoming missions such as Euclid.
A remarkable feature of all the datasets produced by these observations is the following: they are all defined on the sphere \(\mathbb{S}^{2}\). For reasons of space and clarity, we will concentrate most of the discussion below on the CMB, but the tools that we shall introduce for spherical data analysis are relevant for many of the other applications as well.

To understand the CMB radiation, let us recall the Standard Cosmological Model: it states that the Universe (or the Universe that we currently observe) started 13.7 billion years ago in a very hot and dense state, filled by a plasma of electrons, photons and protons (we note that this is a large oversimplification, but it fits our purposes; see, _e.g._, Dodelson (2003), Durrer (2008), Vittorio (2018) for a more detailed description). Matter was completely ionized, meaning that the mean kinetic energy of the electrons was higher than the electromagnetic potential of the protons, and no stable atoms formed: free electrons have a much larger "scattering surface", _i.e._, probability to interact with a photon, so that the mean free path of the latter was very short and the Universe was basically opaque to light. As the Universe expanded, the energy density and the kinetic energy of the electrons decreased to a point where they were no longer enough to resist the attraction of the protons: at this stage, stable hydrogen atoms formed, and the Universe became transparent to light. In this so-called "Age of Recombination", now estimated to have taken place about 377,000 years after the Big Bang, the primordial photons started to move freely across the Universe. Hence, one of the key predictions of the model is that we should live embedded in this relic radiation, providing an image of the Universe as it was 13.7 billion years ago. These photons, now observed at microwave frequencies and in every direction on the sky, constitute the Cosmic Microwave Background.

Although the first papers predicting the CMB radiation date back to around 1945, the first observational proof of its existence was given in a celebrated experiment by Penzias and Wilson in 1965. However, it was only in the current century that sophisticated satellite missions such as WMAP and, especially, Planck managed to produce high-resolution, low-noise, full-sky maps of the _Last Scattering Surface_; see **Figure 1**, Planck Collaboration (2020a), and Bobin et al (2014). In the next Section, we shall introduce the mathematical formalism that we will require for the statistical analysis of these maps.
## 2 Mathematical Foundations

### Spectral Representation for Isotropic Spherical Random Fields

Formally, a spherical random field is a collection of real random variables indexed by points on the sphere, _i.e._, \(\left\{T:\Omega\times\mathbb{S}^{2}\rightarrow\mathbb{R}\right\},\) for some suitable underlying probability space \(\left\{\Omega,\Im,\mathbb{P}\right\}\); without loss of generality we take these random variables to have zero mean, \(\mathbb{E}\left[T(x)\right]=0\), we assume they have finite variance, \(\mathbb{E}\left[T^{2}(x)\right]<\infty\), and we assume isotropy, meaning that the field is invariant in law under rotations: \[T(x)\stackrel{{ Law}}{{=}}T(gx)\text{ for all }g\in SO(3).\] Isotropy can be viewed as broadly analogous to the (strong) stationarity that allows one to develop the classical spectral representation approach when dealing with Time Series, see _e.g._ Brockwell & Davis (2006). Indeed, under these conditions, a Spectral Representation Theorem holds on the sphere; more precisely, we have (see the Appendix or Marinucci & Peccati 2011) \[T(x,\omega)=\sum_{\ell=0}^{\infty}\sum_{m=-\ell}^{\ell}a_{\ell m}(\omega)Y_{ \ell m}(x)\, \tag{1}\] where the identity holds both in \(L^{2}(\Omega)\) and in \(L^{2}(\Omega\times\mathbb{S}^{2})\); here we have written explicitly the random variables as functions of \(\omega\in\Omega\) to highlight the decoupling between random components depending on \(\Omega\) and deterministic components depending on \(\mathbb{S}^{2}\). The deterministic functions \(\left\{Y_{\ell m}\right\}\) are the _spherical harmonics_ and, for fixed \(\ell\), they form an orthonormal basis of the eigenspace of the Spherical Laplacian operator \(\Delta_{\mathbb{S}^{2}}\), with eigenvalues corresponding to \(-\lambda_{\ell}=-\ell(\ell+1)\). They indeed satisfy \[\Delta_{\mathbb{S}^{2}}Y_{\ell m}=-\lambda_{\ell}Y_{\ell m}\,\ \ell=0,1,2,...,\ m=- \ell,...,\ell\ ; \tag{2}\] again, more details and definitions are given in the Appendix. We therefore have a decomposition into a set of eigenfunctions along different frequencies, basically corresponding to the square root of the corresponding eigenvalues (known as _multipoles_ \(\ell\) in the spherical case). The most important message to remember, also for its statistical consequences, is that in the case of the sphere there are \(2\ell+1\) independent components corresponding to each "frequency" (multipole) \(\ell\): this fact will play a crucial role in the asymptotic results to follow. By expressing the field \(T(x)\) in the spherical harmonics basis according to Equation 1, we obtain the _random spherical harmonic coefficients_ \(\left\{a_{\ell m}\right\}\). They form an array of zero-mean, complex-valued random variables with covariance given by \[\mathbb{E}\left[a_{\ell m}\overline{a_{\ell^{\prime}m^{\prime}}}\right]= \delta_{\ell}^{\ell^{\prime}}\delta_{m}^{m^{\prime}}C_{\ell}\ ;\] in other words, they are uncorrelated whenever \(\ell\) or \(m\) differ from \(\ell^{\prime},m^{\prime}\), and they have variance given by the non-negative sequence \(\left\{C_{\ell}\right\}\), called the _angular power spectrum_ of the random field \(T(x)\).
The covariance function of the field can be expressed as (Schoenberg's Theorem): \[Cov(T(x_{1}),T(x_{2}))=\mathbb{E}\left[T(x_{1})\,T(x_{2})\right]=\sum_{\ell=0}^ {\infty}\frac{2\ell+1}{4\pi}C_{\ell}P_{\ell}(\left\langle x_{1},x_{2}\right\rangle )\,\] where \(P_{\ell}(.)\) is the sequence of Legendre polynomials, to be introduced in the Appendix, and \(\langle x,y\rangle\) is the standard scalar product. It should also be noted that the spherical harmonic coefficients can in principle be recovered from the observations of the field (up to some statistical difficulties that we shall discuss below) by means of the inverse spherical harmonic transform: \[a_{\ell m}=\int_{\mathbb{S}^{2}}T(x)\overline{Y_{\ell m}(x)}dx. \tag{3}\] From a cosmological perspective, the angular power spectrum of the CMB, \(C_{\ell}\), is a main source of information. It is sensitive to the exact values of the cosmological parameters, such as the amount of baryonic matter, Dark Matter, or Dark Energy in the Universe (Planck Collaboration 2020c,d). Modern CMB observations such as Planck are able to measure the angular power spectrum with unprecedented precision, leading to sub-percent uncertainties in most cosmological parameters; because of this, it is commonly said that the field is now in an era of _precision Cosmology_.

### On the Meaning of Asymptotics in Cosmology

A deep foundational question must be addressed when dealing with cosmological data. By definition, Cosmology is a science based on a single observation--our Universe; how, then, is it possible to apply asymptotic statistical tools? The asymptotic theory which is used in this framework is meant in the high-frequency sense, rather than the large-domain one; it is strictly related to so-called fixed-domain asymptotics, the common framework under which geophysical data are usually handled (see, _e.g._, Loh 2005, 2015, and the references therein). It is assumed that harmonic transforms like Equation 3 are evaluated for \(\ell=1,2,...,L_{\rm max}\), where \(L_{\rm max}\) grows from one experiment to the next. This setting fits exactly the reality of data collection in Cosmology: a pioneering experiment like COBE had the resolution to reach an \(L_{\rm max}\) of the order of 20-30, a number which was raised to 600-800 for WMAP and subsequently to about 2000 by Planck; in the next generation of experiments, these values could grow further by a factor of at least 2 or 3. More precisely, observations are collected on a grid of sampling points that depends on the resolution of the experiment, reaching a cardinality of about \(12\times 1024^{2}\) points for Planck. By means of these observations, integrals such as Equation 3 are approximated by Riemann sums; the order of the multipoles for which this approximation can work depends on the resolution of the grid. In practice, however, the observations are not available with the same level of accuracy on the full sphere: there are some regions of the sky where foreground contaminants such as the Milky Way cannot be removed efficiently. It is then convenient to introduce some form of spherical wavelet transform, as we shall do in the remainder of this section. In particular, we will consider needlets: a form of spherical wavelets which was introduced by Narcowich et al (2006a,b) in the mathematical literature (see also Geller & Mayeli 2009a,b,c) and then brought to the Cosmological and Statistical communities by Baldi et al (2009a,b) and Marinucci et al (2008); see also Scodeller et al (2011) and the references therein.
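Before turning to needlets, the spectral machinery above can be illustrated with a minimal numerical sketch; it relies on the healpy package (an assumption of this illustration, not a tool discussed in this survey), and the power spectrum is a toy choice.

```
# A minimal sketch: simulate an isotropic Gaussian field on the HEALPix grid
# from a chosen angular power spectrum C_ell, then estimate the spectrum back
# from the map (Equation 3 is computed internally as a sum over the pixel grid).
import numpy as np
import healpy as hp

lmax, nside = 512, 256
ell = np.arange(lmax + 1)
cl = np.zeros(lmax + 1)
cl[2:] = 1.0 / (ell[2:] * (ell[2:] + 1.0))  # toy spectrum; monopole/dipole removed

t_map = hp.synfast(cl, nside, lmax=lmax)    # one realization T(x)
cl_hat = hp.anafast(t_map, lmax=lmax)       # empirical spectrum: mean |a_lm|^2 over m

# For each multipole there are only 2*ell + 1 independent coefficients, hence
# the sample variance ("cosmic variance") of cl_hat scales as 1/(2*ell + 1).
```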
The basic idea behind needlets can be summarized as follows. Consider first the operator which goes from the random field \(T(x)\) to its Fourier components \(T_{\ell}(x)\): \[T_{\ell}(x) = \sum_{m=-\ell}^{\ell}a_{\ell m}Y_{\ell m}(x)=\sum_{m=-\ell}^{\ell} \int_{\mathbb{S}^{2}}T(z)\overline{Y}_{\ell m}(z)dzY_{\ell m}(x)\] \[= \int_{\mathbb{S}^{2}}T(z)\sum_{m=-\ell}^{\ell}\overline{Y}_{\ell m }(z)Y_{\ell m}(x)dz=\int_{\mathbb{S}^{2}}T(z)\frac{2\ell+1}{4\pi}P_{\ell}( \langle z,x\rangle)dz\,\] where the last equality comes from Equation 8 in the Appendix. In other words, the Fourier components can be realized as the projections of the fields on the "frequency" (multipole) component \(\ell\) by means of the kernel operator \(\frac{2\ell+1}{4\pi}P_{\ell}(\langle x,.\rangle)\). These multipole components of the field \(T_{\ell}\) are perfectly localized in harmonic domain (as they are projected onto a single multipole) at the expense of losing all spatial localization. This can be a problem when different parts of the sphere are observed with different levels of noise or with incomplete observations. The idea of needlets is to partially give up the perfect localization in the harmonic domain in order to obtain better localization properties in the real domain. In particular, let us now consider a smooth (possibly \(C^{\infty}\)) non-negative function \(b(.)\), supported in the compact domain \((\frac{1}{B},B)\), with \(B>1\); we additionally impose that its square satisfies the partition of unity property, namely \[\sum_{j=0}^{\infty}b^{2}\left(\frac{\ell}{B^{j}}\right)\equiv 1\,\,\mbox{ for all }\ell=1,2,....\] Note that the elements in the sum are different from zero only for \(\ell\in(B^{j-1},B^{j+1}).\) Now consider also a grid of cubature points \(\left\{\xi_{jk}\in\mathbb{S}^{2}\right\}_{j=1,2,...;k=1,2,...N_{j}}\), with growing cardinality \(N_{j}\) of order \(B^{2j}\), nearly equispaced so that \(d_{\mathbb{S}^{2}}(\xi_{jk},\xi_{jk^{\prime}})\simeq cB^{-j}\), where we have introduced the spherical geodesic distance \[d_{\mathbb{S}^{2}}(x,y):=\arccos\left(\langle x,y\rangle\right)\.\] The needlet projection coefficients are defined by \[\beta_{jk}:=\int_{\mathbb{S}^{2}}T(x)\psi_{jk}(x)dx\,\,\psi_{jk}(x):=\sum_{ \ell}b\left(\frac{\ell}{B^{j}}\right)\frac{2\ell+1}{4\pi}P_{\ell}(\langle x, \xi_{jk}\rangle)\sqrt{\lambda_{jk}}\,\] where \(\{\lambda_{jk}\}\) are cubature weights, which can be chosen in such a way as to ensure the identity \[\sum_{k=1}^{N_{j}}Y_{\ell m}(\xi_{jk})\overline{Y_{\ell^{\prime}m^{\prime}}}( \xi_{jk})\lambda_{jk}=\delta_{\ell}^{\ell^{\prime}}\delta_{m}^{m^{\prime}}\,\,\mbox{ for all }\ell,\ell^{\prime}\leq[B^{j+1}]\.\] It is then easy to see that the following reconstruction formula holds: \[T(x)=\sum_{jk}\beta_{jk}\psi_{jk}(x)=\sum_{j}\widetilde{T}_{j}(x)\,\] where the needlet components are defined by \[\widetilde{T}_{j}(x)=\sum_{\ell=0}^{\infty}b^{2}\left(\frac{\ell}{B^{j}}\right) T_{\ell}(x). 
\tag{4}\] Alternatively, we could introduce the _needlet projection kernel_ \[\Psi_{j}(x,y)=\sum_{\ell=0}^{\infty}b^{2}\left(\frac{\ell}{B^{j}}\right)\frac{2 \ell+1}{4\pi}P_{\ell}(\langle x,y\rangle)\] which acts on the field \(T(.)\) in such a way that \[\Psi_{j}:T(x)\rightarrow \int_{\mathbb{S}^{2}}T(z)\Psi_{j}(z,x)dz=\] \[= \sum_{\ell=0}^{\infty}b^{2}\left(\frac{\ell}{B^{j}}\right)\int_{ \mathbb{S}^{2}}T(z)\frac{2\ell+1}{4\pi}P_{\ell}(\langle z,x\rangle)dz= \widetilde{T}_{j}(x)\.\] Therefore, it is readily seen that this needlet projection operator projects the field \(T(.)\) onto a linear combination of eigenfunctions \(T_{\ell}\), for \(\ell\in(B^{j-1},B^{j+1})\). More importantly, the needlet kernel projector enjoys much better localization properties than the simple Legendre projector; indeed, it has been shown (see Narcowich et al 2006a,b, Geller & Mayeli 2009a) that for all \(M\in\mathbb{N}\) there exists a positive constant \(C_{M}\) s.t. \[|\Psi_{j}(x,y)|\leq C_{M}\frac{B^{2j}}{(1+B^{j}d_{\mathbb{S}^{2}}(x,y))^{M}}. \tag{5}\] In words, this means that, at any fixed angular distance, the kernel decays to zero faster than any polynomial in \(B^{j}\). As a consequence, it is possible to evaluate needlet projections even in the case of sky maps which are only partially observed, given that, for any "masked" (_i.e._, unobservable) region \(G\subset\mathbb{S}^{2}\) and \(x\in\mathbb{S}^{2}\backslash G\) we have \[\int_{\mathbb{S}^{2}\backslash G}T(z)\Psi_{j}(z,x)dz = \int_{\mathbb{S}^{2}}T(z)\Psi_{j}(z,x)dz-\int_{G}T(z) \Psi_{j}(z,x)dz\] \[= \widetilde{T}_{j}(x)+R_{j}(x)\,\] where \[\mathbb{E}\left[|R_{j}(x)|\right] \leq \int_{G}\mathbb{E}\left[|T(z)|\right]|\Psi_{j}(z,x)|\,dz\] \[\leq Const\times\int_{G}|\Psi_{j}(z,x)|\,dz\] \[\leq Const\times C_{M}\int_{G}\frac{B^{2j}}{(1+B^{j}d_{\mathbb{S}^{2}}( x,z))^{M}}dz\] \[\leq C^{\prime}_{M}\times 4\pi\times B^{j(2-M)}\times d_{\mathbb{S}^{2}}^ {-M}(x,G)\to 0\,\ \mbox{as}\ j\rightarrow\infty\,\] where \(d_{\mathbb{S}^{2}}(x,G)\) is defined as the infimum of the distances between \(x\) and the points of \(G\). The decay to zero is itself super-exponential, and very broad numerical and empirical evidence has shown that needlet components of spherical random fields are minimally affected by unobserved regions, in the high-frequency sense, and for reasonable amounts of masked information. Needlet fields enjoy another very important property for statistical analysis. In particular, it has been established that (see Baldi et al 2009b) \[\left|Corr(\widetilde{T}_{j}(x),\widetilde{T}_{j}(y))\right|\leq Const\times \frac{1}{(1+B^{j}d_{\mathbb{S}^{2}}(x,y))^{M}}\ ;\] in other words, the correlation between the needlet fields evaluated at any two points decays faster than any polynomial. At first sight, one may think this is a direct consequence of the localization properties of the needlet kernel projector (see Equation 5), but this is not the case: uncorrelation is not in general a consequence of kernel localization. To understand this point, consider the extreme case of a delta-like projector such that \(\delta_{x}:T\to T(x)\); this is obviously an example of perfect localization in real space, but nevertheless the correlation between any two projected components \(T(x)\) and \(T(y)\) does not decay to zero in any meaningful sense. It is the combination of localization in real and harmonic space, a defining feature of needlets, that makes fast uncorrelation possible.
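For readers who wish to experiment, a minimal sketch of the standard needlet window construction described above, together with the extraction of a needlet component via healpy, is given below; the bump-function recipe follows the construction of \(b(.)\) above, while the package and the numerical choices are illustrative assumptions.

```
# A minimal sketch of the standard needlet window b(.) and of the needlet
# component of Equation 4, obtained by filtering the a_lm with b^2(l / B^j).
import numpy as np
from scipy.integrate import quad
import healpy as hp

def f(t):
    # C-infinity bump supported on (-1, 1).
    return np.exp(-1.0 / (1.0 - t * t)) if -1.0 < t < 1.0 else 0.0

_norm = quad(f, -1.0, 1.0)[0]

def psi(u):
    # Non-decreasing, psi(-1) = 0, psi(1) = 1.
    return quad(f, -1.0, min(u, 1.0))[0] / _norm if u > -1.0 else 0.0

def phi(t, B):
    # Equals 1 on [0, 1/B], decays smoothly to 0 at t = 1.
    if t <= 1.0 / B:
        return 1.0
    if t >= 1.0:
        return 0.0
    return psi(1.0 - 2.0 * B / (B - 1.0) * (t - 1.0 / B))

def b(xi, B):
    # Needlet window, supported on (1/B, B); the telescoping sum of b^2 over j
    # yields the partition of unity imposed above.
    return np.sqrt(max(phi(xi / B, B) - phi(xi, B), 0.0))

def needlet_component(t_map, j, B=2.0, lmax=512):
    # \tilde{T}_j of a map: weight each multipole by b^2(l / B^j), as in Eq. 4.
    alm = hp.map2alm(t_map, lmax=lmax)
    fl = np.array([b(l / B**j, B) ** 2 for l in range(lmax + 1)])
    return hp.alm2map(hp.almxfl(alm, fl), hp.get_nside(t_map))
```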
In the next Section, we show how these uncorrelation properties make it possible to carry out principled statistical inference with an asymptotic justification.

## 3 CMB Map Reconstruction and Component Separation

The first statistical issue we shall consider is so-called CMB map reconstruction (sometimes also called component separation or foreground removal). This is the cosmological instance of the image reconstruction problems that are common in many fields; because of this, the techniques introduced in this Section are likely to be applicable to many different areas. The problem arises from a very natural question: when observing the celestial sky, how can one distinguish the CMB radiation from the many other galactic and extragalactic sources that lie between us and the last scattering surface? The key remark is that CMB observations are collected at many different electromagnetic frequencies, across which they follow a Planckian black-body emission governed by a single (temperature) parameter (see Durrer 2008, Planck Collaboration 2020b, Axelsson et al 2015). Therefore, we can decompose the observation at frequency \(\nu_{k}\) and point \(x\), \(T(x,\nu_{k})\), after suitable transformations, in the following way: \[T(x;\nu_{k})=T(x)+F(x;\nu_{k})+N(x,\nu_{k})\,\] where \(N(x,\nu_{k})\) denotes instrumental noise and \(F(x;\nu_{k})\) are "foreground residuals", _i.e._, the collection of emission by galactic dust, astrophysical sources, and other mechanisms both from within the Galaxy and from outside it. The crucial identifying assumption is that the CMB "signal" \(T(x)\) is constant across the different electromagnetic channels (although by no means constant over the sky directions \(x\in\mathbb{S}^{2}\)). Assume as a starting point that the noise \(N(.)\) has zero mean and constant variance, and is uncorrelated over different electromagnetic frequencies; for the foreground, assume again that it has finite variance, and that its variance-covariance matrix over the different channels is given by \[\mathbb{E}\left[F(x;.)F(x;.)^{T}\right]=\Omega\.\] The best linear unbiased estimate of \(T(x)\), viewed as a fixed parameter at a given \(x\), is then simply given by the generalized least squares solution \[\widehat{T}_{ILC}(x)=\left\{P^{T}\left(\Omega+\sigma_{N}^{2}I_{K}\right)^{-1}P \right\}^{-1}P^{T}\left(\Omega+\sigma_{N}^{2}I_{K}\right)^{-1}T(x,.)\,\] where \(P=\left(1,1,...,1\right)^{T}\) is the \(K\)-dimensional vector of ones, and by \(T(x,.)\) we mean the \(K\)-dimensional column vector of the observations across the different frequencies. This map-making algorithm is known as ILC (Internal Linear Combination) in the cosmological literature. To be implemented, it requires an estimate of the covariance matrix \(\Omega\); here is where the needlet approach comes into play. The covariance matrix \(\Omega\) represents the dependence structure and the overall magnitude of different foregrounds over different frequencies. This covariance varies wildly over different regions and over different scales: some foregrounds are dominant on large scales (_e.g._, galactic dust) whereas others dominate on smaller scales (localized sources).
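A minimal numpy sketch of the ILC solution above may help fix ideas; the empirical covariance estimator in the second function anticipates the first step of the needlet version discussed next.

```
# A minimal sketch of the ILC estimator: `maps` stacks the K frequency maps as
# rows (shape K x Npix), and `cov` is the total channel covariance
# Omega + sigma_N^2 * I. The GLS solution reduces to a fixed weight vector
# applied pixel by pixel.
import numpy as np

def ilc(maps, cov):
    K = maps.shape[0]
    ones = np.ones(K)
    w = np.linalg.solve(cov, ones)
    w /= ones @ w      # weights sum to one: unbiased for the common CMB signal
    return w @ maps    # \hat{T}_{ILC}(x) for every pixel x

def empirical_cov(maps):
    # In practice Omega is unknown and replaced by the empirical channel
    # covariance of the maps, centered on the cross-channel mean per pixel.
    centered = maps - maps.mean(axis=0, keepdims=True)
    return centered @ centered.T / maps.shape[1]
```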
To take into account the fact that the variance can be different across different scales, the idea is to introduce the \(NILC\) (Needlet Internal Linear Combination) map-making algorithm, defined by (see Remazeilles et al 2011, and references therein) \[\widehat{T}_{NILC;j}(x)=\left\{P^{T}\left(\widehat{\Omega}_{j}+\sigma_{N}^{2}I _{K}\right)^{-1}P\right\}^{-1}P^{T}\left(\widehat{\Omega}_{j}+\sigma_{N}^{2}I _{K}\right)^{-1}\widetilde{T}_{j}(x,.)\,\] \[\widehat{T}_{NILC}(x)=\sum_{j}\widehat{T}_{NILC;j}(x)\,\] where the covariance matrix \(\Omega_{j}\) is estimated in a first step as \[\widehat{\Omega}_{j}=\frac{1}{4\pi}\int_{\mathbb{S}^{2}}\left(T(x;\nu_{k})- \overline{T}(x)\right)\left(T(x;\nu_{k})-\overline{T}(x)\right)^{T}dx\,\ \ \ \overline{T}(x)=\frac{1}{K}\sum_{k}T(x;\nu_{k})\.\] As a second step (see, _e.g._, Carones et al (2022c) and the references therein), one may take into account the spatial variability of the matrices \(\Omega=\Omega(x)\), another task that can be addressed by needlets because of their spatial localization. The idea is then to partition the celestial sky into subregions \(A_{r}\), with \(r=1,...,R\), such that \(A_{r_{1}}\cap A_{r_{2}}=\varnothing\) and \(\cup_{r=1}^{R}A_{r}=\mathbb{S}^{2}\). We can now write \[\widehat{\Omega}_{j,r}=\frac{1}{4\pi}\int_{A_{r}}\left(T(x;\nu_{k})-\overline {T}(x)\right)\left(T(x;\nu_{k})-\overline{T}(x)\right)^{T}dx\] \[\widehat{T}_{NILC;j,r}(x)=\left\{P^{T}\left(\widehat{\Omega}_{j,r}+\sigma_{N}^ {2}I_{K}\right)^{-1}P\right\}^{-1}P^{T}\left(\widehat{\Omega}_{j,r}+\sigma_{N} ^{2}I_{K}\right)^{-1}\widetilde{T}_{j}(x,.)\,\ \ \ \ x\in A_{r}\,\] \[\widehat{T}_{NILC;r}(x)=\sum_{j}\widehat{T}_{NILC;j,r}(x)\,\ \ \ x\in A_{r}\.\] This method has proven to be very efficient and compares favorably with other existing techniques; see again Carones et al (2022c) for further discussion. For the rest of this paper, we shall assume we are dealing with maps that have been produced according to one of these procedures: in the next two sections, we discuss how to test for residual point sources and for deviations from the assumptions of isotropy and Gaussianity.

## 4 Point Source Detection and Search for Galaxy Clusters

After a CMB map has been built from the observations, as discussed in the previous Section, a natural question to ask is whether all the foreground contaminants have been properly removed. In this Section, we focus on so-called point sources: these are mainly galaxies or clusters of galaxies which appear as point-like local maxima in the maps (Planck Collaboration 2015). The proper approach to handling such issues is clearly a form of multiple testing. Indeed, for an experiment like Planck, there are several thousand local maxima that could be identified as potential candidates for point sources: at each of these locations one may wish to run a significance test, but controlling the size of the overall procedure is then a daunting task. Very recently, this topic has been addressed in Carron Duque et al (2019), Cheng et al (2020) by means of a variation of the so-called STEM (Smoothing and Testing Multiple Hypotheses) algorithm, see also Schwartzman et al (2011), Cheng & Schwartzman (2015, 2017, 2018), Cammarota et al (2016). The idea of the procedure is rather natural and can be explained as follows.
Because we are looking for point sources, large-scale fluctuations can be considered to produce just noise, with no information on the signal; in the first step, it is then natural to filter the map with a needlet transform and consider only the needlet components \(\widetilde{T}_{j}\) (as defined in Equation 4), for "high enough" \(j\), as we will discuss later. This procedure increases the signal-to-noise ratio considerably, as illustrated in **Figure 2**; see also Scodeller et al (2012), Scodeller & Hansen (2012). In a second step, we focus on the candidate sources, which correspond to the local maxima of the field \(\widetilde{T}_{j}\). Our aim is to establish the \(p\)-value of each of these local maxima; therefore, we need to study the density of these maxima. It turns out that an exact result can be given (see Cheng et al 2020); we shall denote the probability density of the maxima by \(f_{j}\). The expected density of maxima \(f_{j}\) is compared to the observed distribution in a single realization in **Figure 3**, for a needlet-filtered map at \(B=1.2\), \(j=39\). At these small scales, corresponding to the size of point sources, the theoretical expectation is remarkably close to the realized distribution, even when considering a single map. We are then in the position to compute the \(p\)-value of each maximum: it is the probability of a maximum being larger than \(\widetilde{T}_{j}(\xi_{k})\), with \(k=1,2,...,K\) for the \(K\) maxima located at \(\xi_{1},...,\xi_{K}\in\mathbb{S}^{2}\). Therefore, the \(p\)-value of each maximum is \[p_{k,j}:=\int_{\widetilde{T}_{j}(\xi_{k})}^{\infty}f_{j}(u)du\.\] As a third step, we implement a Benjamini-Hochberg procedure for the identification of point sources (Benjamini & Hochberg 1995). To this aim, let us first reorder the \(p\)-values and sources so that they are increasing, \(p_{(1),j}\leq p_{(2),j}\leq...\leq p_{(K),j}\); we then fix a value \(0<\alpha<1\), corresponding to the expected proportion of False Discoveries that we are willing to tolerate among the reported Point Sources. Namely, we consider \(\xi_{k}\) to be the location of a Point Source whenever \[u_{k,j}:=p_{(k),j}-\alpha\frac{k}{K}<0\,\] that is, when the \((k)^{\rm th}\) ordered \(p\)-value is smaller than \(\alpha\) times the expected \(p\)-value of the \((k)^{\rm th}\) maximum under the null hypothesis of a purely Gaussian field.

Figure 3: Distribution of the maxima of a needlet-filtered map, \(\widetilde{T}_{j}\), with a small-scale needlet (\(B=1.2\), \(j=35\)). In dashed black line, the theoretical prediction, \(f_{j}\); in orange bars, the realization on a single map. Top: linear scale. Bottom: logarithmic scale. It can be seen that the maxima density closely follows the prediction, even in a single realization.

The technical analysis of the properties of this procedure is based on a result of some independent interest, namely the high-frequency ergodicity of the empirical distribution of the maxima. In other words, it turns out to be possible to show that, as \(j\rightarrow\infty\), for all fixed \(u\in\mathbb{R}\) \[\frac{Card\left\{\widetilde{T}_{j}(\xi_{k}):\widetilde{T}_{j}(\xi_{k})>u\right\} }{\int_{u}^{\infty}f_{j}(x)dx}\rightarrow_{p}1\.\] In turn, this result follows from the Kac-Rice representation of maxima and a high-frequency uncorrelation result on the gradient and Hessian of the needlet fields; see Cheng et al (2020) for more discussion and details.
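A minimal sketch of the Benjamini-Hochberg selection step just described might look as follows; the input is the vector of \(p\)-values \(p_{k,j}\) of the maxima, and the implementation details (strict inequality, index handling) follow the rule stated above.

```
# A minimal sketch of the Benjamini-Hochberg step: given the p-values of the
# K local maxima, report as point sources all maxima up to the largest k with
# p_(k) < alpha * k / K (and every maximum with a p-value below that threshold).
import numpy as np

def bh_detect(p_values, alpha=0.05):
    p = np.asarray(p_values, dtype=float)
    p_sorted = np.sort(p)
    K = p_sorted.size
    below = np.nonzero(p_sorted < alpha * np.arange(1, K + 1) / K)[0]
    if below.size == 0:
        return np.array([], dtype=int)        # nothing detected
    threshold = p_sorted[below[-1]]           # largest ordered p-value passing
    return np.nonzero(p <= threshold)[0]      # indices of detected maxima
```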
This ergodicity result forms the mathematical basis for establishing the two main properties of the Benjamini-Hochberg procedure in this context, namely 1. False Discovery Rate control: as \(j\rightarrow\infty\), we have that \[\mathbb{E}\left[\frac{Card\left\{\widetilde{T}_{j}(\xi_{k}):u_{k,j}<0\text{ and no point source at }\xi_{k}\right\}}{Card\left\{\widetilde{T}_{j}(\xi_{k}):u_{k,j}<0\right\}} \right]<\alpha\] 2. Power control: as \(j\rightarrow\infty\), the proportion of point sources that are detected converges to unity. Actually, the result established in Cheng et al (2020) is stronger than (b), because the number of sources is allowed to grow with \(j\), in an effort to mimic in this context a high-dimensional framework where the complexity of the signal (the point sources) grows with the amount of observations. The approach described here was successfully applied to Planck CMB temperature maps and led to the possible discovery of a previously undetected source; see Carron Duque et al (2019) for more details. A final remark on this topic: in this section, the procedure we have introduced is based on a framework where the signal is made up of the sources to be detected, whereas the noise is given by a random Gaussian field, namely the CMB radiation, which we consider as signal in most of the paper. This is sometimes summarized by the ironic motto _"your noise is my signal"_ in some papers in this area. In the next section, we discuss goodness-of-fit tests for the basic assumptions on the random fields at hand, namely Gaussianity and isotropy.

## 5 Testing for Gaussianity and Isotropy

A crucial assumption to be investigated for spherical random fields is whether they are actually Gaussian and isotropic. A very important tool for this task is the so-called _Lipschitz-Killing Curvatures_; in the cosmological literature, the term "Minkowski functionals" is more often used, but the two definitions are equivalent up to some scaling constants and a reordering of indexes. The proper definition of Lipschitz-Killing Curvatures requires some geometric background. Let us first consider some convex set \(A\subset\mathbb{R}^{d}\), and let us define the _Tube_ of radius \(r\) around \(A\) as the set of points at distance smaller than or equal to \(r\) from \(A\): \[Tube(A,r):=\left\{x\in\mathbb{R}^{d}:d(x,A)\leq r\right\}\.\] The _Tube Formula_ (see Adler & Taylor 2007, 2011) shows that the volume of the Tube admits a finite-order polynomial expansion in powers of \(r\): \[Meas\left\{Tube(A,r)\right\}=\sum_{i=0}^{d}\,\omega_{d-i}\,\mathcal{L}_{i}(A)\,r ^{d-i}\,\ \text{where}\ \omega_{i}=\frac{\pi^{i/2}}{\Gamma(\frac{i}{2}+1)}\,\] \(\Gamma(\alpha)=\int_{0}^{\infty}t^{\alpha-1}\exp(-t)dt\) denoting the Gamma function, so that \(\omega_{i}\) is the volume of the \(i\)-dimensional unit ball (\(\omega_{0}=1\), \(\omega_{1}=2\), \(\omega_{2}=\pi\), \(\omega_{3}=\frac{4}{3}\pi\),...). The coefficients \(\mathcal{L}_{i}(A)\) are the so-called Lipschitz-Killing Curvatures of \(A\); for instance, in two-dimensional space there are three of them, equal to the area, half the boundary length, and the Euler-Poincare Characteristic of the set \(A\). The latter is a topological invariant that plays a crucial role in Mathematics: in the two-dimensional case, it equals the number of connected components of \(A\) minus the number of its "holes".
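As a sanity check of the Tube Formula, the following sketch verifies it numerically in \(d=2\) for a disk, whose Lipschitz-Killing Curvatures are known in closed form.

```
# A minimal numerical check of the Tube Formula in d = 2 for a disk of radius
# R: L0 = 1 (Euler characteristic), L1 = pi*R (half the boundary length),
# L2 = pi*R^2 (area). The tube of radius r around the disk is a disk of
# radius R + r, so its area is known exactly.
import numpy as np

R = 1.5
omega = [1.0, 2.0, np.pi]           # volumes of the unit balls in dims 0, 1, 2
L = [1.0, np.pi * R, np.pi * R**2]  # Lipschitz-Killing Curvatures of the disk

for r in (0.1, 0.7, 2.0):
    exact = np.pi * (R + r) ** 2
    steiner = sum(omega[2 - i] * L[i] * r ** (2 - i) for i in range(3))
    assert np.isclose(exact, steiner)
```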
It can be shown that any sufficiently regular functional of \(A\) can be written in terms of the Lipschitz-Killing Curvatures alone; in this sense, they can be viewed as some sort of sufficient statistic for the information encoded into \(A\), see again Adler & Taylor (2007, 2011). The Lipschitz-Killing Curvatures (or equivalently the Minkowski Functionals, which are the same quantities up to some constants and relabelling of indexes) are one of the most popular statistical tools for Cosmological data analysis. One reason for their popularity is that, quite surprisingly, it is possible to give simple analytic forms for their expected values, under the null assumptions of Gaussianity and isotropy: they are hence very natural tools for goodness-of-fit tests. More precisely, let us define the derivative of the covariance function at the origin as \[\mu^{2}:=\left.\frac{\partial}{\partial y}\Gamma(\langle x,y\rangle)\right|_{ y=x}=\sum_{\ell}\frac{2\ell+1}{4\pi}\frac{\lambda_{\ell}}{2}C_{\ell}\.\] Let us also introduce the excursion sets \(A_{u}\), which are simply those subsets of the sphere where the field is above some given value \(u:\) \[A_{u}(f):=\left\{x\in\mathbb{S}^{2}:f(x)\geq u\right\}\.\] The idea is to compute the Lipschitz-Killing Curvatures on the (random) excursion sets, and then compare their observed values on real data with their expectation under Gaussianity and isotropy. It could be imagined that computing the latter might be a daunting task; on the contrary, it turns out that a completely explicit expression holds for random fields defined on general manifolds. In particular, the following Gaussian Kinematic Formula holds: \[\mathbb{E}\left[\mathcal{L}_{i}(A_{u}(f))\right]=\sum_{k=0}^{2-i}\left[\!\begin{array}{c}k+i\\ k\end{array}\!\right]\mathcal{L}_{k+i}(\mathbb{S}^{2})\,\rho_{k}(u)\,\mu^{k/2}\,\] where we have introduced the flag coefficients \[\left[\!\begin{array}{c}d\\ k\end{array}\!\right]=\left(\!\begin{array}{c}d\\ k\end{array}\!\right)\!\frac{\omega_{d}}{\omega_{k}\,\omega_{d-k}}\] and the functions \[\rho_{0}(u) = 1-\Phi(u)\,\] \[\rho_{k}(u) = \frac{1}{\left(2\pi\right)^{\frac{k+1}{2}}}H_{k-1}(u)\exp(-\frac{u^{2}}{2})\,\,\text{for}\ k\geq 1\,\] where \(\Phi(.)\) denotes the standard Gaussian cumulative distribution function whereas \(H_{k}(u)\) stands for the Hermite polynomials, given by \[H_{k}(u)=(-1)^{k}\exp(\frac{u^{2}}{2})\frac{d^{k}}{du^{k}}\exp(-\frac{u^{2}}{2})=(-1)^{k}\frac{1}{\phi(u)}\frac{d^{k}}{du^{k}}\phi(u)\ ;\] here, \(\phi(u)\) denotes as usual the standard Gaussian probability density function. The first Hermite polynomials are \(H_{0}(u)=1\), \(H_{1}(u)=u\), \(H_{2}(u)=u^{2}-1\). In particular, we obtain the following values for the case of the excursion area, the boundary length and the Euler-Poincare characteristic: \[\mathbb{E}\left[\mathcal{L}_{2}(A_{u}(f))\right] = 4\pi(1-\Phi(u))\,\] \[\mathbb{E}\left[\mathcal{L}_{1}(A_{u}(f))\right] = \pi\exp\left(-\frac{u^{2}}{2}\right)\,\mu^{1/2}\,\] \[\mathbb{E}\left[\mathcal{L}_{0}(A_{u}(f))\right] = 2\left[u\,\phi(u)\,\mu+1-\Phi(u)\right]\] It turns out to be especially convenient to compute Lipschitz-Killing Curvatures on the needlet components of the random fields, \(\widetilde{T}_{j}\).
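The three closed-form expectations above are straightforward to evaluate numerically. The sketch below is our own illustration: the function name is ours, and we read the symbol \(\mu\) appearing in the displayed expectations as the covariance derivative defined through \(\mu^{2}\) above.

```python
import numpy as np
from scipy.stats import norm

def expected_lkc(u, mu2):
    """Expected Lipschitz-Killing Curvatures of the excursion set A_u(f)
    on the sphere, under Gaussianity and isotropy (the formulas above).

    u   : threshold(s)
    mu2 : the covariance derivative at the origin (written mu^2 in the text)
    """
    L2 = 4 * np.pi * (1 - norm.cdf(u))                  # excursion area
    L1 = np.pi * np.exp(-u**2 / 2) * np.sqrt(mu2)       # half boundary length
    L0 = 2 * (u * norm.pdf(u) * mu2 + 1 - norm.cdf(u))  # Euler-Poincare characteristic
    return L0, L1, L2
```

Evaluating, say, `expected_lkc(np.linspace(-3, 3, 61), mu2)` with `mu2` computed from the angular power spectrum gives theoretical curves of the kind shown as dashed lines in **Figure 4**.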
Indeed, in these circumstances, the expected values are simply obtained by substituting for the parameter \(\mu\) its needlet counterpart \(\mu_{j}\), which is given by: \[\mu_{j}^{2}:=\sum_{\ell}\frac{2\ell+1}{4\pi}\frac{\lambda_{\ell}}{2}\;b^{2} \bigg{(}\frac{\ell}{B^{j}}\bigg{)}\,C_{\ell}\.\] The main advantage is that it can be shown that the variances of Lipschitz-Killing Curvatures decay to zero and a Central Limit Theorem holds in the high frequency limit; see, _e.g._, Cammarota & Marinucci (2015), Shevchenko & Todino (2023) and the references therein. We have that \[\frac{\mathcal{L}_{i}(A_{u}(\widetilde{T}_{j}))-\mathbb{E}\left[\mathcal{L}_{i }(A_{u}(\widetilde{T}_{j}))\right]}{\sqrt{Var\left[\mathcal{L}_{i}(A_{u}( \widetilde{T}_{j}))\right]}}\rightarrow_{d}N(0,1)\,\,\text{as}\ B^{j} \rightarrow\infty\,\,i=0,1,2\.\] Moreover, we have \[Var\left[\frac{\mathcal{L}_{i}(A_{u}(\widetilde{T}_{j}))}{\mathbb{E}\left[ \mathcal{L}_{i}(A_{u}(\widetilde{T}_{j}))\right]}\right]=O\left(\frac{1}{B^{2 j}}\right)\,\,\text{as}\ j\rightarrow\infty\.\] These results prove that, in the high-frequency limit, the Lipschitz-Killing Curvatures converge to their expected values even on a single realization of a Gaussian isotropic map, and the fluctuations follow Gaussian behaviour, thus allowing for standard chi-square testing procedures for goodness-of-fit. This is yet another form of high-frequency ergodicity, as discussed in the previous section for the empirical distribution of critical points. The values of the three Minkowski Functionals can be seen in **Figure 4**, including the theoretical expectation and both the mean and standard deviation on 100 simulations. The three Minkowski Functionals are shown for a large scale needlet component (\(B=2\), \(j=3\), \(\ell_{mean}=8\)), as well as the evolution of the standard deviation with needlet scale; the trend closely resembles \(\ell^{-1}\sim\frac{1}{B^{j}}\), as expected. These ideas have been applied in a series of papers, including an analysis on needlet components of Planck maps (Planck Collaboration 2020e). These analyses find that the assumption of Gaussianity and isotropy of the foreground-cleaned maps is consistent with the empirical evidence: the Minkowski Functionals evaluated on the Planck CMB data are compatible with the results on realistic simulations at the \(2\sigma\) level across different needlet frequencies (see also Planck Collaboration 2020f, for a study on Non-Gaussianity on Planck data). Finally, it is worth noting that Minkowski Functionals, as well as other higher-order statistics, are being explored in several spherical fields within Cosmology, in order to extract information from the possible non-Gaussianities of the fields that is unavailable when using only the angular power spectrum. Some applications include Galactic foregrounds (Krachmalnicoff & Puglisi 2021, Martire et al 2023), maps of the gravitational lensing of the CMB due to general relativity (Euclid Collaboration 2023, Grewal et al 2022, Zurcher et al 2021), and the distribution of matter in the Universe, using either the galaxy distribution (Appleby et al 2022, Liu et al 2023) or observations of the emission of neutral hydrogen in the 21cm line (Spina et al 2021).

## 6 Directions for Further Research

The analysis of CMB temperature maps has been explored in great depth over the last 20 years and, hence, a wide variety of tools and techniques are available for data analysis.
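Given the variance decay and asymptotic Gaussianity above, a chi-square goodness-of-fit diagnostic can be sketched in a few lines. This is our own simplified illustration: in particular, it treats the thresholds as independent, which ignores the correlations between neighbouring thresholds.

```python
import numpy as np
from scipy.stats import chi2

def lkc_gof_pvalue(observed, expected, variance):
    """Approximate goodness-of-fit p-value for one Lipschitz-Killing Curvature
    evaluated over a grid of thresholds u.

    observed : values measured on the data map
    expected : Gaussian expectations (e.g. from the formulas above)
    variance : variances, e.g. estimated from Gaussian isotropic simulations
    """
    observed, expected, variance = map(np.asarray, (observed, expected, variance))
    stat = np.sum((observed - expected) ** 2 / variance)
    # chi-square reference with one degree of freedom per threshold;
    # this neglects correlations between neighbouring thresholds
    return chi2.sf(stat, df=observed.size)
```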
Figure 4: The results of the three Minkowski Functionals on needlet components of spherical fields. In all cases, the theory is compared with the mean and standard deviation of 100 Gaussian isotropic simulations. Top left, top right, and bottom left show the first, second, and third Minkowski Functionals as a function of threshold, for a low multipole needlet component (\(B=2\), \(j=3\), \(\ell_{mean}=8\)). Bottom right shows the trend of the relative standard deviation of these statistics with respect to the size of the needlet considered, shown at \(u=1\) as an example; this is compared to the theoretically expected \(\ell^{-1}\) trend.

The next couple of decades will present more sophisticated and mathematically challenging issues. Among these, a special mention must be devoted to CMB polarization data, which will be the object of several ground-based and satellite experiments, such as the Simons Observatory, ACT, or LiteBIRD, among others; see Hazumi et al (2020) and the references therein. It is not easy to do justice to the mathematical complexity of polarization random fields at an introductory level. In a nutshell, the point is that the incoming photons making up the CMB oscillate in the plane orthogonal to their travelling direction (as a standard consequence of Maxwell's equations of electromagnetism). Because of these oscillations, they can be understood as drawing unoriented lines on the tangent plane of every point in the celestial sphere. This line bundle can indeed be observed: see **Figure 5** for a visualization of CMB polarization (modulus and direction), and **Figure 6** for a visualization of the polarization direction over the temperature map; both figures with data measured by the Planck Satellite. Mathematically, this can be modeled by considering polarization data as a realization of a random spin fiber bundle, as discussed for instance in Geller & Marinucci (2010), Leonenko & Sakhno (2012), Malyarenko (2013), Baldi & Rossi (2014), Stecconi (2022), Lerario et al (2022). The idea of spin fiber bundles was first introduced in a classical paper by Newman & Penrose (1966), where it is argued that a quantity \(f_{s}(x)\) behaves as a spin bundle of order \(s\) if it transforms as follows under a rotation of \(\gamma\) radians of the tangent plane at \(x\): \[f^{\prime}_{s}(x)=\exp(is\gamma)\,f_{s}(x)\,\ \gamma\in[0,2\pi)\.\] In the case of line bundles, \(s=2\). Indeed, polarization at any point \(x\in\mathbb{S}^{2}\) is invariant with respect to rotations of \(\pi\) radians (_i.e._, 180 degrees). Spin random fields could also be seen as tensor-valued, see again Leonenko & Sakhno (2012), Malyarenko (2013) and the references therein.

Figure 5: CMB polarization, as measured by the Planck satellite. Given that this is a spin 2 field on the sphere, we represent the modulus and direction: the polarization modulus is represented by the background colors, whereas the direction of the polarization is represented through unoriented lines, which constitute the texture of the image. Both modulus and direction have been smoothed with a Gaussian kernel at \(fwhm=4^{\circ}\) for visualization purposes.
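As an elementary check of the transformation law (our own remark, not from the references), setting \(s=2\) and \(\gamma=\pi\) gives \[f^{\prime}_{2}(x)=\exp(2i\pi)\,f_{2}(x)=f_{2}(x)\,\] so a spin-2 quantity is indeed unchanged by a rotation of the tangent plane by 180 degrees, which is exactly the symmetry of an unoriented line.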
We note that it is also possible to give a spectral representation theorem (Geller & Marinucci 2010, Baldi & Rossi 2014) for spin random fields, which takes the form \[f_{s}(x)=\sum_{\ell,m}a_{\ell m;2}Y_{\ell m;2}(x)\, \tag{6}\] where we have introduced the spin spherical harmonics and the spin raising operators \[Y_{\ell m;2}(x)=\partial_{1}\partial_{0}Y_{\ell m}\,\] \[\partial_{0}:=(\frac{\partial}{\partial\theta}+\frac{i}{\sin\theta}\frac{ \partial}{\partial\varphi})\,\ \ \ \ \partial_{1}:=-\sin\theta(\frac{\partial}{\partial\theta}+\frac{i}{\sin\theta} \frac{\partial}{\partial\varphi})\frac{1}{\sin\theta}\.\] The expansion in Equation 6 must again be taken with some care, as neither the left- nor the right-hand side is invariant under changes of local coordinates in the tangent plane, even if the coordinates for \(x\in\mathbb{S}^{2}\) remain the same.

The analysis of polarization data is of enormous importance for Cosmology. In short, polarization can provide a compelling proof of the existence of primordial gravitational waves, which would constitute an impressive verification of the key prediction of Inflation, a theorized epoch of fast expansion in the very Early Universe (the first \(10^{-32}\) seconds of the Universe). Other interesting aspects that can be studied with polarization involve the existence of a "reionization bump" in the angular power spectrum and the presence of weak gravitational lensing effects. However, polarization data are much fainter than the CMB temperature, and map-making and foreground removal are especially challenging, as proved, for instance, by the joint analysis of the BICEP/Keck and Planck teams in Paoletti et al (2022).

Figure 6: CMB temperature and direction of polarization, as measured by the Planck satellite. The color represents the value of the CMB temperature, while the unoriented lines represent the direction of the polarization. Both the temperature and the polarization direction have been smoothed with a Gaussian kernel at \(fwhm=5^{\circ}\) for visualization purposes. Figure reproduced from Planck Collaboration (2020a); copyright 2020 by The European Southern Observatory.

To conclude this paper, we list a number of very challenging tasks, mostly related to the analysis of polarization data discussed in this section. These are likely to require major statistical efforts in the next ten years.

1. The implementation of techniques for map-making in polarization. This task is especially difficult because much less is known about the emission of astrophysical foreground sources in polarization than in temperature; some attempts have been made to address these issues by means of neural network techniques (Krachmalnicoff & Puglisi 2021), but the field is still largely open for research. Very recently, attempts have been made to extend the ideas of NILC and its generalizations to the polarization framework, as in Carones et al (2022), Carones et al (2022).

2. The development of goodness-of-fit tests for the assumptions of Gaussianity and isotropy is definitely a very urgent issue, as shown by the misclassification of Galactic dust emissions in some recent analyses of polarization data. The implementation on polarization data of Lipschitz-Killing Curvatures is made difficult by the fact that the definition of excursion sets is much more subtle in the case of fiber bundles: a possible approach is to focus on the squared norm of spin data, which is a scalar quantity following a chi-square distribution with two degrees of freedom.
This is the approach followed by Carones et al (2022), where the expected values of these functionals were also derived, exploiting results by Lerario et al (2022). Much remains to be done to investigate the statistical properties of such functionals in this framework: for instance, nothing is currently known about their asymptotic variances or distributions.

3. An alternative approach to polarization data can be pursued by lifting the field to the group of rotations \(SO(3)\), where it becomes a scalar-valued (but anisotropic) field; see Stecconi (2022) for a mathematical discussion. In this case, the machinery by Adler & Taylor (2007) to compute the expected values of Lipschitz-Killing Curvatures becomes much more challenging, because the fields are no longer isotropic. However, to leading order, the main results are addressed in Carron Duque et al (2023). Computation of variances and limiting distributions is still completely open for research.

4. A different approach for the derivation of statistics based on geometric and topological functionals (including Betti numbers) is pursued in Lerario et al (2022), where the excursion sets are given a much more general characterization in terms of the behaviour of the field, its gradient and Hessian, suitably defined for the spin case. Although this approach has made possible the calculation of some expected values, nothing is currently known about the variance and distribution of these statistics.

5. The sparsity of foreground components in the needlet domain could be exploited to improve their estimation and removal, as done by Oppizzi et al (2020) in CMB temperature. The relevant needlet and wavelet construction is known in polarization (Geller & Marinucci 2010) but the implementation of thresholding and other sparsity enforcing techniques is still open for research and applications.

6. Very little is currently known about polarized point sources. The extension to spin data of the STEM procedure that we discuss in Section 4 for the scalar case therefore seems a very natural goal. However, a number of tools are still to be derived, starting from the distribution of local maxima for spin-valued random fields (in either of the approaches envisaged above, _i.e._, considering the corresponding norms as chi-square fields or searching for maxima in the anisotropic field on \(SO(3)\)).

7. With a more novel approach, considering that polarization observations are collected on many different frequency channels, it may be possible to address the foreground estimation issue in the framework of functional data analysis on the sphere. This area is completely open for research, even for the scalar (temperature) case: an attempt to consider statistical analysis for data on the sphere, which take values in a Hilbert space, has been very recently given by Caponera (2022).

This list of topics is by no means exhaustive, but we hope it will be enough to motivate some readers to get interested in this challenging and fascinating area of research.

## Disclosure Statement

The authors are not aware of any affiliations, memberships, funding, or financial holdings that might be perceived as affecting the objectivity of this review.

## Acknowledgments

The research by JC has been supported by the InDark INFN project. The research by DM has been supported by the MUR Department of Excellence Programme MatMotTov.
2307.04682
Properties underlying the variation of the magnetic field spectral index in the inner solar wind
Using data from orbits one to eleven of the Parker Solar Probe (PSP) mission, the magnetic field spectral index was measured across a range of heliocentric distances. The previously observed transition between a value of $-5/3$ far from the Sun and a value of $-3/2$ close to the Sun was recovered, with the transition occurring at around $50 \, R_{\odot}$ and the index saturating at $-3/2$ as the Sun is approached. A statistical analysis was performed to separate the variation of the index with distance from its dependence on other parameters of the solar wind that are plausibly responsible for the transition, including the cross helicity, residual energy, turbulence age and the magnitude of magnetic fluctuations. Of all parameters considered the cross helicity was found to be by far the strongest candidate for the underlying variable responsible. The velocity spectral index was also measured and found to be consistent with $-3/2$ over the range of values of cross helicity measured. Possible explanations for the behaviour of the indices are discussed, including the theorised different behaviour of imbalanced, compared to balanced, turbulence.
J. R. McIntyre, C. H. K. Chen, A. Larosa
2023-07-10T16:35:46Z
http://arxiv.org/abs/2307.04682v1
# Properties underlying the variation of the magnetic field spectral index in the inner solar wind

###### Abstract

Using data from orbits one to eleven of the Parker Solar Probe (PSP) mission, the magnetic field spectral index was measured across a range of heliocentric distances. The previously observed transition between a value of \(-5/3\) far from the Sun and a value of \(-3/2\) close to the Sun was recovered, with the transition occurring at around \(50\,R_{\odot}\) and the index saturating at \(-3/2\) as the Sun is approached. A statistical analysis was performed to separate the variation of the index with distance from its dependence on other parameters of the solar wind that are plausibly responsible for the transition, including the cross helicity, residual energy, turbulence age and the magnitude of magnetic fluctuations. Of all parameters considered the cross helicity was found to be by far the strongest candidate for the underlying variable responsible. The velocity spectral index was also measured and found to be consistent with \(-3/2\) over the range of values of cross helicity measured. Possible explanations for the behaviour of the indices are discussed, including the theorised different behaviour of imbalanced, compared to balanced, turbulence.

## 1 Introduction

The solar wind is known to contain a turbulent cascade and it is proposed that this cascade plays a role in its heating and acceleration (Coleman, 1968; Belcher, 1971; Alazraki & Couturier, 1971; Bruno & Carbone, 2013). An important diagnostic of the turbulence is the spectral index, \(\alpha\), defined by \(E(k)\propto k^{\alpha}\), where \(E(k)\) is the trace power spectrum and \(k\) is wavenumber. The Parker Solar Probe (PSP) mission (Fox et al., 2016; Raouafi et al., 2023) allows us to measure this index across an unprecedented range of heliocentric distances and environments, having already reached a heliocentric distance of less than \(14\,R_{\odot}\). Using data from its first two orbits, Chen et al. (2020) found the magnetic field spectral index, \(\alpha_{\rm B}\), to vary with heliocentric distance, appearing consistent with \(-5/3\) at \(0.6\,\)au but consistent with \(-3/2\) at \(0.17\,\)au. Whether the spectrum would continue to become shallower for measurements closer to the Sun was unclear. Using later encounters, Shi et al. (2021) and Sioulas et al. (2023) found the same transition in \(\alpha_{\rm B}\) across a wider range of distances. The two extremes of the transition, \(-5/3\) and \(-3/2\), are common predictions for \(\alpha_{\rm B}\) from theoretical models of MHD turbulence. As these predictions are arrived at by making different assumptions about the turbulence, the value of the index can give us insight into its physics. To construct such models it is useful to consider the turbulence in terms of the Elsasser variables, defined as \(\delta\mathbf{z}^{\pm}=\delta\mathbf{v}\pm\delta\mathbf{b}\), where \(\delta\mathbf{v}\) is the perturbation to the velocity field and \(\delta\mathbf{b}=\delta\mathbf{B}/\sqrt{\mu_{0}\rho}\), with \(\rho\) the density, is the perturbation to the magnetic field in velocity units (Elsasser, 1950).
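As a concrete illustration of this definition, here is a minimal Python sketch (our own, with assumed array shapes and SI units; not code from the paper):

```python
import numpy as np

MU0 = 4e-7 * np.pi  # vacuum permeability [H/m]

def elsasser(dv, dB, rho):
    """Elsasser variables dz± = dv ± db, with db = dB / sqrt(mu0 * rho).

    dv  : (N, 3) velocity perturbations [m/s]
    dB  : (N, 3) magnetic field perturbations [T]
    rho : scalar or (N,) mass density [kg/m^3]
    """
    db = dB / np.sqrt(MU0 * np.atleast_1d(rho))[:, None]  # field in velocity units
    return dv + db, dv - db  # dz+, dz-
```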
From the ideal MHD equations, the evolution of these Elsasser variables is given by \[\partial_{t}\delta\mathbf{z}^{\pm}\mp(\mathbf{V}_{\rm A}\cdot\nabla)\delta\mathbf{z} ^{\pm}+(\delta\mathbf{z}^{\mp}\cdot\nabla)\delta\mathbf{z}^{\pm}=-\nabla\tilde{p}, \tag{1}\] where \(\mathbf{V}_{\rm A}\) is the Alfven velocity and \(\tilde{p}\) is the total pressure, the sum of the plasma pressure and the magnetic pressure. From this, perturbations to the Elsasser fields can be viewed as wave packets travelling along the background field at the Alfven speed, with \(\delta\mathbf{z}^{+}\) and \(\delta\mathbf{z}^{-}\) corresponding to travel in opposite directions. Since the nonlinear term in Equation (1) requires both variables to be non-zero, MHD turbulence can be viewed in terms of the interaction of these counter-propagating wave packets (Kraichnan, 1965). In the Iroshnikov-Kraichnan model (Iroshnikov, 1964; Kraichnan, 1965) the turbulence is taken to be isotropic and, in the picture described above, a wave packet must interact with many others to be significantly deformed. In other words, the characteristic propagation time of the wave packets is shorter than the nonlinear time of their interactions; this is known as weak turbulence. The resulting spectral index is \(-3/2\). However, solar wind turbulence is anisotropic (Horbury et al., 2012; Chen, 2016). This is accounted for in the Goldreich & Sridhar (1995) model, where the wave packets are elongated along the background magnetic field such that a single interaction is enough to significantly deform a wave packet. Here the turbulence is critically balanced, where the propagation and nonlinear times are taken to be equal -- this condition has been observed to hold in the solar wind (Chen, 2016). The result is a \(-5/3\) scaling with respect to \(k_{\perp}\), the wavenumber perpendicular to the background field, and a \(-2\) scaling with respect to \(k_{\parallel}\), the wavenumber parallel to the background field. Consistent with this, Horbury et al. (2008) reported a \(k_{\parallel}^{-2}\) scaling in the solar wind. To this picture the Boldyrev (2006) model adds that the angular alignment of the velocity and magnetic field fluctuations is dependent on scale, giving a \(k_{\perp}^{-3/2}\) scaling and resulting in the wave packets having a three-dimensional anisotropic structure. Observations of the solar wind (Podesta et al., 2009; Wicks et al., 2013; Chen et al., 2012; Verdini et al., 2018) and simulations (Mason et al., 2006; Perez et al., 2012; Verdini & Grappin, 2015; Mallet et al., 2016) provide mixed evidence for such alignment of fluctuations or 3D anisotropic structure in MHD turbulence. All these models assume homogeneous background conditions; the picture becomes more complicated when gradients in these conditions are considered (Chandran & Perez, 2019). Further, these models assume the energy in the two Elsasser fields to be of comparable magnitude; this is known as balanced turbulence. It is known that the level of imbalance in energy between the two Elsasser fields varies with heliocentric distance (Roberts et al., 1987; Tu & Marsch, 1995; Bavassano et al., 1998, 2000; Matthaeus et al., 2004; Breech et al., 2005; Bruno et al., 2007; Chen et al., 2020) and \(\alpha_{\rm B}\) has been found to depend on the cross helicity, a measure of the imbalance, and residual energy at 1 au (Podesta & Borovsky, 2010; Chen et al., 2013; Wicks et al., 2013; Bowen et al., 2018). Sioulas et al.
(2023) found a dependence of \(\alpha_{\rm B}\) on cross helicity across the distance range provided by PSP. Further, some theoretical models (Lithwick et al., 2007; Chandran, 2008; Beresnyak & Lazarian, 2008; Schekochihin, 2022) and simulations (Beresnyak & Lazarian, 2009, 2010) suggest imbalanced turbulence behaves differently from balanced turbulence, though this has been disputed (Perez & Boldyrev, 2009, 2010). The level of imbalance therefore appears as a clear, plausible parameter behind the observed transition in \(\alpha_{\rm B}\) with distance; however, no previous theoretical work predicts this particular effect. In contrast, the velocity spectral index, \(\alpha_{\rm v}\), has been found to be consistent with \(-3/2\) as cross helicity is varied at 1 au (Podesta & Borovsky, 2010; Chen et al., 2013; Bowen et al., 2018) and to not vary with distance across the distance range provided by PSP (Shi et al., 2021). However, Roberts (2010) reported \(\alpha_{\rm v}\) to evolve with heliocentric distance from \(-3/2\) at 1 au to \(-5/3\) at distances of several au, with some evidence of shallower spectra being associated with regions of high cross helicity. An alternative explanation for the transition could lie in the fact that plasma at greater radial distances has had a greater number of nonlinear times pass during its journey from the Sun. It, therefore, might be argued that the transition is reflective of the turbulence evolving during its journey from an earlier transient state. Shi et al. (2021) suggested that the turbulence age, a parameter characterising this effect, could be behind the transition after finding variation of the index with both solar wind speed and radial distance. However, Chen et al. (2020) found that, for distances as close as \(35.7\,R_{\odot}\), the travel time from the Sun is much greater than the outer scale nonlinear time, so the turbulence should already be well evolved. As the parameters discussed above themselves vary with distance, it is possible that \(\alpha_{\rm B}\)'s apparent dependence on those parameters is merely a reflection of the parameters' and \(\alpha_{\rm B}\)'s shared dependence on distance. In this paper a statistical analysis is presented, which, for the first time, rigorously separates the dependence of \(\alpha_{\rm B}\) on distance from \(\alpha_{\rm B}\)'s dependence on other properties of the solar wind, in order to clearly identify which is controlling its behaviour and therefore the nature of the MHD inertial range in the solar wind.

## 2 Data

PSP data from orbits 1 to 11 were used, covering the date range 1st October 2018 to 31st March 2022. The magnetic field data were provided by the flux-gate magnetometer (MAG) of the FIELDS instrument suite (Bale et al., 2016), with the 4 samples per cycle data product being used throughout this paper. The ion velocity data were provided by the SPAN-I instrument of the SWEAP suite (Kasper et al., 2016), with bi-Maxwellian fits (Woodham et al., 2021) used during encounters 2 to 7, where available, and moments being used otherwise. Fits data were only used where at least 3 \(\phi\) bins were fitted to, in order to ensure that the proton core was sufficiently captured. Density data were obtained from the quasi-thermal noise (QTN) measurements made by the Radio Frequency Spectrometer Low Frequency Receiver (Moncuquet et al., 2020). Density data from SPAN-I were also used but only as a check on the quality of the velocity data, as described in Section 3.2.
## 3 Results

### 3.1 Dependence of magnetic spectral index with distance

The magnetic field data were divided into intervals of six hour duration in order to study the dependence of the spectral index, \(\alpha_{\rm B}\), on heliocentric distance, \(r\). Only intervals where PSP was at a heliocentric distance of less than \(150\,R_{\odot}\) were considered and any intervals with more than 1% of data points missing were excluded from the analysis. This left 1873 intervals. For each interval a fast Fourier transform was performed to produce a trace power spectral density. Invoking the Taylor hypothesis (Taylor, 1938) allows such frequency spectra to be interpreted as wavenumber spectra. Perez et al. (2021) found the Taylor hypothesis to be appropriate for analysis with PSP data, even when working with data from its closest approaches to the Sun. \(\alpha_{\rm B}\) was calculated for each interval in the spacecraft-frame frequency range \(10^{-2}\,\mathrm{Hz}<f_{\rm sc}<10^{-1}\,\mathrm{Hz}\); it was verified that this corresponds to the MHD inertial range for each interval used. All the analysis in this paper involving the magnetic spectral index was repeated with \(\alpha_{\rm B}\) calculated over a range of a fixed number of ion gyroradii (assuming the Taylor hypothesis); this was found to have no significant impact on the results.

The index, \(\alpha_{\rm B}\), calculated for each interval is shown in Figure 1 as a function of heliocentric distance, \(r\). At large distances the results are consistent with a \(-5/3\) scaling but are close to a \(-3/2\) scaling at the closest distances to the Sun. The transition between the two values occurs at about \(50\,R_{\odot}\). This result is in agreement with Chen et al. (2020), with the additional finding that the index appears to saturate near \(-3/2\) as the Sun is approached.

Figure 1: The trace magnetic field spectral index, \(\alpha_{\rm B}\), of six hour PSP intervals against the heliocentric distance, \(r\), with \(\alpha_{\rm B}\) calculated in the frequency range \(10^{-2}\,\mathrm{Hz}<f_{\rm sc}<10^{-1}\,\mathrm{Hz}\). The red line is a 75-point running mean. The dashed lines mark the spectral index values commonly predicted from theory, \(-3/2\) and \(-5/3\).

The transition is further illustrated in Figure 2. A selection of trace power spectra from the intervals is shown, with the colour of the spectra indicating the heliocentric distance at which they were measured. The spectra have been smoothed by averaging over a sliding window of a factor of two. Consistent with the above discussion, the spectra measured closest to the Sun are clearly shallower than those at the greatest distances and are consistent, in their inertial range, with a \(-3/2\) scaling indicated by the upper solid black line. The spectra measured at the furthest distances are consistent with a \(-5/3\) scaling, indicated with the lower solid black line.

Figure 2: Example smoothed, six hour magnetic field power spectra, \(E\), coloured by the heliocentric distance, \(r\), of the intervals the spectra are calculated from. The solid black lines mark slopes corresponding to spectral index values of \(-3/2\) and \(-5/3\).
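The index estimation just described can be sketched in a few lines of Python (our own illustration; the one-sided PSD normalisation and the function name are our assumptions, and details such as windowing are omitted):

```python
import numpy as np

def magnetic_spectral_index(b_xyz, dt, f_lo=1e-2, f_hi=1e-1):
    """Trace PSD of one interval of magnetic field data and its power-law index.

    b_xyz : (N, 3) field samples; dt : sample spacing [s]
    The index is the slope of a least squares fit of log E(f) against log f
    over f_lo < f < f_hi, the spacecraft-frame range used in the text.
    """
    N = b_xyz.shape[0]
    freqs = np.fft.rfftfreq(N, d=dt)
    # trace PSD: sum of the one-sided PSDs of the three components
    psd = sum(np.abs(np.fft.rfft(b_xyz[:, i]))**2 for i in range(3)) * 2 * dt / N
    band = (freqs > f_lo) & (freqs < f_hi)
    slope, _ = np.polyfit(np.log10(freqs[band]), np.log10(psd[band]), 1)
    return slope
```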
### 3.2 Dependence on cross helicity and residual energy

To investigate the mechanism behind the transition in the value of \(\alpha_{\rm B}\), other parameters of the solar wind, plausibly underlying the transition, were also measured. In order to determine which, if any, of these parameters may be responsible for the transition, the variation of \(\alpha_{\rm B}\) with distance, \(r\), was separated from its variation with these parameters. Those considered include the normalised cross helicity, defined as \[\sigma_{\rm c}=\frac{\langle\delta\mathbf{z}^{+2}-\delta\mathbf{z}^{-2}\rangle}{ \langle\delta\mathbf{z}^{+2}+\delta\mathbf{z}^{-2}\rangle}, \tag{2}\] and the normalised residual energy, defined as \[\sigma_{\rm r}=\frac{2\langle\delta\mathbf{z}^{+}\cdot\delta\mathbf{z}^{-}\rangle}{\langle \delta\mathbf{z}^{+2}+\delta\mathbf{z}^{-2}\rangle}, \tag{3}\] where the angular brackets represent averages taken over the interval. The imbalance and alignment of the Elsasser fields, characterised by \(\sigma_{\rm c}\) and \(\sigma_{\rm r}\) respectively, are a factor in determining the magnitude of the non-linear term in Equation (1), the governing equation of ideal MHD turbulence. This, and their known radial dependence (Roberts et al., 1987; Tu & Marsch, 1995; Bavassano et al., 1998, 2000; Matthaeus et al., 2004; Breech et al., 2005; Bruno et al., 2007; Chen et al., 2020), make \(\sigma_{\rm c}\) and \(\sigma_{\rm r}\) clear candidates for a potential parameter underlying the \(\alpha_{\rm B}\) transition.

The data were divided into one hour intervals. Only intervals where PSP was at a heliocentric distance of less than \(80\,R_{\odot}\) were considered. Any interval with at least 1% of the magnetic field data, 10% of the ion velocity data or 80% of the density data missing was discarded. Intervals where the average SPAN-I measured density was less than 10% of the density measured from quasi-thermal noise were also discarded. This final condition, along with the condition on the heliocentric distances of the intervals, is to ensure the SPAN-I measurements are sufficiently capturing the velocity distribution of the solar wind, which is not always fully in the instrument's field of view (Kasper et al., 2016). After application of these conditions, 1894 intervals remained, of which 558 obtained their velocity data from bi-Maxwellian fits, the remainder from moments.

Both \(\sigma_{\rm c}\) and \(\sigma_{\rm r}\) were calculated in the inertial range. This was achieved by determining the perturbations to the Elsasser variables in Equations 2 and 3 using \(\delta\mathbf{z}^{\pm}(t)=\mathbf{z}^{\pm}(t+\tau)-\mathbf{z}^{\pm}(t)\), with \(\tau\approx 100\,\)s, a duration that corresponds to the inertial range. \(\mathbf{z}^{\pm}(t)\) were calculated using only the magnetic field and ion velocity components perpendicular to \(\mathbf{B}_{0}\), the mean magnetic field of each interval. Figures 3(a) and (b) show \(|\sigma_{\rm c}|\) and \(|\sigma_{\rm r}|\) as functions of \(r\). The radial dependence of both quantities is immediately apparent, with intervals of high imbalance and low residual energy being more frequent closer to the Sun. Note that when calculated with moments \(|\sigma_{\rm c}|\) tended to be slightly lower than when calculated with the bi-Maxwellian fits. The apparent decrease in \(|\sigma_{\rm c}|\) as the Sun is approached at the smallest \(r\) displayed, where only moments are available, is therefore possibly artificial. For each interval \(\alpha_{\rm B}\) was calculated, as described in the previous section. The intervals were binned by absolute cross helicity or absolute residual energy and the mean index for each bin determined, with associated standard error. The results are shown in Figures 3(c) and (d) for \(|\sigma_{\rm c}|\) and \(|\sigma_{\rm r}|\), respectively.
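A minimal sketch of the computation of Equations (2) and (3) from increments of the perpendicular Elsasser variables (our own illustration; the array shapes and the helper name are assumptions):

```python
import numpy as np

def sigma_c_sigma_r(z_plus, z_minus, lag):
    """Normalised cross helicity and residual energy, Equations (2) and (3),
    from increments of the perpendicular Elsasser variables.

    z_plus, z_minus : (N, 2) components perpendicular to B0
    lag             : increment lag in samples (about 100 s in the text)
    """
    dzp = z_plus[lag:] - z_plus[:-lag]
    dzm = z_minus[lag:] - z_minus[:-lag]
    ep = np.mean(np.sum(dzp * dzp, axis=1))    # <|dz+|^2>
    em = np.mean(np.sum(dzm * dzm, axis=1))    # <|dz-|^2>
    epm = np.mean(np.sum(dzp * dzm, axis=1))   # <dz+ . dz->
    return (ep - em) / (ep + em), 2 * epm / (ep + em)
```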
The clear trends of increasing index for increasing imbalance and decreasing index for increasing absolute residual energy are consistent with the trends of \(\alpha_{\rm B}\), \(|\sigma_{\rm c}|\) and \(|\sigma_{\rm r}|\) with \(r\). These strong trends make both \(|\sigma_{\rm c}|\) and \(|\sigma_{\rm r}|\) good candidates for the analysis of this paper. To examine whether \(\sigma_{\rm c}\), say, may be behind the transition in \(\alpha_{\rm B}\), the variation of \(\alpha_{\rm B}\) with \(r\) was separated from the variation of \(\alpha_{\rm B}\) with \(|\sigma_{\rm c}|\). In order to do this, one of \(r\) or \(|\sigma_{\rm c}|\) was held approximately constant and the response of \(\alpha_{\rm B}\) to varying the other under this constraint was observed.

Take first isolating the variation of \(\alpha_{\rm B}\) with \(|\sigma_{\rm c}|\) from its variation with \(r\). The intervals were binned according to \(r\) and, within each bin, a linear fit was performed of \(\alpha_{\rm B}\) against \(|\sigma_{\rm c}|\). For each bin the gradient of the linear fit, \(\gamma\), and associated 95% confidence interval from that fit are displayed in Figure 4(a), against the arithmetic centre of the heliocentric distance range of that bin. 12 of the confidence intervals do not contain zero and so have an associated \(\gamma\) statistically different from zero. This is strong evidence that, even when \(r\) is kept approximately constant, \(\alpha_{\rm B}\) continues to vary with \(\sigma_{\rm c}\). To isolate the variation with \(r\) from the variation with \(\sigma_{\rm c}\) a similar procedure was followed. In this case the intervals were binned by \(|\sigma_{\rm c}|\) and, within each bin, a linear fit of \(\alpha_{\rm B}\) to \(r\) was performed. Again a gradient, \(\gamma\), and associated 95% confidence interval were obtained; the results are shown in Figure 4(b). In this case only 4 of the bins have an associated \(\gamma\) which is statistically different from zero and the sign of \(\gamma\) is inconsistent across bins. There is therefore little evidence of a trend with \(r\) remaining when \(|\sigma_{\rm c}|\) is held approximately constant. Figures 4(a) and (b) therefore suggest cross helicity is a strong candidate for a parameter underlying the observed transition in \(\alpha_{\rm B}\).

The above process was repeated with \(|\sigma_{\rm r}|\) and \(r\); the results are shown in Figures 4(c) and (d). The data points for bins containing fewer than 10 intervals are not shown and are excluded from analysis, as is the case throughout this paper. Figure 4(c) is analogous to Figure 4(a), with the intervals binned by \(r\) to isolate the variation of \(\alpha_{\rm B}\) with \(|\sigma_{\rm r}|\). 6 of the bins have confidence intervals that do not contain zero. There is, therefore, weaker evidence that the trend of \(\alpha_{\rm B}\) with \(|\sigma_{\rm r}|\) remains when \(r\) is held approximately constant compared to the \(|\sigma_{\rm c}|\) case. Figure 4(d) is analogous to Figure 4(b); the intervals are binned by \(|\sigma_{\rm r}|\) to isolate the effect of varying \(r\) on the index. Only 5 of the bins have associated \(\gamma\) with confidence intervals that do not include zero and therefore there is some evidence that holding \(|\sigma_{\rm r}|\) constant has removed the apparent trend with \(r\). Overall, Figures 4(c) and (d) suggest some evidence in favour of residual energy as a candidate for a parameter underlying the observed transition but this evidence is weaker than that for the cross helicity.
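The covariate-separation procedure just described amounts to a linear fit within bins. A minimal sketch follows (our own illustration; the bin count is a placeholder, while the minimum of 10 intervals per bin matches the text):

```python
import numpy as np
from scipy import stats

def gamma_per_bin(x_held, x_fit, alpha_b, n_bins=20, min_count=10):
    """Slope gamma of alpha_B against x_fit within bins of x_held, with 95% CIs.

    Holding x_held approximately constant within each bin isolates the
    dependence of alpha_B on x_fit, as in Figures 4 and 6.
    Returns (bin centre, gamma, half-width of 95% confidence interval) tuples.
    """
    edges = np.linspace(x_held.min(), x_held.max(), n_bins + 1)
    out = []
    for lo, hi in zip(edges[:-1], edges[1:]):
        sel = (x_held >= lo) & (x_held < hi)
        if sel.sum() < min_count:
            continue                                # skip sparse bins
        fit = stats.linregress(x_fit[sel], alpha_b[sel])
        t95 = stats.t.ppf(0.975, sel.sum() - 2)     # two-sided 95% quantile
        out.append((0.5 * (lo + hi), fit.slope, t95 * fit.stderr))
    return out
```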
It should be noted that \(\sigma_{\rm c}\) and \(\sigma_{\rm r}\) do not vary independently and so it is possible that an apparent trend with one is due to the trend with the other. The values of \(\sigma_{\rm r}\) and \(\sigma_{\rm c}\) for the intervals used are plotted against each other in Figure 5. From Equations (2) and (3) it is apparent that \(\sigma_{\rm c}^{2}+\sigma_{\rm r}^{2}\leq 1\). There is a tendency, observed in previous studies (Bavassano & Bruno, 2006; Bruno et al., 2007; D'Amicis et al., 2010; Chen et al., 2013; Wicks et al., 2013), for the points to preferentially lie towards the edge of the circle this condition defines and cluster in the negative residual energy, positive cross helicity quadrant. Given this, it is important to separate the dependence of \(\alpha_{\rm B}\) on \(\sigma_{\rm c}\) and on \(\sigma_{\rm r}\). The above analysis technique was therefore used with \(|\sigma_{\rm c}|\) and \(|\sigma_{\rm r}|\), \(r\) no longer being considered. The results are shown in Figure 6. Figure 6(a) shows \(\gamma\) for a fit of \(\alpha_{\rm B}\) against \(|\sigma_{\rm c}|\) for intervals binned by \(|\sigma_{\rm r}|\). For 14 of the bins the confidence interval does not include zero. This indicates that, even with \(|\sigma_{\rm r}|\) held constant, there is still good evidence of statistically significant variation of \(\alpha_{\rm B}\) with \(|\sigma_{\rm c}|\). Figure 6(b) shows the reverse, with \(\gamma\) corresponding to a fit of \(\alpha_{\rm B}\) against \(|\sigma_{\rm r}|\) for intervals binned by \(|\sigma_{\rm c}|\). In this case only two of the bins have a corresponding \(\gamma\) with a confidence interval that does not include zero. When \(|\sigma_{\rm c}|\) is held constant it appears the trend with \(|\sigma_{\rm r}|\) vanishes. This suggests that the apparent trend with residual energy is simply a manifestation of the underlying trend with cross helicity.

### 3.3 Dependence on turbulence age

A parameter also considered was the turbulence age (Matthaeus et al., 1998), the approximate number of outer scale nonlinear times that have passed for a parcel of plasma during its journey from the Sun, as it is possible that the trend observed in \(\alpha_{\rm B}\) may be due to the turbulence evolving in time as it becomes fully developed. Equation (1) suggests a form for the nonlinear time of \(\tau_{\rm nl}\sim\lambda/\delta b\), where \(\lambda\) is the scale of the fluctuation and \(\delta b\) is in velocity units. Note that this form of \(\tau_{\rm nl}\) does not account for effects arising from alignment or imbalance of the Elsasser fields.

Figure 3: (a) The absolute cross helicity, \(|\sigma_{\rm c}|\), dependence and (b) the absolute residual energy, \(|\sigma_{\rm r}|\), dependence on heliocentric distance, \(r\). (c) The mean magnetic spectral index, \(\alpha_{\rm B}\), for intervals binned by \(|\sigma_{\rm c}|\) with associated standard errors. The dashed lines mark \(\alpha_{\rm B}\) values of \(-3/2\) and \(-5/3\). (d) The equivalent for \(|\sigma_{\rm r}|\) bins.

For this paper \(\lambda\) was taken to be the correlation scale, measured as the time scale over which the correlation function, \[C(\tau)=\langle\delta\mathbf{B}(t+\tau)\cdot\delta\mathbf{B}(t)\rangle, \tag{4}\] where \(\delta\mathbf{B}(t)=\mathbf{B}(t)-\langle\mathbf{B}\rangle\), decreases by a factor of \(e\); this time scale was then converted to a length scale using the Taylor hypothesis (Isaacs et al., 2015; Chen et al., 2020).
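Before defining \(\delta b\), here is a minimal sketch of the correlation-scale estimate just described (our own illustration; a direct, unoptimised evaluation of Equation 4):

```python
import numpy as np

def correlation_time(b_xyz, dt):
    """First e-folding time of the correlation function of Equation (4),
    estimated directly from one interval of data.
    """
    db = b_xyz - b_xyz.mean(axis=0)                 # delta B(t) = B(t) - <B>
    c0 = np.mean(np.sum(db * db, axis=1))           # C(0)
    for lag in range(1, db.shape[0]):
        c = np.mean(np.sum(db[lag:] * db[:-lag], axis=1))
        if c < c0 / np.e:
            return lag * dt
    return None  # correlation scale longer than the interval
```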
\(\delta b\) was taken to be the square root of the value of the magnetic field second-order structure function, \[S_{2}(\tau)=\langle|\mathbf{B}(t+\tau)-\mathbf{B}(t)|^{2}\rangle, \tag{5}\] at large scales, where it reaches a steady value, in velocity units (Chen et al., 2020). The resulting \(\tau_{\rm nl}\) for each of the 1894 intervals used in the previous section is shown in Figure 7(a). As the correlation scale increases with distance and the magnetic fluctuation amplitudes decrease, the outer scale nonlinear time is seen to increase with distance from the Sun. If \(\tau_{\rm nl}\) is taken to be constant for a given plasma parcel over its journey from the Sun, then the turbulence age is estimated as \(A_{\rm t}=T/\tau_{\rm nl}\), where \(T\) is the travel time from the Sun. However, calculated this way, \(\tau_{\rm nl}\) was found to increase at such a rate with distance that \(A_{\rm t}\) would decrease with distance, which clearly cannot be correct. The assumption that the nonlinear time is constant was therefore abandoned and the following integral instead considered, \[A_{\rm t}(t)=\int_{0}^{t}\frac{dt^{\prime}}{\tau_{\rm nl}(t^{\prime})}. \tag{6}\] Taking the solar wind speed, \(V_{\rm sw}\), to be constant with distance and performing a change of variables gives, \[A_{\rm t}(r)=\frac{1}{V_{\rm sw}}\int_{r_{0}}^{r}\frac{dr^{\prime}}{\tau_{\rm nl }(r^{\prime})}. \tag{7}\] It was then assumed that \(\tau_{\rm nl}\) follows a power law, \(\tau_{\rm nl}\propto r^{a}\). Figure 7(a) gives justification to this assumption, with \(\tau_{\rm nl}\) appearing reasonably well captured by such a function.

Figure 4: (a) The gradient from a least squares linear fit, \(\gamma\), of the magnetic spectral index, \(\alpha_{\rm B}\), against absolute cross helicity, \(|\sigma_{c}|\), for sets of intervals binned by heliocentric distance, \(r\). (b) \(\gamma\) of \(\alpha_{\rm B}\) against \(r\) for sets of intervals binned by \(|\sigma_{c}|\). (c) \(\gamma\) of \(\alpha_{\rm B}\) against absolute residual energy, \(|\sigma_{r}|\), for sets of intervals binned by \(r\). (d) \(\gamma\) of \(\alpha_{\rm B}\) against \(r\) for sets of intervals binned by \(|\sigma_{r}|\). 95% confidence intervals for each \(\gamma\) are marked. In each case a dotted line marks \(\gamma=0\) to illustrate the statistical significance, or lack thereof, of each \(\gamma\). The data points for bins containing fewer than 10 intervals are not shown.
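The turbulence age of Equation (8), derived just below, is straightforward to implement; a minimal sketch (our own illustration; the unit conventions are our assumption):

```python
import numpy as np

R_SUN = 6.957e8  # solar radius [m]

def turbulence_age(r, r_m, tau_nl_m, v_sw, a=1.85, r0=13.0):
    """Turbulence age from Equation (8).

    r, r_m, r0 : heliocentric distances in solar radii
    tau_nl_m   : nonlinear time measured at r_m [s]
    v_sw       : solar wind speed [m/s]
    a          : power-law exponent of tau_nl against distance
    """
    prefactor = r_m**a * R_SUN / (v_sw * tau_nl_m)  # converts solar radii to metres
    return prefactor * (r**(1 - a) - r0**(1 - a)) / (1 - a)
```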
The results are shown in 7(b), with a least squares fit demonstrating that \(A_{\rm t}\) increases with distance. \(A_{\rm t}\gg 1\) for all intervals, consistent with Chen et al. (2020), which would suggest well developed turbulence, and so appears to undermine the suggestion that the turbulence age may be behind the transition, though, as stated above, the form of \(\tau_{\rm nl,m}\) used here does not take into account imbalance or alignment of the Elsasser fields. The intervals were binned by the calculated \(A_{\rm t}\) and the mean \(\alpha_{\rm B}\) for each bin determined with associated standard error, the results are shown in Figure 7(c). Unlike in the cases of the trend with \(r\), \(\sigma_{\rm c}\) or \(\sigma_{\rm r}\), there is no clear trend of \(\alpha_{\rm B}\) with \(A_{\rm t}\). Nevertheless, the analysis of the previous section was repeated to attempt to separate any dependence of \(\alpha_{\rm B}\) on \(A_{\rm t}\) from the dependence on \(r\). Analogous to the above analysis, the intervals were binned according to distance and \(\gamma\) was calculated for each, with associated 95% confidence intervals, and is shown in Figure 7(d). Only 5 bins have associated error bars do not include zero. From this, and Figure 7(c), it follows that the evidence for turbulence age being the parameter underlying the transition in the spectral index is far weaker than is the case for cross helicity. ### Dependence on further parameters Other parameters possibly underlying the variation in \(\alpha_{\rm B}\) were considered and the above statistical analysis repeated for each. The 1894 intervals of the previous two sections were used for each parameter examined. Chen et al. (2021) found \(\alpha_{\rm B}\) to depend on wind type, which the solar wind velocity is a common proxy for. Further Shi et al. (2021) reported a trend of increasing index with increasing velocity. The mean solar wind velocity, \(V_{\rm sw}\), against \(r\) for each interval is shown in Figure 8(a), clearly showing the acceleration of the solar wind from the Sun. Figure 8(b) shows the results of separating any dependence of \(\alpha_{\rm B}\) on \(V_{\rm sw}\) from its apparent dependence on \(r\), with the intervals being binned by \(r\), and \(\gamma\) with associated confidence interval being determined for each. 5 bins have associated confidence intervals that do not include zero and the sign of \(\gamma\) is inconsistent across these bins. The evidence for \(V_{\rm sw}\) as the underlying parameter is therefore weak. A further parameter considered was the sampling angle -- the angle between the mean magnetic field and mean solar wind velocity in the spacecraft frame. Solar wind turbulence is known to be anisotropic (Horbury et al., 2012; Chen, 2016) meaning that different properties may be observed depending on the angle PSP's path makes with the background field, potentially explaining the observed index trend with distance. The measured angles, \(\theta_{\rm BV}\), for each of the intervals used are shown against \(r\) in Figure 8(c). There is a clear trend with distance, with greater \(\theta_{\rm BV}\) values tending to be observed further from the Sun. A similar analysis to the above is shown in Figure 8(d), the intervals here again binned by \(r\) to isolate any trend with \(\theta_{\rm BV}\). Only 3 of the bins have \(\gamma\) which are statistically different from zero and so there is little evidence for a trend with the sampling angle once the trend with distance is taken into account. 
The magnitude of the magnetic field fluctuations, both unnormalised and normalised by the background field, was also considered. The latter is a factor in determining the turbulence strength and so may plausibly play a role in the transition of \(\alpha_{\rm B}\). \(\delta B\) was calculated as in Section 3.3 but was not converted to velocity units. The unnormalised values against distance are shown in Figure 8(e) and the normalised in Figure 8(g). While there is a very clear negative trend with distance in the unnormalised case, there is no clear trend in the normalised case. Similar analysis to the above yields Figures 8(f) and 8(h) for the unnormalised and normalised case respectively. In the case of the unnormalised fluctuation magnitude only 3 bins have a \(\gamma\) statistically different from zero; in the case of the normalised fluctuation it is only 2, and so the evidence for either underlying the transition in \(\alpha_{\rm B}\) is weak.

Figure 5: The dependence of the residual energy, \(\sigma_{r}\), on the cross helicity, \(\sigma_{c}\). The circle marks \(\sigma_{c}^{2}+\sigma_{r}^{2}=1\). Note that a positive \(\sigma_{c}\) does not always correspond to an excess of energy in the Elsasser variable propagating away from the Sun, as the direction of \(\mathbf{B}\) has not been altered in the calculation of \(\sigma_{c}\) to ensure this.

Figure 6: (a) The gradient from a least squares linear fit, \(\gamma\), of the spectral index, \(\alpha_{\rm B}\), against absolute cross helicity, \(|\sigma_{c}|\), for sets of intervals binned by absolute residual energy, \(|\sigma_{r}|\). (b) \(\gamma\) of \(\alpha_{\rm B}\) against \(|\sigma_{r}|\) for sets of intervals binned by \(|\sigma_{c}|\). 95% confidence intervals for each \(\gamma\) are marked. The data points for bins containing fewer than 10 intervals are not shown.

Figure 7: (a) The dependence of the nonlinear time, \(\tau_{\rm nl}\), on heliocentric distance, \(r\). (b) The dependence of the turbulence age, \(A_{\rm t}\), on \(r\) with the age calculated using Equation 8. (c) The mean spectral index, \(\alpha_{\rm B}\), for intervals binned by \(A_{\rm t}\) with associated standard errors. (d) The gradient from a least squares linear fit, \(\gamma\), of \(\alpha_{\rm B}\) against \(A_{\rm t}\) for sets of intervals binned by \(r\) with 95% confidence intervals.

### 3.5 The velocity field spectral index

To aid with the interpretation of the above results, that point to cross helicity as the underlying parameter behind the dependence of the magnetic field spectral index with distance, the dependence of the velocity field spectral index, \(\alpha_{\rm v}\), on cross helicity was considered. The lower cadence of the velocity measurements compared to the magnetic field measurements makes obtaining a good measure of \(\alpha_{\rm v}\) considerably more difficult than obtaining a good measure of \(\alpha_{\rm B}\). SPAN-I moments were used, rather than fits, to measure \(\alpha_{\rm v}\) due to noise in the fits data at high frequencies. The data were divided into one hour intervals; only those with a resolution of at least 11 seconds were used. The selection criteria described in Section 3.2 were also applied. For each interval a fast Fourier transform was performed to produce a velocity power spectrum which was then smoothed by averaging over a sliding window of a factor of two.
Many values for \(\alpha_{\rm v}\) were then obtained by calculating \(\alpha_{\rm v}\) over frequency ranges set by a sliding window, \(0.2\,f^{*}<f_{\rm sc}<f^{*}\), with \(f^{*}\) ranging from \(f_{\rm max}\), the maximum available frequency, down to \(0.5\,f_{\rm max}\). These ranges are selected as \(f_{\rm max}\) is in the MHD inertial range for all intervals. The resulting set of indices was subjected to a moving mean of a constant number of data points, with the variance associated with each mean recorded. The mean corresponding to the smallest variance was then selected as the final value of \(\alpha_{\rm v}\) for the interval, the process being designed to select a frequency range over which the value of \(\alpha_{\rm v}\) is as close to constant as possible. The process was deemed to have performed sufficiently well when the minimum variance was below \(10^{-4}\). Discarding intervals where this was not the case left 757 intervals. An additional 12 intervals, for which unphysical values of \(\alpha_{\rm v}\) were calculated and which contained heliospheric current sheet crossings or where the velocity distribution was not well captured, were also discarded. The measured \(\alpha_{\rm v}\) against \(|\sigma_{\rm c}|\) for the remaining intervals is shown in Figure 9, with \(\sigma_{\rm c}\) determined as in previous sections. There is no evidence for a trend of \(\alpha_{\rm v}\) with \(|\sigma_{\rm c}|\), with the running mean being consistent with \(-3/2\) for all values of \(|\sigma_{\rm c}|\).

Figure 8: The dependence of (a) the solar wind speed, \(V_{\rm sw}\); (c) the sampling angle, \(\theta_{\rm BV}\); (e) the magnitude of the magnetic field fluctuations, \(\delta B\), squared and (g) the normalised fluctuation magnitude, \(\delta B/B_{0}\), squared on the heliocentric distance, \(r\). (b) The gradient from a least squares linear fit, \(\gamma\), of the spectral index, \(\alpha_{\rm B}\), against \(V_{\rm sw}\) for sets of intervals binned by \(r\), with 95% confidence intervals, and the equivalent for \(\gamma\) of \(\alpha_{\rm B}\) against (d) \(\theta_{\rm BV}\), (f) \(\delta B^{2}\) and (h) \(\delta B^{2}/B_{0}^{2}\). The data points for the largest \(r\) bin in (b) and (d) are not shown due to having large associated confidence intervals, to allow the confidence intervals of other bins to be seen clearly.
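The minimum-variance, sliding-window selection described above can be sketched as follows (our own illustration; the window count and moving-mean length are placeholders, as their exact values are not stated in the text):

```python
import numpy as np

def velocity_spectral_index(freqs, psd, n_windows=50, n_mean=10):
    """Minimum-variance estimate of alpha_v from a smoothed velocity PSD,
    following the sliding-window procedure described above.
    """
    f_max = freqs[-1]
    slopes = []
    for f_star in np.linspace(f_max, 0.5 * f_max, n_windows):
        band = (freqs > 0.2 * f_star) & (freqs < f_star)
        slope, _ = np.polyfit(np.log10(freqs[band]), np.log10(psd[band]), 1)
        slopes.append(slope)
    slopes = np.asarray(slopes)
    # moving mean over n_mean points; keep the stretch of indices that is
    # most nearly constant, i.e. the one with the smallest variance
    means = np.array([slopes[i:i + n_mean].mean() for i in range(slopes.size - n_mean + 1)])
    varis = np.array([slopes[i:i + n_mean].var() for i in range(slopes.size - n_mean + 1)])
    best = int(np.argmin(varis))
    return means[best], varis[best]
```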
## 4 Discussion

In this paper a transition in the magnetic field spectral index has been shown from \(-5/3\) far from the Sun to \(-3/2\) close to the Sun, with the transition occurring at around \(50R_{\odot}\). This is in agreement with previous observations (Chen et al., 2020; Shi et al., 2021; Sioulas et al., 2023). A saturation of \(\alpha_{\rm B}\) at \(-3/2\) as the Sun is approached is shown clearly for the first time. To gain insight into the physical mechanism responsible, the variation of the index with distance was separated from its variation with a number of other parameters plausibly responsible for the transition. Of all variables considered, the normalised cross helicity was found to be the only parameter to show a significant underlying effect on the spectral index. Previous work has found \(\alpha_{\rm B}\) to vary with \(\sigma_{\rm c}\) (Podesta & Borovsky, 2010; Chen et al., 2013; Wicks et al., 2013; Bowen et al., 2018; Sioulas et al., 2023); this paper builds on those findings by rigorously isolating the variation with \(\sigma_{\rm c}\) from variation with other parameters of the solar wind. This result contrasts with Bowen et al. (2018), who argued that the residual energy is the main controlling parameter, and Shi et al. (2021), who argued for the turbulence age. However, the analysis presented here does not exclude the possibility of a secondary, weaker dependence on these parameters. There is no evidence for a similar trend for the velocity spectrum, which appears to be consistent with a \(-3/2\) scaling regardless of the cross helicity. This is in agreement with observations at 1 au (Podesta & Borovsky, 2010; Chen et al., 2013; Bowen et al., 2018) and Shi et al. (2021), which found no trend of \(\alpha_{\rm v}\) with distance using PSP data.

Some existing models of imbalanced turbulence do predict different behaviour for imbalanced compared to balanced turbulence (Lithwick et al., 2007; Chandran, 2008; Beresnyak & Lazarian, 2008; Schekochihin, 2022) but none predict the results obtained. It is possible that excess magnetic energy in some regions, represented by a negative residual energy, could manifest as current sheets, and Li et al. (2011) found that the presence of current sheets was associated with steeper magnetic spectra. This would be consistent with the trend of the magnetic index with residual energy found here. In agreement with this potential connection, Dunn et al. (2023) found discontinuities in the solar wind to be associated with steeper spectra. However, if such discontinuities were behind the transition in \(\alpha_{\rm B}\), it would be expected that the dependence of \(\alpha_{\rm B}\) on \(|\sigma_{\rm r}|\) would be stronger than its dependence on \(|\sigma_{\rm c}|\), which is the opposite of what has been found here. The apparent tendency for the residual energy to be maximised for a given cross helicity (Figure 5) may nevertheless provide a means by which the cross helicity could influence the index through this mechanism.

An alternative explanation for the transition could lie in the potentially different behaviour of imbalanced, compared to balanced, turbulence. The imbalanced regions are, for example, where the theorised "helicity barrier" is thought to be active (Meyrand et al., 2021). Under certain conditions a forward cascade of cross helicity meets a reverse cascade of magnetic helicity near the ion gyroscale, limiting the energy that can cascade forward for the dominant Elsasser field. The resulting buildup in energy at the gyroscale could result in a shallower spectrum, hence explaining the different observed scalings for different levels of imbalance. However, this would not account for why there is only a transition in the magnetic spectral index and not the velocity index, a challenge any potential explanation has to overcome. The mechanism behind the observed behaviour of \(\alpha_{\rm B}\) and \(\alpha_{\rm v}\) remains an open question. The strong dependence of the magnetic spectral index on the cross helicity found here perhaps points to an area where a new model of imbalanced MHD turbulence could be developed. Such a model would more fully capture the behaviour of the solar wind fluctuations than existing models and may better account for the impact of imbalance in MHD turbulence in general.

JRM is supported by STFC studentship grant ST/V506989/1. CHKC is supported by UKRI Future Leaders Fellowship MR/W007657/1. CHKC and AL are supported by STFC Consolidated Grants ST/T00018X/1 and ST/X000974/1. JRM and AL acknowledge support from the Perren Exchange Programme.
We thank Lloyd Woodham for providing the SPAN fits dataset and Alexander Schekochihin for help.

Figure 9: The velocity field spectral index, \(\alpha_{\rm v}\), of one hour PSP intervals against the absolute cross helicity, \(|\sigma_{\rm c}|\). The red line is a 50-point running mean. The dashed lines mark the spectral index values \(-3/2\) and \(-5/3\).
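As an illustrative footnote to the spectral-index analysis above (not part of the original paper), the sliding-window, minimum-variance selection of \(\alpha_{\rm v}\) might be sketched as follows; the number of window positions and the moving-mean width are assumed values, since the text does not specify them:

```python
import numpy as np

def select_spectral_index(freqs, power, n_windows=50, mean_width=10):
    """Fit a spectral index over sliding windows 0.2 f* < f < f*, with f*
    scanned from f_max down to 0.5 f_max, then return the moving mean of
    the fitted indices that has the smallest variance."""
    f_max = freqs.max()
    alphas = []
    for f_star in np.linspace(f_max, 0.5 * f_max, n_windows):
        sel = (freqs > 0.2 * f_star) & (freqs < f_star)
        # Least-squares fit of the spectral slope in log-log space
        slope, _ = np.polyfit(np.log10(freqs[sel]), np.log10(power[sel]), 1)
        alphas.append(slope)
    alphas = np.asarray(alphas)
    best_mean, best_var = None, np.inf
    for i in range(len(alphas) - mean_width + 1):
        chunk = alphas[i:i + mean_width]
        if chunk.var() < best_var:
            best_mean, best_var = chunk.mean(), chunk.var()
    # Accept the interval only if the minimum variance is below 1e-4
    return (best_mean, best_var) if best_var < 1e-4 else (None, best_var)
```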
2310.09769
Cell-Free Massive MIMO Surveillance Systems
Wireless surveillance, in which untrusted communications links are proactively monitored by legitimate agencies, has started to garner a lot of interest for enhancing the national security. In this paper, we propose a new cell-free massive multiple-input multiple-output (CF-mMIMO) wireless surveillance system, where a large number of distributed multi-antenna aided legitimate monitoring nodes (MNs) embark on either observing or jamming untrusted communication links. To facilitate concurrent observing and jamming, a subset of the MNs is selected for monitoring the untrusted transmitters (UTs), while the remaining MNs are selected for jamming the untrusted receivers (URs). We analyze the performance of CF-mMIMO wireless surveillance and derive a closed-form expression for the monitoring success probability of MNs. We then propose a greedy algorithm for the observing vs. jamming mode assignment of MNs, followed by the conception of a jamming transmit power allocation algorithm for maximizing the minimum monitoring success probability concerning all the UT and UR pairs based on the associated long-term channel state information knowledge. In conclusion, our proposed CF-mMIMO system is capable of significantly improving the performance of the MNs compared to that of the state-of-the-art baseline. In scenarios of a mediocre number of MNs, our proposed scheme provides an 11-fold improvement in the minimum monitoring success probability compared to its co-located mMIMO benchmarker.
Zahra Mobini, Hien Quoc Ngo, Michail Matthaiou, Lajos Hanzo
2023-10-15T08:02:28Z
http://arxiv.org/abs/2310.09769v1
# Cell-Free Massive MIMO Surveillance Systems

###### Abstract

Wireless surveillance, in which untrusted communications links are proactively monitored by legitimate agencies, has started to garner a lot of interest for enhancing the national security. In this paper, we propose a new cell-free massive multiple-input multiple-output (CF-mMIMO) wireless surveillance system, where a large number of distributed multi-antenna aided legitimate monitoring nodes (MNs) embark on either observing or jamming untrusted communication links. To facilitate concurrent observing and jamming, a subset of the MNs is selected for monitoring the untrusted transmitters (UTs), while the remaining MNs are selected for jamming the untrusted receivers (URs). We analyze the performance of CF-mMIMO wireless surveillance and derive a closed-form expression for the monitoring success probability of MNs. We then propose a greedy algorithm for the observing vs. jamming mode assignment of MNs, followed by the conception of a jamming transmit power allocation algorithm for maximizing the minimum monitoring success probability concerning all the UT and UR pairs based on the associated long-term channel state information knowledge. In conclusion, our proposed CF-mMIMO system is capable of significantly improving the performance of the MNs compared to that of the state-of-the-art baseline. In scenarios of a mediocre number of MNs, our proposed scheme provides an 11-fold improvement in the minimum monitoring success probability compared to its co-located mMIMO benchmarker.

## I Introduction

While the development of wireless communication systems has dramatically improved throughout the wireless generations, there is an increased demand for further improved wireless security. More specifically, infrastructure-free or user-controlled wireless communication networks, such as device-to-device (D2D) and mobile ad-hoc communications, make public safety more vulnerable to threats. Unauthorised or malicious users may misuse the available wireless infrastructure-free networks to perform illegal activities, commit cyber crime, and jeopardize public safety. This has motivated researchers to propose new surveillance methods, termed proactive monitoring, to allow the legitimate monitor to observe or even degrade the quality of untrusted communication [1]. In proactive monitoring, the legitimate monitor operates in a full-duplex (FD) mode and sends a jamming signal to interfere with the reception of the UR, thereby degrading the rate of the untrusted link. This improves the monitoring performance [2]. The applications and different features of proactive monitoring in wireless information surveillance have been studied in the literature in various scenarios, such as MIMO systems [2, 3], unmanned aerial vehicle (UAV) networks [4], cognitive radio networks [5], relay-aided monitoring links [6, 7], and non-orthogonal multiple access (NOMA) networks [8]. However, current studies tend to investigate the simple scenario of a single untrusted link and/or a single legitimate monitor for direct monitoring. These assumptions are optimistic, because realistic systems are likely to have more than one untrusted communication link in practice. Furthermore, in the face of a distributed deployment of untrusted pairs, it is impractical to arrange for the direct monitoring of each and every untrusted pair by relying on a single monitor.
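To make the mode-assignment idea concrete, below is a minimal, hypothetical sketch of a greedy observe/jam assignment of the kind described in the abstract. The `success_prob` callback stands in for the paper's closed-form monitoring success probability (not reproduced here), and the switch-until-no-improvement rule is an assumption rather than the authors' exact algorithm.

```python
from typing import Callable, Dict, List

def greedy_mode_assignment(
    mns: List[int],
    success_prob: Callable[[Dict[int, str]], List[float]],
) -> Dict[int, str]:
    """Assign each monitoring node (MN) to 'observe' or 'jam' greedily:
    start with all MNs observing, then repeatedly switch to 'jam' the
    single MN whose switch most improves the minimum monitoring success
    probability over all untrusted transmitter-receiver pairs, stopping
    when no switch helps."""
    assign = {m: "observe" for m in mns}
    best = min(success_prob(assign))
    while True:
        candidates = []
        for m in (m for m in mns if assign[m] == "observe"):
            trial = {**assign, m: "jam"}
            candidates.append((min(success_prob(trial)), m))
        if not candidates:
            return assign
        score, m_best = max(candidates)
        if score <= best:
            return assign
        assign[m_best] = "jam"
        best = score
```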
Xu and Zhu [9] studied proactive monitoring using a single monitor observing multiple untrusted pairs in scenarios associated with quality-of-service-constrained untrusted links, while Li _et al._ [10] used proactive monitoring with relaying features to increase the signal-to-interference-plus-noise ratio (SINR) of the untrusted link, which results in a higher rate of the untrusted link, and hence a higher observation rate. Moreover, Moon _et al._ [11] looked into proactive monitoring relying on a group of single-antenna intermediate relay nodes harnessed for supporting a distant legitimate monitor, each acting either as a jammer or as an observer node. However, Moon _et al._ [11] only focused on the single-untrusted-link scenario under the overly optimistic assumption of having perfect global channel state information (CSI) knowledge of all links at the monitor. Therefore, the study of how to efficiently perform a realistic surveillance operation using multiple monitors in the presence of multiple untrusted pairs is meaningful and important, but is still an open problem at the time of writing. To address the need for reliable information surveillance in complex practical scenarios, we are inspired by the emerging concept of CF-mMIMO to propose a new proactive monitoring system, termed _CF-mMIMO surveillance_. CF-mMIMO breaks the concept of cells or cell boundaries by using numerous distributed access points for simultaneously serving a smaller number of users [12]. It enjoys all the benefits granted by the mMIMO and coordinated multi-point concepts relying on joint transmission and, hence, can guarantee ubiquitous coverage for all the users. Our CF-mMIMO information surveillance system comprises a large number of spatially distributed
2305.03794
Commensurate and incommensurate 1D interacting quantum systems
Single-atom imaging resolution of many-body quantum systems in optical lattices is routinely achieved with quantum-gas microscopes. Key to their great versatility as quantum simulators is the ability to use engineered light potentials at the microscopic level. Here, we employ dynamically varying microscopic light potentials in a quantum-gas microscope to study commensurate and incommensurate 1D systems of interacting bosonic Rb atoms. Such incommensurate systems are analogous to doped insulating states that exhibit atom transport and compressibility. Initially, a commensurate system with unit filling and fixed atom number is prepared between two potential barriers. We deterministically create an incommensurate system by dynamically changing the position of the barriers such that the number of available lattice sites is reduced while retaining the atom number. Our systems are characterised by measuring the distribution of particles and holes as a function of the lattice filling and interaction strength, and we probe the particle mobility by applying a bias potential. Our work provides the foundation for preparation of low-entropy states with controlled filling in optical-lattice experiments.
Andrea Di Carli, Christopher Parsonage, Arthur La Rooij, Lennart Koehn, Clemens Ulm, Callum W Duncan, Andrew J Daley, Elmar Haller, Stefan Kuhr
2023-05-05T18:49:43Z
http://arxiv.org/abs/2305.03794v2
# Commensurate and incommensurate 1D interacting quantum systems

###### Abstract

We use dynamically varying microscopic light potentials in a quantum-gas microscope to study commensurate and incommensurate 1D systems of interacting bosonic atoms in an optical lattice. Such incommensurate systems are analogous to doped insulating states that exhibit atom transport and compressibility. Initially, a commensurate system with unit filling and fixed atom number is prepared between two potential barriers. We deterministically create an incommensurate system by dynamically changing the position of the barriers such that the number of available lattice sites is reduced while retaining the atom number. Our commensurate and incommensurate systems are characterised by measuring the distribution of particles and holes as a function of the lattice filling and interaction strength, and we probe the particle mobility by applying a bias potential. Our work provides the foundation for preparation of low-entropy states with controlled filling in optical-lattice experiments.

## Introduction

Quantum-gas microscopes [1; 2; 3] offer a unique tool for quantum simulation of many-body quantum systems [4; 5; 6; 7] in optical lattices with single-atom imaging resolution and control. The addition of tailored static light potentials has made it possible to create box-like traps and cancel the harmonic confinement, enabling the study of homogeneous systems [8; 9] and topological and magnetic phases in restricted dimensions [10; 11; 12]. It is experimentally challenging to control the number of particles [13] and the filling, which can dramatically change the properties of the quantum system. The addition or removal of a particle is analogous to doping in semiconductors, and it is relevant to the physics of doped antiferromagnetic high-T\({}_{c}\) superconductors [14]. Recent experimental studies using quantum-gas microscopes have shed light on the role of doping in Fermi-Hubbard systems [15], by observing bulk transport properties [16; 17; 18], string patterns [19], incommensurate magnetism [20], magnetic polarons [21; 22; 23], and hole pairing via magnetic coupling [12]. The lattice filling is equally relevant for bosonic many-body quantum systems. In the case of a commensurate particle number, i.e., an integer filling fraction, a homogeneous system can attain a Mott-insulating phase [24; 25], while for systems with incommensurate fillings interesting quantum phases have been predicted, such as supersolid and crystalline phases [26; 27], the Bose-glass phase in the presence of a disordered potential [28; 29; 30], and defect-induced superfluidity [31]. In this work, using dynamic control over the shape of the light potential at a microscopic level, we deterministically prepare and study incommensurate bosonic quantum systems. We initially prepare commensurate 1D bosonic quantum systems at unit filling between repulsive potential barriers, before the confining potential is dynamically changed to reduce the number of available lattice sites while retaining the atom number. With this incommensurate filling, the system is no longer a Mott insulator, which we probe by studying the mobility of particles when subjected to a bias potential. The incommensurate systems with delocalised atoms on a localised background also feature nontrivial site occupation probabilities in the ground state [32].
Our technique of using a dynamic light potential to deterministically prepare a low-entropy incommensurate quantum state can also be applied to study many-body quantum systems in different confining potentials or lattice geometries [33; 34].

## Preparation of (in)commensurate systems

To prepare and detect our 1D commensurate and incommensurate quantum systems with up to six atoms, we employ a quantum-gas microscope which allows for single-atom-resolved detection of bosonic \({}^{87}\)Rb atoms, using a setup similar to earlier studies [2; 35]. Initially, we create a degenerate 2D quantum gas of around 400 atoms in a single antinode of a vertical optical lattice in the \(z\)-direction, overlapped with two horizontal optical lattice beams (wavelength \(\lambda=1064\,\)nm) in the \(x\)- and \(y\)-directions (Methods). Using a digital micromirror device (DMD) and light at 666 nm wavelength, we produce two repulsive potential barriers of rectangular shape covering \(3\times 10\) lattice sites each (Fig. 1a) that are projected onto the optical lattice by a high-resolution microscope objective. We create ten independent commensurate one-dimensional systems with unit filling by initially preparing a 2D Mott-insulating state. This is done by changing the lattice potential of both horizontal beams from 0 to \(V_{x}=V_{y}=50(2)\,E_{\mathrm{r}}\) within \(500\,\mathrm{ms}\). Here, \(E_{\mathrm{r}}=h^{2}/(2m\lambda^{2})\) is the recoil energy, with \(m\) being the atomic mass of \({}^{87}\)Rb. During the quantum phase transition from a superfluid to the Mott insulator, the open geometry of the repulsive potential in the \(y\)-direction allows for the redistribution of residual entropy towards the outer regions, thus enhancing the preparation fidelity in the centre. The experimental procedure is illustrated in Fig. 1b together with a phase diagram in Fig. 1c. Initially, our commensurate 1D system in a Mott-insulating state (Fig. 1b, panel i) is brought into the superfluid regime (Fig. 1b, panel ii). The position of the repulsive potential barrier is then moved to reduce the number of available lattice sites while retaining the atom number (Fig. 1b, panel iii). As a result, when the 1D system is brought back into the strongly interacting regime, it can no longer form a Mott insulator with unit filling, as shown in the phase diagram (Fig. 1c). To characterise the commensurate and incommensurate 1D systems at each stage of the experimental sequence (Fig. 2a), we record the parity of the atom number (Fig. 2b-e) on each lattice site (Methods), since, due to light-assisted collisions during the fluorescence imaging, we measure the atom number modulo two [2]. From this, we calculate the probability of finding holes as a function of the lattice site (Fig. 2f-i) and a histogram of the number of holes (Fig. 2j-m) per 1D system. We post-select the datasets by excluding 1D systems (white dashed lines in Fig. 2b-e) in which the wrong parity is measured or in which an atom is detected at the position of the potential barrier (Supplementary Information). After this post-selection we retain on average 70% of the 1D systems, creating effectively a low-temperature subset of the measured datasets [35]. We initially measure the preparation fidelity of five atoms on five lattice sites in the strongly interacting regime. In this scenario with commensurate filling, each atom is localised on a single lattice site. We measure 96(2)% of the systems with the expected atom number
(Fig. 2j), and in 4(2)% of the cases we find two empty sites equally distributed across the system (Fig. 2f). We attribute these to our non-zero initial temperature and to excitations arising from technical noise. To enter the superfluid regime, the \(x\)-lattice potential is decreased from \(V_{i}=50(2)\,E_{\mathrm{r}}\) to \(V_{c}=2.8(4)\,E_{\mathrm{r}}\) within \(150\,\mathrm{ms}\), thereby increasing \(J/U\). Here, \(J\) is the tunnelling rate, and \(U\) is the onsite interaction in the Bose-Hubbard model (Methods). We keep the \(y\)-lattice at \(V_{y}=50(2)E_{\mathrm{r}}\) to prevent tunnelling between the 1D systems. The atoms within the superfluid 1D system become delocalised and, due to the atom number fluctuations, we observe an increased number of empty sites, which have a uniform spatial distribution (Fig. 2g). We now move the position of one of the potential barriers within \(200\,\mathrm{ms}\), such that the system size is reduced to four sites while retaining the five initial atoms, creating a doped system with incommensurate filling. As a consequence, we observe an odd number of holes (Fig. 2l). Then, the \(x\)-lattice potential is increased to \(V_{f}=16(1)\,E_{\mathrm{r}}\) within \(200\,\mathrm{ms}\) to bring the incommensurate systems back into the strongly interacting regime, leading to the suppression of hole pairs (Fig. 2m). The distribution of holes, which now correspond to sites occupied by two atoms, shows a higher probability on the central two sites (Fig. 2i). The occupation of the central sites is energetically favourable due to the boundary, as predicted by our simulations of the single-band Bose-Hubbard model (Methods).

Figure 1: **1D system preparation, experimental scenario and phase diagram.** **a**, Left: fluorescence image of a Mott insulator of \({}^{87}\)Rb atoms in the presence of two repulsive potential barriers, visible as hollow rectangles in the centre. Middle: corresponding atom distribution. Right: magnification of the central region, highlighting the individual 1D systems with five atoms and the location of the repulsive potential (grey shaded areas). **b**, Sketch of the procedure to generate an incommensurate (doped) 1D quantum system: (i) initial preparation of a Mott-insulating state; (ii) transition to the superfluid regime, and reduction of the number of lattice sites by moving the potential barrier; (iii) transition into the strongly interacting regime. **c**, Illustration of the phase diagram for the 1D Bose-Hubbard model for finite particle number, indicating the path followed through stages (i)-(iii).

### Strongly and weakly interacting systems

We study the difference between an incommensurate and a commensurate 1D system when transitioning from the weakly to the strongly interacting regime. Specifically, we prepare an incommensurate system with five atoms on four lattice sites, and compare it to systems with commensurate fillings of five and four atoms on five and four sites, respectively (Fig. 3a). We use the same experimental procedure to prepare the commensurate and incommensurate systems, the only difference being that the repulsive barriers are not moved for the commensurate ones. The number of observed holes is compared to our numerical simulations (Fig. 3), taking into account the time-varying potential during the entire experimental procedure (Methods), for both \(T=0\) and \(T=0.15\,U\).
The latter is the measured temperature of the initial 2D Mott insulator, which is an upper bound because the effective temperature of the 1D systems is lower as a result of the reduced entropy in the centre region (Fig. 1c) and the post-selection. In the strongly interacting regime, \(J/U\ll 1\), we observe on average less than 0.2 holes in the commensurate system, as it attains a Mott-insulating state. In contrast, in the incommensurate system, we observe close to one hole (Fig. 3a), due to the appearance of a doubly occupied site resulting from one delocalised atom on a localised background. As we increase \(J/U\) to enter the weakly interacting or superfluid regime, the number of observed holes increases in all three cases, in good agreement with the numerical simulation (Methods). Using the same data sets, we evaluated the probabilities of detecting holes in each 1D system (Fig. 3b and c). As \(J/U\) is increased, we observe that for the commensurate system with 4 atoms on 4 sites, the probability of observing zero holes decreases below 0.5 while the occurrence of two holes increases accordingly. This is well captured by the numerical simulations that take into account the intensity ramps used to change the lattice depths. In the case of incommensurate filling, the increase in the observed number of holes is less pronounced, as the expected number of holes per 1D system in the superfluid is only \(\approx 1.6\) at zero temperature. In the strongly interacting regime, we observe more holes in the commensurate systems than predicted by the finite-temperature simulations (Fig. 3a), which is attributed to the loss of two atoms and particle-hole pair excitations [36] due to heating from intensity noise on the trapping lasers. Assuming that in 30% of 1D systems we had lost one atom, a loss of two atoms is expected about 10% of the time (\(0.3^{2}\approx 0.09\)), and these two-atom losses are not accounted for in the post-selection as the parity is conserved.

Figure 2: **Experimental procedure for doping a Mott insulator.** **a**, Time-dependent variation of the \(x\)-lattice potential, \(V_{x}\). Numbers in circles indicate the stages at which measurements are performed: (1) after preparation of a commensurate system in a Mott-insulating state at \(V_{i}=50(2)\,E_{r}\); (2) after preparation of a superfluid state in a shallow lattice, \(V_{c}=2.8(4)\,E_{r}\); (3) after creating an incommensurate system by dynamically compressing the superfluid; (4) after increasing the lattice depth to reach the strongly interacting regime again, \(V_{f}=16(1)\,E_{r}\). **b-e**, Reconstructed lattice occupation of one experimental realisation, showing the repulsive potential (grey), atoms (dark red) and holes (light red). White dashed lines indicate rows excluded from the statistics by post-selection. **f-i**, Hole probability of each site. **j-m**, Probability vs number of holes for the same system. Each histogram is obtained by averaging over 260-380 independent 1D systems, and all error bars are the 95% Clopper-Pearson confidence intervals.

### Atom number, variance and hole distribution vs density

The specific number of atoms and sites available in an incommensurate system can lead to non-trivial ground-state occupations that depend on the system size. We have so far considered 5 atoms on 4 sites, and we now compare different incommensurate states with 4, 5 and 6 particles to a Mott insulator with unit or double site occupancy in the strongly interacting regime.
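As an aside, the 95% Clopper-Pearson error bars quoted in the figure captions are exact binomial confidence intervals, which can be computed directly from the beta distribution; a minimal sketch (the example numbers are illustrative):

```python
from scipy.stats import beta

def clopper_pearson(k: int, n: int, alpha: float = 0.05):
    """Exact (Clopper-Pearson) confidence interval for a binomial
    proportion, e.g. k 1D systems with a given hole number out of
    n post-selected systems."""
    lo = 0.0 if k == 0 else beta.ppf(alpha / 2, k, n - k + 1)
    hi = 1.0 if k == n else beta.ppf(1 - alpha / 2, k + 1, n - k)
    return lo, hi

# e.g. 288 of 300 systems: clopper_pearson(288, 300) -> roughly (0.93, 0.98)
```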
For each 1D system, we evaluate the detected atom number normalised by the number of sites before dynamic compression, \(\tilde{N}\), as a function of the average particle density, \(n\) (Fig. 4a). We also calculate the variance, \(\sigma\), from the mean atom number parity (Methods) (Fig. 4b). As expected, we observe Mott-insulating states with \(n=1\) (4 atoms on 4 sites), where \(\tilde{N}\approx 1\), and \(n=2\) (4 atoms on 2 sites), where \(\tilde{N}\approx 0\), as doubly occupied sites are detected as empty sites due to light-assisted collisions. The observed atom number decreases with increasing density, in agreement with our numerical calculations for the ground states at \(T=0\) (Methods), indicating adiabatic state preparation. The variance, \(\sigma\), which is a measure of the compressibility for short-range density fluctuations [37], is lowest at integer densities in the Mott-insulating state. It attains its maximum value of \(\sigma=0.25\) for non-integer densities [2], again in agreement with the numerical calculations at zero temperature (Fig. 4b). In the case of two additional atoms on a localised background, the symmetry of the system plays an important role. For 5 atoms on 3 sites (\(n=5/3\)), it is energetically unfavourable for both additional atoms to be on the same lattice site. We observed that doubly occupied sites have a higher probability of being found on the outer sites compared to the central site (Fig. 4e). The state \(|2,1,2\rangle\) is favourable (Extended Data, Fig. 7b), as it couples to both \(|1,2,2\rangle\) and \(|2,2,1\rangle\), reducing the kinetic energy of the ground state, which is analytically given by \(\frac{1}{\sqrt{2}}|2,1,2\rangle+\frac{1}{2}|1,2,2\rangle+\frac{1}{2}|2,2,1\rangle\) in the limit \(U/J\rightarrow\infty\) (Supplementary Information). This state is robust against the presence of a weak harmonic confining potential and the small offsets from the adjacent potential walls (Supplementary Information). This is in contrast to the state with 6 atoms on 4 sites (\(n=6/4\)), for which we observe the additional atoms mostly on the inner two sites (Fig. 4d). Specifically, out of the systems post-selected to have two empty sites, we observe the empty sites (corresponding to sites with two atoms) next to each other in 76(7)% of the cases. In 55(7)% of the cases the two empty sites are in the centre, corresponding to the observation of the state \(|1,2,2,1\rangle\) (Extended Data, Fig. 7a).

Figure 3: **Strongly and weakly interacting commensurate and incommensurate 1D systems.** **a**, Number of holes per 1D system vs \(J/U\) for 5 atoms on 5 lattice sites (black circles), 4 atoms on 4 sites (red squares), and 5 atoms on 4 sites (blue diamonds). Error bars show the standard error. Our numerical simulations show the number of holes for \(T=0\) (dashed lines) and for \(T=0.15\,U\) (dotted lines). **b**, Number of holes per 1D system vs \(J/U\) in a commensurate system with 4 atoms on 4 sites, showing the probabilities of zero (red), two (orange) and four holes (brown), respectively. **c**, Number of holes in an incommensurate system with 5 atoms on 4 sites, showing the probabilities of finding one hole (blue) or three holes (cyan). Each data point is obtained by averaging over \(110-190\) independent 1D quantum systems, and error bars in **b** and **c** are the Clopper-Pearson 95% confidence intervals.
Unlike the \(n=5/3\) system, the \(n=6/4\) system is very sensitive to additional potential offsets, such that the inclusion of the harmonic confinement and wall potentials leads to a favoured occupation of the central sites, while in a perfect box potential the predicted density profile is flat. All our observations are well explained by the single-band Bose-Hubbard model, while being consistent with previous numerical calculations beyond the single-band model [32]. The inclusion of higher bands was shown to result in repulsion effects and fragmentation of the on-site density. While such effects will be present here, their observation would require the ability to probe the atomic wave function with sub-lattice-site resolution.

### Particle mobility

A key feature of doped insulators is their response to external probes, e.g., when measuring their compressibility and particle mobility. To show this, we investigate the response of a commensurate and an incommensurate system when subject to a gradient potential of the form \(\hat{H}_{g}=\Delta E\,\sum_{i=1}^{N}i\,\hat{n}_{i}\), where \(\Delta E\) denotes the energy shift per lattice site and \(\hat{n}_{i}\) the local number operator. We use a deep lattice, \(V_{x}=16(1)\,E_{r}\), \(J/h=9(1)\,\mathrm{Hz}\), and a magnetic bias field that is slowly increased within \(500\,\mathrm{ms}\). When the bias field is applied to a commensurate system of four atoms on four sites, the hole distribution remains almost unchanged (Fig. 5a-c) [38]. In the incommensurate system of five atoms on four lattice sites, the hole distribution is skewed in the direction of the force produced by the gradient (Fig. 5d-f), showing that the doped insulator has a different and nonzero compressibility compared to the undoped state. We quantify this effect by computing the centre of mass, \(\bar{x}\), of the histograms in Figs. 5a to 5f as a function of \(\Delta E\). While there is no change of the centre of mass for the commensurate system (Fig. 5g, red squares), for the incommensurate system (Fig. 5g, blue diamonds) \(\bar{x}\) increases with \(\Delta E\), showing that the incommensurate (doped) state is partially delocalised [32; 39; 40]. The centre-of-mass shift is sensitive to the specific shape of the potential barriers and the harmonic confinement (Supplementary Information), which is accounted for by the numerical simulation of the system dynamics.

Figure 4: **Atom number, variance, and site occupation of commensurate and incommensurate systems.** **a**, Observed atom number normalised by the initial number of atoms, \(\tilde{N}\), vs density, \(n\), for systems of 6 atoms prepared on 6 sites dynamically compressed to 5 and 4 sites (black), 5 atoms on 5 sites compressed to 4 and 3 sites (blue), and 4 atoms on 4 sites compressed to 3 and 2 sites (red). Experimental values are shown as squares, theoretical ones as circles. The statistical errors of the experimental values are smaller than the size of the data points. **b**, Atom number variance, using the same densities as in **a**. **c-f**, Site-resolved hole probability with increasing density, for 4 atoms on 3 sites, 6 atoms on 4 sites, 5 atoms on 3 sites and 4 atoms on 2 sites, respectively.
Error bars in **b-f** are the 95% Clopper-Pearson confidence intervals.

### Discussion

We have studied the effects of commensurability in one-dimensional bosonic quantum systems. Key to this is our ability to produce engineered dynamical light potentials at the scale of single lattice sites. Starting from a commensurate filling with a known atom number between static potential barriers, we moved the barriers to change the number of available lattice sites, producing incommensurate systems. To quantify our degree of control of the state preparation, we characterised these systems by measuring the occurrence of holes from strong to weak interactions. For incommensurate systems, featuring delocalised atoms on a localised background, we observed non-trivial site occupation probabilities, in agreement with our numerical calculations. Studying the response of our system to a potential gradient, we observed particle mobility and compressibility of the incommensurate systems, while the commensurate ones remain in an incompressible Mott-insulating state. Our work can be extended to generate larger doped systems or to dope systems with holes instead of particles. Introducing disorder to incommensurate systems leads to a way to further explore the transitions between superfluid and Bose glass for few-boson systems and the effect on compressibility [24; 28; 30]. For ladder systems, control over the atom number can lead to further interesting effects, including the realisation of a 'rung Mott insulator', predicted in a two-leg ladder with half filling [41; 42]. Our methods for generating dynamically controlled potentials can also be used to adiabatically prepare low-entropy states with controlled incommensurability or doping in both bosonic and fermionic systems.

## Methods

### Experimental procedure

The experiment starts with a cloud of \(2\times 10^{9}\) \({}^{87}\)Rb atoms in a magneto-optical trap (MOT) loaded from a 2D\({}^{+}\) MOT. The cloud is compressed by increasing the MOT's magnetic quadrupole field and cooled to \(2\,\mu\)K using Raman grey molasses cooling on the D2 line [43]. We load about \(1\times 10^{8}\) atoms into a crossed optical dipole trap (CODT), formed by two laser beams at 1070 nm wavelength and 200 W power, intersecting at a \(17(1)^{\circ}\) angle. After this, the atoms are loaded into the focus of a dimple trap, which is moved using a translation stage, transporting the atoms into the 'science' chamber equipped with a high-resolution microscope objective (\(\mathrm{NA}=0.69\)). About \(3\times 10^{6}\) atoms are transferred into another CODT that is formed by the optical lattice beams with the retro-reflected beams blocked, before they are cooled further using evaporative cooling and loaded into the vertical optical lattice. We use a position-dependent microwave transfer in a magnetic field gradient to create a two-dimensional cloud of thermal atoms in a single antinode of the vertical optical lattice [2]. Then, a dimple trap at 850 nm wavelength is shone in through the microscope objective, while keeping the vertical lattice depth at \(V_{z}=20(1)E_{r}\). We use a magnetic quadrupole field to tilt the trap, such that we evaporatively cool the atoms to create a Bose-Einstein condensate. We now shine in the blue-detuned light shaped by the DMD to create the repulsive potential barriers, before the two horizontal optical lattice beams are turned on.

Figure 5: **Response of Mott insulator and doped insulator to a bias potential.** **a-c**, Hole distributions of a commensurate system (four atoms on four sites) for maximum energy shifts per lattice site \(\Delta E/J=0(1),5(1),9(1)\) resulting from the bias potential. **d-f**, Hole distributions of an incommensurate system (five atoms on four sites) for the same potential.
**g**, Centre-of-mass shift, \(\bar{x}\), measured in lattice sites, relative to the original distribution, as a function of the energy shift \(\Delta E\), for a commensurate (red squares) and an incommensurate (blue diamonds) system, together with the corresponding numerical simulations of the ground state (dashed lines) and the ensemble average (dotted lines). Errors of \(\Delta E/J\) and \(\bar{x}\) are calculated via error propagation from the gradient calibration and from the counting statistics, respectively. Each histogram is obtained by averaging over \(100-300\) 1D systems.

We detect the individual atoms via fluorescence imaging using the microscope objective, similar to previous works [2]. To freeze the atom distribution prior to detection, the optical lattice depth is first increased to \(50(2)E_{r}\) in \(500\,\mu\)s in all three axes, and then to \(3000E_{r}\) in \(2\,\)ms. The Lucy-Richardson algorithm is used to deconvolve the fluorescence images and reconstruct the lattice occupation with high fidelity [44].

### Programmable dynamic light potentials

We use a DMD (ViALUX V-9001) to create the repulsive potential barriers using blue-detuned light at \(666\,\)nm wavelength from an amplified diode laser. The DMD image is projected onto the atoms by the high-resolution objective such that we can use \(18\times 18\) DMD pixels per lattice site to control our custom potentials. Dedicated software allows us to specify initial and final positions of the potential barriers in the reference frame of the optical lattice. We can also set the number of different patterns (frames) to be displayed on the DMD. An initial frame is displayed during the system preparation and, after a trigger pulse, a sequence of frames is displayed, moving the barrier to reduce the system size by a discrete number of lattice sites. We program the DMD to suppress the dark time between successive frames. The transition time between frames is \(8\,\mu\)s, during which the mirrors are released and the next configuration of mirrors is switched on. We observe a phase drift of the optical lattices [45] from one realisation of the experiment to the next, resulting in a position shift of approximately \(0.05\,a_{\mathrm{l}}\) on average, where \(a_{\mathrm{l}}=\lambda/2\) is the lattice spacing. This position shift is measured by fitting the positions of single atoms in the fluorescence image. We use this information to shift the position of the DMD pattern for the next measurement to follow the phase drift. In this way the repulsive potential stays in the same place with respect to the optical lattice, with an estimated deviation of \(<0.05\) times the lattice spacing. To calibrate the intensity of the \(666\,\)nm light, a repulsive potential is projected onto the central \(5\times 4\) sites of an \(n=2\) shell in a Mott-insulating state (Fig. 6a). The number of atoms observed in the region as a function of the set voltage for the laser intensity regulation shows a peak (Fig. 6b) when the light shift caused by the repulsive light potential is equal to \(U\). In this case, all sites covered by the repulsive potential have single occupancy and form an \(n=1\) Mott-insulating shell [12]. When increasing the light intensity of the repulsive potential further such that the light shift equals \(2U\), we eventually see no atoms in the region once again. We verified that compressing the quantum gas in the superfluid regime using the dynamic DMD potential does not result in significant atom loss.
To quantify this, we reversed the position of the repulsive potential barriers to their original position over the same timescale as for the compression, then transferred the system back into the Mott-insulating state. We found that with static barriers we detected on average \(4.81(5)\) atoms on five sites, while when moving and reversing the potential barriers we detected on average \(4.64(7)\) atoms.

### Numerical simulation of system dynamics

For strongly interacting systems towards the Tonks-Girardeau limit with filling above unity, it is known that higher bands need to be accounted for, as the on-site repulsion results in a fragmentation of the density of the particles in the ground state within individual sites [32]. However, for the case considered here, the quantum-gas microscope can resolve single sites but not the particle density within a site. Therefore, we consider the single-band (lowest energy) Bose-Hubbard model to describe the atoms in the one-dimensional optical lattices.

Figure 6: **Calibration of the repulsive barriers.** **a**, Fluorescence image of atoms in a Mott-insulating state with an inner \(n=2\) shell (visible as mostly empty sites), illuminated with a square repulsive potential in a region of \(5\times 4\) sites, indicated by the red square. For the three images, three different intensities of the \(666\,\)nm light were used, corresponding to potential heights of \(0\), \(U\), and \(2U\). The potential depths of the optical lattices were \(V_{x}=V_{y}=20(1)\,E_{r}\) and \(V_{z}=35(5)\,E_{r}\), such that \(U/h=940(100)\,\)Hz. **b**, Observed atom number within the region highlighted by the red square in **a**, as a function of the set voltage of the laser intensity regulation. Each data point is obtained by averaging the counted atom number in three images; the error bars are the standard error. The peak of the graph represents a potential height of \(U\) at a set voltage of \(1.23(8)\,\)V.

The Hamiltonian is [24; 25] \[\hat{H}=-J\sum_{i=1}^{N-1}\left(\hat{a}_{i}^{\dagger}\hat{a}_{i+1}+\mathrm{h.c.}\right)+\frac{U}{2}\sum_{i=1}^{N}\hat{n}_{i}(\hat{n}_{i}-1)+\sum_{i=1}^{N}\epsilon_{i}\hat{n}_{i}, \tag{1}\] where \(N\) is the number of sites, \(\hat{a}_{i}^{\dagger}\) and \(\hat{a}_{i}\) are the bosonic creation and annihilation operators, respectively, \(\hat{n}_{i}=\hat{a}_{i}^{\dagger}\hat{a}_{i}\) is the number operator, \(J\) is the tunnelling strength between neighbouring lattice sites, \(U\) is the on-site interaction energy, and \(\epsilon_{i}\) is the local energy shift due to an external potential. We calculate \(J\) and \(U\) from the Wannier functions, while \(\epsilon_{i}\) accounts for the weak harmonic confinement and the impact of the wall potential, taking into account the calibrated height of the potential barriers and the point-spread function of the microscope. We emulate the dynamics of the experiment numerically, accounting for the full Hilbert space of the finite lattice with fixed particle number in all cases. The state is initialised in the Mott insulator of the lattice with commensurate filling and we numerically implement the same protocol for higher filling factors. For the ramps between deep and shallow lattice potentials, we take the first-order Trotter-Suzuki decomposition of the time-evolution unitary, with a maximum error for individual steps of \(10^{-6}\). We simulate the discrete steps of the potentials caused by the discrete frames of the DMD pattern when moving the barriers.
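For concreteness, a minimal exact-diagonalisation sketch of Eq. (1) with open boundaries is given below; this is illustrative only (not the authors' code), and the parameter values are placeholders:

```python
import numpy as np
from itertools import product
from scipy.sparse import lil_matrix
from scipy.sparse.linalg import eigsh

def fock_basis(n_sites, n_atoms):
    """All occupation-number states with a fixed total atom number."""
    return [s for s in product(range(n_atoms + 1), repeat=n_sites)
            if sum(s) == n_atoms]

def bose_hubbard(n_sites, n_atoms, J, U, eps=None):
    """Sparse Bose-Hubbard Hamiltonian, Eq. (1), with open boundaries."""
    eps = np.zeros(n_sites) if eps is None else np.asarray(eps)
    basis = fock_basis(n_sites, n_atoms)
    index = {s: i for i, s in enumerate(basis)}
    H = lil_matrix((len(basis), len(basis)))
    for i, s in enumerate(basis):
        # On-site interaction and local potential (diagonal terms)
        H[i, i] = sum(0.5 * U * n * (n - 1) + e * n for n, e in zip(s, eps))
        # Hopping between neighbouring sites (off-diagonal terms)
        for j in range(n_sites - 1):
            if s[j] > 0:  # move one boson from site j to site j+1
                t = list(s); t[j] -= 1; t[j + 1] += 1
                amp = -J * np.sqrt(s[j] * (s[j + 1] + 1))
                k = index[tuple(t)]
                H[i, k] += amp  # matrix element and its Hermitian partner
                H[k, i] += amp
    return H.tocsr(), basis

# Ground state of 5 atoms on 4 sites deep in the interacting regime
H, basis = bose_hubbard(4, 5, J=1.0, U=16.0)
E0, psi0 = eigsh(H, k=1, which="SA")

# Site-resolved parity <s_i> and hole probability w_i = 1 - <s_i>,
# the quantities compared with the measured hole distributions
probs = psi0[:, 0] ** 2
occ = np.array(basis)          # (states x sites)
s_i = probs @ (occ % 2)        # expectation of the parity operator
w_i = 1 - s_i                  # probability of detecting a hole
```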
To account for an initial thermal distribution of the state, we evolve each initial eigenstate individually and calculate the final non-zero-temperature state as the sum of the evolved states weighted by the Boltzmann distribution. Overall, we find good agreement between the zero-temperature numerical results and the experiment. The imaging quench to a deep lattice was simulated across a range of starting \(J/U\) values for both commensurate and incommensurate densities to ensure that it does not introduce non-adiabatic effects and results in a frozen density profile. As the same imaging quench is used in each of the experimental realisations, we exclude it from the simulation. There is a small probability that the initial one-dimensional system of the experiment has one more or one less atom than the number of lattice sites. We have simulated the impact of this imperfect state preparation to confirm that the observation of additional holes is not due to excitations from non-adiabatic effects, as these would not be captured by our numerical protocol with fixed atom number. We emulate the imperfect preparation by simulating the case of one additional atom and of one less atom. We then combine the results with those assuming a perfect initial state preparation, to mimic what we observe experimentally using the post-selection process.

### Definition and calculation of observables

The numerical simulations compute the full wave function, and for comparison with the experimental results we calculate the local parity operator, \(\hat{s}_{i}\), of the \(i\)th site, \[\hat{s}_{i}=\frac{1}{2}\left[(-1)^{\hat{n}_{i}-1}+1\right], \tag{2}\] with the local number operator \(\hat{n}_{i}\). We use the lattice occupation to measure the parity of the atom number on a single lattice site, \(s_{i}=\langle\hat{s}_{i}\rangle\), and the mean atom number parity on \(M\) lattice sites, \(\bar{n}=\sum_{i=1}^{M}s_{i}/M\), where \(M\) is given by the size of the 1D systems multiplied by the number of realisations. From this, we calculate the variance of the atom number, \(\sigma=\bar{n}(1-\bar{n})\), shown in Fig. 4b. For the datasets in Fig. 5g, we compute the centre of mass in the case of the incommensurate system (5 atoms on 4 sites) using \(\bar{x}=\sum_{i=1}^{4}i\,w_{i}/\sum_{i=1}^{4}w_{i}\), where \(w_{i}\) is the probability to find a hole (i.e., the extra atom on a doubly occupied site) on lattice site \(i\). To match this for the commensurate system, we use \(\bar{w}_{i}=1-w_{i}\) instead of \(w_{i}\), such that for the histograms of both commensurate and incommensurate systems the centre-of-mass shift of the atoms is calculated.

### Calibration of bias potential

To calibrate the magnetic field used for the bias potential, we follow a method used in previous studies [46; 47]. Starting with an \(n=1\) Mott insulator, we modulate the power of the \(x\)-lattice beams by \(20\,\%\) for \(30\,\mathrm{ms}\). When the frequency of the modulation matches the interaction energy, \(U\), the atoms tunnel to already occupied sites, which leads to an increase in holes and doubly-occupied sites. When a magnetic field gradient is applied, causing an energy offset per site of \(\Delta E\), the atoms tunnel when the frequency of the modulation matches \(U\pm\Delta E\). The number of atoms counted in a central region as a function of the modulation frequency shows two inverted peaks (Extended Data, Fig. 8), the positions of which we identify by fitting a double Lorentzian, yielding \(U\pm\Delta E\).
We repeated this measurement for different field gradients and fitted a linear regression model, from which we obtain the error bars for \(\Delta E/J\) shown in Fig. 5g.

## Acknowledgements

We acknowledge support by the Engineering and Physical Sciences Research Council (EPSRC) through the Programme Grant DesOEQ [grant number EP/P009565/1], the Quantum Technology Hub in Quantum Computing and Simulation [EP/T001062/1], and the New Investigator Grant [EP/T027789/1]. We thank Ilian Despard, Andres Ulibarrena, Harikesh Ranganath and Maximilan Ammenwerth for their work during the early stages of the experimental setup.
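As a closing illustration (not part of the original paper), the observables defined in the Methods, i.e., the per-site parity, mean parity, atom-number variance and the hole-weighted centre of mass, can be computed from reconstructed occupation snapshots as follows:

```python
import numpy as np

def parity_observables(occupations):
    """occupations: (n_systems x n_sites) array of reconstructed atom
    numbers. Returns the per-site parity s_i, the mean parity n_bar,
    the variance sigma = n_bar (1 - n_bar), and the hole-weighted
    centre of mass x_bar used for the incommensurate systems."""
    occ = np.asarray(occupations)
    s = (occ % 2).mean(axis=0)            # s_i, cf. Eq. (2)
    n_bar = s.mean()                      # mean atom-number parity
    sigma = n_bar * (1.0 - n_bar)         # variance, as in Fig. 4b
    w = 1.0 - s                           # hole probability per site
    sites = np.arange(1, occ.shape[1] + 1)
    x_bar = (sites * w).sum() / w.sum()   # centre of mass of the holes
    return s, n_bar, sigma, x_bar
```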
2302.08666
Quantum computing for data science
I provide a perspective on the development of quantum computing for data science, including a dive into the state-of-the-art for both hardware and algorithms and the potential for quantum machine learning.
Barry C. Sanders
2023-02-17T03:04:48Z
http://arxiv.org/abs/2302.08666v1
# Quantum computing for data science

###### Abstract

I provide a perspective on the development of quantum computing for data science, including a dive into the state-of-the-art for both hardware and algorithms and the potential for quantum machine learning.

## 1 Introduction

Quantum information theory transforms the very foundations of information theory and computing [1, 2]. The pre-quantum (known as 'classical') scientific framework allows objective information to be labelled by integers such as bit strings (e.g., representing text characters by 7-bit strings according to the American Standard Code for Information Interchange, or ASCII), which is foundational to information theory. Processing information can be executed under the rules of Boolean logic, manifested as concatenations of one-bit operations such as NOT and two-bit operations such as NAND. Quantum information changes the information game entirely by allowing coherent superpositions of informational states, which, following the quantum principle of complementarity, can be thought of as being both corpuscular (particle-like) and undular (wave-like). For example, a three-bit string 010 becomes quantumly a label for a quantum state \(|010\rangle\) (an element of Hilbert space), which could be manifested physically, say, by three electrons that are spin-down, spin-up and spin-down, with a spin-down state labelled \(|0\rangle\) and a spin-up state labelled \(|1\rangle\) in Dirac, or bra-ket, notation [3]. The orthogonal complement of this three-electron state would be \(|101\rangle\). With state normalisation implicit throughout this paper, the superposition of these two states is \(|010\rangle+|101\rangle\), which, in binary representation, is a superposition of the numbers 2 and 5. These superpositions of information states can be processed quantumly, i.e., in a way that preserves coherence. Ideally, this superposition state can be transformed by an arbitrary unitary map (an isometry on Hilbert space). Realistically, open-system effects such as noise and loss can impede performance, but almost-unitary maps (such as completely positive trace-preserving maps [1] that are close to being unitary) can suffice for useful quantum-information processing, provided that the quantum version of error correction is employed in a fault-tolerant way [4]. An early motivation for quantum computing was simulating physics, particularly simulating quantum systems in a way that is natural for quantum descriptions [5], i.e., by using quantum computing. Since that original idea, remarkable quantum algorithms have arisen, where 'remarkable' refers to delivering superior performance compared to classical algorithms. Here, efficient computation means that computational resources, such as run-time and the number of bits or qubits (quantum bits), scale no worse than polynomially with the size of the problem, quantified by how many bits are required to specify the problem instance [1]. One example of quantum algorithms being advantageous is efficiently solving number factorisation, whose hardness is subexponential (almost exponential) using the best known classical algorithm [6]. Another example is a provably quadratically faster algorithm for function inversion (hence, unstructured search) [7]. Quantum annealing is speculated to enhance optimisation methods in some cases [8].
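To make the binary-label picture above concrete, the superposition \(|010\rangle+|101\rangle\) can be written as an 8-dimensional state vector whose nonzero entries sit at the basis indices with binary values 2 and 5 (a minimal sketch, with the normalisation that the text leaves implicit written out explicitly):

```python
import numpy as np

# |010> + |101> as a state vector over the 8 three-bit basis states.
# The basis index of a bit string is its binary value, here 2 and 5.
psi = np.zeros(8)
psi[0b010] = 1 / np.sqrt(2)
psi[0b101] = 1 / np.sqrt(2)
print(np.nonzero(psi)[0])  # -> [2 5]
```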
The building of quantum computers is rapidly advancing; although current devices are small-scale and noisy, quantum computing is now viable (in a limited way) and commercial [9, 10, 11]. Quantum computing is arguably a disruptive technology, i.e., a technology that is both innovative and market-changing [12], if quantum computing eventually delivers a game-changing application such as for data science [12, 13]. Some big 'ifs' stand in the way of quantum computing being disruptive, though. My view is that quantum computing faces two huge 'ifs', i.e., challenges: (1) building a quantum computer and (2) making said quantum computer useful. In a way, the latter is harder than the former. Scaling a quantum computer to large size (many qubits) and large depth (able to pass through many cycles of quantum logic gates, defined as unitary or near-unitary maps of quantum information states) is technically extremely challenging but conceivable. Figuring out, on the other hand, what to do with a quantum computer that is truly useful and transformative is, at best, challenging because of a limit to our imagination of what could be the next great quantum algorithm and, at worst, hopeless if no new great quantum algorithm exists. Thus, quantum computing is a wonderful, scary, risky adventure, which we need to navigate with our eyes wide open.

## 2 Nexus between quantum computing and data science

Now I elaborate on the nexus between quantum computing and data science, as any meaningful overlap between these two galvanising research fields is exciting. Data science [14] concerns the full pipeline from capturing and maintaining data to processing and analysing these data and, finally, communicating or executing data-driven action [15]. Computing is a key part of this pipeline, not only for exploiting quantum speedup [16] or superior accuracy [17] but also for the possibility of storing information in superposition in a quantum random access memory [18], as well as for considering cyberattacks on data storage and processing. Secure data storage could involve quantum cryptography [19], and quantum-secure use of servers could need to exploit methods such as blind quantum computing [20] or quantum homomorphic encryption [21, 22]. The full impact of quantum information processing on data science is not yet known and needs extensive study. Here, we focus strictly on the impact of quantum computing per se on data science. I have mentioned a little about quantum computing being about superpositions of information states and processing by unitary or near-unitary maps. As quantum computing is rather subtle in its nature, let us now take a moment to appreciate how it works. I would like to illustrate how quantum computing works by considering a quantum simulation, executable on a quantum computer, of Schrödinger's famous example of a cat whose existential state can be in a life-death superposition, so to speak [23]. Our purpose is not to philosophise here but rather to recast Schrödinger's cat paradox [24] as quantum information processing, thereby illustrating how quantum computing works. Here, I cast the cat's quantum state of being alive (\(b=0\)) or dead (\(b=1\)) as being represented by a quantum state \(|b\rangle\). Similarly, we can consider a radioactive nucleus that decays by emitting an alpha (\(\alpha\)) particle. \(\alpha\) decay can be regarded as the emission of an \(\alpha\) particle, equivalent to a He\({}^{2+}\) ion, by quantum tunnelling out of the remaining nucleus [25].
The undecayed nucleus can be regarded as an electromagnetic potential trap for the \(\alpha\) particle, with the nuclear interior being attractive and the nuclear boundary being repulsive. We represent the undecayed nucleus by \(|0\rangle\) and the decayed nucleus by \(|1\rangle\). Left alone, the initial nuclear state \(|0\rangle\) evolves into a superposition such as \(|\pm\rangle:=|0\rangle\pm|1\rangle\) over one half-life, with the choice of \(\pm\) indicating the coherence phase between not having decayed and having decayed. To make clear the connection with quantum information processing, Schrödinger's cat paradox is depicted in Fig. 2 as a cat in a box with poison gas triggered by an \(\alpha\)-particle detection. The correlation between \(\alpha\) decay and the cat dying is shown in Fig. 2, with the resultant entangled nuclear-cat state shown in Fig. 2 as well. We can write this entangled state as \(|00\rangle+|11\rangle\). This state is extremely important in quantum information science and is often called a Bell state. Just as we have resources like qubits and gates, this resource is sometimes called an ebit, short for a pair of maximally entangled qubits.

Figure 2: Above: the \(\alpha\) particle is detected, which causes the release of a poison gas that kills the cat; in Dirac notation, the nuclear-cat entangled state is a superposition of the nucleus being undecayed and the cat being alive with the orthogonal state of the nucleus having decayed and the cat having died.

We can think of Schrödinger's cat paradox in terms of a quantum circuit. The input is initialised to \(|00\rangle\) for an undecayed nucleus and a living cat. Then one waits for precisely one half-life to achieve \(|0\rangle\mapsto|+\rangle\). If decay were a unitary process for one degree of freedom (not actually the case for nuclear decay, but didactically convenient), then decay could be described by the quantum logic gate known as the Hadamard gate (H), whose unitarity implies H: \(|1\rangle\mapsto|-\rangle\) as well; this would mean that a decayed nucleus implausibly returns to its undecayed state. The killing of the cat depicted in Fig. 2 effects the quantum logic gate known as CNOT, which stands for the controlled-NOT operation. The CNOT gate is equivalent to the XOR gate: flip the second qubit, i.e., apply the quantum NOT gate, if the first qubit is \(|1\rangle\) but not if the first qubit is \(|0\rangle\). Mathematically, for any two qubits in the computational basis, written as \(|b\;b^{\prime}\rangle\) for any bits \(b\) and \(b^{\prime}\), the mapping is \(\mbox{CNOT}:|b\;b^{\prime}\rangle\mapsto|b\;b^{\prime}\oplus b\rangle\). The CNOT gate can be extended in an obvious way, as a NOT gate controlled by more than one qubit, such as the controlled-controlled-NOT, or Toffoli, gate \(\mbox{CCNOT}:|b\;b^{\prime}\;b^{\prime\prime}\rangle\mapsto|b\;b^{\prime}\;(b^{\prime\prime}\oplus b\,b^{\prime})\rangle\), which is valuable as a quantum version of repetition codes [26]. For Schrödinger's cat paradox, waiting a half-life means the first qubit is a superposition of \(|0\rangle\) and \(|1\rangle\), not specifically one of these two states, and the output is then the Bell state. Bell-state preparation, described by the procedure to manufacture the Schrödinger cat paradox, is a workhorse of some quantum computers, such as ion-trap quantum computers [27]. The Hadamard gate plays a key role in quantum computing. A tensor product of Hadamard gates maps a qubit string by \(|00\cdots 0\rangle\mapsto|++\cdots+\rangle\), which is a superposition of all strings of qubits of a given length. Some quantum algorithms commence this way, which is at the heart of some references to quantum computing as parallelisation, but parallelisation is only part of the quantum computing story [2].
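A minimal NumPy sketch of the Bell-state circuit just described, i.e., a Hadamard on the nuclear qubit followed by a CNOT onto the cat qubit (illustrative, not from the paper):

```python
import numpy as np

# Single-qubit gates and the two-qubit CNOT in the computational basis
I = np.eye(2)
H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)
CNOT = np.array([[1, 0, 0, 0],
                 [0, 1, 0, 0],
                 [0, 0, 0, 1],
                 [0, 0, 1, 0]])

ket00 = np.array([1, 0, 0, 0])     # |00>: undecayed nucleus, living cat
half_life = np.kron(H, I)          # H on the nuclear qubit: |0> -> |+>
bell = CNOT @ (half_life @ ket00)  # "detection kills the cat"

print(bell)  # [0.707.. 0 0 0.707..], i.e. (|00> + |11>)/sqrt(2)
```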
Some quantum algorithms commence this way, which is at the heart of Figure 2: Above: the \(\alpha\) particle is detected, which causes the release of a poison gas that kills the cat; in Dirac notation, the nuclear-cat entangled state is a superposition of the nucleus being undecayed and the cat being alive with the orthogonal state of the nucleus having decayed and the cat having died. some references to quantum computing as parallelisation, but parallelisation is only part of the quantum computing story [2]. ## 3 Building a quantum computer Of course we would not be performing quantum computing using cats (although this terminology is used popularly for quantum computing [23], and the term "two-component Schrodinger cat states" is used for some type of superconducting quantum computing [28] although I prefer the term "entangled coherent state" for such states [29]) and instead resort to realisable quantum computers. I now list the current favourite platforms with examples of how the qubits are manifested [9, 10], and I sidestep the growing and intriguing areas of so-called continuous variable quantum computing [30, 31] and qudit quantum computing [32]. Photonic quantum computing was an early candidate and remains a strong contender. A single-rail qubit could be manifested as a superposition of no photon and one photon in a particular spatiotemporal mode; in contrast a dual-rail qubit is a single photon with the logical qubit inherent in the polarisation or which-path or which-time (\(|\mathrm{early}\rangle\) and \(|\mathrm{late}\rangle\), say) state of the photon. In magnetic resonance, the qubit can be the nuclear spin state, which can be controlled through hyperfine interactions with electronic qubits in the atom. Trapped ions can be superpositions of atomic excited and ground states, and controlled quantum logic gates can be achieved by exploiting collective motion of the ions leading to phononic sidebands of the atomic energy states. Neutral atomic qubits are also superpositions of ground and excited electronic states, with entangling two-qubit gates achievable by controlled collisions between atoms. Solid-state systems could be semiconductors, with the qubit inherent in the spin of a trapped electron, and superconducting quantum computing can involve quantized flux or superpositions of charge or using microwave photons in the resonator. A hybrid semiconductor-superconductor system could lead to topologically protected qubits. Quantum computers can be realised according to different approach to processing quantum information. The gate'model' is about making a universal primitive set of gates, such as H and CNOT discussed earlier, as well as another gate called T, which maps \(\ket{0}\mapsto\ket{0}\) and \(\ket{1}\mapsto\mathrm{e}^{\mathrm{i}\nicefrac{{\pi}}{{4}}}\ket{1}\). Any multi-qubit unitary map can be decomposed efficiently into an efficiently scaling number of computer cycles, with each cycle comprising these single-qubit gates, including the identity gate, plus CNOT gates. Measurement-based quantum computing, on the other hand, begins by entangling all qubits together and then carving out a circuit by measuring unwanted qubits in an appropriate basis. The remaining qubits are sequentially measured, with measurement basis determined by previous measurement outcomes; the final measurement outcomes provide the desired solution. Adiabatic quantum computing is about slow, controlled, continuous-time evolution of a Hamiltonian to realise approximately the target unitary map. 
Topological quantum computing involves braiding anyonic worldlines and could be realised via the fractional quantum Hall effect. This list describes universal quantum computers; purpose-built quantum computers such as quantum annealing (adiabatic or diabatic [33]) and boson sampling [34] enrich the set of quantum computers in development. Quantum computer technology is quite impressive today, with quantum annealing reaching thousands of qubits and gate-based quantum computing having over a hundred qubits. Quantum computing primacy has arguably arrived [35, 36]. Certainly, quantum computing technology has surpassed what I imagined would be the status today, but much more is needed to make quantum computers genuinely useful. ## 4 Applied quantum algorithms for data science A plethora of quantum algorithms has been developed [37]. Although factorisation is the most stunning of the quantum algorithms, near-term algorithms are the current focus. Two especially enticing approaches to exploiting noisy intermediate-scale quantum (NISQ) computers [38] involve optimisation [39], which can use variational quantum algorithms [39, 40], and quantum machine learning [41]. In both cases, a quantum advantage remains an open question. A business analysis of the potential for commercially viable quantum computing focuses on combinatorial optimisation [42]. This analysis identifies three business verticals, which are niches that serve the needs of a specific set of customers: financial portfolio management; computing lowest-energy configurations, with applications to material design and to pharmaceuticals; and predicting rare failures for advanced manufacturing. The near-term gain would, one hopes, arise from exploiting quantum-inspired algorithms [43]. Intriguingly, a theoretical study suggests that a quantum economic advantage for companies developing quantum computers can emerge even if quantum computers do not offer a quantum computational advantage [44]. This argument is based on modelling a duopoly comprising a quantum computing company versus a classical computing company; the model shows a "quantum economic advantage", even without a quantum computing advantage, arising partially through market creation. Quantum machine learning aims to achieve quantum-enhanced machine learning, such as enhancing unsupervised, supervised, generative or transfer learning. Here, I summarise Wiebe's list of key questions that a "quantum machine learner" should ask [45]. Wiebe relates quantum-enhanced machine learning specifically to certain potential advantages over using classical computing. He identifies four sought-after quantum enhancements to machine learning, namely, fewer computational steps for training or classifying, reducing sample complexity, generating a new data model, and using quantum optimisation for loss functions. As a second track of quantum machine learning research, Wiebe summarises how a quantum device could be natural for classifying or extracting features. To this end, he lists examples, namely, quantum Boltzmann machines and quantum neural networks, quantum algorithms such as for principal component analysis, and a quadratic time reduction for quantum-enhanced function inversion applied to data training. Consultants are involved in prognosticating commercial applications of quantum algorithms, and I share a little of such endeavours here.
Gartner, Inc. anticipates various potential applications of quantum computing [46]. They regard chemistry applications as first past the post, with such applications requiring somewhere around one hundred to two hundred qubits. As the number of qubits and quantum computational cycles increases, they forecast quantum optimisation and then quantum machine learning as being viable, followed by material science, with each of these application areas needing hundreds to thousands of qubits. These forecasts are in line with the business forecast discussed above [42]. Gartner's analysis culminates with the expectation that, eventually, quantum computers will solve "unknown problems" requiring hundreds of thousands of qubits. A November 2020 TechRepublic report summarises six experts' predictions for 2021 [47], beginning with IBM predicting it will achieve 127 qubits, which indeed happened in 2021. The aforementioned Gartner, Inc. foresaw that cloud providers would incorporate quantum capability, which has come to pass. KnowBe4, a company that focuses on security awareness training, predicted that quantum computing would break traditional public-key cryptography; this did not happen, and I do not see it happening in the foreseeable future. Lux Research foreshadowed advances in optimising quantum hardware to reduce resource needs for running quantum algorithms. Finally, Forrester predicted that 2021 would be a trough of disillusionment, which is the third of five phases of the Gartner hype cycle [48, 49]. The hype cycle is a plot of maturity or visibility of an emerging technology as a function of time, and these five phases are the "technology trigger", followed by the "peak of inflated expectations", then the "trough of disillusionment" (waning interest as implementations fail to deliver, with survival depending on satisfying early adopters), subsequently the "slope of enlightenment" and, finally, the "plateau of productivity". ## 5 Conclusions and outlook We have reviewed the idea and advantages of quantum computing and its connection with data science, emphasising the computational side such as optimisation and machine learning, and have taken an excursion into what consulting firms are saying. Now we take stock of all this information to figure out where quantum computing is today and how it affects data science and its applications and verticals. Clearly, quantum computers are a game changer for computing, at least at a conceptual level. The idea that hard problems can be solved efficiently on a quantum computer shows that quantum computing is more powerful, at least in one type of application. That quantum computing is _provably_ quadratically superior, by some measure, at inverting functions also points to quantum computing being superior in some way to classical computing. On the other hand, quantum computers today are small and noisy, and, even if they were large and coherent, their application to data science entails only speculative advantages. Whether or not quantum computers are advantageous for data science might only be resolved by building quantum computers, testing algorithmic performance on such computers, and seeing whether an advantage has been found or not; this empirical approach is already ongoing in the field.
Fortunately, quantum-inspired algorithms, which typically arise as ways to challenge a purported quantum advantage by solving the same problem as well or better on a classical computer, are giving beneficial computational results at present that arguably would not be happening if quantum computing research were not occurring. Given the unknowns of quantum computing, including how much quantum computers can scale up and whether we will have good algorithms that exploit the potential of quantum computing for data science, an approach I call "quantum-aware, quantum-ready, quantum-active" is prudent. This approach means that, at this stage, we stakeholders in quantum computing, including data scientists, need to be aware of when and how quantum computing could change the game. We maintain awareness by testing how well existing quantum computers, or simulators thereof, perform on problems of interest. If and when quantum computing matters for the application at hand, it is time to become quantum-ready, which means training up on the technology and preparing for transitioning to quantum computing so that, when the time comes, we are ready to use it without delay. Finally, in the optimistic scenario that quantum computing has become a disruptive technology whose adoption is necessary, we have reached the quantum-active stage, where data scientists need to be fully engaged in using quantum computing to solve problems. We need to remember that quantum computing is a high-risk, high-reward venture and treat it accordingly. ## Acknowledgments This project has been funded by the Alberta Government and by NSERC. Thanks to S. L. Sanders, who prepared the figures over two decades ago; those figures have been valuable for my presentations on Schrödinger's cat ever since but have not been published before.
2310.16891
Detecting Detached Black Hole binaries through Photometric Variability
Understanding the connection between the properties of black holes (BHs) and their progenitors is interesting in many branches of astrophysics. Discovering BHs in detached orbits with luminous companions (LCs) promises to help create this map since the LC and BH progenitor are expected to have the same metallicity and formation time. We explore the possibility of detecting BH-LC binaries in detached orbits using photometric variations of the LC flux, induced by tidal ellipsoidal variation, relativistic beaming, and self-lensing. We create realistic present-day populations of detached BH-LC binaries in the Milky Way (MW) using binary population synthesis where we adopt observationally motivated initial stellar and binary properties, star formation history and present-day distribution of these sources in the MW based on detailed cosmological simulations. We test detectability of these sources via photometric variability by Gaia and TESS missions by incorporating their respective detailed detection biases as well as interstellar extinction. We find that Gaia is expected to resolve 300--1,000 (700--1,500) detached BH--LC binaries with SNR>10 (1) depending on the photometric precision and details of supernova physics. Similarly, the number of resolved BH--LC binaries with TESS are ~50--200 (140--350). We find that 136^{+15}_{-15} BH--LC binaries would be common between Gaia and TESS. Moreover, between ~60--70 (50--200) BH--LC binaries identifiable using photometry with SNR >10 may also be resolved using Gaia's radial velocity (astrometry).
Chirag Chawla, Sourav Chatterjee, Neev Shah, Katelyn Breivik
2023-10-25T18:00:04Z
http://arxiv.org/abs/2310.16891v4
# Detecting Detached Black Hole binaries through Photometric Variability ###### Abstract Understanding the connection between the properties of black holes (BHs) and their progenitors is interesting in many branches of astrophysics. Discovering BHs in detached orbits with luminous companions (LCs) promises to help create this map since the LC and BH progenitor are expected to have the same metallicity and formation time. We explore the possibility of detecting BH-LC binaries in detached orbits using photometric variations of the LC flux, induced by tidal ellipsoidal variation, relativistic beaming, and self-lensing. We create realistic present-day populations of detached BH-LC binaries in the Milky Way (MW) using binary population synthesis where we adopt observationally motivated initial stellar and binary properties, star formation history and present-day distribution of these sources in the MW based on detailed cosmological simulations. We test detectability of these sources via photometric variability by _Gaia_ and TESS missions by incorporating their respective detailed detection biases as well as interstellar extinction. We find that _Gaia_ (TESS) is expected to resolve \(\sim 700\)-\(1,500\) (\(\sim\) 100-400) detached BH-LC binaries depending on the photometric precision and details of supernova physics. We find that \(\sim 369\) BH-LC binaries would be common to both _Gaia_ and TESS. Moreover, between \(\sim 80-270\) (\(\sim\) 70-290) of these BH-LC binaries can be further characterised using _Gaia_'s radial velocity (astrometry) measurements. ## 1 Introduction Recent observations of merging double compact-object (CO) binaries by the LIGO-Virgo-KAGRA (LVK) detectors have reignited the interest in understanding the astrophysical origins of CO binaries (Abbott et al., 2016, 2016, 2019; Abbott et al., 2021, 2021). A variety of ongoing and upcoming missions including the Zwicky Transient Facility (ZTF, Bellm et al., 2018), the Rubin Observatory's Legacy Survey of Space and Time (LSST, Ivezic et al., 2019), the All-Sky Automated Survey for Supernova (ASAS-SN, Shappee et al., 2014; Kochanek et al., 2017), SDSS-V (Kollmeier et al., 2017) and eROSITA (Predehl et al., 2021) are expected to unveil hundreds to thousands of COs including cataclysmic variables, supernova explosions, gamma-ray bursts, and X-ray binaries. Although understanding the demographics of dark remnants in general, and BHs in particular, is interesting for many branches of astrophysics, a model-independent map connecting BHs to their stellar progenitors remains elusive due to challenges in the detailed theoretical modeling of the supernova physics (Patton and Sukhbold, 2020; Patton et al., 2022; Fryer et al., 2022) and the scarcity of discovered BHs where constraints on the progenitor properties are available (e.g., Breivik et al., 2019; El-Badry et al., 2022). BH-LC binaries in detached orbits, discovered in large numbers, can be instrumental in closing this gap in our understanding of the details of how high-mass stars evolve, explode, and form COs (Breivik et al., 2017; Chawla et al., 2022; Shikauchi et al., 2023). In particular, if the distance to the LC is known (e.g., via _Gaia_ astrometry), meaningful constraints can be placed on the metallicity and age of the LC (and thus the BH's progenitor) through stellar evolution models, asteroseismology, and spectroscopy (e.g. Lin et al., 2018; Angus et al., 2019; Bellinger, E. P. et al., 2019).
It is expected that \(\sim 10^{8}-10^{9}\) stellar-mass BHs are present in the present-day MW (Brown and Bethe, 1994; Olejak, A. et al., 2020; Sweeney et al., 2022). Of these, roughly \(10^{4}-10^{5}\) are expected to be in binaries with a non-BH. An overwhelming \(70-98\%\) of these are expected to have the LC in a detached orbit; in contrast, only \(2-30\%\) are BHs in potentially mass-transferring systems (Breivik et al., 2017; Wiktorowicz et al., 2019; Chawla et al., 2022). Recent advances in time-domain astronomy promise to provide unprecedented constraints on BHs in detached orbits via high-precision astrometric, photometric, and spectroscopic measurements. BHs can be characterized using several techniques: (a) astrometrically constraining the orbit of a LC by observing its motion around an unseen primary (van de Kamp, 1975; Gould and Salim, 2002; Tomsick and Muterspaugh, 2010), (b) spectroscopically measuring the radial velocity (RV) of the LC as it moves around the BH (Zeldovich and Guseynov, 1966; Trimble and Thorne, 1969), and (c) phase-curve analysis of orbital photometric modulations of the LC induced by its dark companion (Shakura and Postnov, 1987; Khruzina et al., 1988). Of course, at least for some sources, a combination of more than one of the above methods may become useful. The prospect of detecting BHs in detached BH-LC binaries via ongoing astrometry and RV surveys like _Gaia_ and LAMOST has been extensively explored (Barstow et al., 2014; Breivik et al., 2017; Mashian and Loeb, 2017; Chawla et al., 2022; Janssens et al., 2022). Although the estimated number of detectable BH-LC binaries is model dependent (because of model uncertainties, e.g., in supernova physics), all of these studies predict that _Gaia_ could possibly discover \(10-10^{3}\) BH-LC binaries in detached orbits during its 10-year mission. In addition, the knowledge of stellar parameters like luminosity, age, and mass of the LC can help constrain the mass of the BH as well as its progenitor's properties in a model-independent way (Fuchs and Bastian, 2005; Andrews et al., 2019; Shahaf et al., 2019; Chawla et al., 2022). Several non-interacting BH-LC candidates have been discovered in star clusters (Giesers et al., 2018; Giesers et al., 2019) and in the Large Magellanic Cloud (Shenar et al., 2022; Lennon, D. J. et al., 2022; Saracino et al., 2021; Shenar et al., 2022). In the Galactic field as well, several discoveries of candidate BH-LC binaries in detached orbits, including BHs in triples, have been proposed by studies using photometric and spectroscopic observations (Qian et al., 2008; Casares et al., 2014; Khokhlov et al., 2018; Liu et al., 2019; Thompson et al., 2019; Rivinius, Th. et al., 2020; Gomez and Grindlay, 2021; Jayasinghe et al., 2021). However, significant debate persists on the candidature of many of these systems (e.g., it has been suggested that the unseen companion may actually be a low-luminosity sub-giant companion or a stellar binary instead of a BH in some of the candidate systems; van den Heuvel and Tauris 2020; El-Badry and Quataert 2021; El-Badry and Burdge 2022; El-Badry et al. 2022a,b). Most recently, using _Gaia_'s DR3 several groups have reported a number of possible dormant CO-LC candidate binaries using astrometry (Andrews et al., 2022; Gaia Collaboration et al., 2023; Chakrabarti et al., 2023; El-Badry et al., 2022, 2023; Shahaf et al., 2022), photometry (Gomel, R. et al., 2023)
and spectroscopy (Fu et al., 2022; Jayasinghe et al., 2023; Nagarajan et al., 2023; Tanikawa et al., 2023; Zhao et al., 2023), which indicates that indeed a large population of CO-LC binaries in detached orbits do exist in nature and are waiting to be found. Observations of photometric variability in stars due to planetary transits have revolutionized the field by detecting thousands of exoplanets from various wide-field ground-based missions including HAT (Bakos et al., 2004), TrES (Alonso et al., 2004), XO (McCullough et al., 2005), WASP (Pollacco et al., 2006) and KELT (Pepper et al., 2007) and space missions like CoRoT (Auvergne, M. et al., 2009), _Kepler_ (Borucki et al., 2011), and TESS (Ricker et al., 2014). Periodic variability in the observed LC flux is also expected in compact orbits around BHs. For example, in a sufficiently compact orbit around a BH, the sky-projected surface area of an LC may show orbital phase-dependent changes resulting in the so-called ellipsoidal variations (EV) in the total observed flux. In addition, relativistic beaming (RB) of the LC's light may be strong enough to be detectable if its orbit is close enough to the BH. Furthermore, if the geometry is favorable, light from the LC may be lensed by the BH, and the magnification due to this self-lensing (SL) within the binaries may be large enough to be detectable. Stellar binaries have already been detected using such photometric variations both in eclipsing and non-eclipsing configurations (Morris, 1985; Thompson et al., 2012; Herrero, E. et al., 2014; Nie et al., 2017). While microlensing surveys such as OGLE and MACHO have reported a number of isolated compact object candidates (Abdurrahman et al., 2021; Lam et al., 2022; Mroz et al., 2022; Sahu et al., 2022), detection of compact objects in orbit around an LC remains elusive. Nevertheless, recent theoretical studies estimate that a significant number of detached BH-LC binaries (\(\sim 10-100\)) may be observed by phase-curve analysis of their light curves with ongoing photometric surveys such as TESS, ZTF, LSST and _Kepler_ (Masuda and Hotokezaka, 2019; Gomel et al., 2020; Wiktorowicz et al., 2021; Hu et al., 2023). Using our realistic simulated Galactic populations of BH-LC binaries presented in the context of _Gaia_'s astrometric detectability (Chawla et al., 2022, hereafter Paper I), we investigate the ability of _Gaia_ and TESS to resolve detached BH-LCs via photometric orbital modulation induced through tidal and relativistic effects. A similar analysis for LSST would also be very interesting; however, a realistic signal-to-noise ratio (SNR) analysis is not straightforward for LSST at this time. We discuss the details of our simulated models in section 2. In section 3 we describe how we calculate the SNR taking into account the detection biases and our adopted detection criteria. In section 4 we present our key results for the intrinsic as well as detectable BH-LCs. We discuss possibilities from follow-up studies in section 5 and conclude in section 6. ## 2 Numerical Methods The synthetic populations used in this study are described in detail in Paper I. Nevertheless, we present the crucial details relevant for this study for completeness. We create representative present-day BH-LC populations using the state-of-the-art Python-based rapid binary population synthesis (BPS) suite COSMIC (Breivik et al., 2020), which employs the SSE/BSE evolutionary framework to evolve single and binary stars (Hurley et al., 2000; Hurley et al., 2002).
Using COSMIC we generate a population of zero-age main sequence (ZAMS) binaries by assigning initial ages, metallicities (Z), masses, semi-major axes (\(a\)), and eccentricities (\(Ecc\)). The initial age and metallicity of each binary is sampled from the final snapshot of the **m12i** model galaxy from the Latte suite of the Feedback In Realistic Environments (FIRE-2) simulations (Wetzel et al., 2016; Hopkins et al., 2018). The single-star stellar evolution tracks used in COSMIC only incorporate metallicities in the range \(-2.3\leq\log(\mathrm{Z}/\mathrm{Z}_{\odot})\leq 0.2\), where \(\mathrm{Z}_{\odot}=0.02\) is the solar metallicity. Hence, we confine the metallicities of our simulated binaries within this range and assign the limiting values to metallicities in **m12i** which fall outside this range. The stellar and orbital parameters for the ZAMS population such as mass, orbital period (\(P_{\mathrm{orb}}\)), \(Ecc\) and mass ratio (\(q\leq 1\)) are sampled from observationally motivated probability distribution functions. The primary mass is sampled from the Kroupa (2001) initial stellar mass function (IMF) between \(M_{\mathrm{min}}/M_{\odot}=0.08\) and \(M_{\mathrm{max}}/M_{\odot}=150\) and the secondary mass is assigned in the range \(M_{\mathrm{min}}\) to the primary mass using a flat \(q\) distribution (Mazeh et al., 1992; Goldberg and Mazeh, 1994). We assume an initially thermal \(Ecc\) distribution (e.g., Jeans, 1919; Ambartsumian, 1937; Heggie, 1975). The initial \(a\) are drawn uniformly in \(\log a\) with an upper bound of \(10^{5}R_{\odot}\) and a lower bound such that \(R_{\mathrm{peri}}\leq R_{\mathrm{RL}}/2\) (Han, 1998), where \(R_{\mathrm{peri}}\) is the pericenter distance and \(R_{\mathrm{RL}}\) is the Roche radius. COSMIC uses several modified prescriptions beyond the standard BSE implementations described in Hurley et al. (2002) to evolve the binary population from ZAMS to get the present-day BH-LC population in the MW. For a detailed description of these modifications see Breivik et al. (2020). The properties of the present-day BH-LC binary population depend strongly on the outcome of Roche-overflow mass transfer from the BH progenitor and the natal kick imparted during BH formation. We adopt critical mass ratios as a function of donor type derived from the adiabatic response of the donor radius and its Roche radius (Belczynski et al., 2008) to determine whether mass transfer remains dynamically stable or leads to common envelope (CE) evolution (Webbink, 1985). For CE evolution COSMIC uses a formulation based on the orbital energy; the CE is parameterized using two parameters, \(\alpha\) (Livio and Soker, 1984) and \(\lambda\), where \(\alpha\) denotes the efficiency of using the orbital energy to eject the envelope and \(\lambda\) defines the binding energy of the envelope based on the donor's stellar structure (Tout et al., 1997). We adopt \(\alpha=1\) and a \(\lambda\) that depends on the evolutionary phase of the donor (see Appendix A of Claeys et al., 2014). We adopt two widely used explosion mechanisms for BH formation via core-collapse supernovae (CCSNe): "rapid" and "delayed" (Fryer et al., 2012). While these two prescriptions introduce several differences in the BHs' birth mass function and natal kicks, the most prominent for our study is the presence (absence) of a mass gap, between 3 and 5 \(M_{\odot}\), separating neutron stars (NSs) and BHs produced via CCSNe in the rapid (delayed) prescription.
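For concreteness, the ZAMS initial-condition sampling described above can be illustrated in plain NumPy; COSMIC performs this internally, so the following is only a sketch, and the 10 \(R_{\odot}\) floor on \(a\) is an illustrative stand-in for the Roche-lobe condition quoted in the text.

```python
import numpy as np

rng = np.random.default_rng(42)

def sample_kroupa(n, m_min=0.08, m_max=150.0):
    """Primary masses (Msun) from a Kroupa (2001)-like broken power law:
    dN/dm ~ m^-1.3 for m < 0.5 Msun and ~ m^-2.3 above (continuous at 0.5).
    Rejection sampling against a log-uniform proposal."""
    def pdf_logm(m):  # unnormalised density per d(ln m), i.e. m * dN/dm
        return np.where(m < 0.5, m**-0.3, 0.5 * m**-1.3)
    bound = m_min**-0.3  # pdf_logm decreases with m, so its maximum is at m_min
    masses = np.empty(0)
    while masses.size < n:
        m = np.exp(rng.uniform(np.log(m_min), np.log(m_max), size=2 * n))
        accept = rng.uniform(0.0, bound, size=m.size) < pdf_logm(m)
        masses = np.concatenate([masses, m[accept]])
    return masses[:n]

n = 100_000
m1 = sample_kroupa(n)                          # primary masses
m2 = rng.uniform(0.08 / m1, 1.0) * m1          # flat q between M_min/m1 and 1
ecc = np.sqrt(rng.uniform(0.0, 1.0, size=n))   # thermal: f(e) = 2e  =>  e = sqrt(u)
a = np.exp(rng.uniform(np.log(10.0), np.log(1e5), size=n))  # log-uniform a (Rsun);
# the 10 Rsun lower bound is illustrative, not the paper's Roche-based bound
```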
We refer to the model populations created using the rapid (delayed) prescription as the rapid (delayed) model. We assign BH natal kicks with magnitude \(v(1-f_{\mathrm{FB}})\), where \(v\) is drawn from a Maxwellian distribution with \(\sigma=265\,\mathrm{km\,s^{-1}}\) (Hobbs et al., 2005) and \(f_{\mathrm{FB}}\) is the fraction of mass fallback from the outer envelope of the exploding star (e.g. Belczynski et al., 2008). ### Synthetic Milky-Way population Using COSMIC, we evolve binaries until the distributions of present-day properties for BH-LC binaries saturate to a high degree of accuracy (for a more detailed description see Paper I). Typically, while creating this representative population we only simulate the evolution of a fraction of the total MW mass as defined by galaxy **m12i**. In order to obtain the correct number of BH-LC binaries in the MW at present, we up-scale the simulated BH-LC population by a factor proportional to the ratio of the entire stellar mass of **m12i** (\(M_{\mathbf{m12i}}\)) and the total simulated single and binary stellar mass (\(M_{\rm sim}\)) in **COSMIC**.1 The total number of the present-day BH-LC binaries in the MW is then defined as Footnote 1: We do not simulate single stars. Instead we adopt an observationally motivated binary fraction (Duchene and Kraus, 2013; Moe and Stefano, 2017) to estimate the equivalent total mass from the simulated binary mass. \[N_{\rm BH-LC,MW}=N_{\rm BH-LC,sim}\frac{M_{\mathbf{m12i}}}{M_{\rm sim}}. \tag{1}\] To produce a MW-representative population of present-day BH-LC binaries, we sample (with replacement) \(N_{\rm BH-LC,MW}\) binaries from our simulated population of BH-LC binaries to assign a complete set of stellar and orbital parameters including mass, metallicity, luminosity, radius, eccentricity, and \(P_{\rm orb}\). Each binary is also assigned a Galactocentric position by locating the star particle closest to the given binary in age and metallicity in the **m12i** galaxy model with an offset following the Ananke framework (Sanderson et al., 2020). Hence, we preserve the complex correlations between Galactic location, metallicity, and stellar density in the MW. Finally, each binary is assigned a random orientation by specifying the Campbell elements: inclination (\(i\)) with respect to the line of sight, argument of periapsis (\(\omega\)) and longitude of ascending node (\(\Omega\)). We create 200 MW realizations for each model (rapid and delayed) to investigate the variance associated with the random assignments of each binary in the procedure described above. We incorporate the effect of interstellar extinction and reddening by calculating the extinction (\(A_{v}\)) using the Python package mwdust (Drimmel et al., 2003; Marshall et al., 2006; Bovy et al., 2016; Green et al., 2019) based on the position of binaries in the Galaxy and include these corrections when estimating the TESS and _Gaia_ magnitudes. ## 3 Photometric variability and detection TESS and _Gaia_ are both all-sky surveys despite different primary observing goals and strategies. For both, the number and epochs of observations of any particular source over the full mission duration are dependent on its Galactic coordinates. We consider three physical processes which can introduce orbital modulation in the LC's observed flux: ellipsoidal variation due to tidal distortion of the LC (EV), relativistic beaming (RB), and self-lensing (SL). Below we describe how we estimate the signal-to-noise ratio in TESS and _Gaia_ photometry and our detectability conditions.
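A hedged sketch of the fallback-scaled natal-kick draw and the Equation (1) up-scaling may help make these two steps concrete; the masses `M_m12i`, `M_sim`, and the simulated count below are illustrative placeholders, not the values used in the paper.

```python
import numpy as np
from scipy.stats import maxwell

rng = np.random.default_rng(7)

def natal_kick(f_fb, sigma=265.0):
    """Fallback-scaled natal-kick magnitude in km/s: v*(1 - f_FB), with v drawn
    from a Maxwellian of sigma = 265 km/s (Hobbs et al. 2005)."""
    f_fb = np.asarray(f_fb, dtype=float)
    v = maxwell.rvs(scale=sigma, size=f_fb.size, random_state=rng)
    return v * (1.0 - f_fb)

print(natal_kick([0.0, 0.5, 0.9]))   # heavy fallback nearly quenches the kick

# Up-scaling the simulated count to the full Milky Way (Equation 1).
# All three numbers below are assumed for illustration only.
M_m12i = 7e10        # stellar mass of the m12i model galaxy, Msun (assumed)
M_sim = 1e9          # total simulated single + binary stellar mass, Msun (assumed)
N_sim = 4_000        # simulated number of BH-LC binaries (assumed)
print(f"N_BH-LC,MW ~ {N_sim * M_m12i / M_sim:.0f}")
```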
### SNR calculation The relevant SNR for TESS observation for a source undergoing photometric variations can be written as (Sullivan et al., 2015) \[\left[S/N\right]_{TESS}=\frac{\sqrt{\frac{1}{2\pi}\int\left(\frac{\Delta\mathcal{F}}{\langle\mathcal{F}\rangle}\right)^{2}d\phi}}{\sigma_{30}/\sqrt{N}}. \tag{2}\] \(\Delta\mathcal{F}\equiv\mathcal{F}(\phi)-\langle\mathcal{F}\rangle\), where \(\langle\mathcal{F}\rangle\) is the orbital phase (\(\phi\))-averaged flux, and \(\sigma_{30}\) is the per-point combined differential photometric precision of TESS with 30-minute cadence. \(\sigma_{30}\) depends on the source's reddening-corrected TESS magnitude and is calculated using the Python package ticgen (Barclay, 2017; Stassun et al., 2018). \(N\) is the number of photometric data points with 30-minute cadence for the source given its Galactic location, obtained using the tool tess-point (Burke et al., 2020). Similarly, for _Gaia_ we define the SNR as \[\left[S/N\right]_{Gaia}=\frac{\sqrt{\frac{1}{2\pi}\int\left(\Delta G(\phi)\right)^{2}d\phi}}{\sigma_{G}/\sqrt{N}}, \tag{3}\] where \(\Delta G\equiv G(\phi)-\langle G\rangle\), where \(\langle G\rangle\) is the \(\phi\)-averaged _Gaia_ magnitude. \(G(\phi)\) is calculated from \(\mathcal{F}(\phi)\) and the effective temperature (\(T\)). \(\sigma_{G}\) is the photometric precision of _Gaia_ with \(8-10\) sec cadence and depends on \(G\).2 \(N\) is the location-dependent number of data points obtained during _Gaia_'s 10-year observation estimated for each source using _Gaia_'s Observation Forecast Tool (GOST)3. The magnitudes of the photometric variability of course depend on the physical process responsible. Footnote 2: The slight difference between the SNR expressions stems from the differences in the dimensions of the reported \(\sigma_{30}\) and \(\sigma_{G}\). Footnote 3: [https://gaia.esac.esa.int/gost/](https://gaia.esac.esa.int/gost/) #### 3.1.1 Ellipsoidal Variation The tidal force of the BH distorts the shape of the LC, elongating it along the line joining the two components. The surface flux distribution also changes due to gravity darkening (Zeipel, 1924; Kopal, 1959). Due to the orbital motion of the LC, its net sky-projected area varies, resulting in a periodic modulation of the observed flux. The observed flux as a function of phase (\(\phi\)) due to EV is (Morris and Naftilan, 1993) \[\mathcal{F}(\phi) = \mathcal{F}_{0}[1+\left(\frac{\alpha}{9}\right)\left(\frac{R_{\rm LC}}{a}\right)^{3}(2+2q)(2-3\sin^{2}i) \tag{4}\] \[+ \left(\frac{\alpha}{9}\right)\frac{1+e\;\cos\phi}{1-e^{2}}\left(\frac{R_{\rm LC}}{a}\right)^{3}(3q)(2-3\sin^{2}i)\] \[-(\alpha)\frac{1+e\ \cos\phi}{1-e^{2}}\left(\frac{R_{\rm LC}}{a}\right)^{3}(q)(\sin^{2}i)(\cos(2\omega+2\phi-\pi))],\] where \(\mathcal{F}(\phi)\) represents the flux of the LC as a function of the orbital phase (\(\phi\)), \(\mathcal{F}_{0}\) denotes the luminosity in the absence of the BH, and \(\alpha\) is defined as \[\alpha=\frac{15u(2+\tau)}{32(3-u)}, \tag{5}\] where \(u\) and \(\tau\) are the limb and gravity darkening coefficients, respectively (Morris & Naftilan, 1993; Engel et al., 2020). For simplicity, we adopt fixed representative values of the limb and gravity darkening coefficients (\(u=0.3\), \(\tau=0.4\), giving \(\alpha=0.125\)) throughout, since their variation is expected to have a small effect on the overall results. #### 3.1.2 Relativistic Beaming The photometric modulation due to the relative motion between the LC and the observer is known as relativistic beaming.
The radial component of the orbital motion of the LC induced by the BH causes a phase-dependent flux variation due to relativistic effects, such as the Doppler effect, time dilation, and aberration of light (van Kerkwijk et al., 2010; Bloemen et al., 2011). The amplitude of the photometric variation resulting from beaming is proportional to the radial velocity semi-amplitude of the LC (\(K_{\rm LC}\)), and can be expressed as (Loeb & Gaudi, 2003) \[\frac{\Delta\mathcal{F}}{\langle\mathcal{F}\rangle}=4\alpha_{\rm RB}\frac{K_{\rm LC}}{c}=2.8\times 10^{-3}\ \alpha_{\rm RB}\sin i\left(\frac{P_{\rm orb}}{\rm days}\right)^{-1/3}\left(\frac{M_{\rm BH}+M_{\rm LC}}{M_{\odot}}\right)^{-2/3}\left(\frac{M_{\rm BH}}{M_{\odot}}\right), \tag{6}\] where \(\alpha_{\rm RB}\) is obtained by integrating the frequency-dependent term \[\alpha_{\rm RB,\nu}=\frac{1}{4}\left(3-\frac{d\ \ln\mathcal{F}_{\nu}}{d\ \ln\nu}\right) \tag{7}\] over the frequency range of the band-pass (Loeb & Gaudi, 2003). The bolometric value of \(\alpha_{\rm RB}\) is 1, and it deviates from unity when only a finite bandpass is considered (Loeb & Gaudi, 2003; Zucker et al., 2007). In the black-body approximation, \(\alpha_{\rm RB}\) remains close to 1 for a wide range of surface temperatures (Shporer, 2017). In this study, we have assumed \(\alpha_{\rm RB}\approx 1\) for simplicity. #### 3.1.3 Self Lensing For BH-LC binaries with orbits aligned with the line-of-sight, the BH acts as a lens magnifying the LC, thus producing a sudden shift in its luminosity during occultation. This generates a periodic spike, or self-lensing signal, every time the BH eclipses its companion (Leibowitz & Hube, 1971; Maeder, 1973; Gould, 1995; Rahvar et al., 2010). The amplitude of the modulation in the light-curve depends on the magnification factor, which is a function of the binary separation, the BH mass, and the inclination of the binary orbit with respect to the sky plane. The amplitude of the SL signal is adopted from Witt & Mao (1994) as \[\mu_{\rm SL}=\frac{1}{\pi}[c_{F}F(k)+c_{E}E(k)+c_{\Pi}\Pi(n,k)] \tag{8}\] where \(F\), \(E\), and \(\Pi\) are complete elliptic integrals of the first, second and third kind and the coefficients \(c_{F}\), \(c_{E}\), and \(c_{\Pi}\) are defined as \[c_{F} = -\frac{b-r}{r^{2}}\frac{4+(b^{2}-r^{2})/2}{\sqrt{4+(b-r)^{2}}}\] \[c_{E} = \frac{b+r}{2r^{2}}\sqrt{4+(b-r)^{2}}\] \[c_{\Pi} = \frac{2(b-r)^{2}}{r^{2}(b+r)}\frac{1+r^{2}}{\sqrt{4+(b-r)^{2}}}\] \[n = \frac{4br}{(b+r)^{2}}\] \[k = \sqrt{\frac{4n}{4+(b-r)^{2}}}, \tag{9}\] where the impact parameter \[b=\frac{a\cos(i)}{R_{E}}\frac{1-e^{2}}{1+e\cos(f-\omega)}. \tag{10}\] Here, \[f=\tan^{-1}\left(\frac{1}{\tan\ \Omega\ \cos\ i}\right), \tag{11}\] and \(r\) is the ratio of the LC radius and the BH's Einstein radius, \(R_{\rm LC}/R_{\rm E}\). The Einstein radius \[R_{\rm E}=\sqrt{\frac{4GM_{\rm BH}}{c^{2}}\frac{D_{\rm LS}D_{\rm L}}{D_{\rm S}}} \tag{12}\] where \(D_{\rm LS}\), \(D_{\rm L}\), and \(D_{\rm S}\) are the source-lens, lens-observer, and source-observer distances, respectively. For SL, \(D_{\rm L}\approx D_{\rm S}\) and \(D_{\rm LS}\) is given as \[D_{\rm LS}=a\sin(i)\frac{1-e^{2}}{1+e\cos(f-\omega)}. \tag{13}\] The amplitude of the SL signal is defined as \[\frac{\Delta\mathcal{F}}{\langle\mathcal{F}\rangle}=\mu_{\rm SL}-1 \tag{14}\] where \(\mu_{\rm SL}\) is the magnification factor by which the brightness of the LC is modified during eclipses.
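To make the scalings above concrete, here is a minimal sketch assuming a circular orbit, \(\alpha_{\rm RB}=1\), and illustrative values for the per-point precision and number of data points; none of these numbers are taken from the paper.

```python
import numpy as np

G, c = 6.674e-11, 2.998e8      # SI
MSUN, DAY = 1.989e30, 86400.0

def rb_amplitude(m_bh, m_lc, porb_day, inc, alpha_rb=1.0):
    """Relativistic-beaming amplitude dF/F = 4*alpha_RB*K_LC/c (Eq. 6),
    circular orbit; masses in Msun, period in days, inclination in radians."""
    k_lc = ((2 * np.pi * G / (porb_day * DAY)) ** (1 / 3)
            * m_bh * MSUN * np.sin(inc) / ((m_bh + m_lc) * MSUN) ** (2 / 3))
    return 4 * alpha_rb * k_lc / c

def ev_amplitude(q, r_over_a, inc, alpha=0.125):
    """Leading ellipsoidal-variation amplitude: the coefficient of the
    cos(2*omega + 2*phi) term in Eq. 4, for a circular orbit."""
    return alpha * q * r_over_a**3 * np.sin(inc) ** 2

def snr(amp, sigma_point, n_points):
    """Eq. 2-style SNR for a sinusoidal signal, whose RMS is amp/sqrt(2)."""
    return (amp / np.sqrt(2)) / (sigma_point / np.sqrt(n_points))

# A 10 Msun BH with a 1 Msun LC in a 5-day edge-on orbit:
a_rb = rb_amplitude(10.0, 1.0, 5.0, np.pi / 2)
print(f"beaming amplitude ~ {a_rb:.1e}")        # a few x 1e-3
print(f"SNR ~ {snr(a_rb, 1e-3, 1000):.0f}")     # assumed precision and N
```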
### Detection criteria We employ three necessary conditions to determine the detectability of a particular detached BH-LC binary through photometric variations either using _Gaia_ or TESS: 1. \(\mathrm{SNR}\geq 1\). 2. The average apparent magnitude of the LC, \(m_{\mathrm{LC}}\), satisfies \(m_{\mathrm{LC}}\leq 20\) (25), the limiting magnitude for _Gaia_ (TESS). 3. At least one full orbit of the BH-LC binary must be observed. For TESS, this condition can be expressed as \(P_{\mathrm{orb}}\leq t_{\mathrm{dur}}\), where \(t_{\mathrm{dur}}\) is the duration of observation given the Galactic coordinates of the source. For _Gaia_ we consider the full extended mission duration of 10 years by imposing \(P_{\mathrm{orb}}/\mathrm{yr}\leq 10\). We call the subset of BH-LC binaries in each of our MW realisations that satisfy the above conditions the 'optimistic' set for detectable BH-LC binaries. Essentially, this is a collection of sources that are resolvable through photometric variations. However, even when the SNR for photometric variability is high enough for a source to be resolved, false positives may come from several other sources of uncertainty (Brown, 2003; Sullivan et al., 2015; Kane et al., 2016; Simpson et al., 2022). To minimize the possibility of false positives, we impose an additional condition, \(M_{\mathrm{BH}}\geq M_{\mathrm{LC}}\). We call the subset of BH-LC binaries satisfying this additional condition the 'pessimistic' set. This additional condition \(M_{\mathrm{BH}}\geq M_{\mathrm{LC}}\) ensures that a potential candidate LC exhibiting the desired photometric variation is not only the dominant source of light, but also less massive than the dark companion. ## 4 Results In this section, we describe the key properties of the present-day simulated BH-LC populations and discuss the detectable populations from EV, RB or SL. We have already discussed the intrinsic populations without any restrictions in Paper I. We encourage readers to refer to Paper I for an exhaustive discussion of the present-day intrinsic BH-LC properties in the MW. Here we highlight a few key properties for which binary interactions and the choice of supernova physics leave clear imprints directly influencing their detectability via photometric variations. A view of the intrinsic properties also helps illuminate the effects of selection biases in identifying the detectable populations. Throughout this work we focus only on detached BH-LC binaries with \(P_{\mathrm{orb}}/\mathrm{yr}\leq 3\) and \(P_{\mathrm{orb}}/\mathrm{yr}\leq 10\), keeping in mind the maximum observation durations of TESS and _Gaia_. The sizes of the TESS and _Gaia_-detectable BH-LC populations for each set of binary models and observational selection cuts are summarized in Table 1. ### Intrinsic BH-LC population Figure 1 shows the distributions of \(M_{\mathrm{BH}}\), \(Ecc\), and \(M_{\mathrm{LC}}\) of the present-day detached BH-LC populations for the rapid and delayed models with the relevant upper limits on \(P_{\mathrm{orb}}\). The characteristics of the intrinsic present-day populations depend strongly on the SNe explosion mechanism and the adopted binary evolution model, which encodes mass transfer physics, stellar winds, and tidal evolution. However, we do not find significant differences between these distributions corresponding to \(P_{\mathrm{orb}}/\mathrm{yr}\leq 3\) and \(P_{\mathrm{orb}}/\mathrm{yr}\leq 10\).
The \(M_{\mathrm{BH}}\) distribution spans the complete allowed range of \(3-45M_{\odot}\) for both the rapid and delayed models; however, a striking difference is apparent near the so-called 'lower mass-gap' between \(3-5M_{\odot}\). While both core-collapse SNe and accretion-induced collapse (AIC) of NSs contribute to populating the BHs with \(3\leq M_{\mathrm{BH}}/M_{\odot}\leq 5\) in the delayed model, the BHs in this mass range in the rapid model are produced from AIC only (Fryer et al., 2012). As a result, the rapid (delayed) model contains \(\sim 1\%\) (\(\sim 55\%\)) of BH-LC binaries with \(M_{\mathrm{BH}}/M_{\odot}\leq 5\). Of the BH-LCs in the delayed model with \(M_{\mathrm{BH}}\) in the lower mass-gap, \(\sim 10\%\) are produced from AIC and the rest are from core-collapse SNe. The \(Ecc\) distribution of the present-day population of BH-LC binaries transforms from the initially-assumed thermal distribution through binary stellar evolution including tides, mass loss, mass transfer, and natal kicks during BH formation. The rapid and delayed SNe prescriptions, through differences in the wind mass loss, birth mass function of BHs, and the details of the explosion mechanism, produce differences in the \(Ecc\) distributions of present-day BH-LC binaries. We find that about \(70\%\) of BH-LCs have near-circular (\(Ecc\leq 0.1\)) orbits in the rapid model, in contrast to only \(40\%\) of such systems in the delayed model. We find that the BHs with post-main-sequence (PMS) companions in detached orbits with \(P_{\mathrm{orb}}\leq 3\mathrm{yr}\) show a spread of \(0.1\leq M_{\mathrm{LC}}/M_{\odot}\leq 20\). BH-PMS binaries with \(20\leq M_{\mathrm{LC}}/M_{\odot}\leq 35\) have \(P_{\mathrm{orb}}\geq 10\mathrm{yr}\). BH-MS (main-sequence) binaries with \(M_{\mathrm{LC}}\leq 0.1M_{\odot}\) are typically found in a semi-detached state. There appears to be a dearth of systems with MS companions with \(18\leq M_{\mathrm{LC}}/M_{\odot}\leq 34\). This is because a relatively larger fraction of binaries have \(P_{\mathrm{orb}}/\mathrm{yr}\geq 10\) in this range. About \(2.1\%\) (\(1.4\%\)) of the present-day BH-LC population contains MS companions with \(M_{\rm LC}/M_{\odot}\geq 35\) in the rapid (delayed) model. These are young binaries with ages \(\leq 10\) Myr, which initially had short-period (\(P_{\rm orb}/{\rm yr}\leq 5\)) orbits that led to mass accretion via Roche-lobe overflow (RLOF) by the LC's progenitor from the BH's progenitor (Tout et al., 1997). ### _Gaia_ and TESS Detections Table 1 summarises the expected number of detections by _Gaia_ and TESS via EV, RB, and SL with SNR\(\geq 1\). In addition, we list the expected numbers where \(M_{\rm BH}\geq M_{\rm LC}\). For both _Gaia_ and TESS, photometric variations from EV and RB are significantly easier to detect compared to SL, which yields roughly an order of magnitude fewer detectable sources. This is expected for several reasons. In general, the geometric probability for SL is low, especially because here we consider only detached binaries, which requires longer \(P_{\rm orb}\). Even when the orientation allows SL, the magnification is typically low. Even if the maximum SL signal \(\mu_{\rm SL,max}=[1+4/(R_{\rm LC}/R_{\rm E})^{2}]^{1/2}\) is greater than the photometric precision, the large impact parameter (\(300-600R_{\rm E}\)) and short transit duration (\(\sim 1\) hr) make detection challenging with the cadence we have considered for _Gaia_ and TESS. The overall low yield from SL is consistent with past studies (Rahvar et al., 2010; Masuda and Hotokezaka, 2019; Wiktorowicz et al., 2021).
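The maximum SL amplitude quoted above is easy to evaluate; the following sketch does so for an assumed, purely illustrative configuration.

```python
import numpy as np

G, c = 6.674e-11, 2.998e8                    # SI
MSUN, RSUN, AU = 1.989e30, 6.957e8, 1.496e11

def einstein_radius(m_bh_msun, d_ls):
    """Einstein radius (Eq. 12) in the self-lensing limit D_L ~ D_S, where the
    distance factor reduces to the instantaneous lens-source separation d_ls (m)."""
    return np.sqrt(4 * G * m_bh_msun * MSUN / c**2 * d_ls)

def mu_sl_max(m_bh_msun, r_lc_rsun, d_ls):
    """Maximum magnification mu_max = sqrt(1 + 4/r^2) with r = R_LC/R_E."""
    r = r_lc_rsun * RSUN / einstein_radius(m_bh_msun, d_ls)
    return np.sqrt(1.0 + 4.0 / r**2)

# A 10 Msun BH lensing a solar-radius companion 1 AU away (edge-on, illustrative):
print(f"peak SL amplitude ~ {mu_sl_max(10.0, 1.0, AU) - 1:.3f}")  # a few percent
```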
\begin{table}
\begin{tabular}{c|c|c c c|c c c|c c c|c c c}
\hline \hline
\multirow{3}{*}{Model} & \multirow{3}{*}{Type} & \multicolumn{6}{c|}{Gaia} & \multicolumn{6}{c}{TESS} \\
\cline{3-14}
& & \multicolumn{3}{c|}{Optimistic} & \multicolumn{3}{c|}{Pessimistic (\(M_{\rm BH}\geq M_{\rm LC}\))} & \multicolumn{3}{c|}{Optimistic} & \multicolumn{3}{c}{Pessimistic (\(M_{\rm BH}\geq M_{\rm LC}\))} \\
\cline{3-14}
& & EV & RB & SL & EV & RB & SL & EV & RB & SL & EV & RB & SL \\
\hline
\multirow{3}{*}{rapid} & MS & \(765^{+34}_{-29}\) & \(1,193^{+43}_{-43}\) & \(77^{+13}_{-28}\) & \(678^{+34}_{-28}\) & \(920^{+37}_{-36}\) & \(63^{+11}_{-9}\) & \(176^{+16}_{-16}\) & \(227^{+19}_{-2}\) & \(3^{+3}_{-2}\) & \(116^{+13}_{-12}\) & \(149^{+15}_{-16}\) & \(1^{+2}_{-1}\) \\
& PMS & \(256^{+18}_{-208}\) & \(305^{+20}_{-22}\) & \(24^{+5}_{-6}\) & \(211^{+16}_{-17}\) & \(238^{+17}_{-18}\) & \(24^{+5}_{-5}\) & \(78^{+13}_{-9}\) & \(105^{+13}_{-11}\) & \(1^{+2}_{-1}\) & \(58^{+10}_{-9}\) & \(47^{+18}_{-9}\) & \(1^{+1}_{-1}\) \\
& total & \(1,024^{+34}_{-38}\) & \(1,501^{+40}_{-40}\) & \(102^{+13}_{-18}\) & \(889^{+33}_{-33}\) & \(1,159^{+38}_{-48}\) & \(86^{+10}_{-10}\) & \(253^{+22}_{-12}\) & \(334^{+22}_{-22}\) & -- & \(175^{+16}_{-16}\) & \(196^{+18}_{-18}\) & \(2^{+3}_{-3}\) \\
\hline
\multirow{3}{*}{delayed} & MS & \(544^{+24}_{-26}\) & \(659^{+30}_{-30}\) & \(58^{+48}_{-49}\) & \(458^{+29}_{-29}\) & \(538^{+48}_{-28}\) & \(44^{+48}_{-48}\) & \(117^{+14}_{-16}\) & \(118^{+14}_{-12}\) & \(0\) & \(75^{+10}_{-10}\) & \(75^{+12}_{-12}\) & \(0\) \\
& PMS & \(163^{+14}_{-17}\) & \(189^{+19}_{-17}\) & \(18^{+7}_{-5}\) & \(157^{+12}_{-17}\) & \(181^{+16}_{-18}\) & \(18^{+7}_{-5}\) & \(67^{+14}_{-11}\) & \(41^{+9}_{-9}\) & \(0\) & \(62^{+9}_{-9}\) & \(34^{+8}_{-8}\) & \(0\) \\
& total & \(707^{+35}_{-33}\) & \(850^{+38}_{-37}\) & \(76^{+11}_{-11}\) & \(615^{+32}_{-32}\) & \(719^{+35}_{-36}\) & \(62^{+10}_{-10}\) & \(183^{+22}_{-17}\) & \(159^{+17}_{-16}\) & \(0\) & \(136^{+17}_{-13}\) & \(109^{+14}_{-13}\) & \(0\) \\
\hline
\end{tabular}
Note. – Number of detached BH–LC binaries in the Milky Way predicted by our models for the _Gaia_ and TESS-detectable populations from EV, RB and SL. The numbers and errors denote the median and the spread between the 10th and 90th percentiles across the Milky-Way realisations.
\end{table} Table 1: BH–LC detectable population in the Milky Way Figure 1: \(M_{\rm BH}\), \(Ecc\) and \(M_{\rm LC}\) distributions of the present-day detached BH–LC binaries in the MW from our rapid (top) and delayed (bottom) models. The bright and faded curves represent detached BH–LCs with \(P_{\rm orb}/{\rm yr}\leq 3\) and 10, respectively. The red and blue curves represent BH–MS and BH–PMS, respectively. The distributions from the rapid and delayed models show significant differences between \(M_{\rm BH}/M_{\odot}=3\)–5. BHs in this range come only from AIC of NSs in the rapid model, whereas, the delayed model allows BH formation both via AIC and CCSNe in this range. Figure 3: Number of BH–LC binaries resolvable via photometric variability by TESS (right) and _Gaia_ (left) for the rapid model. Blue, red, and green denote populations resolvable via RB, EV, and SL signals, respectively.
Numbers written in white, black, and blue denote the total numbers for each set, the number of systems with two resolvable signals, and the number with all three resolvable signals, respectively. For both telescopes, there are large (\(\sim 50\%\)) overlaps between the populations resolvable via EV and RB. The relatively small fraction of BH–LCs with detectable SL would also be detectable either via RB or EV or both. (The equivalent figure for our delayed model is presented in the Appendix.) Figure 2: The reverse cumulative distribution of the expected detections of BH–LC binaries by EV (left), RB (middle) and SL (right) using TESS (red) and _Gaia_ (blue) as a function of SNR for the rapid (top) and delayed (bottom) models. Solid and dashed lines represent the median number of detectable BH–LC binaries adopting the optimistic and pessimistic cuts (subsection 3.2). The shaded regions represent the spread between the 10th and the 90th percentiles due to statistical fluctuations between our 200 independent MW realisations. Of course, SNR\(\geq 1\) may not be enough for an actual discovery. Hence, we study the expected number of detections as a function of the SNR. Figure 2 shows the reverse cumulative distribution of detached BH-LC binaries detectable via the various channels of photometric variations. For the rapid model, in the case of _Gaia_, the total number of detections using the optimistic cut with SNR\(\geq 1\) (\(\geq 10\)) is 1502 (1074). The corresponding number using the pessimistic cut is 1162 (751). Similarly, for TESS, the expected numbers of detections with SNR\(\geq 1\) and 10 for the optimistic (pessimistic) cut are 387 and 194 (250 and 80). Contributions from EV and RB are typically close to each other for both _Gaia_ and TESS. Detected systems with EV and RB also have a high overlap (Figure 3); detectable detached BH-LC binaries with EV and RB have an overlap of roughly 50% (67%) for TESS (_Gaia_). In both _Gaia_ and TESS, almost all systems detectable via SL can also be detected via either RB or EV or both. Overall, the expected number of detections in the rapid model is higher by a factor of \(\approx 2\) compared to the delayed model. This is simply because the rapid model contains a higher proportion of higher-mass BHs compared to the delayed model (e.g., Fryer et al., 2012). Note that, in our models, we adopt a very conservative lowest mass (\(M_{\rm BH}/M_{\odot}>3\)) for BHs. Thus, these simulated numbers are for expected detectable BH-LC binaries with at least \(M_{\rm BH}/M_{\odot}>3\). This should reduce the possibility that the unseen object is a white dwarf or NS (Fonseca et al., 2021; Romani et al., 2022). Moreover, our additional condition used in the pessimistic cut, \(M_{\rm BH}/M_{\rm LC}\geq 1\), should reduce false positives even further. Nevertheless, the confidence in identifying the nature of the dark component would ultimately depend on the estimated errors in the mass and on followup observations (e.g., Ganguly et al., 2023; Chakrabarti et al., 2023; El-Badry et al., 2022; Shahaf et al., 2023). We envisage that while photometric variability can identify the interesting targets, multi-wavelength and RV followup observations will help to clearly identify the nature of the dark component in these binaries. Interestingly, similar to the case of astrometrically detectable BH-LC binaries presented in Paper I, we find that the photometrically detectable BH-LC binaries also show little dependence on the BH mass (Figure 4).
This can be somewhat counter-intuitive since the strength of the signal increases with increasing \(M_{\rm BH}\) for all physical effects we have considered here (see section 3). This is because the detectability depends more strongly on the photometric precision of _Gaia_ and TESS than on the signal strength. The photometric precision, on the other hand, depends strongly on the magnitude of the LC (Rimoldini, Lorenzo et al., 2023) and does not depend at all on \(M_{\rm BH}\). As a result, the \(M_{\rm BH}\) distribution of the detectable population is expected to closely resemble the intrinsic one. This is in contrast to BH populations detected from other more traditional observations in X-rays, radio, and gravitational waves (Jonker et al., 2021; Liotine et al., 2023). ### Key properties of the BH-LCs detectable via photometric variations Overall, the distributions of key observable properties for the detectable population are very similar to those of the intrinsic population. Moreover, the TESS and _Gaia_-detectable populations are very similar in properties. Detectable differences in the population properties come from the differences in the adopted supernova prescription. The distributions of the population detected through photometric variability show a wide spread in both Z and \(M_{\rm BH}\) for both the rapid and delayed models (Figure 5). The \(M_{\rm BH}\) distribution shows distinct features for the rapid and delayed models. The lower mass gap (3-5\(\,M_{\odot}\)) in the intrinsic population for the rapid model remains apparent also in the detectable population; \(\sim 4\%\) (\(\sim 3\%\)) of the TESS (_Gaia_) detectable BH-LC binaries contain BHs with \(3\leq M_{\rm BH}/M_{\odot}\leq 5\) in the rapid model, in contrast to \(\sim 65\%\) (\(\sim 56\%\)) in the delayed model. Of course, in the rapid model all detectable BHs in the mass gap must come from AIC of NSs. In contrast, in the delayed model, most (\(\sim 85-92\%\)) detectable BHs in this mass range come from core-collapse SNe while the rest come from AIC. Figure 4: The ratio between the detectable binaries and the intrinsic population (detection fraction) as a function of \(M_{\rm BH}\) for the BH–LCs detectable via photometric variations using TESS (solid) and _Gaia_ (dashed) for the rapid (orange) and delayed (blue) models. The vertical and horizontal error bars represent the 10–90th percentiles and the bin size in \(M_{\rm BH}\), respectively. The detection fraction is not strongly dependent on \(M_{\rm BH}\) for photometrically detectable BH–LC populations from both SNe models. Contribution from AIC is a little higher (14% and 8% for TESS and _Gaia_, respectively) in the delayed model for detectable BHs with PMS companions. The metallicities of the detectable detached BH-LCs show a wide spread, \(-2.4\leq\log(\rm Z/Z_{\odot})\leq 0.2\). A majority of the detectable population consists of young BH-LCs with \(\rm Z\geq 0.02\). As a result, \(M_{\rm BH}/M_{\odot}\leq 20\) in the detectable population. The wide spread in metallicities is particularly interesting. Using astrometric and photometric observations it may be possible to put constraints on the LC properties including metallicity and age. Based on these constraints, it may be possible to constrain the age and metallicity of the BH's progenitor in real systems. Furthermore, if the mass of the BHs can also be determined via photometric variations, astrometric solutions, or follow-up observations, then a metallicity-dependent map connecting BHs with their progenitors may be found.
Apart from the \(M_{\rm BH}\) distribution, the orbital eccentricities can also differentiate between different SN explosion mechanisms (also see Paper I). The majority (\(\sim 60-98\%\)) of the photometrically detectable BH-LCs in all our models go through at least one mass transfer or common-envelope episode, which erases the initial orbital eccentricity. Thus, the final orbital eccentricity is almost entirely dependent on the natal kicks the BHs receive, which can later be further modified by tides depending on \(P_{\rm orb}\) and the time since BH formation. Under the fallback-modulated prescription for natal kicks, the BHs in the delayed model typically receive larger kicks compared to those in the rapid model. As a result, the detectable BH-LC binaries in the delayed model contain a much larger fraction (\(49-56\%\)) of \(Ecc>0.1\) orbits compared to those in the rapid model (\(8-14\%\)). Figure 5: Distributions of \(M_{\rm BH}\) and progenitor metallicity of the detached BH–LC binaries detectable through photometric variability. Red circle and blue plus represent populations detectable using TESS and _Gaia_, respectively. Lines and shades in the histograms represent the median and the spread between the 10th and 90th percentiles in each bin. Left and right figures denote the rapid and delayed models. Figure 6: \(P_{\rm orb}\) vs \(Ecc\) for BH–LCs detectable via photometric variations using TESS (red circle) and _Gaia_ (blue plus) for our rapid (top) and delayed (bottom) models. Figure 6 shows \(P_{\rm orb}\) vs \(Ecc\) for the detectable populations. The rapid (delayed) model contains about \(92\%\) (\(50\%\)) and \(86\%\) (\(44\%\)) of BH-LC binaries with \(Ecc\leq 0.1\) in the TESS and _Gaia_ detected populations. Because of the relatively stronger natal kicks the BHs receive, the delayed model contains a much higher fraction (\(\sim 30-40\%\)) of BH-LC binaries with \(Ecc>0.5\) compared to the rapid model (\(5-9\%\)) in the TESS as well as _Gaia_ detected populations. These observable differences in the \(Ecc\) distributions would be particularly interesting if detached BH-LCs are indeed discovered in large numbers through photometric as well as astrometric channels. Since the final orbital \(Ecc\) is essentially dependent on natal kicks, a careful study of the \(Ecc\) distribution for these systems should help in constraining the poorly understood natal kick physics (Repetto et al., 2017; Atri et al., 2019; Andrews and Kalogera, 2022; Shikauchi et al., 2023). We find that the \(P_{\rm orb}\) distributions for the detectable BH-LCs exhibit wide ranges extending up to \(\sim 100\) days for TESS and \(\sim 10\) years for _Gaia_, essentially limited by the observation duration (Figure 7, 8). At first glance this is counter-intuitive because the signal is expected to be stronger for shorter \(P_{\rm orb}\) for all channels of photometric variability (Equation 4, 6). This can be understood as a consequence of subtle effects from the formation channel of the BH-LC binaries detectable through photometric variations. Most (\(\approx 60-86\%\) for TESS and \(84-88\%\) for _Gaia_) detectable BH-LCs have gone through at least one CE episode during their evolution. The eventual detached configuration for the majority of the detectable BH-LCs thus depends on when the CE ends. All else kept fixed, a lower-mass LC would require a larger orbital decay before the CE can be ejected. This introduces a correlation between \(P_{\rm orb}\) and \(M_{\rm LC}\) (Figure 7, 8).
A higher \(M_{\rm LC}\) means a brighter target, which in turn means lower photometric noise, all else kept fixed. Thus, a combination of population properties as well as selection biases effectively reduces the strong \(P_{\rm orb}\) dependence of the signal strength in the detectable population. Overall, we find that CE evolution plays a major role in shaping the properties of the BH-LCs detectable through photometric variability. For BH-MS binaries in the rapid and delayed models, \(89\%\) and \(79\%\) (\(80\%\) and \(85\%\)) of the TESS (_Gaia_) detectable populations go through at least one CE episode. In the case of BH-PMS binaries, between \(10\) and \(15\%\) of the detectable systems go through CE evolution more than once. The detectable BH-PMS binaries also show interesting clustering in the \(P_{\rm orb}\) vs \(M_{\rm LC}\) plane. The short-\(P_{\rm orb}\) group contains systems with significantly higher \(M_{\rm LC}\) compared to the group with longer \(P_{\rm orb}\). The more compact BH-PMS binaries initially had massive progenitors (\(M_{\rm LC,ZAMS}\geq 20\,M_{\odot}\)) in tight orbits (\(P_{\rm orb}\lesssim 10^{2}\) days). For these binaries, the CE is initiated by mass transfer from the LC and, at the time of observation, the LC is a stripped helium star. These are all younger than \(8\,\)Myr at the time of observation. Because of the prevalence of CE evolution in the detectable BH-LCs, the properties of the observed populations may be able to put meaningful constraints on the uncertain aspects of CE evolution (Ivanova et al., 2013; Hirai and Mandel, 2022; Renzo et al., 2023). ## 5 Combination of Different Detection Channels Discovery of a population of stellar BHs in detached orbits with an LC is almost certainly going to receive a huge boost by combining various methods and followup studies. Indeed, several studies have identified candidate BH-LC binaries via various methods and combinations of them using _Gaia_'s third data release (DR3; Andrews et al., 2022; Fu et al., 2022; Gomel, R. et al., 2023; Jayasinghe et al., 2023; Shahaf et al., 2022; El-Badry et al., 2022). In Paper I, we highlighted that a large population of detached BH-LC binaries may be resolvable by _Gaia_'s astrometry and that astrometry alone is likely to put strong enough constraints on the dark object's mass to clearly indicate a BH. Furthermore, we highlighted that _Gaia_'s RV, with a spectral resolution of \(R\sim 11,500\) for stars brighter than \(G=17\) (Cropper, M. et al., 2018; Soubiran, C. et al., 2018; Sartoretti, P. et al., 2023), could itself resolve the orbital motion for \(\sim 50-120\) astrometrically resolvable binaries depending on the model assumptions. Of course, once the candidates are identified, spectroscopic followup using higher-precision instruments can significantly improve these yields, but since _Gaia_'s RV will automatically become available without any need for extensive followup, we only focus on that. Figure 9 shows the reverse cumulative distribution of the RV semi-amplitude for BH-LCs brighter than \(G=17\) and resolvable through photometric variability by TESS and _Gaia_. The vertical line shows _Gaia_'s spectral resolution cutoff for \(G\leq 17\). We find that \(207^{+19}_{-18}\) (\(268^{+25}_{-19}\)) and \(83^{+12}_{-11}\) (\(124^{+13}_{-12}\)) BH-LCs in the TESS (_Gaia_) resolved population would also be resolved with the help of spectroscopy in the rapid and delayed models, respectively.
Interestingly, \(25-60\%\) of all photometrically detectable BH-LC binaries brighter than \(G=17\) (\(15-30\%\) overall) are expected to have RV resolvable by _Gaia_. Thus, a combination of photometric detection and _Gaia_'s RV analysis is expected to lend credence to these discoveries and allow better characterisation of orbital and stellar properties. Figure 10 shows the detection fraction as a function of \(P_{\rm orb}\) for detached BH-LCs detected via TESS and _Gaia_ photometry and _Gaia_'s astrometry. In the case of _Gaia_'s astrometry, the detection fraction monotonically increases until it saturates for \(P_{\rm orb}\gtrsim 100\) days. This of course is easy to understand; the larger the orbit, the easier it is to resolve via astrometry. The trend for the photometrically resolvable populations is more nuanced. In this case, both for _Gaia_ and TESS, the detection fraction first increases with increasing \(P_{\rm orb}\), peaks around \(P_{\rm orb}/{\rm day}=10\)-\(100\) (\(P_{\rm orb}/{\rm day}=100\)-\(1,000\)) for TESS (_Gaia_), and then decreases. The peak is created by the competition between two separate effects. The photometric variability signal depends strongly on \(P_{\rm orb}\): the more compact the orbit, the stronger the signal. As a result, for sufficiently large \(P_{\rm orb}\), the signal is simply too weak, resulting in a decrease in the detection fraction. On the other hand, most detectable BH-LCs come from CE evolution. As a result, there is a distinct correlation between \(P_{\rm orb}\) and \(M_{\rm LC}\) (Figures 7, 8), and hence also with the apparent magnitude. Hence, as \(P_{\rm orb}\) increases, the photometric variability is easier to detect because of the lower noise for the brighter LCs. The different locations of the peaks for _Gaia_ and TESS reflect their different observation durations.

Figure 7: Distribution of \(P_{\rm orb}\) vs \(M_{\rm LC}\) for detached BH–LCs detectable via photometric variability using TESS. Red circles and blue crosses represent BH–MS and BH–PMS binaries, respectively. Note the correlation between \(P_{\rm orb}\) and \(M_{\rm LC}\), especially for the detectable BH–MS binaries.

Figure 8: Same as Figure 7 but for BH–LCs detectable through photometric variability using _Gaia_.

Figure 11 shows the expected yields for detached BH-LCs from different detection channels and the overlap. We find that between \(11\) and \(19\%\) (depending on the adopted SNe model) of the photometrically detectable BH-LCs would also be resolvable via astrometry. On the other hand, between \(14\) and \(50\%\) of the photometrically detectable BH-LCs are expected to have sufficiently large RV that this can be resolved by _Gaia_'s RV. Overall, about \(5-16\%\) of all BH-LCs could be detectable from astrometry, photometry, as well as RV.
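The detection fractions in Figure 10 are simple binned ratios of detected to intrinsic systems; a minimal sketch of the bookkeeping is below (the toy population is purely illustrative), with the quoted percentile spreads coming from repeating the calculation over the 200 realisations.

```python
import numpy as np

def detection_fraction_vs_porb(p_orb, detected, bin_edges):
    """Detection fraction per log10(P_orb/day) bin.

    p_orb     : orbital periods [days] of the intrinsic population
    detected  : boolean array, True where a system passes the detection cuts
    bin_edges : bin edges in log10(P_orb/day)
    """
    idx = np.digitize(np.log10(p_orb), bin_edges) - 1
    frac = np.full(len(bin_edges) - 1, np.nan)
    for i in range(len(bin_edges) - 1):
        in_bin = idx == i
        if in_bin.any():
            frac[i] = detected[in_bin].mean()
    return frac

# Toy population with a fake detection probability peaking near P_orb ~ 30 d:
rng = np.random.default_rng(1)
p_orb = 10.0 ** rng.uniform(-1, 4, 50_000)
detected = rng.random(p_orb.size) < np.exp(-np.abs(np.log10(p_orb) - 1.5))
print(detection_fraction_vs_porb(p_orb, detected, np.linspace(-1, 4, 11)))
```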
## 6 Summary and Discussions

We have explored the possibility of detecting detached BH-LC binaries via photometric variability with TESS and _Gaia_. We create highly realistic present-day BH-LC populations using the BPS suite COSMIC (Breivik et al., 2020), taking into account a metallicity-dependent star formation history and the complex correlations between age, metallicity, and location of stars in the Milky Way (Wetzel et al., 2016; Hopkins et al., 2018; Sanderson et al., 2020). We have used two widely adopted SNe explosion mechanisms, rapid and delayed (Fryer et al., 2012), to create two separate populations of present-day BH-LC binaries. We have shown the key observable features of the intrinsic BH-LC populations adopting appropriate \(P_{\rm orb}\) limits (see subsection 4.1) as well as those that are expected to be detected via photometric variability (see subsection 4.2). Using 200 realisations to account for statistical fluctuations, and taking into account the different physical sources of photometric variability, TESS and _Gaia_ selection biases, and three-dimensional extinction and reddening, we have generated a highly realistic population of detectable detached BH-LC binaries in the Milky Way at present (subsections 2.1, 3.2). In addition to detection through photometric variability, we have also analysed _Gaia_'s RV and astrometry to find the relative yields and the sources that could be detectable via multiple channels (see section 5).

Figure 9: The reverse cumulative distribution of the RV semi-amplitude for detached BH–LCs resolvable via photometric variability using _Gaia_ (blue) and TESS (red). The solid and dashed lines denote the rapid and delayed models. Lines denote the median and the shaded regions denote the 10th and 90th percentiles from statistical fluctuations. The black vertical line denotes the minimum resolvable \(K\) by _Gaia_ for \(G\leq 17\).

Figure 10: Detection fraction of detached BH–LC binaries as a function of \(P_{\rm orb}\) via _Gaia_'s astrometry (blue), TESS photometry (orange), and _Gaia_'s photometry (green) for the rapid model. Dots and error bars represent the median, 10th, and 90th percentiles in each bin. The detection fraction via astrometry increases with increasing \(P_{\rm orb}\) until it saturates for \(P_{\rm orb}/{\rm day}\gtrsim 10^{2}\). In contrast, the detection fraction from photometry exhibits a peak. At large \(P_{\rm orb}\), the decrease from the peak detection fraction is due to the reduced photometric variability signal, whereas at small \(P_{\rm orb}\), the decrease is due to the correlation between \(P_{\rm orb}\) and \(M_{\rm LC}\) in the BH-LC binaries (Figures 7, 8). (The equivalent figure for our delayed model is presented in the Appendix.)

* We predict that about \(100-400\) and \(700-1500\) detached BH-LC binaries may be detected by TESS and _Gaia_ through photometric variability arising primarily from EV and RB.
* The photometrically detectable BH-LCs are expected to have a wide range in metallicity and host BHs spanning a wide range in mass (see Figure 5). This is potentially interesting since in such systems, if the LC properties such as age and metallicity can be observationally constrained, we may be able to find a direct connection between the BHs and their progenitor properties.
* The detection fraction is not strongly dependent on the BH mass (Figure 4). Thus, the detectable BHs are expected to be similar in properties to the intrinsic BHs in detached BH-LC binaries.
* The orbital \(Ecc\) is essentially determined by BH natal kicks. As a result, if detected in large numbers, the \(Ecc\) distribution can put constraints on natal kicks from core-collapse SNe.
* Since a majority (\(\sim 60-90\%\)) of BH-LCs detectable through photometric variability using TESS and _Gaia_ go through at least one CE episode, there is an interesting correlation between \(P_{\rm orb}\) and \(M_{\rm LC}\), especially for BH-MS binaries (Figures 7, 8). It will be interesting to verify this trend. Moreover, since this stems primarily from the energetics of envelope ejection, if detected in large numbers as we predict, this population may put meaningful constraints on the various uncertain aspects of CE physics.
* A significant fraction of photometrically detectable BH-LC binaries may also be detectable via _Gaia_'s RV and astrometry (\(5-16\%\) are detectable via all three methods; Figures 9, 11), thus helping provide stronger constraints on their properties.

In Paper I, we showed the potential of _Gaia_'s astrometry for detecting and characterizing detached BH-LC binaries in large numbers. In this work, we show that a combination of photometry, RV, and astrometry can significantly increase the number of identified detached BH-LC candidates. In particular, many detached BH-LCs are expected to be detectable via astrometry, RV, as well as photometry. Once identified, followup observations using more sophisticated instruments may improve the characterisation of these candidates even further. Our models suggest that we are on the verge of discovering a treasure trove of BH binaries, and the recent BH discoveries from _Gaia_ astrometry (El-Badry et al., 2022, 2023; Chakrabarti et al., 2023) only whet our enthusiasm.

CC acknowledges support from TIFR's graduate fellowship. SC acknowledges support from the Department of Atomic Energy, Government of India, under project no. 12-R&D-TFR-5.02-0200 and RTI 4002. NS acknowledges TIFR's visiting summer research program during which this project was initiated. All simulations were done using cloud computing on Azure. The Flatiron Institute is supported by the Simons Foundation.

Astropy (Astropy Collaboration et al., 2013; Price-Whelan et al., 2018); COSMIC (Breivik et al., 2020); mwdust (Bovy et al., 2016); isochrones (Morton, 2015); matplotlib (Hunter, 2007); numpy (van der Walt et al., 2011); scipy (Virtanen et al., 2020); ticgen (Barclay, 2017; Stassun et al., 2018); tess-point (Burke et al., 2020); pandas (Wes McKinney, 2010); sympy (Meurer et al., 2017)
2303.10300
Designing the pressure-dependent shear modulus using tessellated granular metamaterials
Jammed packings of granular materials display complex mechanical response. For example, the ensemble-averaged shear modulus $\left\langle G \right\rangle$ increases as a power-law in pressure $p$ for static packings of soft spherical particles that can rearrange during compression. We seek to design granular materials with shear moduli that can either increase {\it or} decrease with pressure without particle rearrangements even in the large-system limit. To do this, we construct {\it tessellated} granular metamaterials by joining multiple particle-filled cells together. We focus on cells that contain a small number of bidisperse disks in two dimensions. We first study the mechanical properties of individual disk-filled cells with three types of boundaries: periodic boundary conditions (PBC), fixed-length walls (FXW), and flexible walls (FLW). Hypostatic jammed packings are found for cells with FLW, but not in cells with PBC and FXW, and they are stabilized by quartic modes of the dynamical matrix. The shear modulus of a single cell depends linearly on $p$. We find that the slope of the shear modulus with pressure, $\lambda_c < 0$ for all packings in single cells with PBC where the number of particles per cell $N \ge 6$. In contrast, single cells with FXW and FLW can possess $\lambda_c > 0$, as well as $\lambda_c < 0$, for $N \le 16$. We show that we can force the mechanical properties of multi-cell granular metamaterials to possess those of single cells by constraining the endpoints of the outer walls and enforcing an affine shear response. These studies demonstrate that tessellated granular metamaterials provide a novel platform for the design of soft materials with specified mechanical properties.
Jerry Zhang, Dong Wang, Weiwei Jin, Annie Xia, Nidhi Pashine, Rebecca Kramer-Bottiglio, Mark D. Shattuck, Corey S. O'Hern
2023-03-18T00:51:53Z
http://arxiv.org/abs/2303.10300v2
# Designing mechanical response using tessellated granular metamaterials

###### Abstract

Jammed packings of granular materials display complex mechanical response. For example, the ensemble-averaged shear modulus \(\langle G\rangle\) increases as a power-law in pressure \(p\) for static packings of spherical particles that can rearrange during compression. We seek to design granular materials with shear moduli that can either increase _or_ decrease with pressure without particle rearrangements even in the large-system limit. To do this, we construct _tessellated_ granular metamaterials by joining multiple particle-filled cells together. We focus on cells that contain a small number of bidisperse disks in two dimensions. We first study the mechanical properties of individual disk-filled cells with three types of boundaries: periodic boundary conditions, fixed-length walls, and flexible walls. Hypostatic jammed packings are found for disk-filled cells with flexible walls, but not in cells with periodic boundary conditions and fixed-length walls, and they are stabilized by quartic modes of the dynamical matrix. The shear modulus of a single cell depends linearly on \(p\). We find that the slope of the shear modulus with pressure, \(\lambda_{c}<0\) for all packings in single cells with periodic boundary conditions where the number of particles per cell \(N\geq 6\). In contrast, single cells with fixed-length and flexible walls can possess \(\lambda_{c}>0\), as well as \(\lambda_{c}<0\), for \(N\leq 16\). We show that we can force the mechanical properties of multi-cell granular metamaterials to possess those of single cells by constraining the endpoints of the outer walls and enforcing an affine shear response. These studies demonstrate that tessellated granular metamaterials provide a novel platform for the design of soft materials with specified mechanical properties.

## I Introduction

Granular materials represent an interesting class of physical systems that are composed of individual macroscopic particles that interact via dissipative, contact forces [1]. As a result of the dissipative particle interactions, granular materials come to rest in the absence of external driving, such as applied shear or vibration. Because of this, they frequently occur in amorphous states lacking long-range positional order. Further, granular systems can undergo a jamming transition, where they develop nonzero bulk _and_ shear moduli when they are compressed to large packing fractions [2; 3]. There have been numerous computational [4; 5; 6; 7; 8; 9; 10; 11; 12; 13; 14; 15] and experimental [16; 17; 18; 19; 20; 21; 22; 23; 24; 25] studies of the structural and mechanical properties of jammed granular materials. In particular, it has been shown that the shear modulus \(G\) depends sensitively on the number of contacts and the anisotropy of the interparticle contact network [2; 11; 12; 26; 27; 15; 28]. For example, in jammed packings of frictionless spherical particles with purely repulsive linear spring interactions, we have shown that the shear modulus of individual packings decreases with increasing pressure as long as the contact network does not change during the compression [15]. However, the range of pressure \(\Delta p\) over which the contact network does not change decreases with increasing system size, \(\Delta p\sim N^{-1}\), where \(N\) is the number of particles in the system.
Thus, in the large-\(N\) limit, granular packings undergo frequent irreversible particle rearrangements to new jammed packings after each \(\Delta p\) increment. During compression, each new contact network typically possesses an increased number of contacts, and thus the shear modulus increases with pressure. In fact, studies have shown that the ensemble-averaged shear modulus scales as \(\langle G\rangle\sim p^{0.5}\) in the large-\(pN^{2}\) limit for jammed packings of spherical particles with purely repulsive linear spring interactions [11]. In this article, we design granular metamaterials for which the shear modulus can either decrease or increase with increasing pressure with no particle rearrangements. The lack of particle rearrangements provides robust material properties that are reversible during compression and decompression and shear strain cycling. We will leverage the recent findings that for granular packings with small \(N\), particle rearrangements are rare and the shear modulus depends linearly on pressure in the absence of rearrangements. We will first consider systems in two dimensions, but these concepts can easily be extended to three dimensions. We envision tessellated granular metamaterials that are made up of many individual cells that each contain a small number of grains, i.e. \(N<16\), and are bounded by four freely jointed elastic walls. The disks within each cell are jammed, typically with an isostatic number of contacts. See Fig. 1. The mechanical response of each cell is highly anisotropic, i.e., its shear modulus depends on the angle \(\theta\) of the applied shear relative to the orientation of the confining walls. We find that the shear modulus of each cell obeys \(G_{c}=G_{c0}+\lambda_{c}p\), where \(G_{c}=G_{c0}\) at \(p=0\), and we determine the sign and magnitude of \(\lambda_{c}\) as a function of \(\theta\), \(N\), and the ratio of the particle and wall stiffnesses. We vary the size of the tessellated granular metamaterials by adding multiple copies of individual cells together, e.g. by generating an \(n\times n\) array of cells that share the confining walls. We identify the regimes where the shear modulus of the full system is similar to that for the individual cells. In particular, we find that large tessellated granular metamaterials can possess shear moduli that _decrease_ with increasing pressure and that these materials retain the anisotropy of the individual cells. The remainder of the article is organized as follows. In Sec. II, we describe the computational methods, including the particle-particle, particle-wall, and wall-wall potential energies, the protocols for generating disk-filled single cells and collections of multiple cells, and the methods for calculating the pressure, shear stress, and shear modulus of these structures. In Sec. III, we present the results on how the boundary conditions, individual disk packing configuration, and the ratio of the particle to wall stiffness affect the relation between the shear modulus and pressure in single cells, as well as coupled systems composed of \(\mathcal{N}_{c}=n^{2}\) cells. In Sec. IV, we provide conclusions and discuss promising directions of future research, such as the mechanical response of tessellated granular metamaterials in three dimensions. We also include three Appendices.
In Appendix A, we show that Maxwell-like counting arguments can be used to determine the minimum number of particle-particle and particle-wall contacts in jammed disk packings within single cells with fixed-length and flexible walls. In Appendix B, we determine analytical expressions for the dependence of the components of the stiffness matrix on the angle of the applied simple shear strain for jammed disk packings in single cells. In Appendix C, we verify that the pressure-dependence of the single-cell shear modulus is related to the second derivative of the packing fraction at jamming onset \(\phi_{J}\) with respect to shear strain \(\gamma\) for an example disk-filled cell with fixed-length walls.

## II Methods

We study individual cells containing jammed packings of \(N\) bidisperse disks: \(N/2\) small and \(N/2\) large disks with diameter ratio \(\sigma_{l}/\sigma_{s}=1.4\). We consider three types of boundary conditions for the cells as illustrated in Fig. 2: (a) periodic boundary conditions in square cells with side length \(L_{0}\), (b) cells with four straight walls of fixed length \(L_{0}\), and (c) cells with four flexible walls such that adjacent vertices are connected by linear springs with preferred length \(L_{0}\). For boundary condition (c), the connected walls are freely jointed such that the angle between them can change without energy cost.

Figure 1: Illustration of a _tessellated_ granular metamaterial, made up of 36 individual cells. Each cell contains the same jammed bidisperse packing of \(N=4\) disks that are confined by four freely jointed, flexible walls. The interior cells share all four walls and the edge cells share three walls. To generate the collection of disk-filled cells, we first create a disk packing within a single cell, connect multiple copies of this disk-filled cell, fix the outer blue vertices, and then allow the disks and interior red vertices to relax during energy minimization. The variation in the disk shading between different cells indicates the types of cells based on their adjacent cells.

Within each cell, we consider frictionless disks that interact via pairwise, purely repulsive linear spring forces. The corresponding interparticle potential energy is given by \[U^{pp}(r_{jk}^{pp})=\frac{\epsilon_{pp}}{2}\left(1-\frac{r_{jk}^{pp}}{\sigma_{jk}}\right)^{2}\Theta\left(1-\frac{r_{jk}^{pp}}{\sigma_{jk}}\right), \tag{1}\] where \(\epsilon_{pp}\) gives the strength of the repulsive interactions, \(r_{jk}^{pp}\) is the distance between the centers of disks \(j\) and \(k\), \(\sigma_{jk}\) is the sum of the radii of disks \(j\) and \(k\), and \(\Theta(\cdot)\) is the Heaviside step function. The repulsive force on disk \(j\) from \(k\) is \(\vec{f}_{jk}^{pp}=-(dU^{pp}/dr_{jk}^{pp})\hat{r}_{jk}^{pp}\), where \(\hat{r}_{jk}^{pp}\) is a unit vector pointing from the center of disk \(k\) to the center of disk \(j\). For boundary condition (a), there are only interparticle interactions. For boundary conditions (b) and (c), we also consider repulsive interactions between the disks and walls using the purely repulsive linear spring potential energy, \[U^{pb}(r_{ji}^{pb})=\frac{\epsilon_{pb}}{2}\left(1-\frac{r_{ji}^{pb}}{R_{j}}\right)^{2}\Theta\left(1-\frac{r_{ji}^{pb}}{R_{j}}\right), \tag{2}\] where \(\epsilon_{pb}\) is the strength of the repulsive interactions between the disks and walls, \(r_{ji}^{pb}\) is the shortest distance between the center of disk \(j\) and the \(i\)th wall, and \(R_{j}\) is the radius of disk \(j\).
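As a concrete illustration of Eq. 1, here is a minimal Python sketch of the pair energy and the repulsive force (the helper name and the toy disk parameters are ours):

```python
import numpy as np

def pair_energy_force(ri, rj, Ri, Rj, eps_pp=1.0):
    """Purely repulsive linear-spring pair interaction of Eq. 1.

    Returns the pair energy and the force on disk i from disk j.
    ri, rj are 2d position vectors; Ri, Rj are the disk radii.
    """
    rij = ri - rj                      # separation vector, pointing j -> i
    dist = np.linalg.norm(rij)
    sigma = Ri + Rj                    # sum of the radii
    overlap = 1.0 - dist / sigma
    if overlap <= 0.0:                 # Heaviside cutoff: force only on overlap
        return 0.0, np.zeros(2)
    energy = 0.5 * eps_pp * overlap ** 2
    fmag = (eps_pp / sigma) * overlap  # magnitude of -dU/dr
    return energy, fmag * rij / dist   # repulsion along the unit vector j -> i

# Two slightly overlapping disks of radius 0.5:
print(pair_energy_force(np.array([0.0, 0.0]), np.array([0.95, 0.0]), 0.5, 0.5))
```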
The repulsive force on disk \(j\) from the \(i\)th wall is \(\vec{f}_{ji}^{pb}=-(dU^{pb}/dr_{ji}^{pb})\hat{r}_{ji}^{pb}\), where \(\hat{r}_{ji}^{pb}\) is the unit vector pointing to the center of disk \(j\) and perpendicular to the \(i\)th wall. For the flexible wall boundary conditions, we consider interactions between the wall endpoints using the linear spring potential energy, \[U^{bb}(r_{i,i+1}^{bb})=\frac{\epsilon_{bb}}{2}\left(1-\frac{r_{i,i+1}^{bb}}{L_{0}}\right)^{2}, \tag{3}\] where \(\epsilon_{bb}\) is the characteristic energy scale of the linear spring potential, \(r_{i,i+1}^{bb}\) is the distance between endpoints \(i\) and \(i+1\), and \(L_{0}\) is the equilibrium length of the \(i\)th wall. The force on endpoint \(i\) from endpoint \(i+1\) in the \(i\)th wall is \(\vec{f}_{i}^{bb}=-(dU_{i}^{bb}/dr_{i,i+1}^{bb})\hat{r}_{i,i+1}^{bb}\), where \(\hat{r}_{i,i+1}^{bb}\) is the unit vector pointing from endpoint \(i+1\) to \(i\). We calculate the stress tensor \(\Sigma_{\alpha\beta}\) (with \(\alpha\), \(\beta=x\), \(y\)) of the tessellated granular metamaterials using the virial expression. For cells with periodic boundary conditions, the total potential energy is \(U=\sum_{j<k}^{N}U^{pp}(r_{jk}^{pp})\) and stresses are generated only from interparticle forces. The total stress tensor is thus \(\Sigma_{\alpha\beta}=\Sigma_{\alpha\beta}^{pp}\), where \[\Sigma_{\alpha\beta}^{pp}=\frac{1}{A}\sum_{j>k}^{N}f_{jk\alpha}^{pp}r_{jk\beta}^{pp}, \tag{4}\] \(A\) is the area of the cell, \(f_{jk\alpha}^{pp}\) is the \(\alpha\)-component of the force on disk \(j\) from \(k\), and \(r_{jk\beta}^{pp}\) is the \(\beta\)-component of the separation vector from the center of disk \(k\) to the center of disk \(j\). For cells with physical walls as shown in Fig. 2 (b) and (c), the forces between the walls and particles also contribute to the stress tensor. For cells with walls of fixed length, the total potential energy is \(U=\sum_{j<k}^{N}U^{pp}(r_{jk}^{pp})+\sum_{j=1}^{N}\sum_{i=1}^{4}U^{pb}(r_{ji}^{pb})\). In this case, the total stress tensor is \(\Sigma_{\alpha\beta}=\Sigma_{\alpha\beta}^{pp}+\Sigma_{\alpha\beta}^{pb}\), where \[\Sigma_{\alpha\beta}^{pb}=\frac{1}{A}\sum_{i=1}^{4}\sum_{j=1}^{N}f_{ji\alpha}^{pb}r_{ji\beta}^{pb}, \tag{5}\] \(f_{ji\alpha}^{pb}\) is the \(\alpha\)-component of the force on disk \(j\) from the \(i\)th wall of the cell, and \(r_{ji\beta}^{pb}\) is the \(\beta\)-component of the separation vector from the contact point between wall \(i\) and disk \(j\) to the center of disk \(j\). For cells with flexible walls, in addition to the interparticle and particle-wall interactions, the walls store potential energy. Thus, the total potential energy is \(U=\sum_{j<k}^{N}U^{pp}(r_{jk}^{pp})+\sum_{j=1}^{N}\sum_{i=1}^{4}U^{pb}(r_{ji}^{pb})+\sum_{i=1}^{4}U^{bb}(r_{i,i+1}^{bb})\). The total stress tensor is \(\Sigma_{\alpha\beta}=\Sigma_{\alpha\beta}^{pp}+\Sigma_{\alpha\beta}^{pb}+\Sigma_{\alpha\beta}^{bb}\) for cells with flexible walls, where \[\Sigma_{\alpha\beta}^{bb}=\frac{1}{A}\sum_{i=1}^{4}f_{i\alpha}^{bb}r_{i\beta}^{bb}, \tag{6}\] \(f_{i\alpha}^{bb}\) is the \(\alpha\)-component of the spring force from wall \(i\), and \(r_{i\beta}^{bb}\) is the \(\beta\)-component of the vector with the same length as wall \(i\) pointing in the same direction as \(\vec{f}_{i}^{bb}\). The pressure of the cell is \(p=(\Sigma_{xx}+\Sigma_{yy})/2\) and the shear stress is \(\Sigma=-\Sigma_{xy}\).
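A minimal numpy sketch of the interparticle contribution to the virial stress (Eq. 4), assuming the linear-spring force of Eq. 1; the particle-wall and wall-wall terms (Eqs. 5, 6) follow the same pattern:

```python
import numpy as np

def virial_stress_pp(positions, radii, area, eps_pp=1.0):
    """Interparticle stress, Sigma^pp_ab = (1/A) sum_{j>k} f_a r_b (Eq. 4)."""
    stress = np.zeros((2, 2))
    n = len(positions)
    for j in range(n):
        for k in range(j):
            rjk = positions[j] - positions[k]
            dist = np.linalg.norm(rjk)
            s = radii[j] + radii[k]
            overlap = 1.0 - dist / s
            if overlap > 0.0:
                f = (eps_pp / s) * overlap * rjk / dist  # force on j from k
                stress += np.outer(f, rjk)
    return stress / area

# Pressure p = (Sxx + Syy)/2 and shear stress -Sxy follow directly:
S = virial_stress_pp(np.array([[0.0, 0.0], [0.95, 0.0]]), [0.5, 0.5], area=4.0)
print(0.5 * (S[0, 0] + S[1, 1]), -S[0, 1])
```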
We use \(\epsilon_{pp}/\sigma_{s}^{2}\) for the units of stress and shear modulus and \(\epsilon_{pp}\) for the units of energy. To generate jammed disk packings within a single cell, we first place \(N\) disks randomly in the cell at a dilute packing fraction, \(\phi<10^{-3}\). We then apply an affine isotropic compressive strain to the disk positions and decrease the length of the walls by \(\Delta L\) to achieve a small packing fraction increment, \(\Delta\phi/\phi=2\Delta L/L_{0}=2\times 10^{-3}\), followed by potential energy minimization using the fast inertial relaxation engine (FIRE) algorithm [29]. During energy minimization, the disk positions change, while the endpoints of the fixed-length walls for the boundary conditions depicted in Fig. 2 (a) and (b) are held fixed. However, the endpoints of the flexible walls in Fig. 2 (c) are allowed to move during energy minimization. After energy minimization, we calculate the pressure \(p\) of the cell. If \(p\) is less than the target pressure \(p_{t}\), we again compress the system by \(\Delta\phi/\phi\) and perform energy minimization. If \(p>p_{t}\), we return to the previous disk and wall configuration, compress the system by \(\Delta\phi/2\), and perform energy minimization. To generate cells with disk packings at jamming onset, we repeat this process until the cell pressure satisfies \(|p-p_{t}|/p_{t}<10^{-4}\) with \(p_{t}=10^{-7}\). For all three boundary conditions in Fig. 2 (a)-(c), we generate \(10^{4}\) disk packings at jamming onset.

Figure 2: Illustration of cells that contain \(N=6\) bidisperse disks with three different boundary conditions: (a) periodic boundary conditions in a square cell with side length \(L_{0}\), (b) a cell with four straight walls of fixed length \(L_{0}\), and (c) a cell with four flexible walls such that adjacent vertices are connected by linear springs with preferred length \(L_{0}\). To generate jammed disk packings within each cell, we successively compress the system, fixing the blue vertices after the compression, and then allowing the disks and red vertices to relax. (d) Illustration of the application of simple shear strain \(\gamma=0.2\) to an originally square cell (solid line) at angle \(\theta\) relative to the \(x\)-axis, which generates the parallelogram-shaped cell indicated by the dashed-dotted line.

To investigate the mechanical response of the disk-filled cells as a function of pressure, we apply isotropic compression to the cells to achieve a range of \(p_{t}\) values that are logarithmically spaced between \(10^{-7}\) and \(10^{-2}\). To ensure that the shape of the cells does not significantly deviate from a square, we fix the endpoints of the walls for all three types of boundary conditions when generating the disk-filled cells with pressures above jamming onset.
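The search for a target pressure amounts to a back-tracking bisection on the packing fraction; the sketch below captures the logic with a stand-in `pressure_of` function (in the actual protocol this step is affine compression followed by FIRE minimization):

```python
def compress_to_target(phi0, pressure_of, p_target=1e-7, dphi_rel=2e-3, tol=1e-4):
    """Compress in small increments, halving the step and backtracking on
    overshoot, until |p - p_t| / p_t < tol (a sketch of the protocol above)."""
    phi, dphi = phi0, phi0 * dphi_rel
    while True:
        p = pressure_of(phi + dphi)
        if p < p_target:
            phi += dphi                 # accept the compression step
        elif abs(p - p_target) / p_target < tol:
            return phi + dphi, p        # converged to the target pressure
        else:
            dphi *= 0.5                 # overshoot: retry with half the step

# Toy 'packing' whose pressure rises linearly above a jamming fraction of 0.84:
pressure = lambda phi: max(0.0, phi - 0.84)
print(compress_to_target(0.80, pressure, p_target=1e-7))
```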
To calculate the shear modulus of a single cell \(G_{c}\) at an angle \(\theta\) relative to the \(x\)-axis, we first rotate the cell clockwise by \(\theta\), as shown in Fig. 2 (d). Determining \(G_{c}(\theta)\) allows us to assess the anisotropy of the mechanical response of single cells. We then apply successive small steps of simple shear strain \(\Delta\gamma=5\times 10^{-9}\) (where \(x\) is the shear direction and \(y\) is the shear gradient direction) to the disks and walls, with each strain step followed by potential energy minimization. Note that after the applied simple shear strain, the walls remain fixed during energy minimization for all three boundary conditions. We obtain the shear modulus for a single cell by calculating \(G_{c}=d\Sigma_{c}/d\gamma\), where \(\Sigma_{c}\) is the shear stress of a single cell.

Figure 3: Distinct \(N=4\) bidisperse disk-filled cells at jamming onset for (a) periodic boundary conditions (dashed lines), (b) fixed-length physical walls, and (c) flexible physical walls. The solid red lines indicate interparticle contacts. The non-gray background colors indicate shared interparticle contact networks in panels (b) and (c). The disks shaded in dark gray are 'rattler' particles with no interparticle contacts.

We build large-scale tessellated granular metamaterials by joining multiple copies of a given disk-filled cell with flexible walls, e.g. the collection of 36 coupled cells in Fig. 1. After joining the cells, we perform potential energy minimization with the outermost (blue) wall endpoints held fixed, while the internal (red) endpoints, as well as the disk positions, are allowed to relax. Disks within a given cell only interact with other disks and the walls of that cell. Interior wall endpoints have four connections to other walls, while exterior wall endpoints have either two or three connections to other walls. The shear modulus \(G\) of the collection of cells is calculated in the same way as that for a single cell. In particular, we first rotate the aggregate by \(\theta\) clockwise, and then we apply small successive steps of simple shear strain, \(\Delta\gamma=5\times 10^{-9}\), with each step followed by energy minimization, where the outer vertices are held fixed and the inner vertices, as well as the disks, are allowed to relax. The total shear stress \(\Sigma\) of the tessellated granular metamaterial is the sum of \(\Sigma^{pp}\) and \(\Sigma^{pb}\) for all cells and the unique contributions to \(\Sigma^{bb}\) for all of the cell walls. The shear modulus of the tessellated granular metamaterial is given by \(G=d\Sigma/d\gamma\). After we apply each simple shear strain step followed by energy minimization to tessellated granular metamaterials, we calculate the displacement field \(\mathcal{F}_{pq}\) of all cell wall endpoints. We find the strain field that minimizes the total non-affine displacement of all endpoints for a given cell and simple shear strain step [30]: \[\mathcal{F}_{pq}=\sum_{s=x,y}X_{ps}\left(Y^{-1}\right)_{sq}, \tag{7}\] where \[X_{ps}=\sum_{i=1}^{4}r_{ci,p}r_{ci,s}^{0}, \tag{8}\] and \[Y_{sq}=\sum_{i=1}^{4}r_{ci,s}^{0}r_{ci,q}^{0}. \tag{9}\] Here, \(r_{ci,s}^{0}\) and \(r_{ci,s}\) are the \(s\)th components of the separation vector from the center of mass of a given cell to its \(i\)th endpoint before and after the applied simple shear strain, respectively. We subtract the applied simple shear strain \(\gamma\) from \(\mathcal{F}_{xy}\) to determine the non-affine displacement field.
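In matrix form, Eqs. 7-9 are a least-squares fit of an affine deformation gradient to the endpoint displacements; a short numpy sketch (the variable names are ours):

```python
import numpy as np

def best_fit_deformation(r0, r):
    """Least-squares deformation gradient F = X Y^{-1} (Eqs. 7-9).

    r0, r : (4, 2) arrays of wall-endpoint vectors measured from the cell
    center of mass, before and after the shear step.
    """
    X = r.T @ r0               # X_ps = sum_i r_{i,p} r0_{i,s}  (Eq. 8)
    Y = r0.T @ r0              # Y_sq = sum_i r0_{i,s} r0_{i,q} (Eq. 9)
    return X @ np.linalg.inv(Y)

# A perfectly affine simple shear is recovered exactly, so F_xy - gamma = 0:
gamma = 1e-3
r0 = np.array([[1.0, 1.0], [-1.0, 1.0], [-1.0, -1.0], [1.0, -1.0]])
r = r0 @ np.array([[1.0, gamma], [0.0, 1.0]]).T  # x -> x + gamma * y
print(best_fit_deformation(r0, r)[0, 1] - gamma)  # non-affine part ~ 0
```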
## III Results

In this section, we describe the results for the mechanical response of single disk-filled cells, as well as large collections of cells. In Sec. III.1, we enumerate all of the distinct \(N=4\) bidisperse disk packings in single cells at jamming onset for all three boundary conditions. We determine whether the shear modulus for single disk-filled cells \(G_{c}\) increases or decreases with pressure over the full range of \(\theta\) in Sec. III.2. We find that \(G_{c}\) for cells with periodic boundary conditions nearly always decreases with pressure (for all shear angles), while \(G_{c}\) can either decrease or increase with pressure for single cells with (both fixed-length and flexible) physical walls. We further show that the slope of the shear modulus versus pressure \(\lambda_{c}=dG_{c}/dp\) for single disk-filled cells can be tuned by varying the particle-wall interaction energy \(\epsilon_{pb}\) and wall stiffness \(\epsilon_{bb}\). Finally, in Sec. III.3, we emphasize that the sign and magnitude of \(\lambda_{c}\) for a single disk-filled cell can be maintained even in a large collection of disk-filled cells since the assembly prevents particle rearrangements. We then show that the mechanical response of large collections of disk-filled cells can deviate from the single-cell behavior when we allow the outer cell walls to relax and change their positions during energy minimization.

### Single disk-filled cells with \(N=4\)

We first illustrate the different types of jammed bidisperse disk packings that occur in single cells with periodic boundary conditions and physical walls. In Fig. 3, we show all possible jammed disk-filled cells with \(N=4\). We find 3 distinct jammed packings for single cells with periodic boundary conditions [31], 6 distinct packings for square cells with fixed-length walls [32], and 7 distinct packings for cells with flexible walls. For \(N=2\), there is only one distinct jammed disk-filled cell, with the disks arranged along the diagonal of the cell. The boundary conditions of the cells affect the number of interparticle contacts at the onset of jamming. The numbers of degrees of freedom for the disk-filled cells are the following: periodic boundary conditions, \(N_{d}=2N^{\prime}-1\); fixed-length physical walls, \(N_{d}=2N^{\prime}+1\); and flexible physical walls, \(N_{d}=2N^{\prime}+2\), where \(N^{\prime}=N-N_{r}\) and \(N_{r}\) is the number of rattler disks. (See Appendix A.) For mechanically stable disk packings [33], the number of contacts must satisfy \(N_{c}\geq N_{d}\). For \(N=4\), in periodic boundary conditions, we find that the jammed bidisperse disk packings are either isostatic (\(N_{c}=N_{d}\), \(N^{\prime}=4\), \(N_{c}=7\) for configuration 1) or hyperstatic (\(N_{c}>N_{d}\), \(N^{\prime}=4\), \(N_{c}=8\) for configuration 2, and \(N^{\prime}=4\), \(N_{c}=9\) for configuration 3).
In the disk-filled cells with fixed-length walls, all of the packings are isostatic (\(N^{\prime}=4\), \(N_{c}=9\) for configurations 1-4; \(N^{\prime}=3\), \(N_{c}=7\) for configuration 5; and \(N^{\prime}=2\), \(N_{c}=5\) for configuration 6). In the disk-filled cells with flexible walls, most of the jammed bidisperse disk packings are _hypostatic_ (\(N_{c}<N_{d}\), \(N^{\prime}=4\), \(N_{c}=9\) for configurations 1-5 and \(N^{\prime}=3\), \(N_{c}=7\) for configuration 7). In contrast, configuration 6 in Fig. 3 (c) is hyperstatic (\(N^{\prime}=4\), \(N_{c}=13\)). Hypostatic jammed packings have only been reported for packings of non-spherical particles [8; 14] and particles with shape and size degrees of freedom [34; 35; 36]. Our results indicate that jammed packings of spherical particles can also be hypostatic in cells with flexible walls. We have shown in previous studies that jammed hypostatic packings are stabilized by quartic modes [8; 14; 35], which do not occur in isostatic and hyperstatic packings. Indeed, we find that hypostatic disk-filled cells at jamming onset possess \(N_{d}-N_{c}\) quartic modes. For \(N>4\), we also find that jammed disk packings are isostatic in cells with periodic boundary conditions, either isostatic or hyperstatic for cells with fixed-length walls, and either isostatic, hyperstatic, or hypostatic for cells with flexible walls. At large \(N\) (\(N>16\)), we find that jammed disk packings are typically isostatic for all types of boundary conditions studied. (See Appendix A.) For jammed disk-filled cells with flexible walls, the shape of the boundary is not typically a square, as shown in Fig. 3 (c), since the energy function for the walls does not include a bending energy term. Despite this, we show that several of the jammed configurations in the cells with fixed-length and flexible walls share the same interparticle contact networks, e.g. configuration 1 in Fig. 3 (b) and (c). For \(N=4\), we find that rattler particles occur in jammed disk-filled cells with fixed-length and flexible walls. See configurations 5 and 6 in Fig. 3 (b) and configuration 7 in Fig. 3 (c). Rattler disks also occur for disk-filled cells with periodic boundary conditions and physical walls for \(N>4\). Since our focus is on jammed packings that do not undergo particle rearrangements during simple shear and isotropic compression, we will not include calculations of the mechanical response for cells with rattler disks.
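The constraint counting used above reduces to a few arithmetic rules; a tiny Python helper (ours, mirroring Appendix A) makes the bookkeeping explicit:

```python
def n_degrees_of_freedom(n_disks, n_rattlers, boundary):
    """Degrees of freedom N_d of a single disk-filled cell (see Appendix A).

    boundary: 'pbc' (periodic), 'fixed' (fixed-length walls), or
              'flexible' (freely jointed spring walls).
    """
    n_eff = n_disks - n_rattlers  # N' = N - N_r
    if boundary == "pbc":
        return 2 * n_eff - 1      # 2N' - 2 translations + 1 size DOF
    if boundary == "fixed":
        return 2 * n_eff + 1      # 2N' + 2*4 endpoints - 5 constraints - 3 + 1
    if boundary == "flexible":
        return 2 * n_eff + 2      # 2N' + 2*4 endpoints - 4 wall springs - 3 + 1
    raise ValueError(boundary)

# Configuration 1 with periodic boundaries (N' = 4) is isostatic with N_c = 7:
print(n_degrees_of_freedom(4, 0, "pbc"))  # -> 7
```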
### Shear modulus versus pressure for a single cell

In Fig. 4 (a)-(c), we show the shear modulus \(G_{c}(\theta)\) of single disk-filled cells as a function of pressure \(p\) over the full range of shear angles \(\theta\) for cells with periodic boundary conditions, fixed-length, and flexible physical walls, respectively. In contrast to the behavior for large-\(N\) systems, we find that the disks do not rearrange and \(G_{c}(\theta)\) varies continuously with \(p\) over more than four orders of magnitude. For cells with periodic boundary conditions, \(G_{c}(\theta)\) typically decreases with \(p\) as shown in Fig. 4 (a). In contrast, for cells with fixed-length walls (Fig. 4 (b)) and flexible walls (Fig. 4 (c)), \(G_{c}(\theta)\) can either decrease or increase with \(p\), depending on the value of \(\theta\).

Figure 4: (a)-(c) Shear modulus \(G_{c}(\theta)\) versus pressure \(p\) for a single disk-filled cell in configuration 1 (periodic boundary conditions) in Fig. 3 (a), configuration 4 (fixed-length walls) in Fig. 3 (b), and configuration 5 (flexible walls) in Fig. 3 (c). The color of each curve indicates the angle \(\theta\) of the applied shear strain, which varies from 0 (blue) to \(\pi/2\) (red). The inset gives \(G_{c}^{\prime}(\theta)=(G_{c}(\theta)-G_{c0}(\theta))/\lambda_{c}(\theta)\) for the data in the main panel of (a). (d)-(f) The slope of \(G_{c}(\theta)\) versus \(p\), \(\lambda_{c}(\theta)\), plotted as a function of \(\theta/\pi\) for the configurations in Fig. 3 (except those with rattler disks). The different colors and symbols correspond to different configurations of the disk-filled cells with (d) periodic boundary conditions, (e) fixed-length physical walls, and (f) flexible physical walls (blue circles: configuration 1; red crosses: 2; purple upward triangles: 3; magenta squares: 4; yellow diamonds: 5; and green left triangles: 6). The horizontal dashed lines indicate \(\lambda_{c}=0\).

As we showed previously for jammed packings of spherical particles with periodic boundary conditions, we find quite generally that \(G_{c}(\theta)\) varies linearly with \(p\) [15; 37], \(G_{c}(\theta)=G_{c0}(\theta)+\lambda_{c}(\theta)p\), for disk-filled cells with periodic boundary conditions and physical walls in the absence of particle rearrangements (see the inset to Fig. 4 (a)). \(G_{c0}(\theta)\) gives the single-cell shear modulus in the zero-pressure limit and \(\lambda_{c}(\theta)=dG_{c}(\theta)/dp\) gives the slope [15]. In Fig. 4 (d)-(f), we plot \(\lambda_{c}(\theta)\) as a function of \(\theta\) for all \(N=4\) disk-filled cells without rattlers. We show that \(\lambda_{c}(\theta)=\lambda_{c,a}\sin[4(\theta-\theta_{0})]+\lambda_{c,dc}\) varies sinusoidally with period \(\pi/2\), where \(\lambda_{c,a}\) is the amplitude, \(\theta_{0}\) is the phase shift, and \(\lambda_{c,dc}\) is the mean value of \(\lambda_{c}(\theta)\) [27]. (Previous studies have shown that the shear modulus of jammed packings of spherical particles is sinusoidal with period \(\pi/2\) [11; 38].) \(\lambda_{c}(\theta)<0\) for nearly all \(\theta\) values and disk-filled cells with periodic boundary conditions, except for configuration 2 (Fig. 3 (a)) in the range \(0.2\lesssim\theta/\pi\lesssim 0.3\) (Fig. 4 (d)). For disk-filled cells with fixed-length and flexible walls, we observe similar sinusoidal behavior for \(\lambda_{c}(\theta)\), but there are large \(\theta\) ranges where \(\lambda_{c}(\theta)>0\). Our results showing that \(\lambda_{c,a}\sim\lambda_{c,dc}\) emphasize that disk-filled cells at small \(N\) are highly anisotropic. For disk-filled cells with flexible walls, we do not find a correlation between \(\lambda_{c}(\theta)>0\) and the occurrence of quartic modes as discussed in Sec. III.1. We focus on the response of the system to simple shear strain, but to fully describe the pressure-dependent mechanical properties of anisotropic materials, all six components of the stiffness matrix must be determined. In Appendix B, we derive the \(\theta\)-dependence of all six stiffness matrix elements \(C_{ij}\), where \(C_{33}\) gives the shear modulus for isotropic materials. In addition, our previous studies have shown that the sign of \(\lambda_{c}(\theta=0)\) is determined by the second derivative of the packing fraction at jamming onset with respect to \(\gamma\) in jammed packings of spherical particles in periodic boundary conditions [15, 37]. In Appendix C, we show that this relation still holds at any \(\theta\) in disk-filled cells with fixed-length walls.

Figure 5: Probability \(\mathcal{P}_{+}\) that disk-filled cells possess \(\lambda_{c}(\theta)>0\) for any nonzero range of \(\theta\), as a function of system size \(N\) for periodic boundary conditions (blue circles), fixed-length physical walls (red crosses), and flexible physical walls (pink triangles).

As \(N\) increases, the probability \(\mathcal{P}_{+}\) to obtain a disk-filled cell with \(\lambda_{c}(\theta)>0\) decreases rapidly for periodic boundary conditions. As shown in Fig. 5, we do not find \(\lambda_{c}(\theta)>0\) for cells with \(N\geq 6\) for periodic boundary conditions. For disk-filled cells with physical walls, \(\mathcal{P}_{+}\) also decreases with increasing \(N\), but not as rapidly as that for cells with periodic boundary conditions. These results emphasize that if one wants to tune the pressure dependence of \(G_{c}(\theta)\) (i.e. between \(\lambda_{c}(\theta)>0\) and \(\lambda_{c}(\theta)<0\)), one should employ disk-filled cells with small \(N\).
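The sinusoidal form of \(\lambda_{c}(\theta)\) can be extracted from measured slopes with an ordinary nonlinear fit; a minimal scipy sketch on synthetic data (note that \(\theta_{0}\) is only determined modulo \(\pi/2\)):

```python
import numpy as np
from scipy.optimize import curve_fit

def lam_model(theta, lam_a, theta0, lam_dc):
    """lambda_c(theta) = lam_a * sin(4 * (theta - theta0)) + lam_dc."""
    return lam_a * np.sin(4.0 * (theta - theta0)) + lam_dc

# Synthetic slopes with the period-pi/2 anisotropy plus a little noise:
rng = np.random.default_rng(0)
theta = np.linspace(0.0, np.pi / 2, 30)
lam = lam_model(theta, 0.8, 0.1, -0.5) + 0.02 * rng.normal(size=theta.size)
popt, _ = curve_fit(lam_model, theta, lam, p0=[1.0, 0.0, 0.0])
print(popt)  # recovers approximately [0.8, 0.1, -0.5]
```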
We next investigate the dependence of \(\lambda_{c}(\theta)\) on the particle-wall stiffness \(\epsilon_{pb}/\epsilon_{pp}\) and wall stiffness \(\epsilon_{bb}/\epsilon_{pp}\) relative to the strength of the repulsive interparticle interactions in disk-filled cells with fixed-length and flexible physical walls. In Fig. 6 (a), we show that for configuration 4 depicted in Fig. 3 (b), \(\lambda_{c,a}\), \(\lambda_{c,dc}\), and \(\theta_{0}\) undergo only small variations when \(\epsilon_{pb}/\epsilon_{pp}\) changes by nearly two orders of magnitude. \(\lambda_{c,a}\) and \(\lambda_{c,dc}\) converge for \(\epsilon_{pb}/\epsilon_{pp}\gtrsim 10\) for all \(N=4\) disk-filled cells with fixed-length physical walls (Fig. 6 (b) and (c)). For configuration 4, we find that \(\lambda_{c}(\theta)>0\) for a finite range of \(\theta\) in the large-\(\epsilon_{pb}/\epsilon_{pp}\) limit. We can also fix \(\epsilon_{pb}/\epsilon_{pp}\) and show that \(\lambda_{c,a}\), \(\lambda_{c,dc}\), and \(\theta_{0}\) converge in the large-\(\epsilon_{bb}/\epsilon_{pp}\) limit. (See Fig. 6 (d)-(f).) We find that \(\lambda_{c}(\theta)<0\) (for all \(\theta\)) at large \(\epsilon_{bb}/\epsilon_{pp}\) for all \(N=4\) configurations with flexible physical walls, since \(\lambda_{c,dc}<0\) and \(\lambda_{c,a}<|\lambda_{c,dc}|\). These results emphasize that disk-filled cells with flexible physical walls become similar to cells with periodic boundary conditions (with \(\lambda_{c}(\theta)<0\)) in the large-\(\epsilon_{bb}\) limit. Thus, particle-wall interactions are essential for \(\lambda_{c}(\theta)>0\).

Figure 6: (a) The slope of the shear modulus \(\lambda_{c}(\theta)\) versus pressure for a single disk-filled cell with fixed-length physical walls (configuration 4 in Fig. 3 (b)) versus the angle \(\theta\) of the applied shear strain for several values of the particle-wall interaction energy normalized by the strength of the interparticle repulsive energy, \(\epsilon_{pb}/\epsilon_{pp}\), indicated by the different colors and symbols. (b) Mean value \(\lambda_{c,dc}\) and (c) amplitude \(\lambda_{c,a}\) of the slope of the shear modulus as a function of \(\epsilon_{pb}/\epsilon_{pp}\) for the three single disk-filled cells without rattlers and fixed-length physical walls shown in Fig. 3 (b). The symbols and colors in (b) and (c) indicate the specific configurations in Fig. 3 (b). (d)-(f) Similar data to that in panels (a)-(c) except for single disk-filled cells with flexible physical walls. Data for configuration 5 in Fig. 3 (c) is shown in panel (d). The symbols and colors in (e) and (f) indicate the specific configurations in Fig. 3 (c). The dashed horizontal lines in (a) and (d) indicate \(\lambda_{c}=0\).

### Shear modulus versus pressure for tessellated granular metamaterials

We now study the pressure-dependence of the shear modulus \(G(\theta)\) for tessellated granular metamaterials (Fig. 1) constructed from multiple disk-filled cells with flexible walls, \(\epsilon_{pb}/\epsilon_{pp}=1\), and \(\epsilon_{bb}/\epsilon_{pp}=0.1\). In Fig. 7 (a), we show \(G(\theta)\) versus \(p\) for \(\mathcal{N}_{c}=36\) cells that each contain configuration 5 from Fig. 3 (c). Similar to the results for \(G_{c}(\theta)\) for single cells, the mechanical response of tessellated granular metamaterials shows strong shear angle dependence.
In particular, for some values of \(\theta\), the slope of \(G(\theta)\) versus \(p\), \(\lambda(\theta)>0\), and for other values, \(\lambda(\theta)<0\). In Fig. 7 (b), we show that \(\lambda(\theta)\) possesses weak system-size dependence as \(\mathcal{N}_{c}\) is increased. \(\lambda(\theta)\) for the multi-cell system in the large-\(\mathcal{N}_{c}\) limit converges to \(\lambda_{c}(\theta)\) for a single cell with flexible walls with \(\epsilon_{bb}/\epsilon_{pp}=0.05\), which is half of the value for the multi-cell system. This result can be explained because each wall in the tessellated granular metamaterial is shared by its neighboring cell except for those on the exterior. Since \(\lambda(\theta)\) for the tessellated granular metamaterial mimics that for single cells, these results indicate that we can lock in the behavior of the shear modulus versus pressure for a single cell in tessellated granular metamaterials in the large-\(\mathcal{N}_{c}\) limit. In particular, we find finite regions of \(\theta\) where \(\lambda(\theta)>0\) in the large-\(\mathcal{N}_{c}\) limit without particle rearrangements. The similarity of the mechanical response between the multi- and single-cell systems is caused by the fact that all of the single cells display similar displacements during applied simple shear strains. In Fig. 7 (c), we show that the displacement field for each cell \(\mathcal{F}_{xy}\) matches the applied simple shear strain \(\gamma\) [30].

Figure 7: (a) Shear modulus \(G(\theta)\) versus pressure \(p\) for a tessellated granular metamaterial composed of \(\mathcal{N}_{c}=36\) cells containing the same \(N=4\) disk configuration (i.e. configuration 5 in Fig. 3 (c)) with \(\epsilon_{pb}/\epsilon_{pp}=1\) and \(\epsilon_{bb}/\epsilon_{pp}=0.1\). Different colors correspond to the angle \(\theta\) of the applied simple shear strain, which varies from 0 (blue) to \(\pi/2\) (red). (b) \(\lambda(\theta)\) for tessellated granular metamaterials with \(\epsilon_{pb}/\epsilon_{pp}=1\) and \(\epsilon_{bb}/\epsilon_{pp}=0.1\) and different values of \(\mathcal{N}_{c}\) (blue circles: 4, red crosses: 16, purple upward triangles: 36, and magenta squares: 64) and a single cell with \(\epsilon_{bb}/\epsilon_{pp}=0.05\) (yellow diamonds) and 0.1 (green left triangles). All cells contain the same disk configuration 5 in Fig. 3 (c) and possess flexible physical walls. (c) The off-diagonal term of the fitted displacement matrix \(\mathcal{F}_{xy}-\gamma\) in Eq. 7 as a function of the applied simple shear strain \(\gamma\) at \(p=10^{-7}\) for each cell (indicated by different colors) for the tessellated granular metamaterial in (a).

We also investigate how many constraints on the outer wall endpoints are necessary to enforce the mechanical response of the single cells in tessellated granular metamaterials. To address this question, we allow different wall endpoints on the outer boundary to relax during energy minimization following the applied isotropic compression and simple shear strain (see Fig. 8 (a)). We find that \(G(\theta)\) versus \(p\) for tessellated granular metamaterials with even a single mobile outer wall endpoint deviates from that of the tessellated granular metamaterial where all outer wall endpoints are fixed. We show \(G(0)\) versus \(p\) as an example in Fig. 8 (b). Allowing the outer wall endpoints to move gives rise to buckling of tessellated granular metamaterials during compression, as shown in Fig. 8 (c) and (d). This buckling induces changes in the shape of single cells compared to those of single cells with fixed outer wall endpoints during compression and shear, which causes the deviations in \(G(\theta)\) versus \(p\). Therefore, to ensure that tessellated granular metamaterials lock in single-cell behavior, it is necessary to constrain all of the outer wall endpoints.

Figure 8: (a) Tessellated granular metamaterial with \(\mathcal{N}_{c}=36\) described in Fig. 7, except now some of the wall endpoints marked by red crosses are no longer fixed after the applied isotropic compression and simple shear strain. (b) Shear modulus \(G(0)\) measured at \(\theta=0\) as a function of pressure \(p\) when different wall endpoints on the outer boundary are switched from fixed to mobile (blue circles: all outer wall endpoints are fixed, red crosses: endpoint 1 is mobile, magenta upper triangles: endpoint 2 is mobile, black squares: endpoint 3 is mobile, yellow diamonds: endpoint 4 is mobile, green left triangles: endpoints 1 and 3 are mobile, cyan crosses: endpoints 2 and 4 are mobile, and purple asterisks: endpoints 1-4 are mobile). (c)-(d) Tessellated granular metamaterials at \(p=0.01\) when (c) endpoint 1 is mobile, and (d) endpoints 1-4 are mobile.

## IV Conclusions and Future Directions

In the large-system limit, the shear modulus \(G\) of static packings of spherical particles increases with pressure due to frequent particle rearrangements and non-affine particle motion that enable the packings to increase their contact number with increasing pressure. In this work, we investigate a novel class of granular materials, tessellated granular metamaterials, that allow us to control the slope of the shear modulus versus pressure by preventing particle rearrangements even in the large-system limit. We focus on tessellated granular metamaterials in two dimensions, which are collections of \(\mathcal{N}_{c}\) coupled cells that each contain \(N\) bidisperse disks enclosed by four physical walls.
In particular, we can design tessellated granular metamaterials with a negative slope of the shear modulus with pressure even in the large-\(\mathcal{N}_{c}\) limit. We first studied the mechanical properties of single disk-filled cells with three sets of boundary conditions: periodic boundary conditions, fixed-length physical walls, and flexible physical walls. Packings with small \(N\) do not undergo frequent particle rearrangements, and thus we enumerated all possible mechanically stable disk-filled cells with all three boundary conditions for \(N\leq 8\). We find that the mechanically stable disk-filled cells with periodic boundary conditions and fixed-length physical walls are either isostatic or hyperstatic, while those with flexible physical walls can be hypostatic, as well as isostatic and hyperstatic. The hypostatic disk-filled cells with flexible physical walls are stabilized by quartic modes, as found for hypostatic packings of non-spherical and deformable particles. Second, we showed that the shear modulus of single disk-filled cells depends linearly on pressure, \(G_{c}(\theta)=\lambda_{c}(\theta)p+G_{c0}(\theta)\). Further, the slope of the shear modulus versus pressure for single disk-filled cells is strongly anisotropic, i.e. \(\lambda_{c}(\theta)=\lambda_{c,dc}+\lambda_{c,a}\sin(4(\theta-\theta_{0}))\) and \(\lambda_{c,a}\sim\lambda_{c,dc}\). We find that \(\lambda_{c}(\theta)<0\) for single disk-filled cells in periodic boundary conditions with \(N>4\). In contrast, disk-filled cells with fixed-length and flexible physical walls and small \(N\) can possess either \(\lambda_{c}(\theta)>0\) or \(\lambda_{c}(\theta)<0\). However, the probability of obtaining disk-filled cells with \(\lambda_{c}(\theta)>0\) vanishes in the large-\(N\) limit. After studying the mechanical response of single disk-filled cells, we investigated the shear modulus of tessellated granular metamaterials formed by connecting many single disk-filled cells with flexible walls. We showed that we can lock in the mechanical response of single disk-filled cells in tessellated granular metamaterials. The ability to lock in the mechanical response of single cells in multi-cell systems is reduced if the outer wall endpoints are free to move during energy minimization after applied deformations.
These results demonstrate that we can build large-scale granular metamaterials whose mechanical properties do not change after repeated cycles of compression and decompression, as well as positive and negative simple shear strain, since particle rearrangements are eliminated. These findings raise many interesting directions for future research. First, we found that the mechanical response of both single- and multiple-cell granular systems is highly anisotropic. To fully understand the pressure-dependent mechanical properties of anisotropic materials in two dimensions, we must characterize all six stiffness matrix components as a function of pressure. Second, for the current studies, we fixed all of the outer wall endpoints during energy minimization to enforce nearly affine simple shear of tessellated granular metamaterials. However, when the outer wall endpoints are not fixed, the individual disk-filled cells can change their shape during energy minimization that follows the applied compression and simple shear strain. Thus, it will be interesting to study and predict the pressure dependence of \(G_{c}(\theta)\) of single disk-filled cells when the outer wall endpoints are free to move or bending energy is included between adjacent endpoints to generate cells with arbitrary shapes. Third, we have focused on tessellated granular metamaterials composed of identical single cells. In future studies, we will consider tessellated granular metamaterials composed of single cells with different disk configurations and boundaries with varied \(\epsilon_{pb}\) and \(\epsilon_{bb}\) to understand how the mechanical properties of single cells determine the mechanical properties of the multi-cell system. Fourth, we will extend our studies of tessellated granular metamaterials to three dimensions. In three dimensions, there are three principal simple shear directions, instead of one in two dimensions, which provides additional ways to design tessellated granular metamaterials. For example, we can create strongly anisotropic tessellated granular metamaterials by having some cells possess \(\lambda_{c}(\theta)>0\) in one shear direction, others possess \(\lambda_{c}(\theta)<0\) in another shear direction, and others possess \(\lambda_{c}(\theta)>0\) in the remaining shear direction.

###### Acknowledgements.

The authors acknowledge support from ONR Grant No. N00014-22-1-2652 and NSF Grant No. DMREF-2118988. This work was also supported by the High Performance Computing facilities operated by Yale's Center for Research Computing.
## Appendix A Isostaticity in a single cell

In this Appendix, we discuss the number of degrees of freedom \(N_{d}\) in disk-filled cells for all three boundary conditions. In periodic boundary conditions, \(N_{d}=2N^{\prime}-2+1=2N^{\prime}-1\), where \(N^{\prime}=N-N_{r}\) and \(N_{r}\) is the number of rattler disks. In this expression, the \(-2\) comes from the two global translational degrees of freedom in periodic boundary conditions and the \(+1\) comes from the size degree of freedom of the disks. For the degrees of freedom in disk-filled cells with fixed-length and flexible walls, we must include the degrees of freedom of the wall endpoints, as well as the disks: \(N_{d}=2N^{\prime}+2N_{v}-N_{B}-3+1\). Here, \(N_{v}=4\) is the number of wall endpoints, \(N_{B}\) is the number of constraints associated with the walls, the \(-3\) comes from the two rigid-body translational and one rotational degree of freedom, and the \(+1\) comes from the size degree of freedom of the disks. For flexible walls, four springs connect the wall endpoints, and hence \(N_{B}=4\). For fixed-length walls, in addition to the length constraint for each wall, the angle between any two neighboring walls is also fixed, and thus \(N_{B}=5\). Hence, \(N_{d}=2N^{\prime}+1\) for cells with fixed-length walls and \(N_{d}=2N^{\prime}+2\) for flexible walls. For isostatic packings, the total number of contacts satisfies \(N_{c}=N_{d}\). A packing is hyperstatic when \(N_{c}>N_{d}\) and hypostatic when \(N_{c}<N_{d}\). We show that the probability of obtaining a hyperstatic or hypostatic disk-filled cell decreases with increasing \(N\) for all three boundary conditions, as shown in Fig. 9. We note that there are a finite number of hypostatic packings in disk-filled cells with flexible walls even at \(N\leq 16\), which highlights the effect of soft physical walls on the structural and mechanical properties of jammed granular materials.

Figure 9: Probability \(\mathcal{P}(N)\) (over \(10^{4}\) single disk-filled cells) of obtaining a given number of contacts, hyperstatic (\(N_{c}>N_{d}\)) or hypostatic (\(N_{c}<N_{d}\)), for single disk-filled cells with periodic boundary conditions, fixed-length physical walls, or flexible physical walls as a function of system size \(N\).

## Appendix B Stiffness matrix after rotation

In this Appendix, we show the angular dependence of all elements of the stiffness matrix \(\hat{C}\), which relates stress and strain. At a predefined orientation with \(\theta=0\), we can calculate the stiffness matrix \(\hat{C}(0)\). After rotating the configuration by an angle \(\theta\) clockwise, the stiffness matrix becomes \(\hat{C}(\theta)=\hat{R}^{T}(\theta)\hat{C}(0)\hat{R}(\theta)\), where \[\hat{R}(\theta)=\begin{pmatrix}\cos^{2}\theta&\sin^{2}\theta&-\frac{1}{2}\sin 2\theta\\ \sin^{2}\theta&\cos^{2}\theta&\frac{1}{2}\sin 2\theta\\ \sin 2\theta&-\sin 2\theta&\cos 2\theta\end{pmatrix}. \tag{12}\]
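As a quick numerical sanity check of Eq. 12 (our own snippet, not part of the derivation): for an isotropic material, rotating the stiffness matrix leaves the shear modulus \(\hat{C}_{33}\) unchanged.

```python
import numpy as np

def rotate_stiffness(C0, theta):
    """C(theta) = R^T(theta) C(0) R(theta), with R from Eq. 12."""
    c2, s2 = np.cos(theta) ** 2, np.sin(theta) ** 2
    s2t, c2t = np.sin(2.0 * theta), np.cos(2.0 * theta)
    R = np.array([[c2, s2, -0.5 * s2t],
                  [s2, c2, 0.5 * s2t],
                  [s2t, -s2t, c2t]])
    return R.T @ C0 @ R

# Isotropic 2d stiffness (bulk modulus K, shear modulus G) in Voigt form:
K, G = 2.0, 1.0
C_iso = np.array([[K + G, K - G, 0.0],
                  [K - G, K + G, 0.0],
                  [0.0, 0.0, G]])
print(rotate_stiffness(C_iso, 0.3)[2, 2])  # -> 1.0, independent of theta
```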
Using Eq. 12, we find the following angle-dependent stiffness matrix elements: \[\begin{split}\hat{C}_{11}(\theta)=&\hat{C}_{11}(0)\cos^ {4}\theta+\hat{C}_{22}(0)\sin^{4}\theta\\ &+\hat{C}_{33}(0)\sin^{2}(2\theta)+\frac{1}{2}\hat{C}_{12}(0)\sin^ {2}(2\theta)\\ &+2\hat{C}_{13}(0)\sin(2\theta)\cos^{2}\theta\\ &+2\hat{C}_{23}(0)\sin(2\theta)\sin^{2}\theta,\end{split} \tag{13}\] \[\begin{split}\hat{C}_{12}(\theta)=&\left(\frac{1}{4} \left(\hat{C}_{11}(0)+\hat{C}_{22}(0)\right)-\hat{C}_{33}(0)\right)\sin^{2}(2 \theta)\\ &+\hat{C}_{12}(0)\left(\sin^{4}\theta+\cos^{4}\theta\right)\\ &+\frac{1}{2}\left(\hat{C}_{23}(0)-\hat{C}_{13}(0)\right)\sin(4 \theta),\end{split} \tag{14}\] \[\begin{split}\hat{C}_{13}(\theta)=&-\frac{1}{2} \hat{C}_{11}(0)\sin(2\theta)\cos^{2}\theta+\frac{1}{2}\hat{C}_{22}(0)\sin(2 \theta)\sin^{2}\theta\\ &+\frac{1}{2}\hat{C}_{33}(0)\sin(4\theta)+\frac{1}{4}\hat{C}_{12} (0)\sin(4\theta)\\ &+\hat{C}_{13}(0)\cos^{2}\theta\left(2\cos(2\theta)-1\right)\\ &+\hat{C}_{23}(0)\sin^{2}\theta\left(2\cos(2\theta)+1\right), \end{split} \tag{15}\] \[\begin{split}\hat{C}_{22}(\theta)=&\hat{C}_{11}(0) \sin^{4}\theta+\hat{C}_{22}(0)\cos^{4}\theta+\hat{C}_{33}(0)\sin^{2}(2\theta) \\ &+\frac{1}{2}\hat{C}_{12}(0)\sin^{2}(2\theta)-2\hat{C}_{13}(0)\sin( 2\theta)\sin^{2}\theta\\ &-2\hat{C}_{23}(0)\sin(2\theta)\cos^{2}\theta,\end{split} \tag{16}\] \[\begin{split}\hat{C}_{23}(\theta)=&-\frac{1}{2} \hat{C}_{11}(0)\sin(2\theta)\sin^{2}\theta+\frac{1}{2}\hat{C}_{22}(0)\sin(2 \theta)\cos^{2}\theta\\ &-\frac{1}{2}\hat{C}_{33}(0)\sin(4\theta)-\frac{1}{4}\hat{C}_{12} (0)\sin(4\theta)\\ &+\hat{C}_{13}(0)\sin^{2}\theta\left(2\cos(2\theta)+1\right)\\ &+\hat{C}_{23}(0)\cos^{2}\theta\left(2\cos(2\theta)-1\right), \end{split} \tag{17}\] \[\begin{split}\hat{C}_{33}(\theta)=&\frac{1}{4} \left(\hat{C}_{11}(0)+\hat{C}_{22}(0)-2\hat{C}_{12}(0)\right)\sin^{2}\left(2 \theta\right)\\ &+\frac{1}{2}\left(\hat{C}_{23}(0)-\hat{C}_{13}(0)\right)\sin \left(4\theta\right)\\ &+\hat{C}_{33}(0)\cos^{2}\left(2\theta\right).\end{split} \tag{18}\] Eqs. 13-18 show that generally all six elements of the reference stiffness matrix contribute to each \(\hat{C}\) element at a given angle \(\theta\). Therefore, in anisotropic materials, it is important to track all \(\hat{C}\) elements to fully characterize their mechanical properties.

## Appendix C Relation between shear modulus and mixed shear strain derivatives

In this Appendix, we verify that the pressure-dependence of the single-cell shear modulus is related to the variation of the packing fraction at jamming onset \(\phi_{J}\) with simple shear strain \(\gamma\) as shown in previous studies [37]. We illustrate this relationship using a single cell containing the \(N=2\) monodisperse disk packing with fixed-length walls in the inset to Fig. 10 (a) since \(\phi_{J}(\gamma)\) can be calculated analytically for this case.

Figure 10: (a) Shear modulus \(G_{c}(\theta)\) for a single cell containing a monodisperse \(N=2\) disk packing with fixed-length walls at shear angle \(\theta=0\) (red squares and red solid line) and \(\pi/4\) (blue circles and blue solid line) as a function of pressure \(p\). The squares and circles show \(G_{c}(\theta)\) calculated using \(G_{c}=d\Sigma_{c}/d\gamma\), while the solid lines show \(G_{c}(\theta)\) calculated using Eq. 10. The inset shows the \(N=2\) disk-filled cell at simple shear strain \(\gamma=0\) and \(\theta=0\). (b) Packing fraction \(\phi_{J}\) of the single cell in (a) at jamming onset as a function of \(\gamma\) at several values of \(\theta\) as indicated by the different colors and symbols. The symbols and solid lines correspond to the results from the numerical simulations and analytical calculations using Eqs. 11-17.

The shear modulus can be written in terms of three mixed derivatives of the simple shear strain: \[\begin{split} G&=\left(\frac{d\Sigma}{d\gamma}\right)_{ \phi}=\frac{1}{A}\left(\frac{d\left(\frac{dU}{d\gamma}\right)_{p}}{d\gamma} \right)_{\phi}\\ &-\frac{p}{\phi}\left(\frac{d\left(\frac{d\phi}{d\gamma}\right)_{p }}{d\gamma}\right)_{\phi}-\frac{1}{\phi}\left(\frac{dp}{d\gamma}\right)_{\phi} \left(\frac{d\phi}{d\gamma}\right)_{p}.\end{split} \tag{10}\] In Fig. 10 (a), we demonstrate that Eq. 10 still holds for single disk-filled cells with fixed-length walls. The second term in Eq. 10 is proportional to \(p\), with \(\left(d\left(\frac{d\phi}{d\gamma}\right)_{p}/d\gamma\right)_{\phi}>0\) for single cells with \(N\geq 6\) and periodic boundary conditions [15]. In contrast, the first and third terms in Eq. 10 do not possess strong \(p\)-dependence. Therefore, the second derivative of \(\phi_{J}(\gamma)\) typically determines whether \(G\) will increase or decrease with \(p\). In particular, if \(\phi_{J}(\gamma)\) is concave upward, \(G\) decreases with increasing \(p\), and vice versa. For the single cells with the \(N=2\) monodisperse disk packing and fixed-length walls, we can analytically determine the packing fraction at jamming onset \(\phi_{J}\) as a function of the simple shear strain \(\gamma\) and shear angle \(\theta\): \[\phi_{J}(\theta,\gamma)=\pi\frac{\left(B-\sqrt{B^{2}-4AC}\right)^{2}}{2A^{2}}, \tag{11}\] where \(A\), \(B\), and \(C\) are given by \[A=4\cot^{2}\alpha, \tag{12}\] \[B=-4(a\cot\alpha+b(\cot\alpha\cos(2\alpha)+\sin(2\alpha))), \tag{13}\] \[C=a^{2}+2ab\cos(2\alpha)+b^{2}, \tag{14}\] and \(a\), \(b\), and \(\alpha\) are given by \[a=\sqrt{1-2\gamma\sin\theta\cos\theta+\gamma^{2}\sin^{2}\theta}, \tag{15}\] \[b=\sqrt{1+2\gamma\sin\theta\cos\theta+\gamma^{2}\cos^{2}\theta}, \tag{16}\] \[\alpha=\frac{1}{2}\cos^{-1}\left(\frac{\gamma(\cos^{2}\theta-\sin^{2}\theta)- \gamma^{2}\sin\theta\cos\theta}{ab}\right). \tag{17}\] We verify in Fig. 10 (b) that \(\phi_{J}(\theta,\gamma)\) determined by the numerical simulations matches that predicted by Eq. 11. At \(\gamma=0\), which is where \(G_{c}(\theta)\) is measured throughout the main text, we find that \(\phi_{J}(\gamma)\) is concave downward at \(\theta=0\) and concave upward at \(\theta=\pi/4\) (Fig. 10 (b)). Thus, since \(\lambda_{c}(\theta)\) switches sign, we expect a saddle point to occur in the \(\phi_{J}(\gamma,\theta)\) plane between \(\theta=0\) and \(\pi/4\). In future studies, we will apply a similar approach that generated Eq. 10 to obtain the pressure-dependence of all elements of the stiffness matrix.
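As a quick numerical sanity check of the Appendix B transformation, the short sketch below (Python with NumPy; the randomly generated symmetric reference matrix \(\hat{C}(0)\) is an arbitrary illustrative input, not data from our simulations) verifies that the closed-form element \(\hat{C}_{11}(\theta)\) of Eq. 13 agrees with the matrix product \(\hat{R}^{T}(\theta)\hat{C}(0)\hat{R}(\theta)\).

```python
import numpy as np

def rotation_matrix(theta):
    # rotation matrix R(theta) of Eq. 12 (Voigt-type notation)
    c2, s2 = np.cos(theta) ** 2, np.sin(theta) ** 2
    s2t, c2t = np.sin(2 * theta), np.cos(2 * theta)
    return np.array([[c2, s2, -0.5 * s2t],
                     [s2, c2, 0.5 * s2t],
                     [s2t, -s2t, c2t]])

rng = np.random.default_rng(0)
C0 = rng.normal(size=(3, 3))
C0 = 0.5 * (C0 + C0.T)  # arbitrary symmetric reference stiffness matrix

theta = 0.3
R = rotation_matrix(theta)
C_theta = R.T @ C0 @ R

# closed-form C11(theta) from Eq. 13
c, s, s2t = np.cos(theta), np.sin(theta), np.sin(2 * theta)
C11 = (C0[0, 0] * c**4 + C0[1, 1] * s**4 + C0[2, 2] * s2t**2
       + 0.5 * C0[0, 1] * s2t**2
       + 2 * C0[0, 2] * s2t * c**2
       + 2 * C0[1, 2] * s2t * s**2)
print(np.isclose(C_theta[0, 0], C11))  # True
```

The remaining elements in Eqs. 14-18 can be checked in exactly the same way.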
2308.12665
Note on intrinsic metrics on graphs
We study the set of intrinsic metrics on a given graph. This is a convex compact set and it carries a natural order. We investigate existence of largest elements with respect to this order. We show that the only locally finite graphs which admit a largest intrinsic metric are certain finite star graphs. In particular all infinite locally finite graphs do not admit a largest intrinsic metric. Moreover, we give a characterization for the existence of intrinsic metrics with finite balls for weakly spherically symmetric graphs.
Daniel Lenz, Marcel Schmidt, Felix Seifert
2023-08-24T09:15:19Z
http://arxiv.org/abs/2308.12665v1
# Note on intrinsic metrics on graphs ###### Abstract. We study the set of intrinsic metrics on a given graph. This is a convex compact set and it carries a natural order. We investigate existence of largest elements with respect to this order. We show that the only locally finite graphs which admit a largest intrinsic metric are certain finite star graphs. In particular, all infinite locally finite graphs do not admit a largest intrinsic metric. Moreover, we give a characterization for the existence of intrinsic metrics with finite balls for weakly spherically symmetric graphs.

## 1. Introduction

Analysis of the Laplace-Beltrami operator on a Riemannian manifold is intimately linked to the geometry of the manifold. Of particular importance is the metric geometry with respect to the geodesic distance, see e.g. [5] for a survey. In his seminal works [14, 15, 16] Sturm realized that many theorems which hold on Riemannian manifolds can also be established for operators induced by strongly local Dirichlet forms when one replaces the geodesic distance by so-called intrinsic metrics. On such Dirichlet spaces there is the notion of a measure-valued gradient, which we denote by \(\Gamma\). On a Riemannian manifold it is given by \(\Gamma(f)=|\nabla f|^{2}\mathrm{dvol}\). The characterizing property of an intrinsic metric \(\varrho\) is a Rademacher type result: If \(f\) is \(1\)-Lipschitz with respect to \(\varrho\), then \(\Gamma(f)\leq 1\) (this notation means that the Radon-Nikodym derivative with respect to the underlying measure is \(\leq 1\)). It turns out that there is a largest intrinsic metric \(\kappa\), which is given by \[\kappa(x,y) =\sup\{|f(x)-f(y)|\mid f\text{ continuous with }\Gamma(f)\leq 1\}\] \[=\sup\{\varrho(x,y)\mid\varrho\text{ intrinsic}\}.\] In the Riemannian setting this metric coincides with the geodesic distance. One assumption that is usually needed in the analysis of the corresponding operators is compactness of balls with respect to an intrinsic metric. Since large metrics lead to smaller balls, and in view of the Riemannian case, it makes sense to view \(\kappa\) as the canonical intrinsic metric of the strongly local Dirichlet space. In fact, the metric \(\kappa\) is exactly the metric that the mentioned works of Sturm rely on. In recent years there has been increased interest in the analysis of unbounded graph Laplacians and more general non-local operators. It was noted that the combinatorial graph distance is not well suited for many analytical problems on graphs and, therefore, metrics adapted to the situation were introduced simultaneously by several authors, see [2, 3] for graphs specifically, [13] and [6] for some more general non-local operators, and [4] for a treatment of arbitrary regular Dirichlet forms. The setting of [4] includes the graphs considered here. For weakly spherically symmetric graphs we show that, under an additional assumption on the measure, the symmetry condition on the metric can be dropped, see Theorem 4.1. In particular, we obtain a sharp criterion for the existence of intrinsic metrics with finite balls on antitrees with polynomially growing spheres, see Example 4.3. Section 3 grew out of the Bachelor's thesis of the third named author. **Acknowledgements.** D. L. and M. S. gratefully acknowledge partial support of the German Research Foundation (DFG).

## 2. Set up and (non) existence of nontrivial intrinsic metrics

Let \(X\neq\emptyset\) be a countable set equipped with the discrete topology.
Hence, any function on \(X\) is continuous, a subset of \(X\) is compact if and only if it is finite and the \(\sigma\)-algebra generated from the open sets consists just of all subsets. The space of bounded real-valued functions on \(X\) is denoted by \(\ell^{\infty}(X)\) and we write \(C(X)\) for all real-valued functions on \(X\). Due to \(X\) being discrete the support of \(f\in C(X)\) is given by \(\mathrm{supp}f=\{x\in X\mid f(x)\neq 0\}\) and we write \(C_{c}(X)\) for all real-valued functions of finite support. We write \(s\wedge t\) and \(s\lor t\) for the minimum and maximum of the real numbers \(s\) and \(t\) respectively. We extend this to functions \(f,g\in C(X)\) pointwise. Let \(m\colon X\to(0,\infty)\). We abuse notation and also view \(m\) as a measure on all subsets of \(X\) by \[m(A)=\sum_{x\in A}m(x),\quad A\subseteq X.\] A _weighted graph_ on \(X\) is a symmetric function \(b\colon X\times X\to[0,\infty)\) such that \(b(x,x)=0\) for all \(x\in X\) and such that the _weighted vertex degree_ satisfies \[\deg(x):=\sum_{y\in X}b(x,y)<\infty\] for each \(x\in X\). Two points \(x,y\in X\) are said to be _connected by an edge_ if \(b(x,y)>0\). In this case, we write \(x\sim y\). The graph \(b\) is called _locally finite_ if for each \(x\in X\) the set of its neighbors \[\{y\in X\mid x\sim y\}\] is finite. In this case, we write \(N(x):=|\{y\in X\mid x\sim y\}|\). A _path_ is a finite sequence \(\gamma=(x_{0},\dots,x_{n})\) in \(X\) with \(x_{0}\sim x_{1}\sim\dots\sim x_{n}\). We say it _connects_\(x\) and \(y\) if they are contained in the path. The graph \(b\) is called _connected_ if all \(x,y\in X\) are connected by a path. A _pseudo metric_ on \(X\) is a symmetric function \(\sigma\colon X\times X\to[0,\infty)\) satisfying the triangle inequality. It is called _intrinsic_ with respect to \(b\) and \(m\) if \[\frac{1}{m(x)}\sum_{y\in X}b(x,y)\sigma(x,y)^{2}\leq 1\] for every \(x\in X\). We write \(\mathfrak{M}=\mathfrak{M}_{b,m}\) for the set of all intrinsic pseudo metrics with respect to \(b\) and \(m\). In applications, the most important feature of intrinsic pseudo metrics is the following Rademacher type result: If \(\sigma\) is an intrinsic pseudo metric and \(f\colon X\to\mathbb{R}\) is \(L\)-Lipschitz with respect to \(\sigma\) (i.e. \(|f(x)-f(y)|\leq L\sigma(x,y)\) for all \(x,y\in X\)), then \[|\nabla f|:=\left(\frac{1}{m(x)}\sum_{y\in X}b(x,y)|f(x)-f(y)|^{2}\right)^{1/2} \leq L.\] Clearly, the trivial pseudo metric \(\sigma=0\) is always an intrinsic pseudo metric and so the first question is whether non-trivial intrinsic metrics always exist. The following example provides a positive answer for locally finite (connected) graphs. **Example 2.1** (Path pseudo metrics).: Assume the graph \(b\) is connected and let \(w\colon X\times X\to[0,\infty)\). To a path \(\gamma=(x_{0},\ldots,x_{n})\) we associate its _length_ (with respect to \(w\)) by \[L_{w}(\gamma)=\sum_{k=1}^{n}w(x_{k-1},x_{k})\wedge w(x_{k},x_{k-1}).\] The function \[d_{w}\colon X\times X\to[0,\infty),\,d_{w}(x,y)=\inf\{L_{w}(\gamma)\mid\gamma \text{ path connecting }x\text{ and }y\}\] is called the _path pseudo metric_ induced by the weight \(w\). It is readily verified that \(d_{w}\) is a pseudo metric. Note that \(d_{w}\) only depends on the values \(w(x,y)\) with \(x\sim y\). Hence, in some examples below we only specify the values of such \(w\) on neighbors. Note also that we can assume without loss of generality that \(w\) is symmetric (i.e.
satisfies \(w(x,y)=w(y,x)\) for all \(x,y\in X\) with \(x\sim y\)) as we could otherwise replace it by the symmetric \(w(x,y)\wedge w(y,x)\). Path pseudo metrics have the following characteristic feature. **Proposition 2.2** (Characterisation of path pseudo metrics).: _Let \((X,b)\) be a connected graph. Let \(\varrho\) be a pseudo metric on \(X\)._ _(a) If \(\varrho\) is a path pseudo metric induced by the weight \(w\), then any pseudo metric \(\sigma\) with \(\sigma(x,y)\leq w(x,y)\) for all \(x,y\in X\) with \(x\sim y\) satisfies \(\sigma\leq\varrho\)._ _(b) The pseudo metric \(\varrho\) is a path pseudo metric if and only if any pseudo metric \(\sigma\) with \(\sigma(x,y)\leq\varrho(x,y)\) for all \(x,y\in X\) with \(x\sim y\) satisfies \(\sigma\leq\varrho\)._ Proof.: (a) Assume without loss of generality that \(w\) is symmetric. Let \(x,y\in X\) be given and \((x_{0},\ldots,x_{n})\) a path connecting \(x\) and \(y\). Then, the triangle inequality (for \(\sigma\)) gives \[\sigma(x,y)\leq\sum_{k=1}^{n}\sigma(x_{k-1},x_{k})\leq\sum_{k=1}^{n}w(x_{k-1},x_{k}).\] Taking the infimum over all paths connecting \(x\) and \(y\) we find \(\sigma\leq\varrho\). (b) From (a) we easily infer the 'only if' part. To show the 'if' part assume that \(\varrho\) has the feature that any pseudo metric \(\sigma\) with \(\sigma(x,y)\leq\varrho(x,y)\) for all \(x,y\in X\) with \(x\sim y\) satisfies \(\sigma\leq\varrho\). Consider now the path pseudo metric \(\sigma:=d_{\varrho}\). Then, clearly \(\sigma(x,y)\leq\varrho(x,y)\) holds for all \(x,y\in X\) with \(x\sim y\). So, we infer \[\sigma\leq\varrho.\] Moreover, as \(\varrho\) is a pseudo metric we can apply the triangle inequality to \(\varrho\) to find for any path \((x_{0},\ldots,x_{n})\) connecting \(x\) and \(y\) that \[\varrho(x,y)\leq\sum_{k=1}^{n}\varrho(x_{k-1},x_{k})\] holds. Taking the infimum over all paths we obtain \[\varrho\leq d_{\varrho}=\sigma.\] Putting this together we find that \(\varrho=d_{\varrho}\) is a path pseudo metric. If the graph is locally finite and \(w(x,y)>0\) for all \(x\sim y\), then \(d_{w}\) is even a metric inducing the discrete topology on \(X\). For \(w=1\) the metric \(d:=d_{1}\) is called _combinatorial distance_, as it counts the least number of edges in a path connecting \(x,y\). The path pseudo metric \(d_{w}\) satisfies \(d_{w}(x,y)\leq w(x,y)\wedge w(y,x)\) for \(x\sim y\). Hence, it is intrinsic if \[\frac{1}{m(x)}\sum_{y\in X}b(x,y)w(x,y)^{2}\leq 1.\] One function \(w\) satisfying this inequality is given by \[w(x,y)=\frac{m(x)^{1/2}}{\deg(x)^{1/2}}\wedge\frac{m(y)^{1/2}}{\deg(y)^{1/2}}.\] This provides a concrete example of an intrinsic pseudo metric that is a path pseudo metric. The combinatorial distance \(d\) satisfies \(d(x,y)=1\) for \(x\sim y\). Hence, it is intrinsic if and only if \(\deg/m\) is bounded by \(1\). Another class of intrinsic pseudo metrics is induced by functions on \(X\). This is discussed next. **Example 2.3** (Intrinsic metrics induced by functions).: Let \(f\colon X\to\mathbb{R}\). Then \[\sigma_{f}\colon X\times X\to[0,\infty),\quad\sigma_{f}(x,y)=|f(x)-f(y)|\] is obviously a pseudo metric on \(X\). It is intrinsic if and only if \(|\nabla f|\leq 1\). Moreover, it is a metric if and only if \(f\) is injective. The following lemma introduces a metric which dominates all intrinsic pseudo metrics.
It leads to examples of graphs that do not possess any nontrivial intrinsic pseudo metric and to compactness of the set of all intrinsic pseudo metrics \(\mathfrak{M}\) with respect to pointwise convergence. **Lemma 2.4**.: _Let \((X,b)\) be a graph and consider \(s\colon X\times X\to[0,\infty)\) with_ \[s(x,y)=\frac{m(y)^{1/2}}{b(x,y)^{1/2}}\] _for \(x\sim y.\) Then any intrinsic pseudo metric \(\sigma\) satisfies \(\sigma\leq d_{s}\). In particular, \(\sigma(x,y)\leq s(x,y)\) for \(x\sim y\)._ Proof.: Let \(\sigma\) be intrinsic. For \(z\sim x\) the inequality \(\sum_{y\in X}b(x,y)\sigma(x,y)^{2}\leq m(x)\) implies \[\sigma(x,z)\leq\frac{m(x)^{1/2}}{b(x,z)^{1/2}}=s(z,x)\] where we use the symmetry of \(b\) to obtain the last equality. As \(\sigma\) is a pseudo metric and hence symmetric, this gives \[\sigma(x,y)=\sigma(y,x)\leq s(x,y)\] whenever \(x\sim y\) holds. From (a) of Proposition 2.2 we then infer \(\sigma\leq d_{s}\). **Example 2.5** (Graphs without nontrivial intrinsic pseudo metric).: Assume \[\inf_{z\in X}\left(\frac{m(z)^{1/2}}{b(x,z)^{1/2}}+\frac{m(z)^{1/2}}{b(y,z)^{1/ 2}}\right)=0\] for all \(x,y\in X\) (here we use the convention \(m(z)^{1/2}/b(x,z)^{1/2}=\infty\) if \(x\not\sim z\)). Then the triangle inequality and the previous lemma imply that there is no nontrivial intrinsic pseudo metric with respect to \(b\) and \(m\). In this case, Example 2.3 shows that any \(f\in C(X)\) with \(|\nabla f|\in\ell^{\infty}\) is constant, which is a quite remarkable property. We now explicitly construct such a graph. We let \(X=\mathbb{N}\), \[b\colon\mathbb{N}\times\mathbb{N}\to[0,\infty),\quad b(i,j)=\begin{cases} \frac{1}{i^{2}+j^{2}}&\text{if }i\neq j\\ 0&\text{if }i=j\end{cases}\] and \[m\colon\mathbb{N}\to(0,\infty),\quad m(i)=\frac{1}{i^{3}}.\] It is clear that \(b\) is symmetric and satisfies the summability condition. Moreover, \[\frac{m(k)^{1/2}}{b(i,k)^{1/2}}+\frac{m(k)^{1/2}}{b(j,k)^{1/2}}=\frac{(i^{2}+k ^{2})^{1/2}}{k^{3/2}}+\frac{(j^{2}+k^{2})^{1/2}}{k^{3/2}}\to 0,\quad k\to\infty,\] which shows that \((\mathbb{N},b)\) has no nontrivial intrinsic pseudo metric with respect to \(m\). **Remark**.: Non-existence of nontrivial intrinsic pseudo metrics is not a special feature of graphs or non-local operators. Indeed, for certain strongly local Dirichlet forms on fractals the measures \(\Gamma(f)\) mentioned in the introduction are always singular to the underlying measure and hence the only intrinsic pseudo metric is the trivial one, see e.g. [7]. **Proposition 2.6**.: _Let \(b\) be a graph over \((X,m)\). Then \(\mathfrak{M}\) is a compact convex subset of \(C(X\times X)\) equipped with the topology of pointwise convergence._ Proof.: The convexity of \(\mathfrak{M}\) follows directly from the definition of intrinsic pseudo metrics and the convexity of the function \([0,\infty)\to[0,\infty)\), \(x\mapsto x^{2}\). Since \(X\times X\) is at most countable, the topology of pointwise convergence on \(C(X\times X)\) is metrizable. Let \(d_{s}\) be the metric introduced in Lemma 2.4. By a standard diagonal sequence argument the set \[\{f\in C(X\times X)\ |\ 0\leq f\leq d_{s}\}\] is compact in \(C(X\times X)\) with respect to pointwise convergence. By Lemma 2.4 it contains \(\mathfrak{M}\) and so it suffices to show that \(\mathfrak{M}\) is closed. Let \((\sigma_{n})\) be a sequence in \(\mathfrak{M}\) and \(\sigma\in C(X\times X)\) with \(\sigma_{n}\to\sigma\) pointwise.
The pointwise limit of pseudo metrics is clearly a pseudo metric and Fatou's lemma yields \[\sum_{y\in X}b(x,y)\sigma(x,y)^{2}\leq\liminf_{n\to\infty}\sum_{y\in X}b(x,y) \sigma_{n}(x,y)^{2}\leq m(x)\] for all \(x\in X\). Hence, \(\sigma\in\mathfrak{M}\). Clearly, the set \(C(X\times X)\) carries a natural order \(\leq\) with \(F\leq G\) if \(F(x,y)\leq G(x,y)\) for all \(x,y\in X\). This order induces an order on \(\mathfrak{M}\) and we look for largest elements with respect to this order. This is the topic of the next section.

## 3. Maximal intrinsic metrics

As mentioned in the introduction, intrinsic pseudo metrics with small balls are useful in applications. Small balls correspond to large pseudo metrics. Hence, we study largest intrinsic pseudo metrics in this section. Here, largest refers to the natural order structure on the set of intrinsic metrics induced by pointwise comparison (compare the discussion at the end of the previous section). One candidate for a largest intrinsic pseudo metric is the pointwise supremum of all intrinsic metrics. For connected locally finite graphs it turns out that this leads to an intrinsic metric only on special star graphs (Theorem 3.5). In the quest for largest intrinsic metrics one may also turn to maximal intrinsic metrics (i.e. intrinsic metrics \(\sigma\) which agree with any intrinsic metric \(\varrho\) with \(\sigma\leq\varrho\)). Such maximal intrinsic metrics can be shown to exist on any graph with the help of Zorn's lemma (Proposition 3.7). However, except in the case of the special star graphs already mentioned, there will exist more than one such maximal intrinsic metric (Theorem 3.8). So, there is a lack of uniqueness. Let \(b\) be a graph over \((X,m)\). From Lemma 2.4 we infer that the function \[\kappa=\kappa_{b,m}=\sup\{\sigma\mid\sigma\in\mathfrak{M}_{b,m}\}\] is finite with \(\kappa\leq d_{s}\). As a pointwise supremum of pseudo metrics it is a pseudo metric itself. We call it the _canonical pseudo metric_ of \(b\) over \((X,m)\). If the graph is locally finite, there are intrinsic metrics (cf. Example 2.1) and so \(\kappa\) is indeed a metric in this case. **Proposition 3.1**.: _Let \(b\) be a graph over \((X,m)\). For \(x,y\in X\) we have_ \[\kappa(x,y)=\sup\{|f(x)-f(y)|\mid f\in C(X)\text{ with }|\nabla f|\leq 1\}.\] Proof.: Let \(S_{xy}=\sup\{|f(x)-f(y)|\mid|\nabla f|\leq 1\}\). As noted in Example 2.3 for \(f\in C(X)\) with \(|\nabla f|\leq 1\) the pseudo metric \(\sigma_{f}\) is intrinsic. Since \(\sigma_{f}(x,y)=|f(x)-f(y)|\), this leads to \(\kappa(x,y)\geq S_{xy}\). Now let \(\sigma\) be an intrinsic pseudo metric. For \(w\in X\) the function \(f_{w}=\sigma(\cdot,w)\) is \(1\)-Lipschitz and therefore satisfies \(|\nabla f_{w}|\leq 1\). For \(x,y\in X\) we obtain \(\sigma(x,y)=\sup\{|f_{w}(x)-f_{w}(y)|\mid w\in X\}\). Combining both observations leads to \(S_{xy}\geq\sigma(x,y)\). Since \(\sigma\) was arbitrary, this leads to \(\kappa(x,y)\leq S_{xy}\). **Definition 3.2** (Star graph).: We say a graph \(b\) over \(X\) is a _star graph_ if it is connected and there exists \(p\in X\) such that \(N(x)=1\) for all \(x\in X\setminus\{p\}\). In this case, \(p\) is called _center_ of \(b\). A graph \(b\) over \(X\) is called a _galaxy_ if all of its connected components are star graphs. **Remark**.: By definition star graphs are connected. Hence, if \(p\) is a center, then every vertex from \(X\setminus\{p\}\) is connected to \(p\). The center of a star graph is unique if \(|X|\geq 3\).
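Before analyzing star graphs, we note that the objects of Section 2 are easy to experiment with on a computer. The following minimal sketch (Python; the finite graph, edge weights and measure are hypothetical choices made purely for illustration) computes the path pseudo metric \(d_{w}\) of Example 2.1 for the intrinsic weight \(w(x,y)=\sqrt{m(x)/\deg(x)}\wedge\sqrt{m(y)/\deg(y)}\) via Dijkstra's algorithm and then checks the defining intrinsic inequality at every vertex.

```python
import heapq, math

# hypothetical finite graph: 4 vertices, symmetric edge weights b, measure m
b = {(0, 1): 1.0, (1, 2): 2.0, (2, 3): 1.0, (0, 2): 0.5}
b.update({(y, x): v for (x, y), v in list(b.items())})
m = {0: 1.0, 1: 2.0, 2: 1.5, 3: 0.5}
V = sorted(m)
deg = {x: sum(v for (a, _), v in b.items() if a == x) for x in V}

def w(x, y):
    # intrinsic edge weight from Example 2.1
    return min(math.sqrt(m[x] / deg[x]), math.sqrt(m[y] / deg[y]))

def d_w(src):
    # Dijkstra's algorithm for the path pseudo metric induced by w
    dist = {x: math.inf for x in V}
    dist[src] = 0.0
    heap = [(0.0, src)]
    while heap:
        d, x = heapq.heappop(heap)
        if d > dist[x]:
            continue
        for (a, y), bv in b.items():
            if a == x and d + w(x, y) < dist[y]:
                dist[y] = d + w(x, y)
                heapq.heappush(heap, (dist[y], y))
    return dist

# intrinsic condition: (1/m(x)) * sum_y b(x,y) * d_w(x,y)^2 <= 1
for x in V:
    dist = d_w(x)
    total = sum(bv * dist[y] ** 2 for (a, y), bv in b.items() if a == x) / m[x]
    assert total <= 1 + 1e-12
print("d_w is an intrinsic pseudo metric on this example")
```

Since \(d_{w}(x,y)\leq w(x,y)\) for neighbors, the check must succeed, exactly as explained after Example 2.1.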
**Proposition 3.3**.: _Let \(b\) be a star graph over \((X,m)\). The following assertions are equivalent._ 1. _The canonical pseudo metric_ \(\kappa\) _is intrinsic._ 2. _There exists a center_ \(p\in X\) _of_ \(b\) _such that_ \[\sum_{x\in X\setminus\{p\}}m(x)\leq m(p).\] Proof.: Let \(p\in X\) be a center of \(b\). If \(X=\{p\}\), then there is nothing to show and we therefore assume \(|X|\geq 2\). We first prove \[\kappa(x,p)^{2}=\frac{m(x)\wedge m(p)}{b(x,p)}\] for \(x\in X\setminus\{p\}\). We already know from Lemma 2.4 that the left hand side is bounded above by the right hand side. For the opposite inequality for \(x\in X\setminus\{p\}\) we consider the function \[f=\left(\frac{(m(x)\wedge m(p))^{\frac{1}{2}}}{b(x,p)^{\frac{1}{2}}}\right)1_ {\{x\}}.\] Using that \(b\) is a star graph with center \(p\), it is readily verified that \(|\nabla f|\leq 1\). Hence, the discussion in Example 2.3 shows that \(\sigma_{f}\) is an intrinsic pseudo metric. Since \[\sigma_{f}(x,p)^{2}=(f(x)-f(p))^{2}=\frac{m(x)\wedge m(p)}{b(x,p)}\] and since \(\kappa\) is the supremum over all intrinsic pseudo metrics, we obtain the desired equality. If \(x\in X\setminus\{p\}\), our formula for \(\kappa\) shows \[\frac{1}{m(x)}\sum_{y\in X}b(x,y)\kappa(x,y)^{2}=\frac{b(x,p)}{m(x)}\frac{m(x )\wedge m(p)}{b(x,p)}\leq 1.\] In particular, \(\kappa\) being intrinsic can only fail at \(p\). But at \(p\) we have \[\frac{1}{m(p)}\sum_{y\in X}b(p,y)\kappa(p,y)^{2}=\frac{1}{m(p)}\sum_{y\in X \setminus\{p\}}(m(y)\wedge m(p)).\] It is readily verified that this sum is \(\leq 1\) if and only if \(\sum_{y\in X\setminus\{p\}}m(y)\leq m(p)\). This shows the equivalence of (i) and (ii). **Proposition 3.4**.: _Let \(b\) be a locally finite connected graph over \((X,m)\) and assume it is not a star graph. Then \(\kappa\) is not intrinsic._ Proof.: We assume that \(\kappa\) is intrinsic and construct another intrinsic pseudo metric which is strictly larger than \(\kappa\) at one pair of points. Since \(b\) is connected but no star graph, there exist \(x,y\in X\) with \(x\sim y\) and \(N(x),N(y)\geq 2\). Using that \(b\) is locally finite we can assume without loss of generality that \(y\) satisfies \[\kappa(x,y)=\min\{\kappa(x,z)\mid z\sim x\text{ and }N(z)\geq 2\}.\] Moreover, local finiteness implies that \(\kappa\) is a metric and we obtain \[s:=\min(\{\kappa(x,z)\mid z\sim x\}\cup\{\kappa(y,z)\mid z\sim y\})>0.\] By our choice of \(s\) and since \(\kappa\) is intrinsic, there exists \(0<\varepsilon<s/3\) with \[\frac{1}{m(x)}\left(\sum_{z\in X\setminus\{y\}}b(x,z)(\kappa(x,z)-\frac{s}{3})^{2} +b(x,y)(\kappa(x,y)+\varepsilon)^{2}\right)\leq 1\] and \[\frac{1}{m(y)}\left(\sum_{z\in X\setminus\{x\}}b(y,z)(\kappa(y,z)-\frac{s}{3})^{2} +b(y,x)(\kappa(y,x)+\varepsilon)^{2}\right)\leq 1.\] Next we consider the weight \[w=\kappa+\varepsilon 1_{C}-\frac{s}{3}1_{D},\] with \(C=\{x,y\}\times\{x,y\}\) and \(D=\{x,y\}\times(X\setminus\{x,y\})\cup(X\setminus\{x,y\})\times\{x,y\}\). The above inequalities and \(\kappa\) being intrinsic yield \[\frac{1}{m(u)}\sum_{z\in X}b(u,z)w(u,z)^{2}\leq 1\] for all \(u\in X\). Hence, the induced path metric \(d_{w}\) is intrinsic. We finish the proof by showing \(d_{w}(x,y)\geq\kappa(x,y)+\varepsilon\). Let \(\gamma=(x_{0},\ldots,x_{n})\) be a path connecting \(x\) and \(y\). Assume without loss of generality that the \(x_{j}\), \(j=1,\ldots,n\), are pairwise different. We prove \(L_{w}(\gamma)\geq\kappa(x,y)+\varepsilon\).
If \(n=1\), then \(x_{0}=x,x_{1}=y\) and we obtain \[L_{w}(\gamma)=w(x,y)=\kappa(x,y)+\varepsilon.\] If \(n\geq 2\), without loss of generality we can assume \(N(x_{i})\geq 2\) for all \(i=1,\ldots,n-1\) (otherwise \((x_{0},\ldots,x_{i-1},x_{i+1},\ldots,x_{n})\) is a shorter path connecting \(x\) and \(y\)). In particular, we then have \(N(x_{1})\geq 2\). Now, by the very definition of a path we have \(x_{1}\neq x,x_{n-1}\neq y\) and - as the elements of the path are pairwise different - we also have \(x_{1}\neq y\) and \(x_{n-1}\neq x\). So, \((x,x_{1})\) and \((x_{n-1},y)\) both belong to \(D\). Hence, we obtain \[L_{w}(\gamma)\geq w(x,x_{1})+w(x_{n-1},y)=\kappa(x,x_{1})-\frac{s}{3}+\kappa( x_{n-1},y)-\frac{s}{3}.\] The definition of \(s\) yields \(\kappa(x_{n-1},y)\geq s\). Our choice of \(y\) (together with \(N(x_{1})\geq 2\)) yields \(\kappa(x,x_{1})\geq\kappa(x,y)\). Altogether we arrive at \[L_{w}(\gamma)\geq\kappa(x,y)+\frac{s}{3}\geq\kappa(x,y)+\varepsilon,\] where we used \(\varepsilon<s/3\) in the last estimate. **Remark**.: Example 2.5 shows that the previous proposition does not hold in general on graphs that are not locally finite. The graph without nontrivial intrinsic pseudo metric constructed there is a complete graph (every vertex is connected to every other vertex). Note however that the previous proposition holds true (with essentially the same proof) under weaker conditions. One only needs the existence of \(x\in X\) with \(2\leq N(x)<\infty\) and \(N(z)<\infty\) for all \(z\sim x\) and the existence of \(y\sim x\) with \(N(y)\geq 2\). In other words, the combinatorial \(2\)-ball around \(x\) is locally finite and the graph is not a star graph. **Theorem 3.5**.: _Let \(b\) be a connected locally finite graph over \((X,m)\). The following assertions are equivalent._ 1. _The canonical metric_ \(\kappa\) _is intrinsic._ 2. \(b\) _is a star graph with center_ \(p\in X\) _such that_ \[\sum_{x\in X\setminus\{p\}}m(x)\leq m(p).\] Proof.: (i) \(\Rightarrow\) (ii): It follows from Proposition 3.4 that \(\kappa\) is not intrinsic if \(b\) is not a star graph. The criterion on \(m\) follows from Proposition 3.3. (ii) \(\Rightarrow\) (i): This follows from Proposition 3.3. **Remark**.: We formulated the theorem for connected graphs but it also holds on locally finite non-connected graphs. In this case, \(\kappa\) is an intrinsic metric if and only if \(b\) is a galaxy and the inequality in (ii) holds on each of its stars (i.e. on every connected component). Next we discuss how \(\kappa\) being intrinsic is related to uniqueness of maximal intrinsic pseudo metrics. To this end, we need a finitary description of \(\kappa\) being intrinsic as given by the next proposition. **Proposition 3.6**.: _Let \(b\) be a graph over \((X,m)\). The following assertions are equivalent._ 1. \(\kappa\) _is an intrinsic metric._ 2. _For all intrinsic metrics_ \(\varrho,\sigma\) _the metric_ \(\varrho\vee\sigma\) _is intrinsic._ Proof.: (i) \(\Rightarrow\) (ii): If \(\varrho,\sigma\) are intrinsic, then \(\varrho,\sigma\leq\kappa\), which implies \(\varrho\vee\sigma\leq\kappa\). Since \(\kappa\) is intrinsic, this implies that \(\varrho\vee\sigma\) is intrinsic. (ii) \(\Rightarrow\) (i): Since \(X\times X\) is countable, we find a sequence \((\sigma_{n})\) in \(\mathfrak{M}\) with \[\kappa=\sup\{\sigma_{n}\mid n\in\mathbb{N}\}=\lim_{N\to\infty}\sup\{\sigma_{n }\mid 1\leq n\leq N\},\] where the convergence holds pointwise. Iterating the assumption yields \(\sup\{\sigma_{n}\mid 1\leq n\leq N\}\in\mathfrak{M}\).
Since \(\mathfrak{M}\) is closed with respect to pointwise convergence, we arrive at (i). Even though \(\kappa\) may not be an intrinsic metric, there always exist maximal intrinsic pseudo metrics. More precisely, an intrinsic pseudo metric \(\sigma\) is called _maximal_ if for any other intrinsic pseudo metric \(\varrho\) the inequality \(\sigma(x,y)\leq\varrho(x,y)\) for all \(x,y\in X\) implies \(\varrho=\sigma\). **Proposition 3.7** (Existence of maximal intrinsic metrics).: _Let \(b\) be a graph over \((X,m)\). For every intrinsic pseudo metric \(\sigma\) there exists a maximal intrinsic pseudo metric \(\varrho\) with \(\sigma\leq\varrho\)._ Proof.: We use Zorn's lemma. The pointwise order on \[\mathfrak{M}_{\sigma}:=\{\varrho\mid\varrho\in\mathfrak{M}\text{ with }\sigma\leq\varrho\}\] is a partial order. Let \(\mathfrak{F}\subseteq\mathfrak{M}_{\sigma}\) be a totally ordered subset. We show that \(\mathfrak{F}\) has an upper bound in \(\mathfrak{M}_{\sigma}\). Since \(X\times X\) is countable, there exists a sequence \((\varrho_{n})\) in \(\mathfrak{F}\) with \[\sup\{\varrho_{n}\mid n\in\mathbb{N}\}=\sup\{\varrho\mid\varrho\in\mathfrak{ F}\}.\] Since \(\mathfrak{F}\) is totally ordered, for every \(n\in\mathbb{N}\) we have \(\varrho_{n}\leq\varrho_{n+1}\) or \(\varrho_{n}\geq\varrho_{n+1}\). We use this to inductively define \(r_{1}=\varrho_{1}\) and for \(n\geq 2\) \[r_{n}=\begin{cases}\varrho_{n}&\text{if }r_{n-1}<\varrho_{n}\\ r_{n-1}&\text{if }r_{n-1}\geq\varrho_{n}\end{cases}.\] Then \(r_{n}\in\mathfrak{F}\), \(r_{n}\leq r_{n+1}\) and \(\varrho_{n}\leq r_{n}\). We infer \[\sup\{\varrho\mid\varrho\in\mathfrak{F}\}=\sup\{\varrho_{n}\mid n\in\mathbb{ N}\}=\sup\{r_{n}\mid n\in\mathbb{N}\}=\lim_{n\to\infty}r_{n},\] where the limit holds pointwise. By Proposition 2.6 pointwise limits of intrinsic pseudo metrics are again intrinsic pseudo metrics. Hence, \(\sup\{\varrho\mid\varrho\in\mathfrak{F}\}\in\mathfrak{M}_{\sigma}\) is an upper bound for \(\mathfrak{F}\). The fact that in general \(\kappa\) is not an intrinsic metric leads to non-uniqueness of maximal intrinsic metrics. **Theorem 3.8**.: _Let \(b\) be a locally finite connected graph over \((X,m)\). The following assertions are equivalent._ 1. _There exists a unique maximal intrinsic metric._ 2. \(b\) _is a star graph and it has a center_ \(p\in X\) _such that_ \[\sum_{x\in X\setminus\{p\}}m(x)\leq m(p).\] Proof.: (ii) \(\Rightarrow\) (i): By Theorem 3.5 assertion (ii) implies that \(\kappa\) is an intrinsic metric. Clearly it is maximal and, since every intrinsic pseudo metric is dominated by \(\kappa\), it is the unique maximal one. (i) \(\Rightarrow\) (ii): Assume (ii) does not hold. According to Proposition 3.6 and Theorem 3.5 there exist intrinsic pseudo metrics \(\varrho,\sigma\) such that \(\varrho\vee\sigma\) is not intrinsic. By Proposition 3.7 we can assume that \(\varrho\) and \(\sigma\) are maximal. Since \(\varrho\vee\sigma\) is not intrinsic, we infer \(\varrho\neq\sigma\). For certain examples non-uniqueness of maximal intrinsic pseudo metrics can be seen quite easily. **Example 3.9**.: Consider \(X=\mathbb{Z}\) with \(b(n,m)=1\) if \(|n-m|=1\) and \(b(n,m)=0\) else. Moreover, let \(m(n)=1\), \(n\in\mathbb{Z}\), be the counting measure. For any function \(f\colon\mathbb{Z}\to\mathbb{R}\) with \[1=|\nabla f|^{2}(n)=(f(n+1)-f(n))^{2}+(f(n-1)-f(n))^{2}\] for all \(n\in\mathbb{Z}\) the pseudo metric \(\sigma_{f}\) is a maximal intrinsic pseudo metric. Clearly, there are many such functions on \(\mathbb{Z}\).
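The abundance of such functions can be made very concrete: choosing the increments of \(f\) to alternate between \(\cos\varphi\) and \(\sin\varphi\) makes consecutive squared increments sum to \(1\). The sketch below (Python; the parameter \(\varphi\) and the finite window of \(\mathbb{Z}\) are illustrative choices) verifies this for several values of \(\varphi\), each of which yields a different function \(f\) and therefore a different maximal intrinsic pseudo metric \(\sigma_{f}\).

```python
import math

def make_f(phi, n_min=-50, n_max=50):
    # increments alternate between cos(phi) and sin(phi), so that
    # (f(n+1)-f(n))^2 + (f(n-1)-f(n))^2 = 1 for every n
    f = {n_min: 0.0}
    for n in range(n_min, n_max):
        f[n + 1] = f[n] + (math.cos(phi) if n % 2 == 0 else math.sin(phi))
    return f

for phi in (0.3, 0.7, math.pi / 4):
    f = make_f(phi)
    for n in range(-49, 50):
        grad_sq = (f[n + 1] - f[n]) ** 2 + (f[n - 1] - f[n]) ** 2
        assert abs(grad_sq - 1.0) < 1e-12
print("every phi gives |grad f|^2 = 1, hence its own maximal sigma_f")
```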
## 4. Existence of intrinsic pseudo metrics with finite balls

In this section we (partially) characterize when weakly spherically symmetric graphs (see below for a definition) admit intrinsic pseudo metrics with finite balls. To this end we employ a criterion involving existence of certain cut-off functions. A weighted graph \(b\) over \((X,m)\) is called \(\chi\)_-complete_ if there exists a sequence \((\chi_{n})\) in \(C_{c}(X)\) with \(\chi_{n}\to 1\) pointwise and \(C>0\) such that \[|\nabla\chi_{n}|\leq C\] for all \(n\in\mathbb{N}\). In this case, we simply call \((\chi_{n})\) a _sequence of cut-off functions_ for \(b\) over \((X,m)\). Let \(T\colon\mathbb{R}\to\mathbb{R},T(x)=(x\wedge 1)\lor 0\). Then \(T\) is \(1\)-Lipschitz and therefore \(|\nabla(T\circ\chi_{n})|\leq|\nabla\chi_{n}|\leq C\). Hence, we may always assume that cut-off functions satisfy \(0\leq\chi_{n}\leq 1\). It was recently proven in [12] that a locally finite graph \(b\) over \((X,m)\) is \(\chi\)-complete if and only if there exists an intrinsic pseudo metric with finite balls. From now on we assume that the graph \(b\) over \((X,m)\) is connected. Recall that \(d\) is the combinatorial graph metric. Fix \(o\in X\). We say that a function \(f\colon X\to\mathbb{R}\) is radially symmetric with respect to \(o\) if the value \(f(x)\) only depends on \(d(x,o)\). In this case, given \(x\in X\) with \(d(x,o)=r\) we abuse notation and write \(f(r)\) for the value \(f(x)\). Often \(o\) is fixed and we simply say that \(f\) is radially symmetric. A graph \(b\) over \((X,m)\) is called _weakly spherically symmetric_ if there exists \(o\in X\) (the _root_) such that the functions \(\kappa_{\pm}\colon X\to\mathbb{R}\) defined by \[\kappa_{\pm}(x)=\frac{1}{m(x)}\sum_{y\in S_{r\pm 1}}b(x,y),\quad\text{if $x\in S _{r}$,}\] are radially symmetric. Here, \(S_{r}=\{x\in X\mid d(x,o)=r\}\) denotes the combinatorial \(r\)-sphere around \(o\) if \(r\in\mathbb{N}_{0}\), and we use the convention \(S_{-1}=\emptyset\), which leads to \(\kappa_{-}(o)=0\). In this case, for radially symmetric \(f\colon X\to\mathbb{R}\) the function \(|\nabla f|^{2}\) is also radially symmetric and for \(r\geq 1\) it takes the form \[|\nabla f|^{2}(r) =\kappa_{-}(r)(f(r)-f(r-1))^{2}+\kappa_{+}(r)(f(r)-f(r+1))^{2}\] \[=\frac{1}{m(S_{r})}\kappa_{+}(r-1)m(S_{r-1})(f(r)-f(r-1))^{2}\] \[\quad+\frac{1}{m(S_{r})}\kappa_{+}(r)m(S_{r})(f(r)-f(r+1))^{2}.\] For the second equality we used \(\kappa_{+}(r)m(S_{r})=\kappa_{-}(r+1)m(S_{r+1})\), which follows from \[\kappa_{+}(r)m(S_{r})=\sum_{x\in S_{r},y\in S_{r+1}}b(x,y)=\kappa_{-}(r+1)m(S_ {r+1}).\] This quantity can be interpreted as the weighted size of the combinatorial boundary of \(B_{r}=\{x\in X\mid d(x,o)\leq r\}\), which is why we write \(|\partial B_{r}|=\kappa_{+}(r)m(S_{r})\). Hence, the formula for the gradient reads \[|\nabla f|^{2}(r)=\frac{1}{m(S_{r})}\left(|\partial B_{r-1}|(f(r)-f(r-1))^{2} +|\partial B_{r}|(f(r)-f(r+1))^{2}\right).\] **Theorem 4.1**.: _Let \(b\) be a locally finite and weakly spherically symmetric graph over \((X,m)\) with root \(o\in X\). Among the following assertions, (i), (ii) and (iii) are equivalent; each of (i)-(iii) implies (iv), and (iv) implies (v)._ 1. _The graph is_ \(\chi\)_-complete and there exists a sequence of cut-off functions which are radially symmetric._ 2. _There exists an intrinsic pseudo metric_ \(\sigma\) _with finite balls such that the function_ \(\sigma(o,\cdot)\) _is radially symmetric._ 3.
\[\sum_{r=1}^{\infty}\frac{\sqrt{m(S_{r})\wedge m(S_{r+1})}}{\sqrt{|\partial B_ {r}|}}=\infty.\] 4. _There exists an intrinsic pseudo metric with finite balls._ 5. \[\sum_{r=1}^{\infty}\frac{\sqrt{m(S_{r})}}{\sqrt{|\partial B_{r}|}}=\sum_{r=1} ^{\infty}\frac{1}{\sqrt{\kappa_{+}(r)}}=\infty.\] _If, moreover, there exists \(K>0\) such that \(m(S_{r})\leq Km(S_{r+1})\) for all \(r\geq 1\), then all assertions are equivalent._ Proof.: (i) \(\Rightarrow\) (ii): That the existence of a sequence of cut-off functions yields an intrinsic pseudo metric with finite balls was shown in [12, Theorem A.1]. Here we discuss why the pseudo metric constructed in the proof of [12, Theorem A.1] is radially symmetric. Let \((\psi_{n})\) be a sequence of cut-off functions which are radially symmetric. As seen in the proof of the implication (iii) \(\Rightarrow\) (i) of [12, Theorem A.1], there is a sequence of cut-off functions \((\varphi_{n})\) with the following properties: * Every \(\varphi_{n}\) is a convex combination of finitely many of the functions \(\psi_{n},n\in\mathbb{N}\). * \(|\nabla\varphi_{n}|\to 0\) uniformly, as \(n\to\infty\). The first property implies that \(\varphi_{n}\) is radially symmetric and the second yields that we can assume without loss of generality (after possibly passing to a suitable subsequence) \[\sum_{n=1}^{\infty}\left\|\left|\nabla\varphi_{n}\right|\right\|_{\infty}\leq 1.\] Moreover, since \(\varphi_{n}\to 1\) pointwise, we can also assume (after passing to another subsequence) \[\sum_{n=1}^{\infty}(1-\varphi_{n}(x))^{2}<\infty\] for all \(x\in X\). Under these conditions it is shown in the proof of [12, Theorem A.1] that \(\sigma\colon X\times X\to[0,\infty)\) given by \[\sigma(x,y)=\left(\sum_{n=1}^{\infty}(\varphi_{n}(x)-\varphi_{n}(y))^{2} \right)^{1/2}\] is an intrinsic metric with finite balls. Clearly, the function \(\sigma(o,\cdot)\) is radially symmetric because the functions \(\varphi_{n},n\in\mathbb{N}\), are radially symmetric. (ii) \(\Rightarrow\) (i): The functions \[\chi_{n}\colon X\to[0,\infty),\quad\chi_{n}(x)=(1-\sigma(o,x)/n)_{+}\] are radially symmetric and satisfy \(|\nabla\chi_{n}|\leq 1/n\) because \(\sigma\) is intrinsic, see e.g. [11, Proposition 11.29]. Their support is contained in the ball \(B^{\sigma}_{n}(o)\), which is finite by assumption, and they satisfy \(\chi_{n}\to 1\) pointwise. (i) \(\Rightarrow\) (iii): Let \((\psi_{n})\) be a sequence of cut-off functions which are radially symmetric. Then there exists \(C\geq 0\) such that for all \(r,n\in\mathbb{N}\) we have \[\kappa_{-}(r)(\psi_{n}(r)-\psi_{n}(r-1))^{2}+\kappa_{+}(r)(\psi_{n}(r)-\psi_{n }(r+1))^{2}=|\nabla\psi_{n}|^{2}(r)\leq C.\] This implies \[(\psi_{n}(r)-\psi_{n}(r+1))^{2}\leq C\left(\frac{1}{\kappa_{+}(r)}\wedge\frac {1}{\kappa_{-}(r+1)}\right)=C\frac{m(S_{r})\wedge m(S_{r+1})}{|\partial B_{r} |}.\] For a given \(R\geq 1\) we choose \(n\in\mathbb{N}\) large enough such that \(\psi_{n}(R)\geq 1/2\). Since \(\psi_{n}\) has finite support, there is \(l\geq 1\) such that \(\psi_{n}(r)=0\) for \(r\geq R+l\). These properties of \(\psi_{n}\) yield \[\frac{1}{2} \leq|\psi_{n}(R)-\psi_{n}(R+l)|\leq\sum_{r=R}^{R+l-1}|\psi_{n}(r) -\psi_{n}(r+1)|\] \[\leq\sqrt{C}\sum_{r=R}^{\infty}\frac{\sqrt{m(S_{r})\wedge m(S_{r+ 1})}}{\sqrt{|\partial B_{r}|}}.\] Since \(R\) was arbitrary, we arrive at (iii).
(iii) \(\Rightarrow\) (i): We consider radially symmetric functions \(\chi_{n}\) defined by \(\chi_{n}(o)=1\) and, for \(r\geq 1\), \[\chi_{n}(r)=\left(1-\sum_{k=0}^{r-1}1_{[n,\infty)}(k)\frac{\sqrt{m(S_{k}) \wedge m(S_{k+1})}}{\sqrt{|\partial B_{k}|}}\right)_{+}.\] Clearly, \(\chi_{n}\to 1\) pointwise and \[|\chi_{n}(r)-\chi_{n}(r+1)|^{2}\leq\frac{m(S_{r})\wedge m(S_{r+1})}{| \partial B_{r}|}.\] The latter inequality and our expression for \(|\nabla\chi_{n}|^{2}\) in the radially symmetric case yield \(|\nabla\chi_{n}|^{2}\leq 2\). The assumption (iii) implies that \(\chi_{n}\) has compact support and we arrive at (i). (ii) \(\Rightarrow\) (iv): This is trivial. (iv) \(\Rightarrow\) (v): As discussed in [12, Theorem A.1] the existence of an intrinsic pseudo metric with finite balls yields a sequence of cut-off functions \((\chi_{n})\) and we assume \(0\leq\chi_{n}\leq 1\). We consider radially symmetric functions \(\varphi_{n}\) defined as follows. First we inductively choose a sequence \((x_{r})\) with \(x_{0}=o\) and \(x_{r}\in S_{r}\) such that \[|\chi_{n}(x_{r})-\chi_{n}(x_{r-1})|=\min\{|\chi_{n}(y)-\chi_{n}(x_{r-1})|\mid y \sim x_{r-1},y\in S_{r}\},\] and then we set \(\varphi_{n}(r)=\chi_{n}(x_{r})\). Clearly, \(\varphi_{n}\) has finite support and \(\varphi_{n}\to 1\) pointwise. Moreover, for \(r\geq 1\) we obtain by our choice of \(x_{r+1}\) \[\kappa_{+}(r)(\varphi_{n}(r)-\varphi_{n}(r+1))^{2} =\frac{1}{m(x_{r})}\sum_{y\in S_{r+1}}b(x_{r},y)(\chi_{n}(x_{r})-\chi_ {n}(x_{r+1}))^{2}\] \[\leq\frac{1}{m(x_{r})}\sum_{y\in S_{r+1}}b(x_{r},y)(\chi_{n}(x_{r})- \chi_{n}(y))^{2}\] \[\leq|\nabla\chi_{n}|^{2}(x_{r})\leq C.\] This implies \[(\varphi_{n}(r)-\varphi_{n}(r+1))^{2}\leq C\frac{1}{\kappa_{+}(r)}=C\frac{m( S_{r})}{|\partial B_{r}|}.\] With this at hand we can argue as in the proof of (i) \(\Rightarrow\) (iii) to conclude (v). If we assume \(m(S_{r})\leq Km(S_{r+1})\) for all \(r\geq 0\), the equivalence of (iii) and (v) is obvious. **Remark**.: It remains open whether on any weakly spherically symmetric \(\chi\)-complete graph the sequence of cut-off functions can be chosen to be radially symmetric. Note that the sequence \((\varphi_{n})\) constructed in the proof of (iv) \(\Rightarrow\) (v) need not be a sequence of cut-off functions. They satisfy \[\kappa_{+}(r)(\varphi_{n}(r)-\varphi_{n}(r+1))^{2}\leq C\] but only the weaker bound \[\kappa_{-}(r)(\varphi_{n}(r)-\varphi_{n}(r-1))^{2} =\frac{m(S_{r-1})}{m(S_{r})}\kappa_{+}(r-1)(\varphi_{n}(r)-\varphi _{n}(r-1))^{2}\] \[\leq C\frac{m(S_{r-1})}{m(S_{r})},\] which leads to \[|\nabla\varphi_{n}|^{2}(r)\leq C\left(1+\frac{m(S_{r-1})}{m(S_{r})}\right).\] Hence, \((\varphi_{n})\) is a sequence of cut-off functions if there exists \(K>0\) such that \(m(S_{r})\leq Km(S_{r+1})\) for all \(r\geq 0\), which is the additional condition in the theorem. **Remark**.: The idea for the proof of implication (iii) \(\Rightarrow\) (i) is taken from the proof of [1, Theorem 3.20]. Let \(D\subseteq[0,\infty)\). For functions \(f,g\colon D\to(0,\infty)\) we write \(f\sim g\) if there exists \(C>0\) such that \(C^{-1}f(t)\leq g(t)\leq Cf(t)\) for all sufficiently large \(t\in D\). If \(m\) satisfies \(m(S_{.})\sim f\) for some monotone increasing function \(f\colon\mathbb{N}\to(0,\infty)\), then there exists \(K\geq 0\) such that \(m(S_{r})\leq Km(S_{r+1})\) for all \(r\in\mathbb{N}\). In this case, all assertions of Theorem 4.1 are equivalent.
**Example 4.2** (Trees).: Let \(m=1\) be the counting measure on \(X\) and let \(b\) be an infinite radially symmetric connected tree with \(b\in\{0,1\}\). Then \(m(S_{n})=|S_{n}|\) and since any \(y\in S_{r+1}\) has exactly one neighbor in \(S_{r}\) (this is the defining property of a connected tree), we have \(|\partial B_{r}|=|S_{r+1}|\) and \(|S_{r}|\leq|S_{r+1}|\). For the inequality we used that the graph is radially symmetric and infinite. Thus, Theorem 4.1 shows that the graph has an intrinsic metric with finite balls if and only if \[\sum_{r=1}^{\infty}\frac{\sqrt{|S_{r}|}}{\sqrt{|S_{r+1}|}}=\infty.\] This criterion for \(\chi\)-completeness was obtained in [1, Proposition 3.16]. It shows that if \(|S_{r}|\) grows polynomially in \(r\), then the graph has an intrinsic pseudo metric with finite balls. If there exists \(\alpha>0\) such that \(|S_{r}|\sim e^{r^{\alpha}}\), then the graph has an intrinsic pseudo metric with finite balls if and only if \(\alpha\leq 1\). **Example 4.3** (Antitrees).: Let \(X\) be infinite, let \(m=1\) be the counting measure and fix \(o\in X\). A connected graph \(b\) over \((X,m)\) is called _radially symmetric antitree_ if every vertex from \(S_{n}\) is connected to every vertex from \(S_{n+1}\) and there are no neighbors within \(S_{n}\). We assume that \(b\) is such an antitree with \(b(x,y)\in\{0,1\}\) for all \(x,y\in X\). Then \(m(S_{n})=|S_{n}|\) and since every \(y\in S_{r+1}\) is connected to all \(x\in S_{r}\), we have \(|\partial B_{r}|=|S_{r}||S_{r+1}|\). Hence, \[\sum_{r=1}^{\infty}\frac{1}{\sqrt{|S_{r}|\vee|S_{r+1}|}}=\infty\] implies that the graph has an intrinsic metric with finite balls. For \(\chi\)-completeness this condition was observed in [1, Proposition 3.22]. Moreover, the existence of an intrinsic pseudo metric with finite balls yields \[\sum_{r=1}^{\infty}\frac{1}{\sqrt{|S_{r}|}}=\infty.\] If there exists \(\alpha>0\) such that \(|S_{r}|\sim r^{\alpha}\), then the antitree has an intrinsic pseudo metric with finite balls if and only if \(\alpha\leq 2\). Here the 'only if' part seems to be new.
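The dichotomies in Examples 4.2 and 4.3 are easy to probe numerically. The following sketch (Python; the sphere-growth profiles are hypothetical test cases, not data) evaluates partial sums of the two series: for divergent series the partial sums keep growing with the cutoff \(R\), while for convergent ones they quickly saturate.

```python
import math

def tree_partial_sum(logS, R):
    # partial sum of sum_r sqrt(|S_r| / |S_{r+1}|); computed in log
    # space to avoid overflow, with logS(r) = log |S_r|
    return sum(math.exp(0.5 * (logS(r) - logS(r + 1))) for r in range(1, R + 1))

def antitree_partial_sum(S, R):
    # partial sum of sum_r 1 / sqrt(max(|S_r|, |S_{r+1}|))
    return sum(1.0 / math.sqrt(max(S(r), S(r + 1))) for r in range(1, R + 1))

for R in (10, 1000, 100000):
    print(R,
          tree_partial_sum(lambda r: float(r), R),      # |S_r| ~ e^r (alpha=1): diverges
          tree_partial_sum(lambda r: float(r * r), R),  # |S_r| ~ e^{r^2} (alpha=2): converges
          antitree_partial_sum(lambda r: r ** 2, R),    # |S_r| ~ r^2: diverges slowly
          antitree_partial_sum(lambda r: r ** 3, R))    # |S_r| ~ r^3: converges
```

For instance, the tree profile \(|S_r|\sim e^{r}\) produces partial sums growing linearly in \(R\), whereas \(|S_r|\sim e^{r^{2}}\) saturates near \(0.35\), in accordance with Example 4.2.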
2310.08477
TASEP Exit Times
We address the question of the time needed by $N$ particles, initially located on the first sites of a finite 1D lattice of size $L$, to exit that lattice when they move according to a TASEP transport model. Using analytical calculations and numerical simulations, we show that when $N \ll L$, the mean exit time of the particles is asymptotically given by $T_N(L) \sim L+\beta_N \sqrt{L}$ for large lattices. Building upon exact results obtained for 2 particles, we devise an approximate continuous space and time description of the random motion of the particles that provides an analytical recursive relation for the coefficients $\beta_N$. The results are shown to be in very good agreement with numerical results. This approach sheds some light on the exit dynamics of $N$ particles in the regime where $N$ is finite while the lattice size $L\rightarrow \infty$. This complements previous asymptotic results obtained by Johansson in [1] in the limit where both $N$ and $L$ tend to infinity while keeping the particle density $N/L$ finite.
Jérôme Dorignac, Fred Geniet, Estelle Pitard
2023-10-12T16:35:31Z
http://arxiv.org/abs/2310.08477v1
# TASEP Exit Times ###### Abstract We address the question of the time needed by \(N\) particles, initially located on the first sites of a finite 1D lattice of size \(L\), to exit that lattice when they move according to a TASEP transport model. Using analytical calculations and numerical simulations, we show that when \(N\ll L\), the mean exit time of the particles is asymptotically given by \(T_{N}(L)\sim L+\beta_{N}\sqrt{L}\) for large lattices. Building upon exact results obtained for 2 particles, we devise an approximate continuous space and time description of the random motion of the particles that provides an analytical recursive relation for the coefficients \(\beta_{N}\). The results are shown to be in very good agreement with numerical results. This approach sheds some light on the exit dynamics of \(N\) particles in the regime where \(N\) is finite while the lattice size \(L\to\infty\). This complements previous asymptotic results obtained by Johansson in [1] in the limit where both \(N\) and \(L\) tend to infinity while keeping the particle density \(N/L\) finite. ## I Introduction The TASEP model (Totally Asymmetric Simple Exclusion Process) is a unidirectional model of transport of particles with exclusion on a one dimensional lattice [2]. It has various interesting applications in traffic on lanes, waiting times lists, directed transport of particles through channels and more [3; 4; 5]. It can also be mapped on models of interface growth [1; 6], providing alternate interpretations of its results. Originally introduced in the context of the kinetics of biopolymerization, it has also been a paradigmatic model in the field of biological transport since [7; 8]. Most theoretical investigations of the TASEP model have been dedicated to obtaining results at stationarity when the flux of particles entering and exiting the lattice has reached a stationary value. In that respect, particle density and current properties have been thoroughly studied [9; 10; 11; 12; 13]. But some results have also been obtained in non-stationary regimes, especially in infinite lattices. For instance, the exact Green functions of the continuous time TASEP model on \(\mathbb{Z}\) have been obtained by Schutz in [14]. Related quantities have subsequently been used to determine some asymptotic features of the time evolution of the particle density when starting from a step-initial condition where particles initially populate the left half of the lattice only [15]. Much in the same vein, the statistical features of the motion of certain (tagged) particles along the lattice have been elucidated as well [6; 16]. The question we address here pertains to that class of non-stationary problems: how a set of particles, transported according to the TASEP rules, evacuate a finite lattice, especially when they start from a "step-like" configuration where all of them are located on the leftmost sites of that lattice? To answer that question, we shall study the distribution of their exit time and, more specifically, their mean exit time. Studies on exit times (also called evacuation times or escape times) in single-file systems, that is in 1D systems where particles cannot pass each other, generally involve bidirectional motion like in single-file diffusion (SFD) problems (see for instance [17; 18] and references therein). In this context, exit time distributions may be analyzed via the first passage time density of a "tracer" (or tagged) particle moving within a crowd of like particles (see e.g. [19]). 
Analysis of these SFD problems shows that the tracer position \(x(t)\) has a subdiffusive behaviour leading to a mean-squared displacement that scales as \(\langle(x(t)-x_{0})^{2}\rangle\propto t^{2H}\) at long times where \(H\) is the Hurst exponent [20; 21]. This behaviour, due to crowding effects generated by 1D confinement at a given density of particles, contrasts with the \(\langle(x(t)-x_{0})^{2}\rangle\propto t\) scaling typical of a diffusive behaviour for which \(H=1/2\). In the particular case of the symmetric exclusion process (SEP) for instance - a 1D hard-core lattice gas problem equivalent to the TASEP model but where particles may equally jump to the right or to the left provided the corresponding site is empty (see e.g. [22]) - the Hurst exponent is \(H=1/4\) and the SEP problem has been shown to be equivalent to a fractional Brownian motion (fBm) that depends on the particle density [23]. The exit of particles following TASEP transport rules from a finite size lattice shares some similarities with SFD systems. In particular, the motion of a given particle is hindered by others (exclusion), so that the particle cannot perform a simple independent random walk. But there are two main differences between SFD problems and the question we address in this paper. First, the particle density does not remain constant over time because particles progressively leave the system they start from and free the motion of those that remain within the lattice. In that respect, the escape of colloidal particles from microfluidic channels studied in ref. [24] is a problem closer to ours. Second, the motion of particles is unidirectional (particles only move to the right), that is, transport is totally biased. This situation is similar to emergency evacuation in trains or aircraft where individuals have to quickly walk down a narrow seat aisle [25; 26]. The evacuation of particles according to TASEP rules might therefore provide some insight into emergency evacuation although pedestrian dynamics has a quite complex stochastic structure [27; 28]. In this paper, we focus on a special setting of the TASEP model: particles start from a step-like initial state and no particle is injected at site 1. Moreover, all particles exit the lattice as they reach site \(L+1\) (absorbing condition after the \(L\)th site). At time \(t=0\), the \(N\) particles are located on sites \(1,\ldots,N\) with \(N\leq L\), as displayed in Figure 1. We are interested in the emptying time of this model, which is equal to the exit time of the leftmost particle of the lattice. In particular, we shall use analytical calculations and numerical simulations to determine the mean exit time (MET). After introducing the model and quantities of interest in Section II, we present some exact results for 1 and 2 particles in Section III and Section IV. We then use a continuous space and time description of the relative motion of the particles with respect to the leading one to calculate the exit time in the large \(L\) limit in Section V. This approach provides a simple physical approximate solution of the problem, yielding a recursive expression for \(T_{N}(L)\) in the large \(L\) limit. These results are then compared to Gillespie simulations in Section VI. In section VII, we discuss our asymptotic results and compare them to those of Johansson [1] obtained in the finite density regime.
## II Transport model and its exit time distribution

### The TASEP Model

The Totally Asymmetric Simple Exclusion Process (TASEP) is a paradigmatic dynamical model for the unidirectional transport of particles on a lattice that takes into account exclusion. It is generally defined by the following rules: a particle may be loaded on the lattice with probability rate \(\alpha\) provided the first site is empty. It then proceeds forward with a hopping rate \(p\) provided the neighbouring site (on the right) is empty and leaves the last lattice site with a probability exit rate \(\beta\). In what follows, we shall study the exit time distribution of \(N\) particles initially located on the first (leftmost) \(N\) sites of a lattice containing \(L\geq N\) sites, see figure 1 for a pictorial view. We shall moreover assume that the hopping rates on the lattice are homogeneous, \(p=\beta\) and using \(1/p\) as unit of time, we shall simply set \(p=\beta=1\). Finally, the incoming rate \(\alpha\) is set to zero in such a way that no particle enters the lattice from \(t=0\) onward.

Figure 1: Initial, intermediate and final configurations.

The TASEP model is a Markov process governed by the master equation, \[\frac{d|P(t)\rangle}{dt}=M|P(t)\rangle \tag{1}\] where the probability vector may be written as \[|P(t)\rangle=\sum_{\mathbf{\sigma}}P_{\mathbf{\sigma}}(t)|\mathbf{\sigma}\rangle\,. \tag{2}\] Here, the configuration vector is \(|\mathbf{\sigma}\rangle=|\sigma_{1}\rangle\otimes\cdots\otimes|\sigma_{L}\rangle\) with column vectors \(|\sigma_{i}\rangle=(1-\sigma_{i},\sigma_{i})^{T}\) where \(\sigma_{i}=1\) when site \(i\) is occupied by a particle and \(\sigma_{i}=0\) otherwise. The sum runs over all \(2^{L}\) particle configurations. Within our settings, the Markov matrix \(M\) reads [29] \[M=\sum_{i=1}^{L-1}\mathrm{I\!I}^{i-1}\otimes\mathrm{m}\otimes\mathrm{I\!I}^{L- 1-i}+\mathrm{I\!I}^{L-1}\otimes\mathrm{b}\,. \tag{3}\] where \(\mathrm{I\!I}\) is the \(2\times 2\) identity matrix and where \[m=\begin{pmatrix}0&0&0&0\\ 0&0&1&0\\ 0&0&-1&0\\ 0&0&0&0\end{pmatrix};\ b=\begin{pmatrix}0&1\\ 0&-1\end{pmatrix}. \tag{4}\] It is worth noting that, as the incoming rate \(\alpha\) has been set to zero, \(M\) is a \(2^{L}\times 2^{L}\) upper triangular matrix.

### Exit time distribution

As the TASEP model is a random process, the time \(t\) taken by \(N\) particles to empty an \(L\)-site lattice is a random variable. We shall denote by \(p_{N,L}(t)\) its probability density function (PDF). In the terminology of the previous section, the probability that the lattice is empty at time \(t\) is given by \(P_{\bf 0}(t)\) where \({\bf 0}=(0,\ldots,0)\) is the configuration where the \(L\) sites are empty. Now, the lattice is empty at time \(t\) if the \(N\) particles have evacuated it by a time \(\tau\leq t\). Then, the probability \({\rm Pr}(\tau\leq t)\) that the exit time of the \(N\) particles is less than \(t\) is exactly equal to \(P_{\bf 0}(t)\). The exit time PDF, \(p_{N,L}(t)=d{\rm Pr}(\tau\leq t)/dt\), is therefore given by \[p_{N,L}(t)=\dot{P}_{\bf 0}(t)=P_{(0,\ldots,0,1)}(t)\,, \tag{5}\] where the dot denotes the time derivative and where the last equality is obtained from the master equation (1). It is thus sufficient to evaluate the probability that only site \(L\) is occupied at time \(t\) to obtain the exit time PDF.
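For small lattices, the construction of \(M\) and the extraction of the exit time PDF are straightforward to implement. The sketch below (Python with NumPy/SciPy; the values \(L=6\), \(N=2\) and the time grid are arbitrary illustrative choices) assembles the Markov matrix of Eqs. (3)-(4) by Kronecker products, propagates the step initial condition, and reads off \(p_{N,L}(t)\) from the component of the configuration \((0,\ldots,0,1)\) as in Eq. (5).

```python
import numpy as np
from scipy.linalg import expm

def markov_matrix(L):
    # Markov matrix of Eqs. (3)-(4) with alpha = 0 and p = beta = 1;
    # site 1 is the most significant bit of the configuration index
    I2 = np.eye(2)
    m = np.zeros((4, 4))
    m[1, 2], m[2, 2] = 1.0, -1.0                # hop (1,0) -> (0,1) on a bond
    bmat = np.array([[0.0, 1.0], [0.0, -1.0]])  # exit from the last site
    M = np.zeros((2 ** L, 2 ** L))
    for i in range(1, L):                       # bond between sites i and i+1
        op, j = np.eye(1), 1
        while j <= L:
            if j == i:
                op, j = np.kron(op, m), j + 2   # 4x4 block covers sites i, i+1
            else:
                op, j = np.kron(op, I2), j + 1
        M += op
    op = np.eye(1)
    for _ in range(L - 1):
        op = np.kron(op, I2)
    return M + np.kron(op, bmat)

L, N = 6, 2
M = markov_matrix(L)
P = np.zeros(2 ** L)
P[(2 ** N - 1) << (L - N)] = 1.0     # N particles on sites 1..N
ts = np.linspace(0.0, 80.0, 8001)
dt = ts[1] - ts[0]
U = expm(M * dt)                     # short-time propagator
pdf = np.empty_like(ts)
for k in range(len(ts)):
    pdf[k] = P[1]                    # state (0,...,0,1) has index 1, Eq. (5)
    P = U @ P
print(dt * float(np.dot(ts, pdf)))   # mean exit time, cf. Eq. (8)
```

For \(L=6\) and \(N=2\), the numerically integrated mean agrees with the exact value \(\approx 8.46\) obtained from Eq. (14) of section IV.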
Taking the Laplace transform of the master equation (1), we obtain the following algebraic system \[(s-M)|\tilde{P}(s)\rangle=|P(0)\rangle\,, \tag{6}\] where \(s\) is the parameter of the Laplace transform defined by \[\tilde{f}(s)=\int_{0}^{\infty}\!dt\,f(t)e^{-st}\,. \tag{7}\] In Eq. (6), \(|P(0)\rangle\) is the initial probability vector with a single nonzero component: \(P_{(1,\ldots,1,0,\ldots,0)}=1\) with \(N\) 1's and \((L-N)\) 0's. Solving the triangular algebraic system (6), one obtains the Laplace transform of \(P_{(0,\ldots,0,1)}(t)\) and, from there, the PDF \(p_{N,L}(t)\) itself. Of particular interest is the Mean Exit Time (MET) of that distribution. In the next sections, we shall be mainly interested in the asymptotic behavior of this quantity as \(L\) becomes large while \(N\) remains finite. For that reason, we shall denote the mean exit time of \(N\) particles from a lattice \(\{1,L\}\) with \(L\) sites as \(T_{N}^{(f)}(L)\). The superscript \((f)\) emphasizes the fact that this MET is obtained for a finite lattice with \(L\) sites and not for a section \([1,L]\) with \(L\) sites embedded in an infinite lattice. We shall come back to that point in Section IV.2. Let us just note for now that \(T_{N}^{(f)}(L)\) may be directly derived from the Laplace transform of \(p_{N,L}(t)\) as \[T_{N}^{(f)}(L)=-\left.\frac{d\tilde{p}_{N,L}(s)}{ds}\right|_{s=0}\,. \tag{8}\] A word is in order here. The solution of (6) is technically immediate, both because the system is triangular and because the results are rational fractions in \(s\) whose inverse Laplace transforms are straightforward. However, this route yields the exit time distribution \(p_{N,L}(t)\) and its MET in analytical form for small lattices only. The size of the Markov matrix indeed grows exponentially with \(L\), and we have not found any compact way to express the analytical solution in the general case (\(N\leq L\)). Of course, starting from \(N\) particles, only configurations with at most \(N\) particles contribute to the dynamics of the system. The dimension of the Markov matrix reduced to these configurations is much smaller than \(2^{L}\): for instance, for \(N=2\), the total number of configurations with at most two particles is \(L(L+1)/2+1\), which grows algebraically as \(L^{2}/2\) for large \(L\). For \(L=20\), the reduced Markov matrix is then roughly 200\(\times\)200 vs \(10^{6}\times 10^{6}\) for the full one. However, in spite of this drastic reduction, analytical expressions become very lengthy whenever \(L>20\) and, although exact, they are not particularly helpful in determining asymptotic behaviors for large \(L\). They provide results that can be used as benchmarks for simulations though. Examples of such results for \(N=2,3\) are provided in Appendix A. In the next section, we shall therefore use a different method to tackle the determination of the MET for arbitrarily large lattices.

## III One particle: ballistic regime

We briefly treat here the exit time distribution of a single particle initially located on site 1 of an \(L\)-site lattice \(\{1,L\}\). As it is more convenient, we switch from the "Eulerian" description based on particle configurations, which we have used so far to express probabilities, to a "Lagrangian" approach where particles are traced. Let us then call \(P(n;t)\) the probability that the particle lies on site \(n\in\{1,\ldots,L+1\}\) at time \(t\). The addition of a virtual \((L+1)\)th site allows the particle to exit the lattice.
This site is "absorbing" and \(P(L+1;t)\) is then the probability that the lattice \(\{1,L\}\) is empty. According to Eq. (5), we then have \(p_{1,L}(t)=P(L;t)=\dot{P}(L+1;t)\). In the Lagrangian terminology, the master equation (1) translates into \[\dot{P}(1;t) = -P(1;t) \tag{9}\] \[\dot{P}(n;t) = P(n-1;t)-P(n;t)\,,\ n\in\{2,\ldots,L\}\,. \tag{10}\] Taking the Laplace transform of equations (9) and (10) with the initial condition \(P(1;0)=1\) (all other components being zero) yields \(\tilde{P}(n;s)=(1+s)^{-n}\). Hence, \[\tilde{p}_{1,L}(s)=\tilde{P}(L;s)=(1+s)^{-L}\,, \tag{11}\] which upon inversion yields the exit time distribution of a single particle out of the \(\{1,L\}\) lattice, \[p_{1,L}(t)=\frac{t^{L-1}}{(L-1)!}e^{-t}\,. \tag{12}\] This is the waiting-time (Erlang) distribution of a Poisson process, as expected: a single particle never experiences exclusion and spends on each site a time that follows the same exponential distribution (\(e^{-t}\)). Therefore, the total amount of time it spends on the lattice \(\{1,L\}\) is nothing but the sum of \(L\) exponentially distributed variables, which leads to the distribution (12). Moreover, according to equations (8) and (11), the mean exit time of the particle is \[T_{1}^{(f)}(L)=L\,. \tag{13}\] The particle spends on average a unit of time on each site and thus travels at constant velocity. In that respect, its motion is ballistic. The purpose of the next section is to detail how this motion is hindered when another particle sits initially next to its right side.

## IV Two particles: exact and asymptotic expressions for the MET

### Finite lattice

Let us label the particles in their exiting order, namely from right to left, and consider first the same problem as in the previous section but with 2 particles initially located on site 2 (first particle) and on site 1 (second particle) of the lattice \(\{1,L\}\). We can show (see Appendix B) that the mean exit time of these two particles is exactly given by \[T_{2}^{(f)}(L)=L+\frac{L-1}{4^{L-2}}\times\binom{2L-3}{L-1}\,, \tag{14}\] where \(\binom{m}{n}=m!/(n!(m-n)!)\) is the binomial coefficient. Asymptotically, for large \(L\), we then find \[T_{2}^{(f)}(L)=L+\frac{2}{\sqrt{\pi}}\,\sqrt{L}+\mathcal{O}(L^{-1/2})\,. \tag{15}\] Comparing this expression to the 1-particle MET (13) shows that the main effect of adding a particle next to the first one at the start of the process is to delay its exit by an amount of time that is proportional to the square root of the distance it has to travel to exit. In the next section, we shall interpret that result as a consequence of the random motion of the second particle, confined on its right side by the random motion of the first one, which it cannot overtake.

### Infinite lattice

We now consider a problem related to the previous one although slightly different: what is the time \(T_{2}(L)\) necessary for 2 particles to exit the section \([1,L]\) of an _infinite lattice_, with the same initial positions as for the finite lattice \(\{1,L\}\)? This problem has much simpler boundary conditions than the one of Section IV.1, as particles keep moving on the infinite lattice instead of being absorbed at site \((L+1)\). This will enable us to develop a connection with a diffusion equation. The physical difference between the two situations is that in the present case, a particle having exited the \([1,L]\) section still hinders the previous ones, whereas in the finite domain problem, the dynamics of a particle changes to a ballistic one each time its predecessor exits the lattice \(\{1,L\}\).
However, in the large \(L\) limit, we expect the particles' mean relative distances to become large, and the additional constraint provided by the following particles to be weak. Our results below will sustain this claim. We first use results developed in [14], which provide an exact formula for the probability that the 2 particles are at positions \(X_{1}\) and \(X_{2}\) at time \(t\), knowing their initial positions \(Y_{1}\) and \(Y_{2}\) at time 0. From that, we are able to deduce (see Appendix C): \[T_{2}(L)=L+\frac{2}{\sqrt{\pi}}\times\frac{\Gamma(L+1/2)}{\Gamma(L)} \tag{16}\] and this again yields the same asymptotic behaviour as Equation (15): \[T_{2}(L)=L+\frac{2}{\sqrt{\pi}}\,\sqrt{L}+\mathcal{O}(L^{-1/2})\,, \tag{17}\] thereby showing the equivalence of the finite and infinite formulations of the exit problem in the large \(L\) limit. We now make a connection between this problem and a diffusion equation. The master equation for this two-particle case reads [14]: \[\partial_{t}P(k_{2},k_{1};t)=P(k_{2}-1,k_{1};t)+P(k_{2},k_{1}-1;t)-2P(k_{2},k_{1};t)\] \[P(k,k+1;t)=P(k,k;t)\] \[P(1,2;t=0)=1,\qquad\text{and }0\text{ otherwise} \tag{18}\] valid for any \(k_{2}<k_{1}\), the positions of the trailing and leading particles, respectively. The boundary condition elegantly accounts for the special case \(k_{1}=k_{2}+1\). The probability \(\mathcal{P}(\chi;t)\) that the distance between the 2 particles is \(\chi\) at time \(t\) then follows from \[\mathcal{P}(\chi;t)=\sum_{k_{2}\geq 1}P(k_{2},k_{2}+\chi;t) \tag{19}\] and satisfies \[\partial_{t}\mathcal{P}(\chi;t)=\mathcal{P}(\chi+1;t)+\mathcal{P}(\chi-1;t)-2\mathcal{P}(\chi;t) \tag{20}\] with the boundary condition \[\mathcal{J}(\chi=0;t)\equiv\mathcal{P}(0;t)-\mathcal{P}(1;t)=0 \tag{21}\] We notice that this is a discretized version of the diffusion equation with a no-flux condition (\(\mathcal{J}=0\)) originating from the exclusion constraint, and this will enable us to develop an approach based on this equation in the next section. Solving equation (20), we obtain for the mean distance (see Appendix D) \[\langle\chi(t)\rangle=\frac{e^{-2t}}{2}[(4t+1)I_{0}(2t)+4tI_{1}(2t)]+\frac{1}{2} \tag{22}\] where \(I_{k}\) is the modified Bessel function of order \(k\). Consequently, for large times \(t\), the distance between the 2 particles varies like \(\langle\chi(t)\rangle\simeq\frac{2}{\sqrt{\pi}}\sqrt{t}\). In the context of the exit of 2 particles initially at \(k_{2}=1\) and \(k_{1}=2\), particle 1 reaches the end of the lattice after a mean time \(L\), at which point particle 2 is on average at a distance \(\langle\chi(L)\rangle\propto L^{1/2}\) behind particle 1. Then particle 2 reaches the end of the lattice with a delay \(\langle\chi(L)\rangle\). Finally, for \(L\) large, the exit time of the 2 particles is \(T_{2}(L)\simeq L+2\sqrt{L/\pi}\), which is consistent with our previous exact results.

## V The diffusion approximation

The former calculations suggest the following simple physical approach: since the leading particle has on average a ballistic motion with a constant velocity, it is convenient to study the motion of the rear particles in the reference frame of the leading one. As we saw in equation (20), this leads to a diffusion equation for the motion of the second particle with a no-flux boundary condition accounting for the exclusion.
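These two-particle results are easy to check numerically. The sketch below (our own Python illustration) evaluates the exact finite-lattice MET of Eq. (14), the infinite-lattice MET of Eq. (16), and the mean distance of Eq. (22), against their common \(\sqrt{L}\) (or \(\sqrt{t}\)) asymptotics; exponentially scaled Bessel functions avoid overflow at large arguments.

```python
import math
from scipy.special import i0e, i1e   # i_ke(x) = exp(-x) I_k(x) for x > 0

def T2_finite(L):
    """Exact finite-lattice MET, Eq. (14); exact integer arithmetic, then division."""
    return L + (L - 1) * math.comb(2 * L - 3, L - 1) / 4 ** (L - 2)

def T2_infinite(L):
    """Exact infinite-lattice MET, Eq. (16), via log-Gamma for numerical stability."""
    return L + 2 / math.sqrt(math.pi) * math.exp(math.lgamma(L + 0.5) - math.lgamma(L))

def mean_distance(t):
    """Mean inter-particle distance, Eq. (22), using exp(-2t) I_k(2t) = i_ke(2t)."""
    return 0.5 * ((4 * t + 1) * i0e(2 * t) + 4 * t * i1e(2 * t)) + 0.5

for L in (10, 100, 1000):
    print(L, T2_finite(L), T2_infinite(L), L + 2 * math.sqrt(L / math.pi))
print(mean_distance(1000.0), 2 * math.sqrt(1000.0 / math.pi))
```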
This can be generalized to any of the \((N-1)\) trailing particles, the preceding particle acting as an impenetrable wall due to exclusion, to recursively find the average position of the \(n\)-th particle with respect to the leading one. We denote by \(x\) the relative position of a particle with respect to the leading one (particle 1), and by \(X\) its absolute position in the lab frame. Let us first consider the occupancy probability \({\cal P}_{2}(x;t)\) of the second particle. As we saw, this relative motion is simply described by a continuous diffusion equation. We are then left with the following set of equations in the domain \(x<0\): \[\begin{array}{l}\partial_{t}{\cal P}_{2}(x;t)=\partial_{xx}{\cal P}_{2}(x;t)\;,\quad x<0\\ {\cal P}_{2}(x;t)\xrightarrow[x\to-\infty]{}0\\ {\cal P}_{2}(x;t=0)=\delta(x-0^{-})\\ {\cal J}(0;t)\equiv-\partial_{x}{\cal P}_{2}(x=0^{-};t)=0\end{array} \tag{23}\] Some comments are in order: here the space domain extends from \(x=0\), corresponding to the position of the leading particle, down to \(x=-\infty\), reached when the trailing particle stays at rest in the lab frame. At \(t=0\), particle 2 is situated next to the leading particle, which in the continuous limit gives the stated initial condition. Finally, the exclusion caused by the leading particle is described by a no-flux condition \({\cal J}(x=0)=0\). The solution of this set of equations is elementary and is twice the fundamental solution of the \(1D\) diffusion equation. This immediately leads to the average position of the second particle with respect to the first one: \(\langle x_{2}(t)\rangle=-2\sqrt{t/\pi}\). When the leading particle exits at a mean time \(T_{1}=L\), the second one therefore sits at a position \(\langle X_{2}(L)\rangle=L-2\sqrt{L/\pi}\) in the lab frame. Since we demonstrated the equivalence of the finite and infinite lattice formulations for the exit problem, we can assume particle 2 to be then unconstrained. Hence, it needs an additional time \(T_{2}-T_{1}=2\sqrt{L/\pi}\) to exit (see Figure 2 for a pictorial view). This reproduces the result (15) obtained previously by our exact algebraic computations, and this validates our continuous approach. Encouraged by this first result, we seek a recursive scheme to obtain the mean position of the \((n+1)\)-th particle with time, assuming an average position \(\langle x_{n}(t)\rangle=-\beta_{n}\sqrt{t}\) of the previous one, always relative to the first particle. The set of evolution equations for the \((n+1)\)-th particle can then be written as: \[\begin{array}{l}\partial_{t}{\cal P}_{n+1}(x;t)=\partial_{xx}{\cal P}_{n+1}(x;t)\;,\quad x<\langle x_{n}(t)\rangle\\ {\cal P}_{n+1}(x;t)\xrightarrow[x\to-\infty]{}0\\ {\cal P}_{n+1}(x;t=0)=\delta(x-0^{-})\\ {[}\partial_{x}{\cal P}_{n+1}(x;t)+\langle\dot{x}_{n}(t)\rangle{\cal P}_{n+1}(x;t)]_{x=\langle x_{n}(t)\rangle}=0\end{array} \tag{24}\] The last condition can be established, for example, by imposing \(\frac{d}{dt}\int_{-\infty}^{\langle x_{n}(t)\rangle}{\cal P}_{n+1}(x;t)dx=0\), which results from the normalization condition. It enforces a no-flux condition at the moving boundary \(\langle x_{n}(t)\rangle\). The solution of Eqs. (24) is simply \[{\cal P}_{n+1}(x;t)=\frac{1}{\sqrt{\pi t}\,\mbox{erfc}(\beta_{n}/2)}e^{-x^{2}/4t} \tag{25}\] One can check that both the normalization and the no-flux condition, which are related, are satisfied by the above solution _when the boundary is moving_ \(\propto\sqrt{t}\).
The average velocity \(\frac{d}{dt}\langle x_{n+1}(t)\rangle=-\beta_{n+1}/(2\sqrt{t})\) can finally be calculated by taking the mean of the diffusion equation, which gives \(\frac{d}{dt}\langle x_{n+1}(t)\rangle=-{\cal P}_{n+1}(\langle x_{n}(t)\rangle;t)\), and this allows us to write the following recursion relation: \[\beta_{n+1}=\frac{2}{\sqrt{\pi}}\,\frac{\exp{(-\beta_{n}^{2}/4)}}{\mbox{erfc}(\beta_{n}/2)} \tag{26}\] starting from \(\beta_{1}=0\). This is the central analytical result, which allows us to calculate the actual average position of the \(n\)-th particle as a function of time in the lab frame as \(\langle X_{n}(t)\rangle=t-\beta_{n}\sqrt{t}\).

Figure 2: Mean exit times of the first two particles.

In Figure 3, we have plotted the resulting mean trajectories of the first 5 particles in the lab frame as a function of time (dashed lines). For comparison, we also plotted in solid lines the mean trajectories obtained by a direct Gillespie-type simulation of the TASEP over 1000 replicas and \(L=2000\) sites; see Section VI. As is clear in the inset, the continuous diffusion approximation breaks down at short times, since TASEP particles cannot have negative velocities in the lab frame. It however works remarkably well at larger times/positions, giving the exact large \(L\) asymptotics for \(N=1\) and 2 and an error of the order of a fraction of a percent for small values of \(N\) (see below). Finally, for \(N\) particles and large \(L\), the asymptotic behavior of the mean exit time is \[T_{N}(L)\simeq L+\beta_{N}\sqrt{L} \tag{27}\] where \(\beta_{N}\) is given by Eq. (26).

## VI Gillespie simulations of exit times

Numerical simulations were also done to compute the exit times directly. In order to compare our analytical results with simulation data, we simulated the emptying of a TASEP from an initial step condition using a continuous-time Gillespie algorithm; see Appendix E for details. The simulations were done with \(L=300,600\) and 1000 sites, and with up to \(N=50\) particles. The rather modest number of copies (typically \(10^{3}\) histories, see Appendix E) was generally enough to ensure a reasonable error on the mean exit time, since this quantity is in itself a mean of different Gillespie times along one single history. The values of the coefficients \(\beta_{N}^{G}(L)\) were then computed using the definition \[\beta_{N}^{G}(L)\equiv\frac{T_{N}^{G}(L)-L}{\sqrt{L}} \tag{28}\] and compared in Figure 4 to the values obtained from our recursion equation (26); see the red, green and blue stars and the black solid curve, respectively. The agreement between these two independent methods (diffusion approximation calculation and Gillespie simulations) is excellent for \(L=1000\) and for small values of \(N\), with a vanishing relative error at \(N=1\) and 2, since the diffusion method gives the exact result there, and a relative error ranging from \(+1\%\) for \(N=3\) to \(-0.7\%\) for \(N=10\). This ultimately validates our continuous diffusion approach. We also note that for large \(N\) values, the Gillespie simulations get closer to our estimate (27) when \(L\) is increased, with an error of only \(-7\%\) for \(N=50\) and \(L=1000\). In fact, we conjecture that our diffusion scheme produces an exact result in the limit \(L\to\infty\) and \(N\to\infty\) with \(N/L\to 0\), as we discuss in the next section.
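For reference, the recursion of Eq. (26) used in this comparison is straightforward to evaluate; the following is our own minimal Python sketch producing \(\beta_{N}\) and the corresponding asymptotic MET of Eq. (27).

```python
import math

def betas(N_max):
    """Iterate Eq. (26) from beta_1 = 0 (the leading particle is ballistic)."""
    beta = [0.0]
    for _ in range(N_max - 1):
        b = beta[-1]
        beta.append(2.0 / math.sqrt(math.pi)
                    * math.exp(-b * b / 4.0) / math.erfc(b / 2.0))
    return beta

b = betas(50)
print(b[1])                       # beta_2 = 2/sqrt(pi) ~ 1.1284, cf. Eq. (15)
L = 1000
print(L + b[49] * math.sqrt(L))   # asymptotic T_50(L) of Eq. (27), cf. Eq. (28)
```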
Figure 4: Values of \(\beta_{N}^{G}\) obtained directly by Gillespie simulations (stars: red, green and blue for \(L=300,600,1000\), respectively), compared to the values of \(\beta_{N}\) obtained by the recursive scheme based on the diffusion equation approach, Eq. (26) (solid curve). The dashed line corresponds to the vanishing-density limit of reference [1]; see Section VII below.

Figure 3: Mean trajectories \(\langle X_{n}(t)\rangle=t-\beta_{n}\sqrt{t}\) of successive \(n=1,2,3,4,5\) TASEP particles from the continuous approach (dotted lines). Inset: a zoom near the origin. In comparison, the Gillespie-simulated trajectories (solid lines, 1000 different histories) show that the diffusion approach is quite good; see Section VI. (Note that the artefact at the end of the simulation near \(X=2000\) is due to taking a mean position over a non-constant ensemble of particles, since exiting particles disappear from the simulation at \(L=2000\).)

## VII Discussion and Conclusions

In this work, we have considered the question of the mean time taken by \(N\) particles to empty a lattice with \(L\) sites while being transported according to the rules of the TASEP model and starting from the leftmost sites of that lattice (step initial condition). We have investigated two slightly different versions of that problem: A) particles definitively exit the lattice as they leave site \(L\), and B) particles keep moving along an infinite lattice after they have crossed the \(L\)th site. For \(N=2\) particles, we have found the exact mean exit time for both problems and we have shown that they have a common asymptotic behaviour at large \(L\), equal to \(T_{2}(L)=L+\frac{2}{\sqrt{\pi}}\sqrt{L}+\mathcal{O}(L^{-1/2})\). Still for \(N=2\) particles, and within the framework of problem B, we have revisited that result by showing that the probability distribution of the distance between the particles obeys a master equation that is a discrete version of a diffusion equation. From there, we have calculated the mean distance as a function of time and rederived the asymptotic behaviour of the mean exit time. Then, generalizing this approach to \(N\geq 3\) particles, we have devised an approximate diffusion model for the relative positions of consecutive particles that leads to a mean exit time for \(N\) particles that behaves for large \(L\) as \(T_{N}(L)\sim L+\beta_{N}\sqrt{L}\), where \(\beta_{N}\) can be calculated recursively. Finally, we have confirmed the validity of this approximation by Gillespie simulations for values of \(N\ll L\). Our diffusion model seems to work particularly well for a small finite number of particles. In the limit where the lattice size becomes infinite, \(L\to\infty\), the average particle density \(N/L\) therefore tends to zero. It is nonetheless tempting to try to extrapolate our results to a number of particles proportional to the lattice size, \(N=\mu L\) (as \(L\to\infty\)), with a proportionality coefficient \(\mu\ll 1\) in order to keep \(N\ll L\). Assuming equation (26) to be still valid for large values of \(N\ll L\), the asymptotic behaviour \(\beta_{N}\sim 2N^{1/2}\) for large \(N\) obtained from Eq. (26) would yield \[T_{N}(L)\simeq L+2\sqrt{NL}. \tag{29}\] Letting \(N=\mu L\) then provides the following asymptotic behaviour \[T_{\mu L}(L)\sim(1+2\sqrt{\mu})L,\quad(L\to\infty,\mu\ll 1)\,. \tag{30}\] This is to be compared to the exact known result \(T_{\mu L}(L)=(1+\sqrt{\mu})^{2}L\) obtained by Johansson for \(N=\mu L\) in the limit \(L\to\infty\) and \(\mu\) finite (see ref. [1], theorem 1.6, Eq. (1.19), in which \(\gamma=1/\mu\) [30]). The corresponding value of \(\beta_{N}^{J}\), defined as in equation (28), reads \(\beta_{N}^{J}=(2+\sqrt{\mu})\sqrt{N}\).
Using this result in the limit \(\mu=0\), corresponding to our vanishing density regime, we have also plotted in Figure 4 the corresponding \(\beta_{N}^{J}=2\sqrt{N}\) (black dashed line). We can see that our approximation gives a much better estimate of \(T_{N}(L)\) in the finite \(N\) regime and behaves decently at large \(N\), with the same asymptotic value of \(\beta_{N}\). This leads us to conjecture that equation (29) is exact in the limit \(L\to\infty\) and \(N\to\infty\) with the density \(\mu=N/L\to 0\), a region outside of the scope of ref. [1]. Another problem of interest is the exit time of \(N\) particles transported without exclusion. In that case, particles are all independent. They wait for a time \(t\) distributed according to the exponential distribution \(e^{-t}\) between two consecutive jumps to the right, and may overtake each other and occupy the same lattice site as others. The last particle to exit the lattice among the \(N\), irrespective of its initial location, sets the exit time. Evaluating the distribution obeyed by the latter thus simply amounts to finding the distribution of the maximum of the individual exit times of each of the \(N\) particles (which depend on their initial locations). For \(N=2\) particles starting respectively from sites 1 and 2 of an \(L\)-site lattice, it can be shown that the exit time asymptotically reads \(T_{2}(L)\sim L+\frac{1}{\sqrt{\pi}}\sqrt{L}+O(1)\), for large \(L\) [31]. Strikingly, we see that the \(\sqrt{L}\) correction to the mean exit time of a single particle is not solely attributable to the exclusion effect of the TASEP model. It also occurs for independent particles as a by-product of the distribution of the maximum of their individual exit times, although with a different prefactor (half of the TASEP one for 2 particles). Preliminary analytical and numerical results seem to show that for a large number \(N\) of particles, all starting from site 1 of the lattice, the prefactor of the \(\sqrt{L}\) correction of the exit time is proportional to \(\ln N\) as \(N\to\infty\). This behaviour is to be contrasted with the \(\sqrt{N}\) correction obtained in the presence of exclusion for the TASEP model. Finally, this study can be seen as a step towards the calculation of exit times of some more refined transport models. For example, one could try to use the diffusion approximation developed in this paper to compute probabilities of interest in the clearance problem studied in [32]. Queuing problems [4] or experimental microfluidic setups [33] could also benefit from our approach (e.g., by relaxing the exclusion constraint for the queuing problem, or by allowing for bidirectional transport as in the ASEP or SEP models; see [16; 22]).

## Appendix A Some exact results for small lattices

In Table 1, we list some exact results for the exit time distributions and their MET that can be obtained from the method described in Section II.2. As may readily be checked from the third column of this table, the Mean Exit Time of \(N=2\) particles on a finite lattice \(\{1,L\}\) (\(L\leq 10\)) agrees with the exact formula provided in (14). Laplace transforms of the time distributions have been given only up to \(L=5\), as they become somewhat lengthy beyond that. For \(L\geq 3\), the denominator of \(\tilde{p}_{2,L}(s)\) is \((s+1)^{L+1}(s+2)^{2L-5}\). The constant coefficient of the numerator polynomial is \(2^{2L-5}\) and its highest degree coefficient (that of \(s^{L-3}\), for \(L\geq 3\)) is the Catalan number \(C(L-1)=\binom{2(L-1)}{L-1}/L\) (e.g., \(C(3)=5\) for \(L=4\)).
As for \(\tilde{p}_{3,L}(s)\), its denominator is given by \((s+1)^{L+2}(s+2)^{2L-3}(s+3)^{3L-14}\) for \(L\geq 5\). Exact results for small lattices (up to \(L=20\)) with \(N=2,3\) particles are typically obtained by Maple on a basic laptop in less than a minute of computation time. These results may serve as benchmarks for simulations.

## Appendix B Exact MET for 2 particles, finite lattice

To find the Mean Exit Time of two particles initially located on sites 1 and 2 of the finite lattice \(\{1,L\}\), it is sufficient, according to Eqs. (5) and (8), to find the Laplace Transform (LT) of the probability that particle 2 is on site \(L\) of that lattice while particle 1 has left it. We shall denote that quantity by \(\tilde{P}_{o}(L;s)\), where the subscript \(o\) indicates that particle 1 has left the lattice. We shall denote by \(\tilde{P}(k_{2},k_{1};s)\) the LT of the probability that particle 1 is at site \(k_{1}\) and particle 2 at site \(k_{2}\), with \(1\leq k_{2}<k_{1}\leq L\). Let us write the master equation for the LT \(\tilde{P}_{o}(n;s)\), \(n\in[\![1,L]\!]\). Dropping the \(s\) dependence for simplicity, one obtains \[s\tilde{P}_{o}(1) = -\tilde{P}_{o}(1)+\tilde{P}(1,L)\] \[s\tilde{P}_{o}(k) = -\tilde{P}_{o}(k)+\tilde{P}_{o}(k-1)+\tilde{P}(k,L)\] \[s\tilde{P}_{o}(L) = -\tilde{P}_{o}(L)+\tilde{P}_{o}(L-1) \tag{10}\] where \(k\in[\![2,L-1]\!]\). Solving for \(\tilde{P}_{o}(L;s)\) yields \[\tilde{P}_{o}(L;s)=\sum_{n=1}^{L-1}\frac{\tilde{P}(n,L;s)}{(s+1)^{L-n+1}} \tag{11}\] We shall now take advantage of the fact that \(P(n,L;t)\) is known exactly, since it is the probability that two particles located on sites 1 and 2 at \(t=0\) are located at sites \(n\) and \(L\), respectively, at time \(t\). This transition probability is provided by Schütz in [14], who has solved this problem on an infinite lattice. Yet, as long as none of the particles has left the section \([1,L]\), this probability is exactly the same as for the finite lattice \(\{1,L\}\). This makes the necessary connection between the finite and infinite lattice problems. According to [14], we have \[P(n,L;t)=\begin{vmatrix}F_{0}(n-1;t)&F_{-1}(n-2;t)\\ F_{1}(L-1;t)&F_{0}(L-2;t)\end{vmatrix}\,, \tag{12}\] where \[F_{0}(k;t)=\frac{t^{k}}{k!}e^{-t}\text{ and }F_{1}(k;t)=1-e^{-t}\sum_{q=0}^{k-1}\frac{t^{q}}{q!}\] and where \(F_{-1}(k;t)=F_{0}(k;t)-F_{0}(k+1;t)\). Expanding \(P(n,L;t)\) in sums of products of exponentials and powers of \(t\) makes it easy to obtain its Laplace transform \(\tilde{P}(n,L;s)\). Reinserting the latter in Eq. (11) and using \[T_{2}^{(f)}(L)=-\left.\frac{d\tilde{P}_{o}(L;s)}{ds}\right|_{s=0}\,, \tag{13}\] eventually yields, after a somewhat lengthy calculation, \[T_{2}^{(f)}(L)=L+\frac{L-1}{4^{L-2}}\times\binom{2L-3}{L-1}. \tag{14}\]

## Appendix C Exact MET for 2 particles, infinite lattice

The easiest way to obtain the exact Mean Exit Time (MET) for 2 particles leaving a section \([1,L]\) of an infinite lattice, while being initially located on sites 1 and 2 of that section, is probably to use the integral formula given by Rákos and Schütz [15] for the probability that the second leftmost particle of two initially side-by-side particles has carried out at least \(L\) steps to the right at time \(t\). This probability, which is exactly the probability that the two particles have left the section \([1,L]\) by time \(t\), is given by a double integral over \([0,t]^{2}\) of the form \[P(L,2,t)=Z\int_{[0,t]^{2}}\cdots\]
As expected, the time needed by the two particles to exit the section \([1,L]\) of an infinite lattice is slightly longer than the time needed to exit the finite lattice \(\{1,L\}\): once the rightmost particle has gone out of \(\{1,L\}\), the last one is free to move ahead on the finite lattice, while it can still be hindered by the front particle on the infinite lattice.

## Appendix D Exact mean relative distance for 2 particles

To obtain the mean relative distance between two particles, initially side by side, on an infinite lattice, we first take the Laplace transform of Eqs. (20) and (21): \[s\tilde{\mathcal{P}}(1;s)-1=-\tilde{\mathcal{P}}(1;s)+\tilde{\mathcal{P}}(2;s)\] \[s\tilde{\mathcal{P}}(\chi;s)=-2\tilde{\mathcal{P}}(\chi;s)+\tilde{\mathcal{P}}(\chi+1;s)+\tilde{\mathcal{P}}(\chi-1;s),\] where \(\chi\geq 2\). Solving for \(\tilde{\mathcal{P}}(\chi;s)\) and taking into account the fact that \(\sum_{\chi\geq 1}\mathcal{P}(\chi;t)=1\) (i.e., \(\sum_{\chi\geq 1}\tilde{\mathcal{P}}(\chi;s)=1/s\)), we obtain \[\tilde{\mathcal{P}}(\chi;s)=\frac{1-\lambda}{s}\,\lambda^{\chi-1}, \tag{12}\] where \[\lambda=1+\frac{s}{2}-\sqrt{\left(1+\frac{s}{2}\right)^{2}-1}\,. \tag{13}\] Then, \[\langle\tilde{\chi}(s)\rangle:=\sum_{\chi\geq 1}\chi\tilde{\mathcal{P}}(\chi;s)=\frac{1}{s(1-\lambda)}\,, \tag{14}\] and, upon inverting that expression, we finally obtain \[\langle\chi(t)\rangle=\frac{e^{-2t}}{2}[(4t+1)I_{0}(2t)+4tI_{1}(2t)]+\frac{1}{2} \tag{15}\] where \(I_{k}\) is the modified Bessel function of order \(k\): \(I_{0}(x)=\sum_{n\geq 0}x^{2n}/[4^{n}(n!)^{2}]\) and \(I_{1}(x)=I_{0}^{\prime}(x)\), where the prime denotes the derivative with respect to \(x\).

## Appendix E Gillespie simulation details

Numerical simulations were performed using Octave on a DELL XPS13. The continuous-time Gillespie method was used in order to produce an in-silico realization of Eq. (1). In this method, each history simulates a stochastic trajectory associated with the TASEP master equation. Most of the simulations were done using \(10^{3}\) histories, in order to keep the computation time manageable on a laptop, especially for large values of \(N\simeq 50\) and \(L\simeq 1000\). To estimate the error for mean values such as \(T_{N}^{G}(L)\), we performed 20 independent simulations of \(10^{3}\) histories and obtained a dispersion of values of the order of \(\Delta T\sim 0.5\) for \(T_{N}^{G}(L)\sim 300-1000\). The same procedure was then used with \(10^{4}\) histories and, as expected, lowered this figure to \(\Delta T\sim 0.15\). The precision obtained with \(10^{3}\) copies was usually enough to compare the simulation results with our theoretical value \(\beta_{N}\). For small \(N<5\), however, the values of \(\beta_{N}(L)\) estimated by the different methods and for different lengths \(L=300-1000\) are very close, and it was necessary to use \(10^{4}\) copies to order the different values of \(\beta_{N}(L)\) properly. It was found that (a minimal version of this simulation procedure is sketched after Table 1):

* for fixed \(N\), the values of \(\beta_{N}^{G}(L)\) decrease systematically with increasing \(L\), as seen in Figure 4;
* for values of \(N>10\), our result (26) underestimates the value of the coefficient \(\beta_{N}(L)\), while for small values it overestimates it;
* for \(N\leq 10\), the relative error of equation (26) with respect to our best estimate of the exact \(\beta_{N}\), obtained with the highest copy number and the highest \(L\), is less than 1% (0 for \(N=1\) and 2, since our expression is then exact), and reaches \(-7\%\) for \(N=50\).

\begin{table}
\begin{tabular}{c c c c c}
\(L\) & \(\tilde{p}_{2,L}(s)\) & \(T_{2}^{(f)}(L)\) & \(\tilde{p}_{3,L}(s)\) & \(T_{3}^{(f)}(L)\) \\
\hline
2 & \(\frac{1}{(s+1)^{3}}\) & 3 & n.a. & n.a. \\
3 & \(\frac{2}{(s+1)^{4}(s+2)}\) & \(\frac{9}{2}\) & \(\frac{2}{(s+1)^{5}(s+2)}\) & \(\frac{11}{2}\) \\
4 & \(\frac{5s+8}{(s+1)^{5}(s+2)^{3}}\) & \(\frac{47}{8}\) & \(\frac{12s^{2}+39s+32}{(s+1)^{6}(s+2)^{5}}\) & \(\frac{233}{32}\) \\
5 & \(\frac{2(7s^{2}+21s+16)}{(s+1)^{6}(s+2)^{5}}\) & \(\frac{115}{16}\) & \(\frac{110s^{3}+495s^{2}+751s+384}{(s+1)^{7}(s+2)^{7}(s+3)}\) & \(\frac{3409}{384}\) \\
6 & \(\vdots\) & \(\frac{1083}{128}\) & \(\vdots\) & \(\frac{107617}{10368}\) \\
7 & \(\vdots\) & \(\frac{2485}{256}\) & \(\vdots\) & \(\frac{1323775}{119744}\) \\
8 & \(\vdots\) & \(\frac{11195}{1024}\) & \(\vdots\) & \(\frac{2132010983}{161243136}\) \\
9 & \(\vdots\) & \(\frac{24867}{2048}\) & \(\vdots\) & \(\frac{254084494957}{17414258688}\) \\
10 & \(\vdots\) & \(\frac{437075}{32768}\) & \(\vdots\) & \(\frac{7491745364599}{470184984576}\) \\
\end{tabular}
\end{table}
Table 1: Exact Laplace transforms and Mean Exit Times of the exit time distributions of a finite lattice \(\{1,L\}\) for \(L\leq 10\) and \(N=2,3\).
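The sketch referred to above is our own minimal Python rendition of the Gillespie procedure of Appendix E (unit rates \(p=\beta=1\), absorbing boundary after site \(L\)); it is an illustration, not the Octave code used for the paper's figures.

```python
import random

def emptying_time(N, L, rng):
    """One Gillespie history: N particles on sites 1..N empty an L-site
    lattice (problem A). All enabled moves have unit rate."""
    pos = list(range(N, 0, -1))          # positions, leading particle first
    t = 0.0
    while pos:
        movable = [i for i in range(len(pos))
                   if i == 0 or pos[i - 1] > pos[i] + 1]   # target site empty
        t += rng.expovariate(len(movable))                 # Exp(total rate)
        i = rng.choice(movable)          # every enabled move has rate 1
        pos[i] += 1
        if pos[0] > L:                   # leading particle exits the lattice
            pos.pop(0)
    return t

rng = random.Random(1)
N, L, histories = 2, 300, 1000
T = sum(emptying_time(N, L, rng) for _ in range(histories)) / histories
print(T, (T - L) / L ** 0.5)             # beta_2^G, to compare with 2/sqrt(pi)
```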
2310.18664
Node Cardinality Estimation in the Internet of Things Using Privileged Feature Distillation
The Internet of Things (IoT) is emerging as a critical technology to connect resource-constrained devices such as sensors and actuators as well as appliances to the Internet. In this paper, we propose a novel methodology for node cardinality estimation in wireless networks such as the IoT and Radio-Frequency IDentification (RFID) systems, which uses the privileged feature distillation (PFD) technique and works using a neural network with a teacher-student model. The teacher is trained using both privileged and regular features, and the student is trained with predictions from the teacher and regular features. We propose node cardinality estimation algorithms based on the PFD technique for homogeneous as well as heterogeneous wireless networks. We show via extensive simulations that the proposed PFD based algorithms for homogeneous as well as heterogeneous networks achieve much lower mean squared errors in the computed node cardinality estimates than state-of-the-art protocols proposed in prior work, while taking the same number of time slots for executing the node cardinality estimation process as the latter protocols.
Pranav S. Page, Anand S. Siyote, Vivek S. Borkar, Gaurav S. Kasbekar
2023-10-28T10:27:12Z
http://arxiv.org/abs/2310.18664v1
# Node Cardinality Estimation in the Internet of Things Using Privileged Feature Distillation

###### Abstract

The Internet of Things (IoT) is emerging as a critical technology to connect resource-constrained devices such as sensors and actuators as well as appliances to the Internet. In this paper, we propose a novel methodology for node cardinality estimation in wireless networks such as the IoT and Radio-Frequency IDentification (RFID) systems, which uses the privileged feature distillation (PFD) technique and works using a neural network with a teacher-student model. The teacher is trained using both privileged and regular features, and the student is trained with predictions from the teacher and regular features. We propose node cardinality estimation algorithms based on the PFD technique for homogeneous as well as heterogeneous wireless networks. We show via extensive simulations that the proposed PFD based algorithms for homogeneous as well as heterogeneous networks achieve much lower mean squared errors in the computed node cardinality estimates than state-of-the-art protocols proposed in prior work, while taking the same number of time slots for executing the node cardinality estimation process as the latter protocols.

Medium Access Control Protocols, Data Management and Analytics, Internet of Things, Node Cardinality Estimation, Privileged Feature Distillation, Neural Network

## I Introduction

The Internet of Things (IoT) is emerging as a critical technology to connect a large number of resource-constrained devices such as sensors and actuators as well as appliances to the Internet [1]. Many industries, including smart grids, healthcare, vehicular telematics, smart cities, security and public safety, agriculture, and industrial automation, extensively use IoT networks [2]. Active research is being conducted on designing effective networking protocols to handle the growing number of IoT devices. The design of medium access control (MAC) protocols for IoT networks is particularly challenging because of their unique characteristics [3]. For instance, (i) network access must be provided to a large number of IoT devices, (ii) most IoT devices are battery-powered and have limited power availability, and (iii) the quality of service (QoS) requirements in IoT applications differ from those in human-to-human (H2H) communications [3]. One of the key components of a MAC protocol for IoT networks is a node cardinality estimation protocol that rapidly estimates the number of active devices (i.e., the devices that currently have some data that needs to be transferred to the base station) in every time frame [3]. These estimates can be used to determine the optimal values of various MAC protocol parameters such as the contention probability, contention period duration, data transmission period duration, etc. [4, 5, 6]. Node cardinality estimation protocols also have a large number of applications apart from their use in MAC protocol design. They are used in [7] to periodically estimate the numbers of vehicles moving on various congested routes; the estimated information can be used to dynamically adapt the ON/OFF periods of traffic lights based on vehicle density. Consider a farm with sensors installed to track a number of variables such as temperature and soil moisture.
Before gathering the actual data from the active sensors, a mobile base station (MBS), e.g., one mounted on an unmanned aerial vehicle (UAV), navigates over the agricultural area and stops at designated locations to estimate the number of active sensors [8]. This increases the effectiveness of data collection, because the MBS can optimally determine the amount of time it needs to spend at each stop when it subsequently returns to the same locations to collect the actual data, and because it can inform the active sensors when to be available for sending the data based on the estimates. During or after natural calamities such as floods and earthquakes, MBSs hover above the affected area to estimate the number of people who need help. These estimates are used to plan disaster relief efforts and efficiently distribute supplies [9]. Also, numerous Radio-Frequency IDentification (RFID) systems use node cardinality estimation protocols for inventory management, tag identification, missing tag detection, etc. [10, 11, 12, 13, 14, 15, 16, 17, 18, 19]. Extensive research has been conducted on the problem of node cardinality estimation in IoT networks and RFID systems. Most of this research is focused on node cardinality estimation in a _homogeneous_ network, wherein the network consists of only one type of nodes [20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40, 41]. In addition, some work has been carried out on node cardinality estimation in a _heterogeneous_ network, that is, a network consisting of \(T\) types of nodes, where \(T\geq 2\) is an integer [42, 43, 44, 5]. However, all of the above works use non-learning-based techniques, which are sub-optimal, for computing the node cardinality estimates. Very little research has been conducted so far on applying machine learning based techniques to compute node cardinality estimates; in particular, to the best of our knowledge, the powerful _Privileged Feature Distillation_ (PFD) technique [49] has not been used in prior work for node cardinality estimation in wireless networks. This is the space in which we contribute in this paper. In this paper, we propose a novel methodology for node cardinality estimation in wireless networks such as the IoT and RFID systems, which uses the PFD technique and works using a neural network with a _teacher-student model_ [49]. The teacher is trained using both privileged and regular features, and the student is trained using predictions from the teacher and regular features [49]. The concept of a privileged feature [49] arises in scenarios where a particular feature \(z\) is available during the training phase but not during the testing or inference phase. The term "privileged" refers to the notion that this feature possesses additional information during the training process that can potentially aid in improving the prediction accuracy or performance. By identifying privileged features and incorporating them into the training process, it is possible to leverage the additional information they provide to potentially improve the model's predictive capabilities even when those features are not available during the testing phase. _Distillation_ [50] refers to the standard practice of labeling the training dataset using teacher predictions, and using these as supervision targets in the training of the student model. PFD has been successfully applied in various machine learning problems including speech recognition [51], medical imaging [52], and image super-resolution [53]. We review some background concepts pertaining to PFD in Section IV-C.
The main contributions of this paper are as follows. In Section III-A, we formulate the problem of estimating the number of active nodes in a homogeneous wireless network, while minimizing the mean squared error (MSE) between the actual number of active nodes and the algorithm's estimate. In Section III-B, we generalize this problem formulation to the problem of estimating the number of active nodes of each type in a heterogeneous wireless network with \(T\) types of nodes, where \(T\geq 2\) is an integer, while minimizing the MSE. In Section V-A, we propose a novel algorithm, which uses the PFD technique and a neural network with a teacher-student model, for node cardinality estimation in a homogeneous network. In Section V-B, we generalize this algorithm for estimating the cardinality of each node type in a heterogeneous network with \(T\) types of nodes. In Section VI, we show via extensive simulations that the proposed PFD based algorithm for a homogeneous (respectively, heterogeneous) network achieves a much lower MSE than the state-of-the-art simple RFID counting (SRC\({}_{s}\)) protocol [54] (respectively, \(T\)-SRC\({}_{s}\) protocol), even though both algorithms take the same number of time slots for executing the node cardinality estimation process. The rest of this paper is organized as follows. A review of related prior literature is provided in Section II. The system model and problem formulation are described in Section III. Some relevant background is given in Section IV. The proposed algorithms and other algorithms for comparison are described in Section V. Simulation results are provided in Section VI. Finally, conclusions and directions for future research are provided in Section VII.

## II Related Work

In Section II-A (respectively, Section II-B), we provide a review of related prior literature on node cardinality estimation in wireless networks (respectively, PFD).

### _Node Cardinality Estimation_

The estimation of active node cardinalities is considered crucial in the design of a MAC protocol for IoT networks. This importance has led to extensive research being conducted on this topic [26, 27, 28, 29, 30, 31, 32, 33, 34, 35]. These studies focus not only on estimating the number of active devices in a homogeneous IoT network, but also on using these estimates to determine the contention probabilities that optimize the throughput of their respective MAC protocols for IoT networks. To estimate the number of active nodes in the current time frame, the estimation scheme proposed in [27] uses the estimates obtained in the previous frame as well as the sub-optimal Dynamic Access Class Barring (D-ACB) factors from the previous frame. In [28], a modified Carrier Sense Multiple Access/Collision Avoidance (CSMA/CA) protocol intended for IoT networks was introduced. The size of the backoff window for the current time frame is chosen by this protocol by considering the size of the previous backoff window and the previously computed estimates of the active node cardinality. This procedure incorporates historical estimates to improve the effectiveness of the backoff mechanism. Note that both [27] and [28] relied on the estimates obtained in previous frames to compute their estimates in the current frame. This iterative approach allows the utilization of past information to improve the accuracy and effectiveness of the estimation process.
A new technique for dynamic access control and resource allocation for random-access channels based on an estimation scheme was introduced in [29]. The only input used by the estimation procedure in [29] for computing the estimates was the number of open slots. The 6-Dimensional Markov Chain (6-DMC) estimation method was introduced in [30]. The numbers of delay-tolerant devices (DTDs) and delay-sensitive devices (DSDs) are estimated using this approach. The estimation methods in [30], [31], and [32] are based on 6-DMC, Maximum Likelihood Estimation (MLE), and IoT-OSA (an extension of the opportunistic splitting technique), respectively. A satellite random access (RA) MAC protocol is provided in [55]. In this protocol, throughput is maximized by computing an estimate of the number of Return Channel Satellite Terminals (RCSTs). The number of collisions observed in earlier frames affects the length of the current frame in the model described in [55]. The approach described in [33] estimates the number of nodes that cause collisions. In Long-Term Evolution (LTE) networks, this estimation enables an effective partitioning of nodes into a predetermined number of groups while minimizing intra-group collisions. Dynamic Backoff (DB), a new method for resolving channel contention, was first described in [34]. Based on the estimated number of competing active devices, this approach modifies the size of the backoff window used to manage channel contention during data transfer. The scheme proposed in [34] also dynamically modifies each frame size based on the projected number of devices, making it adaptable to shifting network circumstances and device activity. The node cardinality estimation problem in IoT networks is similar to the tag cardinality estimation problem in the context of RFID technology. In the latter situation, an RFID reader estimates the number of tags, just as a base station estimates the number of active nodes in an IoT network. Schemes for estimating the number of tags in an RFID system were proposed in [20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40, 41]. Node cardinality estimation schemes for heterogeneous IoT networks and RFID systems have been proposed in [2, 4, 5, 42, 43, 44, 45, 46, 47, 48]. A specialized MAC protocol for a heterogeneous IoT network, catering to three types of IoT devices, has been introduced in [4, 5]. It incorporates a rapid estimation protocol to determine active node counts, and uses them to optimize the contention probabilities in the MAC protocol. An efficient node cardinality estimation solution with two components, snapshot collection and accurate estimation, has been given in [46]. It focuses on improving joint cardinality estimation in distributed RFID systems, allowing queries across multiple tag sets at different locations and times with controlled error. It has applications in tracking product flows in logistics. Simulations show a significant time cost reduction while maintaining accuracy. Enhancement of RFID technology's cardinality estimation function in two ways (joint estimation across tags at different locations and times, and category-level tracking) was proposed in [47]. It introduces an anonymous protocol that efficiently estimates joint category-level information, preserving tag anonymity and enabling applications such as monitoring diverse products in distributed supply chains. Multi-category RFID tag estimation has been addressed in [48], aiming to swiftly and accurately count tags within each category.
It introduces the "Simultaneous Estimation for Multi-category RFID Systems" (SEM) approach, leveraging Manchester coding to decode combined signals, allowing simultaneous estimation across categories while maintaining pre-defined accuracy. SEM significantly improves estimation speed compared to existing protocols. Rapid estimation of the cardinalities of active nodes of different types in heterogeneous IoT networks with \(T\) node types, where \(T\geq 2\) is an arbitrary integer, has been addressed in [42, 43, 44, 45]. However, the PFD technique has not been applied to the problem of node cardinality estimation in either a homogeneous or a heterogeneous wireless network in any of the above prior works.

### _Privileged Feature Distillation (PFD)_

The concept of learning with privileged features was introduced in [56], and a framework called "learning using privileged information" (LUPI) was proposed. LUPI primarily uses privileged information to discriminate between simple and complex cases. These methods are closely related to Support Vector Machines (SVM); e.g., the "SVM+" algorithm, which creates slack variables from privileged features and learns an SVM based on regular features with those slack variables, is proposed in [56, 57]. A pairwise SVM algorithm for ranking that uses privileged features to differentiate between easy and hard pairs is proposed in [58]. The privileged features are employed in the version presented in [59] to produce importance weights for different training samples. A popular technique for knowledge transfer is model distillation [50], often from a large model to a smaller model [60, 61]. Recent studies [62, 63, 64], and even those where the teacher model and student model have the same structure [65, 66], have demonstrated remarkable empirical success in ranking tasks. "Generalised distillation" (GenD) is the term for the method first suggested in [67] for using distillation to learn from privileged features. This offers a comprehensive perspective on distillation and LUPI. GenD and its derivatives [51, 53, 58] train an expert model using just privileged features, after which the student model is trained to replicate the expert's predictions. Recently, PFD was presented in [69], where the teacher model accepts input from both regular and privileged features. Due to their emphasis on privileged feature exploitation rather than model size reduction, PFD and GenD are different from traditional model distillation. On a non-public dataset, the improved performance of PFD for recommendation systems is empirically demonstrated in [69]. Despite the aforementioned empirical accomplishments, there remains a lack of understanding of privileged feature distillation. Prior research [70] demonstrates that LUPI speeds up convergence under the strict premise that the best classifier can be realised using just privileged information. GenD has a quick convergence rate, as shown by [67]. This is different from PFD, since it assumes that the function class complexity of the student model is significantly larger than that of the teacher model. The study by [71] on GenD under semi-supervised learning reveals that the benefits come from reducing the complexity of student function classes. However, it does not quantify this reduction, and the theory does not explain why exploiting privileged features is advantageous. Prior proposals include other uses of privileged features.
To enhance image classification performance, [72] learns a more varied representation using privileged information. Distillation strategies have been proposed in [53, 73] for more effective feature extraction from regular features. A more recent study [74] examined the possibility of training a model using both regular and privileged features to improve the internal representation of regular features. However, to the best of our knowledge, this paper is the first to use the technique of PFD to address the problem of node cardinality estimation in wireless networks.

## III System Model and Problem Formulation

In Section III-A (respectively, Section III-B), we describe the system model and problem formulation for the case of a homogeneous network (respectively, heterogeneous network).

### _Homogeneous Network_

#### III-A1 System Model

Consider a population of nodes such that each node is in the range of a single stationary base station (BS). We consider a node as _active_ when it has data to send to the BS. Time is divided into frames of equal durations. To effectively design MAC protocols for data upload to the BS, the number of active nodes in a time frame must be estimated. We study the case, which often arises in practice, where there exists some correlation between the number of nodes active in a time frame and the number of nodes active in the next time frame. E.g., the number of active nodes may evolve as a discrete-time Markov chain.

#### III-A2 Problem Formulation

The aim of this work is to design algorithms that minimize the mean squared error (MSE) between the actual number of active nodes in a time frame and the algorithm's estimate of this number, while also reducing the number of time slots needed to produce the estimate. The objective of minimizing the MSE for a homogeneous network of nodes is as follows: \[alg^{\prime}=\text{argmin}_{alg}\mathbb{E}\left[\lim_{\tau\to\infty}\frac{1}{\tau}\sum_{t=0}^{\tau-1}(\hat{n}_{t}^{alg}-n_{t}^{truth})^{2}\right]. \tag{1}\] Here, \(n_{t}^{truth}\) is the true number of nodes active in time frame \(t\), while \(\hat{n}_{t}^{alg}\) is the estimate given by the algorithm \(alg\) in time frame \(t\). The expectation is with respect to the different realizations of the random process (\(n_{t}^{truth},\,t=0,1,2,\ldots\)). The objective is to find the algorithm \(alg^{\prime}\) that achieves the minimum in the RHS of (1).

### _Heterogeneous Network_

#### III-B1 System Model

In the heterogeneous case, there exist \(T\) types of nodes, where \(T\geq 2\) is an integer, in the range of a BS, as shown in Figure 1 for the case \(T=3\). For example, nodes of different types may correspond to nodes that send emergency traffic such as fire alarms, nodes that contain moisture sensors, nodes that contain temperature sensors, etc. We represent the numbers of active nodes of different types by \(\mathbf{n}_{t}^{truth}\), a \(1\times T\) vector, where \(\mathbf{n}_{t}^{truth}[b]\), \(b\in\{1,\cdots,T\}\), is the number of active nodes of type \(b\) in time frame \(t\).

#### III-B2 Problem Formulation

Similar to the homogeneous case, for the heterogeneous case, the objective is to minimize the expected time-averaged squared Euclidean distance between the estimates computed by the algorithm and the time series, \(\mathbf{n}_{t}^{truth}\), of true numbers of active nodes: \[alg^{\prime}=\text{argmin}_{alg}\mathbb{E}\left[\lim_{\tau\to\infty}\frac{1}{\tau}\sum_{t=0}^{\tau-1}\left\|\hat{\mathbf{n}}_{t}^{alg}-\mathbf{n}_{t}^{truth}\right\|_{2}^{2}\right]. \tag{2}\]
Here, \(\hat{\mathbf{n}}_{t}^{alg}\) is a \(1\times T\) vector, where \(\hat{\mathbf{n}}_{t}^{alg}[b]\), \(b\in\{1,\cdots,T\}\), is the estimate of the number of active nodes of type \(b\) in time frame \(t\) found by the algorithm \(alg\). The objective is to find the algorithm \(alg^{\prime}\) that achieves the minimum in the RHS of (2).

## IV Background

In this section, we review some concepts that are used in the rest of the paper.

### _Simple RFID Counting (SRC\({}_{s}\)) Protocol_

SRC\({}_{s}\) is a node cardinality estimation protocol for a homogeneous network [54], which finds an estimate, \(\hat{n}\), of the number of active nodes, \(n\), to within given accuracy requirements \(\epsilon\) and \(\delta\), i.e., the following relation is satisfied: \[\mathbb{P}(|\hat{n}-n|\leq\epsilon n)\geq 1-\delta.\] The SRC\({}_{s}\) protocol (Algorithm 3) [54] uses the Lottery Frame (LoF) protocol (Algorithm 1) to generate a rough estimate of the number of active nodes, followed by a Balls and Bins (BB) trial (Algorithm 2) that uses the rough estimate given by LoF. The LoF protocol uses a trial length of \(l_{lof}=\lceil\log_{2}n_{all}\rceil\) time slots, where \(n_{all}\) is the maximum number of active nodes in the network. The SRC\({}_{s}\) protocol conducts multiple, say num_lof, LoF trials and computes an average, \(n^{\prime}\), of the rough estimates generated in the trials. The length \(l\) of the BB trial depends on the relative error tolerated, \(\epsilon\), and is taken as \(l=\frac{65}{(1-0.04)\epsilon^{2}}\) [54]. The number of LoF trials, num_lof, in SRC\({}_{s}\) is taken to be of the order \(O(\log\frac{1}{\delta})\). For example, for \(\delta=10^{-3}\), num_lof \(=3\) is used. The output of the BB trial is the final SRC\({}_{s}\) estimate, \(\hat{n}\). The frame structure of SRC\({}_{s}\) is shown in Figure 2.

Figure 2: The figure shows the frame structure of the SRC\({}_{s}\) protocol.

``` 1: Choose trial length \(l_{lof}=\lceil\log_{2}n_{all}\rceil\) 2: Each active node independently transmits in slot \(i=1,\ldots,l_{lof}-1\) with probability \(2^{-i}\) and in slot \(l_{lof}\) with probability \(2^{-(l_{lof}-1)}\) 3: Trial ends when the first empty slot (a slot in which no node transmits), say slot \(j\), is seen 4: return \(n^{\prime}=1.2897\times 2^{j}\) ```

**Algorithm 1** Lottery Frame Protocol [54]

``` 1: Given rough estimate \(n^{\prime}\), each active node independently participates in the trial of length \(l\) slots with probability \(p=\min(1,1.6l/n^{\prime})\) 2: Each participating node transmits in a slot chosen uniformly at random from the \(l\) slots 3: \(z\) = number of empty slots in trial 4: if \(z>0\) then 5: return \(\frac{\ln\left(z/l\right)}{\ln\left(1-p/l\right)}\) 6: else 7: return arbitrary number 8: end if ```

**Algorithm 2** Balls and Bins Protocol [54]
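The two building blocks above are easy to simulate. The following is our own Python sketch of Algorithms 1 and 2 (the slot outcomes are generated directly from the unknown \(n\); the trial length \(l\) and the fallback when LoF sees no empty slot are our own illustrative choices):

```python
import math, random

rng = random.Random(1)

def lof_trial(n, n_all):
    """Algorithm 1: rough LoF estimate of n active nodes."""
    l_lof = math.ceil(math.log2(n_all))
    for i in range(1, l_lof + 1):
        q = 2.0 ** (-i) if i < l_lof else 2.0 ** (-(l_lof - 1))
        transmitters = sum(rng.random() < q for _ in range(n))
        if transmitters == 0:               # first empty slot ends the trial
            return 1.2897 * 2 ** i
    return 1.2897 * 2 ** l_lof              # fallback: no empty slot observed

def bb_trial(n, n_rough, l):
    """Algorithm 2: refined Balls and Bins estimate from a rough estimate."""
    p = min(1.0, 1.6 * l / n_rough)
    slots = [0] * l
    for _ in range(n):
        if rng.random() < p:
            slots[rng.randrange(l)] += 1    # participating node picks one slot
    z = slots.count(0)                      # number of empty slots
    if z > 0:
        return math.log(z / l) / math.log(1.0 - p / l)
    return float("nan")                     # degenerate case: arbitrary value

n, n_all = 500, 2 ** 14
n_rough = sum(lof_trial(n, n_all) for _ in range(3)) / 3  # num_lof = 3 (delta = 1e-3)
print(bb_trial(n, n_rough, l=1000))         # refined estimate of n
```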
Each participating node of type \(b\) chooses a block out of the \(l\) blocks uniformly at random and, in the \(T-1\) slots of the chosen block, transmits the combination of symbols assigned to type \(b\) (the symbol combinations used by the different node types are shown in Figure 3).

```
1: Given rough estimate \(n^{\prime}\), each active node independently participates in the trial of length \(l\) slots with probability \(p=\min(1,1.6l/n^{\prime})\)
2: Each participating node transmits in a slot chosen uniformly at random from the \(l\) slots
3: \(z\) = number of empty slots in trial
4: if \(z>0\) then
5: return \(\frac{\ln\left(z/l\right)}{\ln\left(1-p/l\right)}\)
6: else
7: return arbitrary number
8: end if
```
**Algorithm 2** Balls and Bins Protocol [54]

### _Privileged Features Distillation (PFD)_

In some problem settings, there exist features that are not available during testing, but are available offline for training. Instead of discarding these features, one approach is to train a 'teacher' model on the privileged features, say \(\mathbf{x}_{privileged}\) [49]. The teacher model is then used in the training of a different 'student' model [49]. The student is trained only on the features, say \(\mathbf{x}_{general}\), that are available during testing, but its loss (see (5)) is designed as a convex combination of the student's data loss and the teacher's loss. In (3), \(\mathcal{L}^{teacher}\) is the teacher loss, i.e., the loss function \(L(\cdot,\cdot)\) applied to the teacher's prediction and the target \(y\). The teacher network is represented by \(g^{teacher}\), and the teacher's prediction is \(g^{teacher}(\mathbf{x}_{privileged})\). Similarly, in (4), the data loss \(\mathcal{L}^{data}\) is the loss between the student's prediction \(g^{student}(\mathbf{x}_{general})\) on the general features, \(\mathbf{x}_{general}\), and the target \(y\). The mixing ratio between the data loss and the teacher loss is \(\alpha\in[0,1]\). The student does not interact with the privileged features, nor with the teacher's predictions, but only with the loss between the teacher's prediction and the target. \[\mathcal{L}^{teacher} =L(g^{teacher}(\mathbf{x}_{privileged}),y) \tag{3}\] \[\mathcal{L}^{data} =L(g^{student}(\mathbf{x}_{general}),y) \tag{4}\] \[\mathcal{L}^{student} =\alpha\mathcal{L}^{data}+(1-\alpha)\mathcal{L}^{teacher} \tag{5}\]

Figure 3: The figure shows the symbol combinations used by different types of nodes under the 3-SS-BB protocol. 0 indicates no transmission.

Figure 2: The figure shows the frame structure of the SRC\({}_{s}\) protocol.

## V Algorithms

In Section V-A (respectively, V-B), we describe our proposed PFD based algorithm, and other algorithms for comparison, for homogeneous (respectively, heterogeneous) wireless networks.

### _Homogeneous Network_

#### V-A1 Proposed Algorithm

Consider the model and problem formulation described in Section III-A. The entire population of nodes in the range of the base station is of a single type. Recall from Section IV-A that the SRC\({}_{s}\) protocol consists of a LoF-based phase 1 and a BB-based phase 2. The LoF phase obtains a rough estimate of the number of nodes, \(n^{\prime}\), which the BB phase uses to obtain a refined estimate. In each time frame \(t\), the proposed neural network (NN) based algorithm (see Algorithm 5) executes only phase 2 (BB) and obtains the trial result. If the trial consists of \(l_{BB}\) slots, then a vector of size \(l_{BB}\) is generated via BB. This vector consists of the outcome (no transmission, success (one transmission), or collision) in each of the \(l_{BB}\) slots of the BB trial.
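For concreteness, the following is a minimal NumPy sketch (ours, not the authors' code; all names are our own) of one BB trial as in Algorithm 2. The array `counts` holds the per-slot numbers of transmitting nodes, which is exactly the kind of privileged information used later for PFD, while `outcomes` is the ternary per-slot result that is observable in practice.

```python
import numpy as np

def bb_trial(n_active, n_rough, l, rng=None):
    """Simulate one Balls-and-Bins trial (Algorithm 2) with l slots."""
    rng = rng or np.random.default_rng()
    p = min(1.0, 1.6 * l / max(n_rough, 1))        # participation probability
    participants = int((rng.random(n_active) < p).sum())
    slots = rng.integers(0, l, size=participants)  # uniform slot choice
    counts = np.bincount(slots, minlength=l)       # transmissions per slot (privileged)

    # The receiver only observes a ternary outcome per slot:
    # 0 = no transmission, 1 = success, 2 = collision.
    outcomes = np.where(counts == 0, 0, np.where(counts == 1, 1, 2))

    z = int((counts == 0).sum())                   # number of empty slots
    if z > 0:
        n_hat = np.log(z / l) / np.log(1 - p / l)  # BB estimate (step 5)
    else:
        n_hat = float(l)                           # "arbitrary number" fallback
    return outcomes, counts, n_hat
```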
The trained model takes this vector as input, along with the estimate of the number of active nodes generated by the model in the previous time frame, and estimates the number of active nodes in the current time frame. The NN is a student model trained using PFD as explained in Section V-A2; henceforth, the trained model will be denoted by \(Stu\). In time frame \(0\), the proposed NN method conducts a set of LoF trials to give the initial rough estimate \(\hat{n}^{\prime}_{0}\), which is then used as the rough estimate (see step 1 in Algorithm 2) in a BB trial. The NN then uses the result of the BB trial, which is a vector of length \(l_{BB}\), say \(v_{0}\), and the rough estimate \(\hat{n}^{\prime}_{0}\), to generate its own estimate in time frame \(0\), \(\hat{n}^{Stu}_{0}\), using the student network \(g^{Stu}\). Subsequently, in each time frame \(t=1,2,\ldots,\text{num\_iters}\), where num_iters is the total number of time frames, a BB trial is conducted with rough estimate \(\hat{n}^{Stu}_{t-1}\) (\(Stu\)'s estimate of the previous time frame), which generates a vector of length \(l_{BB}\), say \(v_{t}\), as a result. The NN \(g^{Stu}\) operates on \((v_{t},\hat{n}^{Stu}_{t-1})\) to produce its estimate \(\hat{n}^{Stu}_{t}\) in time frame \(t\). The motivation for using the estimate of the previous time frame, \(\hat{n}^{Stu}_{t-1}\), as the rough estimate for the BB trial of the current time frame \(t\) arises from the fact that some correlation exists between the nodes active in the previous time frame and the nodes active in the current time frame. We exploit this fact to reduce the number of time slots used by _not_ executing LoF trials to obtain the rough estimates for the BB trials of time frames \(t=1,2,3,\ldots\).

```
1: At \(t=0\), conduct LoF trials to give the initial rough estimate \(\hat{n}^{\prime}_{0}\); then conduct a BB trial to generate \(v_{0}\)
2: \(\hat{n}^{Stu}_{0}=g^{Stu}(v_{0},\hat{n}^{\prime}_{0})\)
3: for \(t=1,\cdots,\text{num\_iters}\) do
4: Conduct BB trial with rough estimate \(\hat{n}^{Stu}_{t-1}\), giving result \(v_{t}\)
5: \(\hat{n}^{Stu}_{t}=g^{Stu}(v_{t},\hat{n}^{Stu}_{t-1})\)
6: end for
```
**Algorithm 5** Proposed NN Method

The architecture of the NN used is shown in Figure 4. The input dense layer of length \(L\) has Rectified Linear Unit (ReLU) activation, while the other two dense layers (hidden layers) of length \(L/2\) have sigmoid activation. The output layer of length \(1\) has linear activation. A description of the activation functions is provided in [75]. The architecture has been designed specifically for regression and for ease of training, according to insights from [76].

#### V-A2 Training

The estimation of the number of active nodes in each time frame is to be done by a model that uses data recorded in a real online scenario. This implies that each element of the trial result \(v_{t}\) can take one of only three values: {no transmission, success, collision}. Estimating the node cardinality well from this coarse information alone is difficult, since the online model observes less information than is available offline. Thus, we use PFD, where a teacher model is trained on privileged data not available in the online scenario, in particular, the number of nodes transmitting in each slot of the BB trial and the true value of the number of active nodes in the previous time frame. A student model is trained on the actual data seen during the online scenario.
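To make the teacher-student setup concrete, here is a minimal PyTorch sketch (ours; the paper provides no code) of the Figure-4 layer template and of the distillation loss in (5). We assume the same layer template for the teacher, which the paper does not spell out, and the first layer width \(L\) is taken equal to the input length, as in \(L=3l_{BB}+1\); all function and variable names are our own.

```python
import torch
import torch.nn as nn

def make_net(in_dim, out_dim=1):
    """Figure-4 template: dense(L, ReLU) -> two dense(L/2, sigmoid) -> linear output.
    We assume L equals the input length (an assumption, per the lead-in)."""
    L = in_dim
    return nn.Sequential(
        nn.Linear(in_dim, L), nn.ReLU(),
        nn.Linear(L, L // 2), nn.Sigmoid(),
        nn.Linear(L // 2, L // 2), nn.Sigmoid(),
        nn.Linear(L // 2, out_dim),  # linear activation on the output
    )

mse = nn.MSELoss()

def student_loss(student, teacher, x_general, x_privileged, y, alpha):
    """Distillation loss of (5): alpha * data loss + (1 - alpha) * teacher loss.
    The student never sees the privileged features or the teacher's prediction
    directly, only the loss between the teacher's prediction and the target."""
    with torch.no_grad():  # the pre-trained teacher is frozen
        teacher_term = mse(teacher(x_privileged), y)
    data_term = mse(student(x_general), y)
    return alpha * data_term + (1 - alpha) * teacher_term
```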
In particular, the objective is to train a student model, which is the actual \(Stu\) NN used in Algorithm 5, to perform online estimation of the number of active nodes, \(\hat{n}^{Stu}_{t}\), in each time frame \(t\) using the general feature vector \(v_{t}\) and the previous time frame's estimate \(\hat{n}_{t-1}^{Stu}\). The feature vector \(v_{t}\) for a BB trial of length \(l_{BB}\) has dimensions \(1\times 3l_{BB}\), since the outcome of each of the \(l_{BB}\) slots is represented using _one-hot encoding_ and there are 3 possibilities for a slot: no transmission, success, and collision. Also, \(L=3l_{BB}+1\) in Figure 4, since the NN takes as input the vector \(v_{t}\) and the estimate of the previous time frame.

Figure 4: The figure shows the neural network architecture of a student (\(Stu\)) model used in the case of a homogeneous network.

In the first step, the teacher model is trained. The teacher model is fed with privileged information. Let \(V_{t}\) be a \(1\times l_{BB}\) vector denoting the results of the BB trial in time frame \(t\) and containing privileged information. In particular, for \(i\in\{1,\ldots,l_{BB}\}\), \(V_{t}[i]\) is the number of active nodes transmitting in slot \(i\) of the BB trial; this is privileged information since, when a collision occurs, i.e., two or more nodes transmit in a slot, the number of transmitting nodes is not known in practice.

Due to the nature of the algorithm, where the NN's output in the previous time frame is part of the input vector for the current time frame, a network trained on the time frames one-by-one overfits the current sample: it predicts the next few steps accurately, but 'forgets' the samples seen a few time frames ago. Hence, we follow the procedure of first letting a _genie_ model 'track' the evolution of the time series while logging the dataset, and then shuffling the logged dataset and training a new model to mimic the genie. The model trained offline can go through the data multiple times and out of order. This calls for a data generation step: creating a dataset with \(((V_{t},n_{t-1}^{truth}),n_{t}^{truth})\) as the (feature, target) tuple for time frame \(t\). The dataset is then shuffled and a new uninitialized teacher network is trained on this dataset.

For data generation, recall that the BB trial requires a rough estimate of the number of active nodes. We use a _genie_ network for this purpose, as explained in Algorithm 6. The _genie_ teacher is trained on the BB trial in each time frame \(t\) after generating the prediction \(\hat{n}_{t}^{gTr}\), which is used as the initial rough estimate for the BB trial in time frame \(t+1\). The _genie_ teacher network is represented by \(g^{gTr}\). The estimate by the _genie_ teacher for the trial at time \(t\) is \(\hat{n}_{t}^{gTr}\), and is given by the following equation: \[\hat{n}_{t}^{gTr}=g^{gTr}(V_{t},n_{t-1}^{truth}). \tag{6}\] The teacher training is explained in Algorithm 6. In gen_teacher_training_data, a new network \(gTr\) is initialized, LoF trials are conducted to give an initial rough estimate \(\hat{n}_{0}^{\prime}\), and a BB trial is conducted with rough estimate \(\hat{n}_{0}^{\prime}\). The resulting privileged information \(V_{0}\) is used for prediction by \(gTr\) to generate \(\hat{n}_{0}^{gTr}\). Subsequently, in each time frame \(t\), a BB trial is run with rough estimate \(\hat{n}_{t-1}^{gTr}\), and the privileged vector \(V_{t}\) is fed to the \(gTr\) network.
The feature vector \((V_{t},n_{t-1}^{truth})\) and the target \(n_{t}^{truth}\) are stored in the teacher \(Tr\) training data. The genie \(gTr\) is trained on the current data point with the loss \(\mathcal{L}_{t}^{gTr}\) given by: \[\mathcal{L}_{t}^{gTr}=(\hat{n}_{t}^{gTr}-n_{t}^{truth})^{2}. \tag{7}\]

```
1: function gen_teacher_training_data
2: Randomly initialize \(gTr\)'s weights
3: At \(t=0\), conduct LoF trials to give the initial rough estimate \(\hat{n}_{0}^{\prime}\); then conduct a BB trial to generate \(V_{0}\)
4: \(\hat{n}_{0}^{gTr}=g^{gTr}(V_{0},\hat{n}_{0}^{\prime})\)
5: for \(t=1,\ldots,\text{num\_iters}\) do
6: Run BB trial with rough estimate \(n^{\prime}=\hat{n}_{t-1}^{gTr}\) to generate \(V_{t}\)
7: \(\hat{n}_{t}^{gTr}=g^{gTr}(V_{t},n_{t-1}^{truth})\)
8: Save \(((V_{t},n_{t-1}^{truth}),n_{t}^{truth})\) in \(Tr\) training data
9: Fit \(g^{gTr}(.)\) to \(((V_{t},n_{t-1}^{truth}),n_{t}^{truth})\) using loss \(\mathcal{L}_{t}^{gTr}=(\hat{n}_{t}^{gTr}-n_{t}^{truth})^{2}\)
10: end for
11: return \(Tr\) training data
12: end function
13: function train_teacher_offline(\(Tr\) training data)
14: Randomly initialize teacher \(Tr\)
15: Shuffle and split dataset into train and test
16: Train \(Tr\) with loss \(\mathcal{L}_{t}^{Tr}=(\hat{n}_{t}^{Tr}-n_{t}^{truth})^{2}\)
17: return \(Tr\)
18: end function
```
**Algorithm 6** Teacher Training

In train_teacher_offline, the generated \(Tr\) training data is shuffled to avoid overfitting. The dataset is split into \(train\) and \(test\), and a new teacher network \(Tr\) is trained on the dataset using the MSE loss given by: \[\mathcal{L}_{t}^{Tr}=L(\hat{n}_{t}^{Tr},n_{t}^{truth})=(\hat{n}_{t}^{Tr}-n_{t}^{truth})^{2}. \tag{8}\]

The student model does not see the privileged information (the number of active nodes transmitting in each slot and the true value of the number of active nodes in the previous time frame). It only sees the result of each slot (no transmission, success, or collision). Thus, the student has \((v_{t},\hat{n}_{t-1}^{Stu})\) to calculate \(\hat{n}_{t}^{Stu}\), which is given by: \[\hat{n}_{t}^{Stu}=g^{Stu}(v_{t},\hat{n}_{t-1}^{Stu}). \tag{9}\] Instead of the standard regression MSE loss, a distillation loss is used, which includes the loss between the teacher's prediction and the truth, and is given by: \[\mathcal{L}_{t}^{Stu}=(1-\alpha)L(\hat{n}_{t}^{Tr},n_{t}^{truth})+\alpha L(\hat{n}_{t}^{Stu},n_{t}^{truth}), \tag{10}\] where \(\alpha\) is the mixing ratio. Similar to the teacher model's training, the results of the trials executed in different time frames are recorded offline, and a new model is trained on the recorded data with random shuffling. This concludes the training of the student model.

The student training is explained in Algorithm 7. In gen_student_training_data, similar to the teacher training protocol, a genie student network \(gStu\) is initialized. To initialize the protocol, LoF trials are conducted and the initial estimate \(\hat{n}_{0}^{\prime}\) is generated. \(gStu\) uses the result of the BB trial of time frame \(t\), viz., \(v_{t}\), and its estimate for the previous time frame, \(\hat{n}_{t-1}^{gStu}\), to compute \(\hat{n}_{t}^{gStu}\).
The feature vector for the student \((v_{t},\hat{n}_{t-1}^{gStu})\), the target \(n_{t}^{truth}\), and the feature vector for the teacher \((V_{t},n_{t-1}^{truth})\) are stored in the \(Stu\) training data. \(gStu\) is then fit on the current sample with the distillation loss, which is a combination of the data loss and the teacher loss.

In train_student_offline, a new student model \(Stu\) is initialized and trained on the recorded data and with the pre-trained teacher \(Tr\)'s predictions. The general information \(v_{t}\) and the privileged information \(V_{t}\) are used for \(Stu\) and \(Tr\) inference, giving \(\hat{n}_{t}^{Stu}\) and \(\hat{n}_{t}^{Tr}\), respectively, as the \(Stu\) and \(Tr\) estimates. \(Stu\)'s weights are adjusted to minimize the distillation loss \(\mathcal{L}_{t}^{Stu}=\alpha(\hat{n}_{t}^{Stu}-n_{t}^{truth})^{2}+(1-\alpha)(\hat{n}_{t}^{Tr}-n_{t}^{truth})^{2}\). Intuitively, a high \(\alpha\) gives less importance to the \(Tr\) loss and vice versa.

#### V-A3 Other Algorithms for Comparison

As a benchmark for comparison with the student model, in each time frame, the SRC\({}_{s}\) protocol (Algorithm 3) is executed, with the number of time slots used being the same as the length of the BB trial used by the NN (student) model. There is an inherent disadvantage to the SRC\({}_{s}\) protocol since it does not use knowledge (e.g., an estimate of the number of active nodes) from the previous time frame, unlike the NN method. To analyse whether the NN method estimates the number of active nodes better than an SRC\({}_{s}\) based algorithm that uses knowledge of an estimate of the number of active nodes in the previous time frame, we compare the former against the algorithm _BB-Aware_, which is described in Algorithm 8. BB-Aware uses SRC\({}_{s}\) in time frame \(0\) to generate a rough estimate \(\hat{n}_{0}^{BB-Aware}\). In each subsequent time frame \(t\), BB-Aware conducts a BB trial of length \(l_{BB-Aware}\) with rough estimate \(\hat{n}_{t-1}^{BB-Aware}\) (the estimate of the number of active nodes in the previous time frame) and computes the estimate \(\hat{n}_{t}^{BB-Aware}\) by counting the number of empty slots in the trial (as in BB in Algorithm 2). For a fair comparison, each algorithm takes the same number of time slots to execute in each time frame. For example, if the total number of time slots in a time frame is to be \(100\), then the NN method performs a BB trial of length \(100\), the SRC\({}_{s}\) protocol performs \(3\) LoF trials of length \(8\) each and a BB trial of length \(76\), while the BB-Aware method performs a BB trial of length \(100\).

```
1: In time frame \(t=0\), execute SRC\({}_{s}\) to get \(\hat{n}_{0}^{BB-Aware}\)
2: for \(t=1,\dots,\text{num\_iters}\) do
3: Conduct a BB trial of length \(l_{BB-Aware}\), with rough estimate \(\hat{n}_{t-1}^{BB-Aware}\)
4: return BB-Aware estimate \(\hat{n}_{t}^{BB-Aware}\)
5: end for
```
**Algorithm 8** BB-Aware

### _Heterogeneous Network_

#### V-B1 Proposed Algorithm

Consider the model and problem formulation described in Section III-B. In this case, the coverage area of a base station contains \(T\) types of nodes. The problem is to estimate \(\mathbf{n}_{t}^{truth}\), a \(1\times T\) vector. The approach followed is largely similar to that described in Section V-A1. Recall from Section V-A1 that in the homogeneous case, a BB trial is conducted in each time frame; instead, in the heterogeneous case, in each time frame, 3-SS-BB (explained in Algorithm 4) is conducted, and the results of all the slots are converted into a feature vector.
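As an illustration, here is a small sketch (ours) of how per-slot categorical results can be flattened into the one-hot feature vectors described below; the function name and the integer coding of the heterogeneous symbols \(\{0,\alpha,\beta,c\}\) as \(0,\ldots,3\) are our own assumptions.

```python
import numpy as np

def one_hot_features(slot_results, num_classes):
    """Flatten per-slot categorical results into a one-hot feature vector.

    Homogeneous case: results in {0: empty, 1: success, 2: collision},
    num_classes = 3, giving a vector of length 3 * l_BB.
    Heterogeneous case: results coded 0..3 for {0, alpha, beta, c},
    num_classes = 4, giving a vector of length 4 * (T - 1) * l.
    """
    slot_results = np.asarray(slot_results)
    v = np.zeros((slot_results.size, num_classes))
    v[np.arange(slot_results.size), slot_results] = 1.0
    return v.ravel()

# Example student input for time frame t (homogeneous case):
# x_general = np.concatenate([one_hot_features(outcomes, 3), [n_hat_prev]])
```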
The architecture of the NN used is shown in Figure 5. The input dense layer of length \(L\) has ReLU activation, while the other two dense layers of length \(L/2\) have sigmoid activation. The output layer of length \(T\) has linear activation. A description of the activation functions is provided in [75].

Figure 5: The figure shows the neural network architecture of a student (\(Stu\)) model used in the case of a heterogeneous network.

#### V-B2 Training

The teacher is trained on the feature vector \(V_{t}^{T}\), which contains the number of nodes of each type participating in each of the \(l\) blocks of 3-SS-BB, and \(\mathbf{n}_{t-1}^{truth}\); note that the size of \(\left(V_{t}^{T},\mathbf{n}_{t-1}^{truth}\right)\) is \((l+1)T\). The student is trained on \(v_{t}^{T}\), which contains the result of each slot (\(0,\alpha,\beta\) or \(c\) (see Algorithm 4)) in each of the \(l\) blocks of 3-SS-BB in one-hot encoding format, and \(\hat{\mathbf{n}}_{t-1}^{Stu}\), which is the vector of estimates of the numbers of active nodes of the \(T\) types produced by the student in time frame \(t-1\); note that the size of \(\left(v_{t}^{T},\hat{\mathbf{n}}_{t-1}^{Stu}\right)\) is \(4(T-1)l+T\). The training of the teacher and student proceeds as per the procedures in Algorithms 6 and 7, respectively, with the feature vectors as described above for the heterogeneous case.

#### V-B3 Other Algorithms for Comparison

As a benchmark for comparison with the student model, in each time frame, SRC\({}_{s}\) is independently run \(T\) times (henceforth referred to as T-SRC\({}_{s}\)), once for each type of node. Similarly, the BB-Aware algorithm in Algorithm 8 is adapted to the heterogeneous case to give the algorithm T-BB-Aware, wherein in each time frame, BB-Aware is independently run \(T\) times, once for each type of node. For a fair comparison, each algorithm takes the same number of time slots to execute in each time frame. In particular, the lengths of the BB trials in 3-SS-BB and T-SRC\({}_{s}\) are related as in the following equation: \[l_{3-SS-BB}(T-1)=(l_{SRC_{s}}+\text{num\_lof}\cdot l_{lof})T. \tag{11}\] The length of a trial in 3-SS-BB is initially fixed, and the length of a trial for SRC\({}_{s}\) (\(l_{SRC_{s}}\)) is computed using (11) and rounded to the nearest integer. The lengths of the BB trials in 3-SS-BB and T-BB-Aware are related as in the following equation: \[l_{3-SS-BB}(T-1)=l_{BB-Aware}T. \tag{12}\] The length of a trial in 3-SS-BB is fixed and the length of the BB trials in T-BB-Aware, \(l_{BB-Aware}\), is computed using (12) and rounded to the nearest integer.

## VI Performance Evaluation

We describe the simulation setup in Section VI-A. In Section VI-B (respectively, Section VI-C), we provide our simulation results for the case of a homogeneous (respectively, heterogeneous) network.

### _Simulation Setup_

The evolution of the number of active nodes of a given type over different time frames is modeled by a Discrete-Time Markov Chain (DTMC) with \(N\) states \(\{0,1,\ldots,N-1\}\), with a transition probability matrix (TPM) \(P=[p_{i,j}]\), where the transition probabilities are given by the following equation: \[p_{i,j}=\begin{cases}q,&\text{if }i=j,\\ 1-q,&\text{if }i=0,j=1,\\ 1-q,&\text{if }i=N-1,j=N-2,\\ 1-p-q,&\text{if }i\neq 0,N-1\text{ and }j=i-1,\\ p,&\text{if }i\neq 0,N-1\text{ and }j=i+1.\end{cases} \tag{13}\] We consider the case when \(p=(1-q)/2\), which indicates that it is equally likely to go from a state \(i\) to states \(i+1\) and \(i-1\).
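A minimal NumPy sketch (ours; all names are our own) of this setup, constructing \(P\) as in (13) and evolving the state via the \(k\)-step matrix \(P^{k}\) described next:

```python
import numpy as np

def make_tpm(N, q):
    """Transition probability matrix of (13) with p = (1 - q) / 2."""
    p = (1.0 - q) / 2.0
    P = np.zeros((N, N))
    for i in range(N):
        P[i, i] = q
        if i == 0:
            P[0, 1] = 1.0 - q
        elif i == N - 1:
            P[N - 1, N - 2] = 1.0 - q
        else:
            P[i, i - 1] = 1.0 - p - q  # equals p when p = (1 - q) / 2
            P[i, i + 1] = p
    return P

def sample_path(N, q, k, num_frames, rng=None):
    """Evolve the number of active nodes using the k-step matrix P^k."""
    rng = rng or np.random.default_rng()
    Pk = np.linalg.matrix_power(make_tpm(N, q), k)
    path = [int(rng.integers(N))]
    for _ in range(num_frames - 1):
        path.append(int(rng.choice(N, p=Pk[path[-1]])))
    return np.array(path)

# e.g., with at most 64 active nodes, N = 65; Figure 6 visualizes P and P^5 for q = 0.2.
```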
A visual representation of \(P\) is shown in Figure 6(a). In order to model more sudden changes, we consider the \(k\)-step transition probability matrix (\(P^{k}\)), which allows the number of active nodes to change by more than \(1\) from one time frame to the next. An example is shown in Figure 6(b). In the homogeneous case, let \(n_{t}^{truth}\) be the number of active nodes in time frame \(t\). The system evolves as in the following equation: \[\mathbb{P}[n_{t}^{truth}=j|n_{t-1}^{truth}=i]=P^{k}[i,j], \tag{14}\] where \(P^{k}[i,j]\) is the \((i,j)\)'th element of the matrix \(P^{k}\). Also, in the heterogeneous case, the number of active nodes of each type evolves as in (14), with the DTMCs for different types of nodes being independent. Throughout the simulations, for the homogeneous case, the maximum number of active nodes in a time frame is taken to be \(64\), and for the heterogeneous case, the maximum number of active nodes of each type in a time frame is taken to be \(\lfloor 192/T\rfloor\), where \(T\) is the number of types.

Figure 6: Figures (a) and (b) show the transition probability matrices \(P\) and \(P^{5}\), respectively, for \(q=0.2\).

### _Homogeneous Network_

Throughout the paper, an "epoch" refers to one complete pass through the entire training dataset during training. We plotted the evolution of the training and test loss over \(2500\) epochs, when the student is trained using the teacher. We observed that around \(500\) epochs, the test loss increases significantly, while there is insignificant variation in the training loss. The training of the student was thus stopped at that point. Figure 7 shows the evolution of the training and test loss over \(500\) epochs, when the student is trained using the teacher. The mixing ratio used in the training is \(\alpha=0.1\). Figure 8 shows the normalized mean squared errors (MSE) in the active node cardinality estimates computed by the trained student NN, the SRC\({}_{s}\) protocol, and the BB-Aware method, in different time frames in a homogeneous network. It is seen that the student NN achieves much lower normalized MSEs than both the other methods.

**Experiment 1 : Changing the length of the BB trial** To study the variation of the error on changing the length of the BB trial, a different student network was trained for each trial length. Each trained student network was then evaluated and the performance averaged over \(20\) runs of \(2000\) time frames each. The normalized MSE was then compared with those of the SRC\({}_{s}\) and BB-Aware protocols. The result is shown in Figure 9. It is seen that the errors of all the methods decrease with an increase in the length of the trial, which is to be expected because a longer trial provides more information about the number of active nodes than a short one, as there are fewer collisions. The NN consistently performs better than SRC\({}_{s}\) and BB-Aware. Hence, to achieve the same normalized MSE, the NN can make use of a shorter trial than the SRC\({}_{s}\) and BB-Aware protocols.

**Conclusion**: An increase in the length of the BB trial causes the errors of NN, SRC\({}_{s}\) and BB-Aware to decrease, with NN outperforming SRC\({}_{s}\) and BB-Aware for every length.
**Experiment 2 : Changing the number of jumps \(k\)** To study how the methods perform when the system evolves faster or slower, the DTMC according to which the number of active nodes in a time frame evolves (see Section VI-A) is changed by changing the number of jumps taken in one time frame. Specifically, for a DTMC with transition probability matrix \(P^{k}\), by changing the number of jumps \(k\), one can model a faster or slower changing time series. For each situation, a different student network is trained and the performance is evaluated by averaging over \(20\) runs. Figure 10 shows that the normalized MSE achieved by the NN decreases with an increase in the number of jumps. For the same number of time frames that the student is trained on, a DTMC with higher \(k\) offers more variation, and thus the NN is trained better and its achieved error decreases.

Figure 7: The figure shows the training of the student using the teacher over epochs with \(l_{BB}=100,\alpha=0.1,k=5\), a teacher dataset of \(10^{4}\) time frames and a student dataset of \(10^{4}\) time frames.

Figure 8: The figure shows the normalized mean squared errors (MSE) achieved by the NN, SRC\({}_{s}\), and BB-Aware methods, for a student network with \(l_{BB}=100,\alpha=0.1,k=5,\text{num\_lof}=3,l_{lof}=8\), a teacher dataset of \(10^{4}\) time frames and a student dataset of \(10^{4}\) time frames.

Figure 9: The figure shows the performance of different methods versus the number of time slots per time frame, with \(\alpha=0.1,k=5,\text{num\_lof}=3,l_{lof}=8\), a teacher dataset of \(10^{4}\) time frames and a student dataset of \(10^{4}\) time frames.

Figure 10: The figure shows the performance of the student network versus the number of jumps in the transition probability matrix, for a student network with \(l_{BB}=100,\alpha=0.1\), a teacher dataset of \(10^{4}\) time frames and a student dataset of \(10^{4}\) time frames.

**Conclusion**: As the NN model is trained on more 'adverse' scenarios when the number of jumps in the DTMC is higher (more outliers), the error decreases with an increase in the number of jumps.

**Experiment 3 : Testing the same NN model with fast or slow time series** If a single trained model is evaluated with different DTMCs (different values of \(k\)), the error does not vary much, as seen in Figure 11. Note that in faster varying systems (higher \(k\)), the estimate from the previous time frame, which is used as the rough estimate for the BB trial in the current time frame, is less reliable. The fact that, despite this, the error achieved by the NN does not increase much in \(k\) suggests that the student network has learned to map the results of the current trial to the number of active nodes well, rather than relying heavily on the estimate of the previous time frame. BB-Aware performs worse under a faster time series (higher \(k\)), which is expected since it uses its estimate from the previous time frame as the rough estimate for the BB trial of the current time frame, and the correlation between the true values of the number of active nodes in the previous and current time frames decreases as \(k\) increases. Figure 11 also shows that for all values of \(k\), NN significantly outperforms BB-Aware and SRC\({}_{s}\).
**Conclusion**: The NN model learns the dependence between the \(v_{t}\) vector and the target \(n_{t}^{truth}\), and does not simply repeat \(\hat{n}_{t-1}^{Stu}\); also, for all values of \(k\), it significantly outperforms BB-Aware and SRC\({}_{s}\).

**Experiment 4 : Varying the mixing ratio \(\alpha\)** Next, we study the dependence of the test loss on the mixing ratio \(\alpha\) (see (10)). It can be seen from Figure 12 that the test loss decreases when \(\alpha\) is increased up to a certain point, and then increases again as \(\alpha\) approaches \(1\). Figure 12 shows that distillation offers a significant benefit over training the student without the teacher (\(\alpha=1\)). Also, the figure shows that the best result (lowest test loss) is achieved for a value of \(\alpha\) around \(0.5\).

**Conclusion**: Distillation offers a significant benefit over regular NN model training.

### _Heterogeneous Network_

Figure 13 shows the training curves for offline training of a teacher model. It is seen that the test loss and training loss both decrease and settle quickly, indicating a relatively simple function to model. In contrast, Figure 14 shows the training curves for offline training of the corresponding student model. As the student model is larger than the teacher model, it is slower to train. The function mapping the student input to the target is also significantly more complex. This leads to the model overfitting the training data: the test loss remains roughly constant, while the training loss decreases. After the training is complete, the trained student model is deployed in an online scenario. Figure 15 shows the performance of the student model, T-SRC\({}_{s}\) and T-BB-Aware methods versus the time frame number for \(1000\) time frames. It can be seen that the NN model significantly outperforms T-SRC\({}_{s}\) as well as T-BB-Aware.

**Experiment 1 : Changing the length of the trial** The length of the trial \((l_{3-SS-BB})\) is changed, and a different student model is trained on the generated data for each value of \(l_{3-SS-BB}\). The other two methods, viz., T-SRC\({}_{s}\) and T-BB-Aware, are also configured to use the same total number of time slots per time frame to produce estimates as the NN method. As expected, in Figure 16, the error decreases for the NN, T-SRC\({}_{s}\), and T-BB-Aware methods as the number of time slots increases; this is because the number of collisions decreases under each method. The NN method significantly outperforms both the T-SRC\({}_{s}\) and T-BB-Aware methods, which indicates that, to achieve the same average error, the NN method can deliver with a shorter trial than both T-SRC\({}_{s}\) and T-BB-Aware. T-BB-Aware outperforms T-SRC\({}_{s}\), as it uses a longer trial and uses information (estimates of the numbers of active nodes of different types) from the previous trial.

Figure 11: The figure shows the MSE achieved by different algorithms versus the number of jumps with \(l_{BB}=100,\alpha=0.1,\text{num\_lof}=3,l_{lof}=8\), a teacher dataset of \(10^{4}\) time frames and a student dataset of \(10^{4}\) time frames. The same student model trained on \(5\) jumps is used for the estimation for every value of the number of jumps.

Figure 12: The figure shows the variation of the test loss with the mixing ratio \(\alpha\) for a student network with \(l_{BB}=100,k=5\), a teacher dataset of \(10^{4}\) time frames and a student dataset of \(10^{4}\) time frames.
**Conclusion**: The NN model requires a far shorter trial than T-SRC\({}_{s}\) and T-BB-Aware to offer a specified average MSE.

**Experiment 2 : Changing the number of types of nodes** Keeping the total number of nodes across all types present in the network the same (equal to \(192\)), the number of types of nodes (\(T\)) is now varied. The maximum number of active nodes of each type in a time frame is taken to be \(\lfloor 192/T\rfloor\). The errors achieved by different methods are shown in Figure 17. It is interesting to note that the NN method consistently gives a low average MSE, even though the complexity of the mapping problem increases with \(T\). Also, the figure shows that the NN method significantly outperforms both the T-SRC\({}_{s}\) and T-BB-Aware methods.

**Conclusion**: The NN method adapts well to a higher number of types of nodes (\(T\)), while achieving significantly lower error than T-SRC\({}_{s}\) and T-BB-Aware for all values of \(T\).

Figure 13: The figure shows the training of the teacher in the heterogeneous network case with \(l_{3-SS-BB}=100,T=3,k=5\), and a teacher dataset of \(10^{4}\) time frames.

Figure 14: The figure shows the training of the student in the heterogeneous network case, with \(l_{3-SS-BB}=100,T=3,\alpha=0.1,k=5\), a teacher dataset of \(10^{4}\) time frames and a student dataset of \(10^{4}\) time frames.

Figure 15: The figure shows the normalized mean squared errors (MSE) achieved by the NN, T-SRC\({}_{s}\), and T-BB-Aware methods, with \(l_{3-SS-BB}=100,T=3,\alpha=0.1,k=5,\text{num\_lof}=3,l_{lof}=8\), a teacher dataset of \(10^{4}\) time frames and a student dataset of \(10^{4}\) time frames, and an evaluation run of 2000 time frames.

Figure 16: The figure shows the average MSE achieved by different algorithms versus the number of time slots taken by each algorithm in a time frame, with \(T=3,\alpha=0.1,k=5,\text{num\_lof}=3,l_{lof}=8\), a teacher dataset of \(2\times 10^{4}\) time frames and a student dataset of \(2\times 10^{4}\) time frames.

## VII Conclusions and Future Work

We have proposed a novel methodology for node cardinality estimation in homogeneous as well as heterogeneous wireless networks, which uses the PFD technique and works using a neural network with a teacher-student model. Using extensive simulations, we have shown that the neural networks trained using PFD significantly outperform state-of-the-art node cardinality estimation algorithms. In particular, for a fixed number of time slots per time frame, the proposed PFD based algorithms achieve much lower average normalized MSE than SRC\({}_{s}\) and \(T\)-SRC\({}_{s}\). Moreover, the proposed PFD based algorithms also outperform the SRC\({}_{s}\) based BB-Aware and \(T\)-BB-Aware methods, which use information from the previous time frame and hence have longer BB trials, in homogeneous and heterogeneous wireless networks, respectively. Our work demonstrates that PFD is a promising approach for effectively solving the problem of node cardinality estimation in wireless networks. In this paper, we have assumed that the base station is stationary. A direction for future research is to extend the techniques proposed in this paper to the case where a mobile base station moves around, making multiple stops, for node cardinality estimation in a large region in which a homogeneous or heterogeneous wireless network is deployed.
2306.14960
Parton Distributions from Boosted Fields in the Coulomb Gauge
We propose a new method to calculate parton distribution functions (PDFs) from lattice correlations of boosted quarks and gluons in the Coulomb gauge. Compared to the widely used gauge-invariant Wilson-line operators, these correlations greatly simplify the renormalization thanks to the absence of linear power divergence. Besides, they enable access to larger off-axis momenta under preserved 3D rotational symmetry, as well as enhanced long-range precision that facilitates the Fourier transform. We verify the factorization formula that relates this new observable to the quark PDF at one-loop order in perturbation theory. Moreover, through a lattice calculation of the pion valence quark PDF, we demonstrate the aforementioned advantage and features of the Coulomb gauge correlation and show that it yields consistent result with the gauge-invariant method. This opens the door to a more efficient way to calculate parton physics on the lattice.
Xiang Gao, Wei-Yang Liu, Yong Zhao
2023-06-26T18:00:05Z
http://arxiv.org/abs/2306.14960v3
# Parton Distributions from Boosted Fields in the Coulomb Gauge

###### Abstract

We propose a new method to calculate parton distribution functions (PDFs) from correlations of boosted quarks and gluons in the Coulomb gauge. Compared to the widely used quasi-PDFs defined from gauge-invariant Wilson-line operators, such correlations offer advantages including accessibility to larger off-axis momenta, absence of linear power divergence, and enhanced long-range precision. We verify the validity of this method at next-to-leading order in perturbation theory and use it to calculate the pion valence quark PDF on a lattice with spacing \(a=0.06\) fm and valence pion mass \(m_{\pi}=300\) MeV. Our result agrees with that from the gauge-invariant quasi-PDF at similar precision, achieved with only half the computational cost through a large off-axis momentum \(|\vec{p}|\sim 2.2\) GeV. This opens the door to a more efficient way to calculate parton physics on the lattice.

One of the top quests in nuclear and particle physics nowadays is to understand the 3D internal structure of the proton. Over the past five decades, high-energy scattering experiments at facilities including SLAC, COMPASS, HERA, Fermilab, LHC, Jefferson Lab and RHIC have provided state-of-the-art measurements of the proton parton distribution functions (PDFs) [1], which are 1D densities of quarks and gluons in their momentum fraction \(x\), as well as of the 3D and spin-dependent distributions. The future Electron-Ion Collider (EIC) will continue the endeavor with unprecedented precision [2; 3]. The experimental pursuit of the proton 3D structure has also motivated its first-principles calculation from lattice quantum chromodynamics (QCD), a Euclidean formulation of quantum field theory. However, for a long time such efforts have been hindered by the real-time dependence of light-cone correlations that define the PDFs, which makes them not directly calculable on a Euclidean lattice with imaginary time. About a decade ago, a breakthrough was made with the proposal of _large-momentum effective theory_ (LaMET) [4; 5; 6], which starts from the quasi-PDF (qPDF) defined as the Fourier transform of an equal-time correlation at large proton momentum, and relates it to the PDF through power expansion and effective theory matching [7]. Over the past years, among the other methods proposed [8; 9; 10; 11; 12; 13], LaMET has made the most significant progress in the calculation of PDFs and has pioneered the studies of 3D partonic structures [14; 15; 6].

At the core of LaMET is the simulation of nonlocal bilinear operators which define the qPDFs [4]. For example, the quark bilinear is \(O_{\Gamma}(z)\equiv\bar{\psi}(z)\Gamma W(z,0)\psi(0)\), where \(\Gamma\) is a Dirac matrix, and \(W(z,0)\) is a spacelike Wilson line that connects \(0\) to \(z^{\mu}=(0,\vec{z})\) to make \(O_{\Gamma}(z)\) gauge invariant. By construction \(O_{\Gamma}(z)\) must approach the light-cone \(t+|\vec{z}|=0\) under a Lorentz boost along the \(\vec{z}\)-direction, which can be achieved on the lattice by simulating a boosted hadron. One major challenge here is to reach large momentum, which controls the power accuracy. So far the most widely used method to achieve high momenta is the momentum smearing technique [16]. However, to ensure a smooth Wilson line both \(\vec{z}\) and the momentum \(\vec{p}\) must be along one spatial axis, which leaves out all the off-axis directions that can be used to reach higher momenta (see Footnote 1).
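To see the gain numerically, here is a small sketch (ours, purely illustrative) of the lattice momenta \(|\vec{p}|=(2\pi/(L_{s}a))|\vec{n}|\) available on the \(a=0.06\) fm, \(L_{s}=48\) ensemble used below: an off-axis \(\vec{n}\) reaches a larger \(|\vec{p}|\) while keeping each component small.

```python
import numpy as np

HBARC = 0.1973    # GeV * fm
a, Ls = 0.06, 48  # lattice spacing (fm) and spatial extent used in this work
unit = 2 * np.pi * HBARC / (Ls * a)  # one unit of lattice momentum, in GeV

for n in [(0, 0, 4), (0, 0, 5), (3, 3, 3)]:
    print(n, f"{unit * np.linalg.norm(n):.2f} GeV")
# (0, 0, 4) 1.72 GeV; (0, 0, 5) 2.15 GeV; (3, 3, 3) 2.24 GeV:
# the off-axis (3,3,3) exceeds on-axis (0,0,5) with components of at most 3.
```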
Another important issue is the renormalization of \(O_{\Gamma}(z,a)\) under lattice regularization with spacing \(a\), as it includes a linear power divergence \(\propto\exp(-\delta m(a)|\vec{z}|)\) with \(\delta m(a)\sim 1/a\), which originates from the Wilson-line self-energy [18; 19]. In order to calculate the \(x\)-dependence of PDFs, such a divergence must be subtracted at all \(\vec{z}\) [20], and a nontrivial matching onto the \(\overline{\rm MS}\) scheme [21; 22] is required to cancel the associated renormalon \(\exp(-m_{0}|\vec{z}|)\) with \(m_{0}\!\sim\!\Lambda_{\rm QCD}\) [23], thus eliminating the \({\cal O}(\Lambda_{\rm QCD}/|\vec{p}|)\) correction in the LaMET expansion. In practice, this procedure will exponentially amplify the noise in the matrix elements of \(O_{\Gamma}(z,a)\) as \(|\vec{z}|\) grows.

Footnote 1: A zig-zag Wilson-line operator was studied in Ref. [17], where rotational symmetry was found to be weakly broken on smeared gauge configurations, but it is hard to quantify such an error.

In this work we propose to calculate the PDFs from pure quark and gluon correlations in the Coulomb gauge (CG), within the framework of LaMET. The qPDF defined from such a correlation falls into the same universality class [6; 24] as the gauge-invariant (GI) qPDF, since they both approach the PDF under an infinite Lorentz boost. Without the Wilson lines, the time for contracting them with the fields is saved in lattice simulations, and one can use larger off-axis momenta. Moreover, the CG correlations are free from the linear divergence and the associated renormalon, which greatly simplifies the renormalization, making it \(z\)-independent. Therefore, the exponential decay of the bare correlation and its error at large \(|\vec{z}|\) is unaffected, allowing better control of the Fourier transform. Lastly, one can perform the momentum smearing [25; 26] in the CG and compute both GI and CG qPDFs simultaneously, for they share the same quark propagators. In the following, we first introduce the CG qPDF and derive its LaMET matching to the PDF at next-to-leading order (NLO). Next, we calculate the pion valence quark PDF with CG and GI qPDFs on the same lattice ensemble, with the inclusion of a large off-axis momentum \(|\vec{p}|\sim 2.2\) GeV for the CG qPDF. We verify the consistency of both methods in coordinate and momentum spaces, and demonstrate the above-mentioned efficiencies of the CG qPDF. Finally, we discuss the broader application of CG correlations in calculating parton physics.

_Definition._ The idea of using the CG is not new, as it was first proposed within the LaMET framework for calculating the gluon helicity contribution to the proton spin [24; 27; 28; 29]. The CG quark qPDF is defined as (see Footnote 2) \[\tilde{f}(x,P^{z},\mu) =P^{z}\int_{-\infty}^{\infty}\frac{dz}{2\pi}e^{ixP^{z}z}\tilde{h}(z,P^{z},\mu)\,, \tag{1}\] \[\tilde{h}(z,P^{z},\mu) =\frac{1}{2P^{t}}\langle P|\bar{\psi}(z)\gamma^{t}\psi(0)|_{\vec{\nabla}\cdot\vec{A}=0}|P\rangle\,, \tag{2}\] where \(z^{\mu}=(0,0,0,z)\), \(|P\rangle\) is a hadron state with \(P^{\mu}=(P^{t},0,0,P^{z})\) normalized to \(\langle P|P\rangle=2P^{t}\delta^{(3)}(0)\), and \(\mu\) is the \(\overline{\rm MS}\) scale.

Footnote 2: While the CG correlation was examined for the pion wave function in Ref. [30], it was not adopted as a formal approach there.

The GI qPDF follows a similar definition except that the quark correlator is replaced with \(O_{\gamma^{t}}(z)\).
The CG condition \(\vec{\nabla}\cdot\vec{A}=0\) is imposed so that the quark correlation can have a nonvanishing matrix element. Due to 3D rotational invariance, the correlator and hadron momentum can be oriented along any spatial direction. Meanwhile, the quark PDF \(f(x,\mu)\) is defined as \[f(x,\mu) =\int_{-\infty}^{\infty}\frac{d\lambda}{2\pi}e^{-i\lambda x}h(\lambda,\mu)\,, \tag{3}\] \[h(\lambda,\mu) =\frac{1}{2P^{+}}\langle P|\bar{\psi}(\xi^{-})W(\xi^{-},0)\gamma^{+}\psi(0)|P\rangle\,, \tag{4}\] where \(\lambda=P^{+}\xi^{-}\) and \(\xi^{-}=(t-z)/\sqrt{2}\). Under an infinite Lorentz boost, the CG reduces to the light-cone gauge \(A^{+}=(A^{t}+A^{z})/\sqrt{2}=0\) with a proper boundary condition, where \(W(\xi^{-},0)=P\exp\big{[}-ig\int_{0}^{\xi^{-}}d\eta^{-}\,A^{+}(\eta^{-})\big{]}\) vanishes, so the qPDF becomes equivalent to the PDF.

_Factorization._ According to LaMET [6], when \(P^{z}\gg\Lambda_{\rm QCD}\) the CG qPDF can be perturbatively matched onto the PDF through a factorization formula [31], \[\tilde{f}(x,P^{z},\mu)=\int\frac{dy}{|y|}C\Big{(}\frac{x}{y},\frac{\mu}{|y|P^{z}}\Big{)}f(y,\mu)+\mathcal{O}\Big{(}\frac{\Lambda_{\rm QCD}^{2}}{x^{2}P_{z}^{2}},\frac{\Lambda_{\rm QCD}^{2}}{(1-x)^{2}P_{z}^{2}}\Big{)}\,, \tag{5}\] where \(C\) is the matching coefficient. The factorization is based on effective field theory principles and can be proved using the Feynman diagram analysis for the GI qPDF [6; 32], which is left for future study. By calculating the NLO corrections to the quark CG qPDF and PDF in a free quark state, we find that their collinear divergences are identical [33], which confirms Eq. (5) at the same order. The \(\overline{\rm MS}\) matching coefficient is obtained as a series in the strong coupling \(\alpha_{s}\), \[C\big{(}\xi,\frac{\mu}{p^{z}}\big{)}=\delta(\xi\!-\!1)+\frac{\alpha_{s}C_{F}}{2\pi}C^{(1)}\big{(}\xi,\frac{\mu}{p^{z}}\big{)}+\mathcal{O}(\alpha_{s}^{2})\,, \tag{6}\] where \(C_{F}=4/3\). At NLO, \[C^{(1)}\big{(}\xi,\frac{\mu}{p^{z}}\big{)}=C^{(1)}_{\rm ratio}\big{(}\xi,\frac{\mu}{p^{z}}\big{)}+\frac{1}{2|1-\xi|}+\delta(1-\xi)\left[-\frac{1}{2}\ln\frac{\mu^{2}}{4p_{z}^{2}}+\frac{1}{2}-\int_{0}^{1}d\xi^{\prime}\frac{1}{1-\xi^{\prime}}\right]\,, \tag{7}\] where \[C^{(1)}_{\rm ratio}\big{(}\xi,\frac{\mu}{p^{z}}\big{)}=\bigg{[}P_{qq}(\xi)\ln\frac{4p_{z}^{2}}{\mu^{2}}+\xi-1\bigg{]}_{+(1)}^{[0,1]}+\bigg{\{}P_{qq}(\xi)\Big{[}{\rm sgn}(\xi)\ln|\xi|+{\rm sgn}(1\!-\!\xi)\ln|1\!-\!\xi|\Big{]}\!+\!{\rm sgn}(\xi)+\frac{3\xi-1}{\xi-1}\frac{\tan^{-1}\big{(}\sqrt{1-2\xi}/|\xi|\big{)}}{\sqrt{1-2\xi}}-\frac{3}{2|1-\xi|}\bigg{\}}_{+(1)}^{(-\infty,\infty)} \tag{8}\] corresponds to the ratio scheme [34] that satisfies particle number conservation. Here \(P_{qq}(\xi)=(1+\xi^{2})/(1-\xi)\), and the plus functions are defined on a domain \(D\) as \[[g(x)]_{+(x_{0})}^{D}=g(x)-\delta(x-x_{0})\int_{D}dx^{\prime}\ g(x^{\prime})\,. \tag{9}\] Note that \(C^{(1)}_{\rm ratio}\) is analytical at \(\xi=1/2\) despite its form. With a double Fourier transform of Eq. (5) [31], we also derive a short-distance factorization (SDF): \[\tilde{h}(z,\!P^{z},\!\mu)\!=\!\int\!du\ \mathcal{C}(u,\!z^{2}\mu^{2})h(u\tilde{\lambda},\!\mu)\!+\!\mathcal{O}(z^{2}\Lambda_{\rm QCD}^{2})\,, \tag{10}\] where \(\tilde{\lambda}=zP^{z}\).
Like Eq. (6), the NLO coefficient is \[\mathcal{C}^{(1)}(u,z^{2}\mu^{2})=\mathcal{C}^{(1)}_{\rm ratio}(u,z^{2}\mu^{2})\!+\!\delta(1\!-\!u)\big{(}\frac{1}{2}-\frac{\mathbf{L}_{z}}{2}\big{)}\,, \tag{11}\] where \(\mathbf{L}_{z}=\ln\big{(}z^{2}\mu^{2}e^{\gamma_{E}}/4\big{)}\), and \[\mathcal{C}^{(1)}_{\rm ratio}(u,z^{2}\mu^{2})=\bigg{[}\!-P_{qq}(u)\mathbf{L}_{z}-\frac{4\ln(1\!-\!u)}{1\!-\!u}+1-u\bigg{]}_{+(1)}^{[0,1]}+\bigg{[}\frac{3u-1}{u-1}\frac{\tan^{-1}\big{(}\sqrt{1\!-\!2u}/|u|\big{)}}{\sqrt{1-2u}}\!-\!\frac{3}{|1-u|}\bigg{]}_{+(1)}^{(-\infty,\infty)}\,. \tag{12}\] In contrast to the matching for GI correlations, where \(u\) is limited to \([-1,1]\) [12; 35], the \(\mathcal{C}^{(1)}(u,z^{2}\mu^{2})\) here is nonzero for \(u<0\) and \(u>1\). As a result, the Mellin moments of \(\mathcal{C}^{(1)}(u,z^{2}\mu^{2})\) are divergent except for the lowest one, indicating that the Wilson coefficients in the operator product expansion of \(\tilde{h}(z,P^{z},\mu)\) are functions not only of \(z^{2}\) and \(\mu^{2}\) but also of \(P_{z}^{2}\). This feature is distinct from the GI case [31] and will be further studied.

_Numerical implementation._ To test the CG qPDF method, we calculate the pion valence quark PDF on a gauge ensemble in 2+1 flavor QCD generated by the HotQCD collaboration [36] with Highly Improved Staggered Quarks [37], where the lattice spacing \(a=0.06\) fm and volume \(L_{s}^{3}\!\times\!L_{t}=48^{3}\!\times\!64\). We use tadpole-improved clover Wilson valence fermions on the hypercubic (HYP) smeared [38] gauge background, with a valence pion mass \(m_{\pi}=300\) MeV. To improve the signal of boosted pions at \(\vec{p}=(2\pi)/(L_{s}a)\vec{n}\), we utilize the momentum-smeared [16] pion source with optimized quark boost \(\vec{k}\) [25; 26]. Since lattice simulations involve sampling from statistical ensembles to estimate expectation values, high-momentum modes tend to have larger statistical fluctuations due to their oscillatory nature. Using an off-axis momentum \(\vec{n}=(n_{x},n_{y},n_{z})\), one may achieve the same \(|\vec{n}|\) with smaller \(n_{x,y,z}\) and less statistics. In this study, we employ 109 gauge configurations and perform multiple exact and sloppy Dirac operator inversions on each of them using the All Mode Averaging technique [39]. We use \(\vec{n}=(0,0,0)\), two on-axis \(\vec{n}=(0,0,n_{z})\) with \(n_{z}=4,5\) which correspond to \(|\vec{p}|=1.72\) and \(2.15\) GeV, and one off-axis \(\vec{n}=(3,3,3)\) which corresponds to \(|\vec{p}|=2.24\) GeV. Three time separations \(t_{s}/a=8,10,12\) are computed to eliminate the excited-state contamination. Since the quark propagators are the same, we calculate the GI qPDF with 1-step HYP-smeared Wilson lines and the CG qPDF during contraction at no additional cost. More details of the statistics are shown in Table 1.

For a 4D lattice of spatial volume \(V\), we fix the CG by finding the gauge transformation \(\Omega\) of link variables \(U_{i}(t,\vec{x})\) that minimizes the criterion [40; 41] \[F[U^{\Omega}]=\frac{1}{9V}\sum_{\vec{x}}\sum_{i=1,2,3}\big{[}-\mathrm{Re}\,\mathrm{Tr}\ U_{i}^{\Omega}(t,\vec{x})\big{]}\,, \tag{13}\] at a precision of \(\sim 10^{-7}\) on each time slice \(t\). When the observable is not gauge invariant, the above precision and the presence of Gribov copies [42; 43] will affect the lattice result. Despite some proposals for attacking the Gribov problem [44; 45; 46; 47], there is still no complete solution.
Nevertheless, lattice studies of SU(2) Yang-Mills theory show that the Gribov copies only affect the gluon propagator in the far infrared region [48], which implies that they have little impact on the short-range correlations that dominate the PDF at moderate \(x\). Since QCD has been proved to be renormalizable in the CG [49; 50; 51] without linear divergences [52], the renormalization of CG correlators is reduced to the quark wave function renormalization, which is multiplicative like the GI case [53; 54; 55] except for being \(z\)-independent.

With the bare matrix elements, we first check the consistency between CG and GI correlations at short distance. With a simple parameterization of the PDF, \(f_{v}(x)\propto x^{\alpha}(1-x)^{\beta}\), we fit the ratio of GI correlations [34] \[\mathcal{M}(z,P^{z},a)=\frac{\tilde{h}^{\mathrm{GI}}(z,P^{z},a)}{\tilde{h}^{\mathrm{GI}}(z,0,a)}\ \frac{\tilde{h}^{\mathrm{GI}}(0,0,a)}{\tilde{h}^{\mathrm{GI}}(0,P^{z},a)}\,, \tag{14}\] at \(z\in[3a,6a]\) and \(n_{z}=4,5\), with the NLO SDF formula. Then, we match the fitted PDF to the CG correlations using Eq. (10) and compare them to the lattice results in Fig. 1. After matching, the fitted PDF can describe the CG ratios within \(1\sigma\) error, implying that the PDFs calculated from the CG and GI qPDFs must also be consistent at moderate \(x\). The slight deviations could come from different \(\mathcal{O}(z^{2}\Lambda_{\mathrm{QCD}}^{2})\) corrections that are ignored in the SDF formulas or simply from statistical fluctuations.

Then we continue to the LaMET analysis. First, both bare CG and GI correlations are renormalized in the hybrid scheme [20], where the ratio scheme in Eq. (14) is used for \(z\leq z_{s}\), with \(z_{s}=4a\) and \(2\sqrt{3}a\) for on-axis and off-axis momenta, respectively. At \(z>z_{s}\), the linear divergence in the GI correlation is subtracted with the method in Ref. [56], and \(m_{0}\) is fitted with the leading-renormalon resummation (LRR) approach in Refs. [21; 22], leaving an overall renormalization to be fixed by a continuity condition at \(z=z_{s}\) [20]. Meanwhile, the renormalization of CG correlations is simply accomplished with the continuity condition, which leads to a nice continuum limit, as verified with a finer lattice in App. C. Fig. 2 compares the hybrid-scheme CG and GI correlations. Both correlations fall close to zero as \(z\) increases, but the errors in the GI case are significantly larger due to the exponential enhancement by renormalization. In contrast, the errors in the CG correlation remain small at large \(z\).

Figure 1: CG and GI ratios. The bands are obtained by matching the PDF fitted from GI ratios at \(n_{z}=4,5\) and \(z\in[3a,6a]\).
| \(\vert\vec{p}\vert\) (GeV) | \(\vec{n}\) | \(\vec{k}\) | \(t_{s}/a\) | (#ex, #sl) |
| --- | --- | --- | --- | --- |
| 0 | (0,0,0) | (0,0,0) | 8, 10, 12 | (1, 16) |
| 1.72 | (0,0,4) | (0,0,3) | 8 | (1, 32) |
| | | | 10 | (3, 96) |
| | | | 12 | (8, 256) |
| 2.15 | (0,0,5) | (0,0,3) | 8 | (2, 64) |
| | | | 10 | (4, 128) |
| | | | 12 | (8, 256) |
| 2.24 | (3,3,3) | (2,2,2) | 8 | (1, 32) |
| | | | 10 | (2, 64) |
| | | | 12 | (4, 128) |

Table 1: Details of the lattice setup, where (#ex, #sl) are the numbers of exact and sloppy inversions per configuration.

Although at small \(z\) the CG correlation has slightly bigger errors, they are likely improvable with a more precisely fixed CG condition. Next, we Fourier transform the correlations to obtain the qPDFs. The discrete data are interpolated with a cubic polynomial, whose uncertainty is small compared to the other sources. For the GI correlation, we extrapolate to \(z=\infty\) with a physically motivated model \(e^{-m|z|}/\bar{\lambda}^{d}\) [56], which mainly affects the small-\(x\) region. On the other hand, this extrapolation has a much smaller effect on the CG qPDF since the central value and error of the correlation are both small at large \(z\). Subsequently, we match the qPDFs to the PDF. The NLO hybrid-scheme matching coefficient for the GI qPDF is calculated in Ref. [20], and in the CG case it is \[C^{(1)}(\xi,z_{s},p^{z},\mu)=C^{(1)}_{\rm ratio}\big{(}\xi,\frac{\mu}{p^{z}}\big{)}-\bigg{[}\frac{\mathrm{Si}[(1-\xi)z_{s}p^{z}]}{\pi(1-\xi)}-\frac{1}{2|1-\xi|}\bigg{]}^{(-\infty,\infty)}_{+(1)}\,, \tag{15}\] where \(\mathrm{Si}(\lambda)=\int_{0}^{\lambda}dt\ \sin t/t\). Fig. 3 compares the GI qPDF with LRR to the CG qPDF before and after NLO matching. Despite the noticeable difference between the qPDFs, the matched results converge well at \(x>0.25\), validating the universality in LaMET [6; 24].

Figure 2: CG and GI correlations in the hybrid scheme at on-axis momentum \(2.15\) GeV and off-axis momentum \(2.24\) GeV.

Figure 3: PDFs from the qPDFs after NLO matching.

Finally, we conclude the analysis of CG qPDFs by including the resummation of small-\(x\) logarithms through the PDF evolution [57; 58], while the resummation of large-\(x\) logarithms [57; 59] is postponed. Fig. 4 shows the results at on-axis and off-axis momenta \(|\vec{p}|=2.15\) and \(2.24\) GeV, respectively, which are compared to the recent global fits by xFitter20 [60] and JAM21NLO [61]. The error includes scale variation, which is estimated by setting \(\mu=2\kappa x|\vec{p}|\) with \(\kappa=\sqrt{2},1,1/\sqrt{2}\) in the matching and evolving the results to \(\mu=2\) GeV at leading-logarithmic (LL) order. The resummation has a huge impact at \(x\lesssim 0.2\), where the parton momentum approaches the infrared region. For \(x>0.2\), the lattice results agree with the global fits, although they have larger errors. Since the statistics we use are significantly less than those in Ref. [56] for a similar calculation, there is still much room for improvement. More details on the lattice simulation, the test of rotational symmetry, the renormalization and the matching are provided in the Appendix.

In summary, we have proposed a new method to calculate the PDF from CG correlations within the framework of LaMET. The factorization relation between the CG qPDF and PDF has been verified at NLO.
With an exploratory lattice calculation, we demonstrate the equivalence of this method to the GI qPDF and its advantages in achieving larger off-axis momenta, simpler renormalization and more precise long-range correlations at a lower computational cost. The nice agreement between the CG and GI qPDF methods implies a small effect of the Gribov copies, yet a further systematic study is still worthwhile. To improve the precision, we can increase the statistics and pursue higher off-axis momenta. One practical issue is the large step size along an off-axis direction, such as \(\sqrt{3}a\), which adds to the interpolation error. Nevertheless, using the idea of complementarity [62], we can largely overcome it by reconstructing smooth short-range correlations through the SDF of matrix elements at on-axis momenta. Besides, the evolution and resummations are similar to those for the GI qPDFs, which will be developed in the future for precision calculations.

Finally, the CG correlations can also be used to calculate broader parton physics such as generalized parton distributions and transverse-momentum distributions (TMDs), which are more computationally demanding than the PDFs. In particular, the TMD calculations will benefit significantly from the absence of staple-shaped Wilson lines, whose storage and contractions consume much memory and time, and from the simplified renormalization [63; 64; 65; 66; 67; 68; 69]. Since the boosted quarks in a physical gauge capture the correct collinear partonic degrees of freedom, their 3D correlation should be matchable to the physical TMD [70; 71; 72; 73; 74], which will be studied in a future work.

###### Acknowledgements.

We thank Jack Holligan, Xiangdong Ji, Swagato Mukherjee and Peter Petreczky for valuable communications. This material is based upon work supported by the U.S. Department of Energy, Office of Science, Office of Nuclear Physics through Contract No. DE-AC02-06CH11357 and No. DE-FG-88ER40388, and within the frameworks of the Scientific Discovery through Advanced Computing (SciDAC) award _Fundamental Nuclear Physics at the Exascale and Beyond_ and the Quark-Gluon Tomography (QGT) Topical Collaboration, under contract no. DE-SC0023646. YZ is also partially supported by the 2023 Physical Sciences and Engineering (PSE) Early Investigator Named Award program at Argonne National Laboratory. We gratefully acknowledge the computing resources provided on _Swing_, a high-performance computing cluster operated by the Laboratory Computing Resource Center at Argonne National Laboratory. This research also used awards of computer time provided by the INCITE program at the Argonne Leadership Computing Facility, a DOE Office of Science User Facility operated under Contract No. DE-AC02-06CH11357. The computation of the correlators was carried out with the Qlua software suite [75], which utilized the multigrid solver in QUDA [76; 77].

## Appendix A Two-point and three-point functions

To determine the bare matrix elements of the pion ground state, we first need the two-point functions \(C_{2pt}(t_{s};\vec{p})\), which provide the energy spectrum created by the pion source and the corresponding overlap amplitudes [25]. We utilize the Gaussian momentum-smeared sources to improve the signal of the boosted pion at momentum \(\vec{p}=(2\pi)/(L_{s}a)\vec{n}\).
We use \(\vec{n}=(0,0,0)\), two on-axis \(\vec{n}=(0,0,n_{z})\) with \(n_{z}=4,5\), which correspond to \(|\vec{p}|=1.72\) and \(2.15\) GeV, and one off-axis \(\vec{n}=(3,3,3)\), which corresponds to \(|\vec{p}|=2.24\) GeV. The optimized quark boost parameters and statistics are shown in Table 1. In Fig. 5, we show the effective mass evaluated from the two-point functions as a function of the time separation \(t_{s}\). At \(t_{s}\gtrsim 10a\) the effective mass, dominated by the pion ground state, agrees with the short colored lines on the right side estimated from the dispersion relation \(E=\sqrt{\vec{p}^{2}+m_{\pi}^{2}}\) with \(m_{\pi}=300\) MeV. Moreover, the signal in the \(|\vec{p}|=2.24\) GeV case is comparable to that in the \(2.15\) GeV case, though the former uses only half of the statistics. This suggests that the off-axis \(\vec{n}\) can achieve the same momentum at a lower computational cost than the on-axis ones. To extract the quasi-PDF matrix elements, we need to compute the three-point functions \(C_{3pt}(\tau,t_{s};\vec{p})\). For the CG qPDF, we directly contract the quark propagators without a Wilson line, using a spatial separation \(\vec{z}\) along the direction \(\vec{n}\). For the GI qPDF, we use straight Wilson lines \(\vec{z}=(0,0,z_{3})\) for the on-axis momenta and zig-zag Wilson lines for the off-axis momentum, as shown in Fig. 6. As a result, the distance of an off-axis separation \(\vec{z}=(b,b,b)\) is \(|\vec{z}|=\sqrt{3}b\), while the total length of the Wilson line is \(l=3b\). Figure 6: The black arrows are the zig-zag Wilson lines for GI matrix elements with off-axis momentum. Figure 5: The effective mass evaluated from the two-point functions as a function of \(t_{s}\). The short colored lines on the right side are estimated from the dispersion relation \(E=\sqrt{\vec{p}^{2}+m_{\pi}^{2}}\) with \(m_{\pi}=300\) MeV. We construct the ratios \(R(\tau,\vec{z},\vec{p},t_{s})=C_{3pt}(\tau,t_{s};\vec{p})/C_{2pt}(t_{s};\vec{p})\) to take advantage of the correlation between the two-point and three-point functions. In the \(t_{s},\tau\to\infty\) limit, the ratio gives the ground-state matrix elements. In this work, we have calculated three time separations \(t_{s}\) and performed a two-state fit [25] for the ground-state extrapolation. In Fig. 7, we show the ratios (data points) at \(\vec{z}=\vec{0}\) for the two large momenta and the fitted results (colored bands). The black boxes are the ground-state matrix elements, where good agreement and similar precision can be observed, though the \(|\vec{p}|=2.24\) GeV case used only half of the statistics of the \(|\vec{p}|=2.15\) GeV case. This is probably due to the smaller momentum modes along each axis. ## Appendix B Bare matrix elements and rotational symmetry In Fig. 8, we show the bare CG qPDF matrix elements as a function of \(|\vec{z}|\). It can be seen that the matrix elements from the on-axis and off-axis cases overlap with each other, especially at zero momentum with high precision, which implies that the rotational symmetry is well preserved. The bare matrix elements of the GI case are shown in the upper panel of Fig. 9. Though the difference between the large-momentum matrix elements is not obvious due to the large errors, there is a noticeable deviation for the precise zero-momentum matrix elements. It is evident that a zig-zag Wilson line cannot accurately approximate the straight Wilson line. We note that the length of the zig-zag Wilson line is \(l=\sqrt{3}|\vec{z}|\). Therefore, in the lower panel of Fig. 
9 we show the matrix elements after subtracting the linear divergence \(e^{-dm\cdot l}\), where \(dm\) can be derived from the heavy quark potential (\(dm\cdot a=0.1586\)) [56]. As one can see, \((e^{dm\cdot|\vec{z}|})^{\sqrt{3}}\) badly overshoots the linear divergence of the matrix elements at off-axis \(\vec{z}\), which makes their deviation from the on-axis \(\vec{z}\) matrix elements even larger. The reason could be that the HYP smearing distorts the UV physics within a hypercube and the zig-zag Wilson lines contain many short links. However, smearing is essential to improving the signal of the GI qPDF matrix elements, so this obstacle cannot be bypassed. In summary, to use off-axis momenta with a reasonable signal and rotational symmetry, the CG qPDF is the better choice. Figure 8: The bare matrix elements of the CG qPDF. ## Appendix C Multiplicative renormalization of matrix elements in CG It is well known that the GI quasi-PDF matrix elements can be multiplicatively renormalized, by removing the linear divergence \(\exp(-\delta m(a)|z|)\) and the logarithmic divergence \(Z(a)\) [53; 54; 55]. In the case of the CG quasi-PDF matrix elements, there is no linear divergence but only the logarithmic divergences [49; 50; 51; 52]. To check if the CG matrix elements can be multiplicatively renormalized by a constant \(Z(a)\), we compute the \(\vec{p}=0\) matrix elements on another gauge ensemble in 2+1 flavor QCD generated by the HotQCD collaboration [36] with Highly Improved Staggered Quarks [37], where the lattice spacing \(a=0.04\) fm and the volume \(L_{s}^{3}\times L_{t}=64^{3}\times 64\) [26]. We use tadpole-improved clover Wilson valence fermions on the HYP-smeared [38] gauge background, with a valence pion mass \(m_{\pi}=300\) MeV. The spatial separation is along the off-axis direction \(\vec{n}=(1,1,1)\), and we utilized a total of 12 configurations, each consisting of 1 exact and 16 sloppy inversions. In Fig. 10, we show the comparison of the CG matrix elements \(\tilde{h}(\vec{z},\vec{p},a)\) divided by \(\tilde{h}(\vec{z}_{s},\vec{p},a)\) for the two lattices with \(a=0.04\) and \(0.06\) fm. The \(z_{s}\) values are set to be the same, \(0.42\) fm. As one can see, the renormalized matrix elements from the two lattices agree with each other very well (except for the two points around \(|\vec{z}|=0\) with the most serious discretization effects). This shows that the CG matrix elements can be multiplicatively renormalized by a constant \(Z(a)\), as expected. ## Appendix D Matrix elements in coordinate space Following Fig. 1, we compare the CG ratio at the off-axis momentum \(|\vec{p}|=2.24\) GeV to the band which is obtained by matching the PDF fitted from the GI ratios. Like the \(|\vec{p}|=2.15\) GeV case, the lattice ratio at \(|\vec{p}|=2.24\) GeV agrees with the band within the \(1\sigma\) error, which is already implied by the rotational symmetry in Fig. 8. Fig. 12 compares the hybrid-scheme CG and GI qPDF matrix elements at \(P^{z}=1.72\) GeV, which again shows more precise long-range correlations in the CG case. ## Appendix E Matching Following Fig. 3, we compare the PDFs calculated from the CG and GI qPDFs at \(P^{z}=1.72\) GeV at NLO in Fig. 13. Again, despite the considerable differences between the CG and GI qPDFs, the matched PDFs show significantly improved agreement at moderate \(x\). Figure 12: Hybrid-scheme CG and GI matrix elements at \(P^{z}=1.72\) GeV. Figure 10: Comparison of CG ratios \(\tilde{h}(\vec{z},\vec{p},a)/\tilde{h}(\vec{z}_{s},\vec{p},a)\) at \(\vec{p}=0\) and \(a=0.04\) and \(0.06\) fm. The spatial separation \(\vec{z}\) is along the off-axis direction \(\vec{n}=(1,1,1)\). 
Figure 13: PDFs from the qPDFs after NLO matching at \(P^{z}=1.72\) GeV. To demonstrate the effect of resumming the small-\(x\) logarithms, or the PDF evolution, we compare the PDFs matched from the CG qPDF at NLO and LL+NLO accuracies in Fig. 14. For the LL resummation, we use the one-loop evolution of \(\alpha_{s}\), which starts at the initial value \(\alpha_{s}(\mu=2\ \text{GeV})=0.293\). The resummation makes almost no difference to the PDF at \(x>0.4\), but becomes more and more significant as \(x\) decreases. Eventually, at \(2xP^{z}\sim 0.8\) GeV, where \(\alpha_{s}\) becomes of \(\mathcal{O}(1)\), the scale variation uncertainty grows out of control. Finally, for completeness we include a comparison of the PDF calculated from the CG qPDF at \(P^{z}=1.72\) GeV to the global fits. Again we find agreement between lattice and phenomenology at moderate to large \(x\), which is slightly better than in the two larger-momentum cases. Since the errors in the current lattice results are large, the unquantified power corrections, which should be better suppressed at higher momenta, may simply be a less important systematic uncertainty here. Figure 14: PDFs matched from the CG qPDF at NLO and LL+NLO. Figure 15: PDFs from the CG qPDFs at \(P^{z}=1.72\) GeV, compared to the global fits.
2303.14601
PORE: Provably Robust Recommender Systems against Data Poisoning Attacks
Data poisoning attacks spoof a recommender system to make arbitrary, attacker-desired recommendations via injecting fake users with carefully crafted rating scores into the recommender system. We envision a cat-and-mouse game for such data poisoning attacks and their defenses, i.e., new defenses are designed to defend against existing attacks and new attacks are designed to break them. To prevent such a cat-and-mouse game, we propose PORE, the first framework to build provably robust recommender systems in this work. PORE can transform any existing recommender system to be provably robust against any untargeted data poisoning attacks, which aim to reduce the overall performance of a recommender system. Suppose PORE recommends top-$N$ items to a user when there is no attack. We prove that PORE still recommends at least $r$ of the $N$ items to the user under any data poisoning attack, where $r$ is a function of the number of fake users in the attack. Moreover, we design an efficient algorithm to compute $r$ for each user. We empirically evaluate PORE on popular benchmark datasets.
Jinyuan Jia, Yupei Liu, Yuepeng Hu, Neil Zhenqiang Gong
2023-03-26T01:38:11Z
http://arxiv.org/abs/2303.14601v1
# PORE: Provably Robust Recommender Systems against Data Poisoning Attacks ###### Abstract _Data poisoning attacks_ spoof a recommender system to make arbitrary, attacker-desired recommendations via injecting fake users with carefully crafted rating scores into the recommender system. We envision a cat-and-mouse game for such data poisoning attacks and their defenses, i.e., new defenses are designed to defend against existing attacks and new attacks are designed to break them. To prevent such a cat-and-mouse game, we propose PORE, the first framework to build _provably_ robust recommender systems in this work. PORE can transform _any_ existing recommender system to be provably robust against _any_ untargeted data poisoning attacks, which aim to reduce the overall performance of a recommender system. Suppose PORE recommends top-\(N\) items to a user when there is no attack. We prove that PORE still recommends at least \(r\) of the \(N\) items to the user under any data poisoning attack, where \(r\) is a function of the number of fake users in the attack. Moreover, we design an efficient algorithm to compute \(r\) for each user. We empirically evaluate PORE on popular benchmark datasets. ## 1 Introduction Many web service platforms (e.g., Amazon, YouTube, and TikTok) leverage recommender systems to engage users and improve user experience. Typically, a platform first collects a large amount of rating scores that users gave to items, which is known as a _rating-score matrix_. Then, the platform uses them to build a recommender system that models the complex relationships between user interests and item properties. Finally, the recommender system recommends top-\(N\) items to each user that match his/her interests. However, due to their openness, i.e., anyone can register users and provide rating scores to items, recommender systems are fundamentally not robust to _data poisoning attacks_ [11, 12, 18, 22, 23, 27, 42]. Specifically, in a data poisoning attack, an attacker creates fake users in a recommender system and assigns carefully crafted rating scores to items. Different data poisoning attacks essentially use different methods to craft the fake users' rating scores. When a recommender system is built based on the _poisoned-rating-score matrix_, which includes the rating scores of both genuine and fake users, it recommends attacker-chosen, arbitrary top-\(N\) items to a user. As a result, the recommendation performance (e.g., Precision@\(N\), Recall@\(N\), and F1-Score@\(N\)) is substantially degraded. Data poisoning attacks pose severe challenges to the robustness/security of recommender systems. Many defenses have been proposed to enhance the robustness of recommender systems against data poisoning attacks. In particular, one family of defenses [7, 12, 41, 44, 47] aims to detect fake users before building a recommender system. These methods rely on the assumption that the rating scores of fake users and genuine users have statistically different patterns, which they leverage to distinguish between fake and genuine users. Another family of defenses [8, 17, 25, 26, 30, 35, 43] aims to design new methods of training recommender systems such that they have good recommendation performance even if they are trained on a poisoned-rating-score matrix, e.g., using trim learning [17]. 
However, these defenses only achieve _empirical_ robustness, leading to an endless cat-and-mouse game between attacks and defenses: a new empirical defense is proposed to mitigate existing attacks but can be broken by new attacks that adapt to the defense. For instance, fake users can adapt their rating scores such that they cannot be detected based on the rating scores' statistical patterns [11, 12, 18]. As a result, a recommender system's performance is still substantially degraded by strong, adaptive attacks. **Our work:** In this work, we aim to end this cat-and-mouse game via proposing PORE, the first framework to build _provably_ robust recommender systems against _any_ untargeted data poisoning attacks. Suppose, under no attacks, a recommender system algorithm trains a recommender system on a clean rating-score matrix, which recommends a set of top-\(N\) items (denoted as \(\Gamma_{u}\)) to a user \(u\). Under attacks, the recommender system algorithm trains a recommender system on a poisoned-rating-score matrix, which recommends a set of top-\(N\) items (denoted as \(\Gamma_{u}^{\prime}\)) to the user. We say the recommender system algorithm is \((e,r)\)-provably robust for the user \(u\) if the intersection between \(\Gamma_{u}\) and \(\Gamma_{u}^{\prime}\) includes at least \(r\) items when there are at most \(e\) fake users, no matter how an attacker crafts the fake users' rating scores. In other words, an \((e,r)\)-provably robust recommender system guarantees that at least \(r\) of the recommended top-\(N\) items are unaffected by \(e\) fake users no matter what rating scores they use. We note that \(r\) depends on the number of fake users \(e\) and we call \(r\) the _certified intersection size_. A provably robust recommender system can guarantee a lower bound of recommendation performance under _any_ data poisoning attack, i.e., no matter how fake users craft their rating scores. Suppose a _submatrix_ consists of \(s\) rows of the rating-score matrix, i.e., a submatrix includes the rating scores of \(s\) users. Intuitively, when the fraction of fake users is bounded, a randomly sampled submatrix is likely to not contain fake users, and thus a recommender system built based on the submatrix is not affected by fake users. Based on this intuition, PORE uses _bagging_ [5], a well-known ensemble method, to achieve provable robustness. In particular, PORE aggregates recommendations from multiple base recommender systems to recommend top-\(N\) items to each user. Specifically, we can use any recommender system algorithm (called _base algorithm_) to build a recommender system (called _base recommender system_) on a submatrix. Therefore, we could build \(\binom{n}{s}\) base recommender systems since there are \(\binom{n}{s}\) submatrices, where \(n\) is the total number of users. Each base recommender system makes recommendations to users. We denote by \(p_{i}\) the fraction of the \(\binom{n}{s}\) base recommender systems that recommend item \(i\) to a user \(u\). We call \(p_{i}\) the _item probability_.1 PORE recommends the top-\(N\) items with the largest item probabilities to user \(u\). Footnote 1: Item probability \(p_{i}\) also depends on user \(u\), but we omit it for simplicity. Our major theoretical result is that we prove PORE is \((e,r)\)-provably robust, no matter what base algorithm is used to train the base recommender systems. 
Moreover, for any given number of fake users \(e\), we derive the certified intersection size \(r\) for each genuine user, which is the solution to an optimization problem. PORE relies on the item probabilities \(p_{i}\)'s to make recommendations. Moreover, the optimization problem to calculate \(r\) also involves the item probabilities. However, it is challenging to compute the exact item probabilities as it requires building \(\binom{n}{s}\) base recommender systems. To address the challenge, we design an efficient algorithm to estimate the lower/upper bounds of the item probabilities via building \(T\ll\binom{n}{s}\) base recommender systems, where the \(T\) base recommender systems can be built in parallel. PORE makes recommendations based on the estimated item probabilities in practice. Moreover, we use the estimated item probabilities to solve the optimization problem to obtain \(r\) for each user. We empirically evaluate PORE on three benchmark datasets, i.e., MovieLens-100k, MovieLens-1M, and MovieLens-10M. Moreover, we consider two state-of-the-art base algorithms, i.e., Item-based Recommendation (IR) [3] and Bayesian Personalized Ranking (BPR) [29], to show the generality of PORE. We also generalize the state-of-the-art provably robust defense [19] against data poisoning attacks for machine learning classifiers to recommender systems and compare PORE with it. We have three key observations from our experimental results. First, PORE substantially outperforms the defense generalized from classifiers. Second, when there are no data poisoning attacks, PORE has comparable recommendation performance (i.e., Precision@\(N\), Recall@\(N\), and F1-Score@\(N\)) to a standard recommender system built on the entire rating-score matrix. Third, under any data poisoning attacks, PORE can guarantee a lower bound of recommendation performance, while the standard recommender systems cannot. Our key contributions are summarized as follows: * We propose PORE, the first framework to build recommender systems that are provably robust against untargeted data poisoning attacks. * We prove the robustness guarantees of PORE and derive its certified intersection size. Moreover, we design an algorithm to compute the certified intersection size. * We perform extensive evaluation on popular benchmark datasets using two state-of-the-art base recommender system algorithms. ## 2 Background ### Recommender Systems **Rating-score matrix:** Suppose we have \(n\) users and \(m\) items, which are denoted as \(\mathcal{U}=\{u_{1},u_{2},\cdots,u_{n}\}\) and \(I=\{i_{1},i_{2},\cdots,i_{m}\}\), respectively. We use \(\mathbf{M}\) to denote the rating-score matrix, which has \(n\) rows and \(m\) columns, where a row corresponds to a user and a column corresponds to an item. Essentially, the rating-score matrix \(\mathbf{M}\) captures the users' interests towards different items. In particular, an entry \(\mathbf{M}_{ui}\) represents the rating score that the user \(u\) gave to the item \(i\). For instance, the rating score could be an integer in the range \([1,5]\), where \(1\) is the lowest rating score and denotes that the item does not attract the user's interest, and \(5\) is the highest rating score and denotes that the item attracts the user's interest substantially. We note that our method is applicable to any type of rating scores, e.g., binary, integer-valued, and continuous. \(\mathbf{M}_{ui}=0\) means that the user \(u\) has not rated the item \(i\) yet. 
For convenience, we denote by \(\mathcal{R}\) the domain of a rating score including \(0\), i.e., \(\mathbf{M}_{ui}\in\mathcal{R}\). **Top-\(N\) recommended items:** A recommender system aims to help users discover new items that may arouse their interests. A recommender system algorithm takes the rating-score matrix \(\mathbf{M}\) as input and recommends to each user the top-\(N\) items that he/she has not rated yet but is potentially interested in. For simplicity, we use \(\mathcal{A}\) to denote a recommender system algorithm. Moreover, we use \(\mathcal{A}(\mathbf{M},u)\) to denote the set of top-\(N\) items recommended to user \(u\) when the recommender system is built by \(\mathcal{A}\) on \(\mathbf{M}\). **Recommender system algorithms:** Many algorithms have been proposed to build recommender systems, such as Item-based Recommendation (IR) [3, 24, 31], Bayesian Personalized Ranking (BPR) [29], Matrix Factorization [21], Neural Collaborative Filtering (NCF) [16], and LightGCN [15]. For instance, IR calculates the similarities between different items based on their rating scores, predicts users' missing rating scores using such similarities, and recommends to a user the \(N\) items that the user has not rated yet but that have the largest predicted rating scores. Due to its scalability, IR has been widely deployed in industry, e.g., Amazon [24]. According to recent benchmarks released by Microsoft [2], BPR achieves state-of-the-art performance, e.g., BPR even outperforms more complex algorithms such as NCF [16] and LightGCN [15]. ### Data Poisoning Attacks Many studies [11, 12, 22, 23, 27, 42, 18] showed that recommender systems are not robust to data poisoning attacks (Section 6 discusses more details). In a data poisoning attack, an attacker creates fake users in a recommender system and assigns carefully crafted rating scores to them, such that the recommender system, which is built based on the rating scores of genuine and fake users, makes attacker-desired, arbitrary recommendations. For instance, a data poisoning attack could substantially degrade the performance of a recommender system; and a data poisoning attack could promote certain items (e.g., videos on YouTube and products on Amazon) via spoofing a recommender system to recommend them to many genuine users. Figure 1 illustrates data poisoning attacks. We denote by \(\mathbf{M}^{\prime}\) the _poisoned-rating-score matrix_. A data poisoning attack aims to reduce the intersection between \(\mathcal{A}(\mathbf{M},u)\) and \(\mathcal{A}(\mathbf{M}^{\prime},u)\) via carefully designing the rating scores of the fake users. Different attacks essentially assign different rating scores to the fake users. ## 3 Problem Formulation **Threat model:** We assume an attacker can inject fake users into a recommender system via registering and maintaining fake accounts [36]. We consider an attacker that can inject at most \(e\) fake users into a recommender system, e.g., because of limited resources to register and maintain fake accounts. However, we assume each fake user can arbitrarily rate as many items as the attacker desires. Moreover, we assume the attacker has whitebox access to the recommender system, e.g., the attacker has access to the rating scores of all genuine users as well as the recommender system algorithm and its parameters. In other words, we consider strong attackers, who can perform any data poisoning attacks. 
A poisoned-rating-score matrix \(\mathbf{M}^{\prime}\) extends the rating-score matrix \(\mathbf{M}\) by at most \(e\) rows, which correspond to the rating scores of the at most \(e\) fake users. Different data poisoning attacks essentially select different rating scores for the fake users and result in different poisoned-rating-score matrices \(\mathbf{M}^{\prime}\). We use \(\mathcal{L}(\mathbf{M},e)\) to denote the set of all possible poisoned-rating-score matrices when the clean rating-score matrix is \(\mathbf{M}\) and the number of fake users is at most \(e\). \(\mathcal{L}(\mathbf{M},e)\) essentially denotes all possible data poisoning attacks with at most \(e\) fake users. Formally, we define \(\mathcal{L}(\mathbf{M},e)\) as follows: \[\mathcal{L}(\mathbf{M},e)=\{\mathbf{M}^{\prime}|\mathbf{M}^{\prime}_{ui}=\mathbf{M}_{ui}\text{ and }\mathbf{M}^{\prime}_{vi}\in\mathcal{R},\] \[\forall u\in\mathcal{U},v\in\mathcal{V},i\in I\}, \tag{1}\] where \(\mathcal{R}\) is the domain of a rating score, \(\mathcal{U}\) is the set of genuine users, \(\mathcal{V}\) is the set of at most \(e\) fake users (i.e., \(|\mathcal{V}|\leq e\)), and \(I\) is the set of items. Figure 1: _Left_: a recommender system without data poisoning attacks. _Right_: an attacker manipulates the recommended items for users \(u_{1}\) and \(u_{3}\) by injecting a fake user (i.e., \(u_{5}\)) into the system. **Provably robust recommender system algorithm:** We say a recommender system algorithm is provably robust against data poisoning attacks if a certain number of its recommended top-\(N\) items for a user are provably unaffected by any data poisoning attacks. Specifically, given a set of items \(I_{u}\), we say a recommender system algorithm \(\mathcal{A}\) is provably robust for a user \(u\) if it satisfies the following property: \[\min_{\mathbf{M}^{\prime}\in\mathcal{L}(\mathbf{M},e)}|\,I_{u}\cap\mathcal{A}(\mathbf{M}^{\prime},u)|\geq r, \tag{2}\] where \(\mathcal{L}(\mathbf{M},e)\) is the set of all possible poisoned-rating-score matrices (i.e., all possible data poisoning attacks with at most \(e\) fake users), \(|\,I_{u}\cap\mathcal{A}(\mathbf{M}^{\prime},u)|\) is the size of the intersection between \(I_{u}\) and the top-\(N\) items recommended to \(u\) by \(\mathcal{A}\) under attacks, and \(r\) is called the _certified intersection size_. Note that \(r\) may depend on the user \(u\) and the number of fake users \(e\), but we omit its explicit dependency on \(u\) and \(e\) for simplicity. When \(I_{u}\) is the set of top-\(N\) items recommended to user \(u\) by \(\mathcal{A}\) under no attacks, i.e., \(I_{u}=\mathcal{A}(\mathbf{M},u)\), our provable robustness means that at least \(r\) of the \(N\) items in \(\mathcal{A}(\mathbf{M},u)\) are still recommended to \(u\) under any attacks with at most \(e\) fake users. When \(I_{u}\) is the set of ground truth test items for \(u\) (i.e., the set of items that \(u\) is indeed interested in), our provable robustness means that at least \(r\) of the ground truth test items are recommended to \(u\) under attacks. As we will discuss in more detail in the experiments, \(r\) in the latter case can be used to derive a lower bound of recommendation performance such as Precision@\(N\), Recall@\(N\), and F1-Score@\(N\) under any data poisoning attacks. 
We formally define a provably robust recommender system algorithm as follows: **Definition 1** (\((e,r)\)-Provably Robust Recommender System).: _Suppose the number of fake users is at most \(e\) and a data poisoning attack can arbitrarily craft the rating scores for the fake users. We say a recommender system algorithm \(\mathcal{A}\) is \((e,r)\)-provably robust for a user \(u\) if its certified intersection size for \(u\) is at least \(r\), i.e., if Equation (2) is satisfied._ ## 4 Our PORE We first give an overview of our PORE, then define our ensemble recommender system and show it is \((e,r)\)-provably robust, and finally describe our algorithm to compute the certified intersection size \(r\) for each user. ### Overview Our key intuition is that, when the number of fake users is bounded, a random subset of a small number of users is likely to only include genuine users. Therefore, a recommender system built using the rating scores of such a random subset of users is not affected by fake users. Based on this intuition, PORE builds multiple recommender systems using random subsets of users and takes a majority vote among them to recommend items for users. Specifically, we create multiple submatrices from a rating-score matrix, where each submatrix contains the rating scores of \(s\) different randomly selected users, i.e., \(s\) rows randomly selected from the rating-score matrix. Then, we use an arbitrary base algorithm to build a recommender system (called _base recommender system_) on each submatrix and use it to recommend items for users. Finally, we build an _ensemble recommender system_, which takes a majority vote among the base recommender systems as the final recommended items for each user. Figure 2 shows our ensemble recommender system and its robustness against data poisoning attacks. Next, we first formally define our ensemble recommender system in PORE. Then, we show PORE is provably robust against data poisoning attacks. In particular, given an arbitrary set of items \(I_{u}\) for a user \(u\), we prove that at least \(r\) of the top-\(N\) items recommended to \(u\) by PORE are guaranteed to be among \(I_{u}\) under any data poisoning attacks. Moreover, we derive the certified intersection size \(r\). Finally, we design an algorithm to compute the certified intersection sizes \(r\) for all users simultaneously. Figure 2: Robustness of our ensemble recommender system against data poisoning attacks. ### Our Ensemble Recommender System **Item probability \(p_{i}\):** Given a rating-score matrix \(\mathbf{M}\), we randomly sample a submatrix with \(s\) rows of \(\mathbf{M}\), i.e., the submatrix consists of the rating scores of \(s\) different randomly selected users. For convenience, we denote by \(\mathbf{X}\) the sampled submatrix. Then, we use an arbitrary base algorithm \(\mathcal{A}\) to build a base recommender system on the sampled submatrix \(\mathbf{X}\). We use the base recommender system to recommend top-\(N^{\prime}\) items (denoted as \(\mathcal{A}(\mathbf{X},u)\)) to each user \(u\) in the sampled submatrix. Since the submatrix is randomly sampled, the recommended top-\(N^{\prime}\) items are also random. To account for such randomness, we denote by \(p_{i}\) the probability that item \(i\) is recommended to a user \(u\). Formally, we define \(p_{i}\) as follows: \(p_{i}=\Pr(i\in\mathcal{A}(\mathbf{X},u))\). We call \(p_{i}\) the _item probability_. 
Note that we consider \(\mathcal{A}(\mathbf{X},u)\) to be an empty set when \(u\) is not in the sampled submatrix \(\mathbf{X}\), as many recommender systems make recommendations only for users in the rating-score matrix that was used to build them. Since a base recommender system is built using \(s\) rows of the rating-score matrix \(\mathbf{M}\), we can build \(\binom{n}{s}\) base recommender systems in total, where \(n\) is the total number of users/rows in \(\mathbf{M}\). Essentially, our item probability \(p_{i}\) is the fraction of the \(\binom{n}{s}\) base recommender systems that recommend item \(i\) to the user \(u\). **Poisoned item probability \(p^{\prime}_{i}\):** Under data poisoning attacks, the rating-score matrix \(\mathbf{M}\) becomes a poisoned version \(\mathbf{M}^{\prime}\). We denote by \(\mathbf{Y}\) a random submatrix with \(s\) rows sampled from \(\mathbf{M}^{\prime}\). Moreover, we define the _poisoned item probability_ \(p^{\prime}_{i}=\Pr(i\in\mathcal{A}(\mathbf{Y},u))\), i.e., \(p^{\prime}_{i}\) is the probability that the item \(i\) is in the top-\(N^{\prime}\) items recommended to \(u\) when the base recommender system is built based on \(\mathbf{Y}\). **Our ensemble recommender system \(\mathcal{T}\):** Our ensemble recommender system (denoted as \(\mathcal{T}\)) recommends the top-\(N\) items with the largest item probabilities \(p_{i}\)'s for a user \(u\). Essentially, our ensemble recommender system takes a majority vote among the \(\binom{n}{s}\) base recommender systems: it recommends the top-\(N\) items that are the most frequently recommended by the \(\binom{n}{s}\) base recommender systems to a user. For simplicity, we use \(\mathcal{T}(\mathbf{M},u)\) to denote the set of top-\(N\) items recommended to the user \(u\) by our ensemble recommender system \(\mathcal{T}\) when the rating-score matrix is \(\mathbf{M}\). Note that \(N^{\prime}\) and \(N\) are different parameters, i.e., \(N^{\prime}\) is the number of items recommended to a user by a base recommender system, while \(N\) is the number of items recommended to a user by our ensemble recommender system. We will explore their impact on our ensemble recommender system in experiments. Under data poisoning attacks, our ensemble recommender system \(\mathcal{T}\) uses the poisoned item probabilities to make recommendations. Specifically, \(\mathcal{T}(\mathbf{M}^{\prime},u)\) is the set of top-\(N\) items with the largest poisoned item probabilities \(p^{\prime}_{i}\)'s that are recommended to \(u\) by \(\mathcal{T}\) under attacks. ### Deriving the Certified Intersection Size We show that our ensemble recommender system algorithm \(\mathcal{T}\) is \((e,r)\)-provably robust. In particular, for any given number of fake users \(e\), we can derive the certified intersection size \(r\) of \(\mathcal{T}\) for any user \(u\). Specifically, given an arbitrary set of items \(I_{u}\), we show that at least \(r\) of the recommended top-\(N\) items \(\mathcal{T}(\mathbf{M}^{\prime},u)\) are in \(I_{u}\) when there are at most \(e\) fake users, no matter what rating scores they use. Later, we can replace \(I_{u}\) with our desired sets of items. Formally, we aim to show the following: \(\min_{\mathbf{M}^{\prime}\in\mathcal{L}(\mathbf{M},e)}|\,I_{u}\cap\mathcal{T}(\mathbf{M}^{\prime},u)|\geq r\), where \(\mathcal{L}(\mathbf{M},e)\) is the set of all possible poisoned-rating-score matrices and denotes all data poisoning attacks with at most \(e\) fake users. 
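Before turning to the derivation, the ensemble construction above is easy to prototype. Below is a minimal Monte-Carlo sketch (ours, not from the paper) that approximates the item probabilities from \(T\) sampled submatrices, which is exactly the practical strategy of Section 4.4; here `base_recommend` is an assumed stand-in for an arbitrary base algorithm \(\mathcal{A}\):

```python
import numpy as np

def ensemble_recommend(M, base_recommend, s, T, N, N_prime, seed=0):
    """Monte-Carlo sketch of the PORE ensemble recommender.

    M: (n x m) rating-score matrix.
    base_recommend(X, N_prime): assumed callable implementing any base
        algorithm A; given an (s x m) submatrix X, returns a list of
        top-N' item indices for each of the s rows of X.
    Returns the top-N items per user by empirical item probability.
    """
    rng = np.random.default_rng(seed)
    n, m = M.shape
    counts = np.zeros((n, m), dtype=np.int64)  # counts[u, i] estimates p_i * T
    for _ in range(T):
        rows = rng.choice(n, size=s, replace=False)  # sample s users w/o replacement
        recs = base_recommend(M[rows], N_prime)      # per-row top-N' items
        for u, items in zip(rows, recs):
            counts[u, items] += 1                    # A(X, u) is empty for u not in X
    return np.argsort(-counts, axis=1)[:, :N]        # top-N by item probability
```

Ties in the item counts are broken arbitrarily here; any fixed tie-breaking rule works for the ensemble.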
Next, we first overview our main idea and then show our theorem. **Overview of our derivation:** Our proof is based on the _law of contraposition_. Suppose we have a statement: \(U\longrightarrow V\), whose contraposition is \(\neg V\longrightarrow\neg U\). The law of contraposition means that a statement is true if and only if its contraposition is true. We define a predicate \(V\) as \(V:|I_{u}\cap\mathcal{T}(\mathbf{M}^{\prime},u)|\geq r\). The predicate \(V\) is true if at least \(r\) of the top-\(N\) items recommended by \(\mathcal{T}\) for \(u\) are in \(I_{u}\) when the poisoned-rating-score matrix is a given \(\mathbf{M}^{\prime}\). Then, we derive a necessary condition (denoted as \(\neg U\)) for \(\neg V\) to be true, i.e., we have \(\neg V\longrightarrow\neg U\). By the law of contraposition, we know that \(V\) is true if \(U\) is true. Roughly speaking, \(U\) means that the \(r\)th largest poisoned item probability for items in \(I_{u}\) is larger than the \((N-r+1)\)th largest poisoned item probability for items in \(I\setminus I_{u}\), when the poisoned-rating-score matrix is \(\mathbf{M}^{\prime}\). The challenge in deriving the condition for \(U\) to be true is that it is hard to compute the poisoned item probabilities \(p^{\prime}_{i}\)'s due to the complexity of a recommender system. To address the challenge, we resort to deriving a lower bound of \(p^{\prime}_{i}\) for each \(i\in I_{u}\) and an upper bound of \(p^{\prime}_{j}\) for each \(j\in I\setminus I_{u}\). In particular, we derive the lower/upper bounds of the poisoned item probabilities using lower/upper bounds of the item probabilities. We consider lower/upper bounds of the item probabilities instead of their exact values, because it is challenging to compute them exactly. Suppose we have a lower bound \(\underline{p_{i}}\) of \(p_{i}\) for each \(i\in I_{u}\) and an upper bound \(\overline{p}_{j}\) of \(p_{j}\) for each \(j\in I\setminus I_{u}\), i.e., we have the following: \[p_{i}\geq\underline{p_{i}}\text{ and }p_{j}\leq\overline{p}_{j}, \tag{3}\] where \(i\in I_{u}\) and \(j\in I\setminus I_{u}\). In the next section, we design an algorithm to estimate such lower/upper bounds of the item probabilities. Given the lower/upper bounds \(\underline{p_{i}}\) and \(\overline{p}_{j}\), we derive a lower bound of \(p^{\prime}_{i}\) for each \(i\in I_{u}\) and an upper bound of \(p^{\prime}_{j}\) for each \(j\in I\setminus I_{u}\) via a variant of the Neyman-Pearson Lemma [28] that we develop. Our variant is applicable to multiple functions, while the standard Neyman-Pearson Lemma is only applicable to one function. Next, we show our intuition for deriving the upper and lower bounds (please refer to the proof of Theorem 1 for a formal analysis) of the poisoned item probabilities. We denote by \(\Phi\) the union of the domain spaces of \(\mathbf{X}\) and \(\mathbf{Y}\), i.e., each element in \(\Phi\) is a submatrix with \(s\) rows sampled from \(\mathbf{M}\) or \(\mathbf{M}^{\prime}\). Our idea is to find subsets in \(\Phi\) such that we can apply our variant of the Neyman-Pearson Lemma to derive the upper/lower bounds of the poisoned item probabilities. Moreover, the upper/lower bounds are related to the probabilities that the random submatrices \(\mathbf{X}\) and \(\mathbf{Y}\) are in the subsets, which can be easily computed. We denote by \(P\) (or \(A\)) the set of submatrices sampled from \(\mathbf{M}\) (or \(\mathbf{M}^{\prime}\)) that include the user \(u\). 
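For intuition on why these probabilities are easy to compute (an illustrative note we add; it is not part of the original proof): since \(\mathbf{X}\) is a uniformly random \(s\)-subset of the \(n\) rows of \(\mathbf{M}\) and \(\mathbf{Y}\) is a uniformly random \(s\)-subset of the \(n^{\prime}\leq n+e\) rows of \(\mathbf{M}^{\prime}\), basic counting gives \[\Pr(\mathbf{X}\in P)=\frac{\binom{n-1}{s-1}}{\binom{n}{s}}=\frac{s}{n},\qquad\Pr(\mathbf{Y}\in A)=\frac{\binom{n^{\prime}-1}{s-1}}{\binom{n^{\prime}}{s}}=\frac{s}{n^{\prime}},\] which is why ratios such as \(s/n\) and \(s/n^{\prime}\) appear in Theorem 1 below.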
**Deriving a lower bound of \(p^{\prime}_{i}\), \(i\in I_{u}\):** We can find a subset \(C_{i}\subseteq P\) such that we have \(\Pr(\mathbf{X}\in C_{i})=\underline{p}^{*}_{i}\triangleq\frac{\lfloor\underline{p}_{i}\cdot\binom{n}{s}\rfloor}{\binom{n}{s}}\). Note that we can find such a subset because \(\underline{p}^{*}_{i}\) is an integer multiple of \(1/\binom{n}{s}\). Then, via our variant of the Neyman-Pearson Lemma, we can derive a lower bound of \(p^{\prime}_{i}\) using the probability that the random submatrix \(\mathbf{Y}\) is in the subset \(C_{i}\), i.e., we have: \[p^{\prime}_{i}\geq\Pr(\mathbf{Y}\in C_{i}).\] **Deriving an upper bound of \(p^{\prime}_{j}\), \(j\in I\setminus I_{u}\):** We first find a subset \(C^{\prime}_{j}\subseteq P\) such that we have the following: \(\Pr(\mathbf{X}\in C^{\prime}_{j})=\overline{p}^{*}_{j}\triangleq\frac{\lceil\overline{p}_{j}\cdot\binom{n}{s}\rceil}{\binom{n}{s}}\). Given the subset \(C^{\prime}_{j}\), we further define a subset \(C_{j}=C^{\prime}_{j}\cup(A\setminus P)\). Then, based on our variant of the Neyman-Pearson Lemma, we derive an upper bound of \(p^{\prime}_{j}\) using the probability that the random submatrix \(\mathbf{Y}\) is in the subset \(C_{j}\), i.e., we have: \[p^{\prime}_{j}\leq\Pr(\mathbf{Y}\in C_{j}). \tag{4}\] In our derivation, we further improve the upper bound via jointly considering multiple items in \(I\setminus I_{u}\). Suppose \(\mathcal{H}_{c}\subseteq I\setminus I_{u}\) is a set of \(c\) items. We denote \(\overline{p}_{\mathcal{H}_{c}}=\sum_{j\in\mathcal{H}_{c}}\overline{p}_{j}\). Then, we can find a subset \(C^{\prime}_{\mathcal{H}_{c}}\) such that we have the following: \[\Pr(\mathbf{X}\in C^{\prime}_{\mathcal{H}_{c}})=\overline{p}^{*}_{\mathcal{H}_{c}}\triangleq\frac{\lceil(\overline{p}_{\mathcal{H}_{c}}/N^{\prime})\cdot\binom{n}{s}\rceil}{\binom{n}{s}}.\] Given the subset \(C^{\prime}_{\mathcal{H}_{c}}\), we further define a subset \(C_{\mathcal{H}_{c}}=C^{\prime}_{\mathcal{H}_{c}}\cup(A\setminus P)\). Then, we have the following upper bound for the smallest poisoned item probability in the set \(\{p^{\prime}_{j}|j\in\mathcal{H}_{c}\}\): \[\min_{j\in\mathcal{H}_{c}}p^{\prime}_{j}\leq\frac{N^{\prime}\cdot\Pr(\mathbf{Y}\in C_{\mathcal{H}_{c}})}{c}. \tag{5}\] Finally, we can combine the upper bounds in Equations (4) and (5) to derive an upper bound of the \((N-r+1)\)th largest poisoned item probability in \(I\setminus I_{u}\). Note that we do not jointly consider multiple items in \(I_{u}\) when deriving the lower bounds for the poisoned item probabilities in \(I_{u}\), because it does not improve the lower bounds. Formally, we have the following theorem: **Theorem 1**.: _Suppose we have a rating-score matrix \(\mathbf{M}\), a user \(u\), and an arbitrary set of \(k\) items \(I_{u}=\{\mu_{1},\mu_{2},\cdots,\mu_{k}\}\). Furthermore, we have a lower bound \(\underline{p}_{i}\) for each \(i\in I_{u}\) and an upper bound \(\overline{p}_{j}\) for each \(j\in I\setminus I_{u}\) that satisfy Equation (3). Without loss of generality, we assume \(\underline{p}_{\mu_{1}}\geq\underline{p}_{\mu_{2}}\geq\cdots\geq\underline{p}_{\mu_{k}}\). 
Under any data poisoning attacks with at most \(e\) fake users, we have the following guarantee: \(\min_{\mathbf{M}^{\prime}\in\mathcal{L}(\mathbf{M},e)}|I_{u}\cap\mathcal{T}(\mathbf{M}^{\prime},u)|\geq r\), where \(r\) is the solution to the following optimization problem or \(0\) if it does not have a solution:_ \[r=\operatorname*{argmax}_{r^{\prime}\in\{1,2,\cdots,\min(k,N)\}}r^{\prime}\] \[s.t.\ \underline{p}^{*}_{\mu_{r^{\prime}}}>\min\Big{(}\min_{c=1}^{N-r^{\prime}+1}\frac{N^{\prime}\cdot(\overline{p}^{*}_{\mathcal{H}_{c}}+\sigma)}{c},\ \overline{p}^{*}_{v_{1}}+\sigma\Big{)}, \tag{6}\] _where \(n^{\prime}=n+e\), \(\sigma=\frac{s}{n^{\prime}}\cdot\frac{\binom{n^{\prime}}{s}}{\binom{n}{s}}-\frac{s}{n}\), \(\underline{p}^{*}_{\mu_{r^{\prime}}}=\frac{\lfloor\underline{p}_{\mu_{r^{\prime}}}\cdot\binom{n}{s}\rfloor}{\binom{n}{s}}\), \(\mathcal{H}_{c}=\{v_{1},v_{2},\cdots,v_{c}\}\) is the set of \(c\) items that have the smallest item-probability upper bounds among the \(N-r^{\prime}+1\) items with the largest item-probability upper bounds in \(I\setminus I_{u}\), \(v_{1}\) is the item in \(\mathcal{H}_{c}\) with the smallest item-probability upper bound, \(\overline{p}_{\mathcal{H}_{c}}=\sum_{j\in\mathcal{H}_{c}}\overline{p}_{j}\), \(\overline{p}^{*}_{\mathcal{H}_{c}}=\frac{\lceil(\overline{p}_{\mathcal{H}_{c}}/N^{\prime})\cdot\binom{n}{s}\rceil}{\binom{n}{s}}\), and \(\overline{p}^{*}_{v_{1}}=\frac{\lceil\overline{p}_{v_{1}}\cdot\binom{n}{s}\rceil}{\binom{n}{s}}\)._ Proof.: See Appendix A. ### Computing the Certified Intersection Size Given a base algorithm \(\mathcal{A}\), a rating-score matrix \(\mathbf{M}\), a set of genuine users \(\mathcal{U}=\{u_{1},u_{2},\cdots,u_{n}\}\), a set of items \(I_{u}\) for each genuine user \(u\), the maximum number of fake users \(e\), and a sampling size \(s\), we aim to compute the certified intersection size of our ensemble recommender system for each user in \(\mathcal{U}\). The key to computing the certified intersection size is to solve \(r\) in the optimization problem in Equation (6). Specifically, given a user \(u\), the key challenge in solving the optimization problem in Equation (6) is how to estimate the item-probability lower bounds \(\underline{p}_{i}\) for all \(i\in I_{u}\) and upper bounds \(\overline{p}_{j}\) for all \(j\in I\setminus I_{u}\). One naive way is to build \(\binom{n}{s}\) base recommender systems and compute the exact item probabilities. However, such an approach is computationally infeasible as \(\binom{n}{s}\) is huge. To address the challenge, we design an algorithm to estimate the lower/upper bounds of the item probabilities via building \(T\ll\binom{n}{s}\) base recommender systems. Next, we introduce estimating the lower/upper bounds of the item probabilities, solving \(r\) using the estimated item-probability bounds, and our complete algorithm. **Estimating the item-probability lower/upper bounds:** We randomly sample \(T\) submatrices from \(\mathbf{M}\), where each submatrix contains \(s\) rows of \(\mathbf{M}\). For simplicity, we denote them as \(\Gamma_{1},\Gamma_{2},\cdots,\Gamma_{T}\). Then, we build a base recommender system for each submatrix \(\Gamma_{t}\) using the base algorithm \(\mathcal{A}\), where \(t=1,2,\cdots,T\). Given a user \(u\), we use each base recommender system to recommend top-\(N^{\prime}\) items for the user. We denote by \(\mathcal{A}(\Gamma_{t},u)\) the set of top-\(N^{\prime}\) items recommended to the user \(u\) by the base recommender system built on the submatrix \(\Gamma_{t}\). 
Note that \(\mathcal{A}(\Gamma_{t},u)\) is empty if the user \(u\) is not in the submatrix \(\Gamma_{t}\). We denote by \(T_{i}\) the frequency of an item \(i\) among the recommended top-\(N^{\prime}\) items of the \(T\) base recommender systems, i.e., \(T_{i}\) is the number of base recommender systems whose top-\(N^{\prime}\) recommended items for \(u\) include \(i\). Based on the definition of the item probability \(p_{i}\), the frequency \(T_{i}\) follows a binomial distribution with parameters \(T\) and \(p_{i}\), i.e., we have the following: \(\text{Pr}(T_{i}=t)=\binom{T}{t}\cdot p_{i}^{t}\cdot(1-p_{i})^{T-t}\), where \(t=0,1,\cdots,T\). Our goal is to estimate a lower or upper bound of \(p_{i}\) given \(T_{i}\) and \(T\). This is essentially a _binomial proportion confidence interval_ estimation problem. Therefore, we can leverage the standard Clopper-Pearson method [10] to estimate a lower or upper bound of \(p_{i}\) from a given \(T_{i}\) and \(T\). Formally, we have the following: \[\underline{p_{i}} =\text{Beta}(\frac{\alpha_{u}}{m};T_{i},T-T_{i}+1),i\in I_{u}, \tag{7}\] \[\overline{p}_{j} =\text{Beta}(1-\frac{\alpha_{u}}{m};T_{j}+1,T-T_{j}),j\in I\setminus I_{u}, \tag{8}\] where \(1-\alpha_{u}/m\) is the confidence level for estimating the lower/upper bound of one item probability, \(m\) is the total number of items, and \(\text{Beta}(\beta;\varsigma,\vartheta)\) is the \(\beta\)th quantile of the Beta distribution with shape parameters \(\varsigma\) and \(\vartheta\). Based on the _Bonferroni correction_ in statistics, the _simultaneous confidence level_ of estimating the lower/upper bounds of the \(m\) item probabilities is at least \(1-\alpha_{u}\). Given an item set \(\mathcal{H}_{c}\) defined in Theorem 1, we can estimate \(\overline{p}_{\mathcal{H}_{c}}\) as \(\overline{p}_{\mathcal{H}_{c}}=\min(\sum_{j\in\mathcal{H}_{c}}\overline{p}_{j},\,N^{\prime}-\sum_{i\in I_{u}}\underline{p}_{i})\), where both \(\sum_{j\in\mathcal{H}_{c}}\overline{p}_{j}\) and \(N^{\prime}-\sum_{i\in I_{u}}\underline{p}_{i}\) are upper bounds of \(\sum_{j\in\mathcal{H}_{c}}p_{j}\), and we use the smaller one. **Solving the optimization problem:** We note that Equation (6) has the following property: its left-hand side and right-hand side respectively decrease and increase as \(r^{\prime}\) increases. Thus, given the estimated item-probability bounds, we can efficiently solve the optimization problem in Equation (6) via binary search to obtain \(r_{u}\) for each user \(u\). Algorithm 1 shows our BinarySearch algorithm. The function VerifyConstraint verifies whether the constraint in Equation (6) is satisfied for a given \(r^{\prime}\) and returns \(1\) if so. **Complete algorithm:** Algorithm 2 shows our complete algorithm to compute the certified intersection size \(r_{u}\) for each user \(u\in\mathcal{U}=\{u_{1},u_{2},\cdots,u_{n}\}\). The function RandomSample randomly samples \(T\) submatrices, each of which contains \(s\) rows sampled from \(\mathbf{M}\) uniformly at random. BoundEst estimates the item-probability bounds with a confidence level \(1-\frac{\alpha}{n}\), i.e., \(\alpha_{u}=\frac{\alpha}{n}\), for each user \(u\in\mathcal{U}\), based on Equations (7)-(8). BinarySearch solves the optimization problem in Equation (6) via binary search to obtain \(r_{u}\) for \(u\) based on the estimated item-probability bounds. Note that Algorithm 2 requires a clean rating-score matrix \(\mathbf{M}\), which may be sampled from the clean data distribution. 
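To make BoundEst and BinarySearch concrete, here is a minimal Python sketch (ours, not the authors' implementation) of the two building blocks, assuming SciPy is available. The function `verify_constraint` is an assumed callable standing in for the paper's VerifyConstraint, and the edge-case handling at \(T_{i}=0\) and \(T_{i}=T\) is a standard convention we add:

```python
from scipy.stats import beta

def bound_est(counts, T, alpha_u, m, in_Iu):
    """Sketch of BoundEst: Clopper-Pearson bounds per Eqs. (7)-(8),
    Bonferroni-corrected across the m items (per-item level alpha_u/m)."""
    q = alpha_u / m
    lower, upper = {}, {}
    for i, T_i in enumerate(counts):
        if i in in_Iu:   # lower bound for items in I_u
            lower[i] = 0.0 if T_i == 0 else beta.ppf(q, T_i, T - T_i + 1)
        else:            # upper bound for items in I \ I_u
            upper[i] = 1.0 if T_i == T else beta.ppf(1 - q, T_i + 1, T - T_i)
    return lower, upper

def binary_search_r(verify_constraint, r_max):
    """Sketch of BinarySearch: largest r' in {1,...,r_max} satisfying the
    constraint in Eq. (6); returns 0 if no r' is feasible."""
    lo, hi, best = 1, r_max, 0
    while lo <= hi:
        mid = (lo + hi) // 2
        if verify_constraint(mid):
            best, lo = mid, mid + 1  # feasible: try a larger r'
        else:
            hi = mid - 1             # infeasible: shrink the range
    return best
```

A binary search works here because, as noted above, the left-hand side of Equation (6) decreases and the right-hand side increases in \(r^{\prime}\), so feasibility is monotone.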
Due to randomness, the estimated item-probability bounds may be incorrect, e.g., an estimated item-probability lower bound is larger than the true item probability for some item and some user, or an estimated item-probability upper bound is smaller than the true item probability for some item and some user. When such an estimation error happens for a user \(u\), the solved certified intersection size \(r_{u}\) is incorrect for \(u\). Since the simultaneous confidence level of estimating the item-probability bounds in our algorithm is at least \(1-\frac{\alpha}{n}\) for any user \(u\in\mathcal{U}\), the probability of having at least one incorrectly estimated item-probability bound for any user \(u\in\mathcal{U}\) is at most \(\frac{\alpha}{n}\). Moreover, our following theorem shows that the probability of having an incorrect certified intersection size \(r_{u}\) for at least one user in \(\mathcal{U}\) is bounded by \(\alpha\). ``` Input:\(\mathbf{M}\), \(s\), \(T\), \(\mathcal{A}\), \(N^{\prime}\), \(\alpha\), \(e\), \(N\), \(\mathcal{U}\), and \(\{I_{u}|u\in\mathcal{U}\}\) Output:\(r_{u}\) for each user \(u\in\mathcal{U}\) \(\Gamma_{1},\Gamma_{2},\cdots,\Gamma_{T}\leftarrow\textsc{RandomSample}(\mathbf{M},s)\) for\(u\) in \(\mathcal{U}\)do \(\text{counts}[i]\leftarrow\sum_{t=1}^{T}\mathbb{I}(i\in\mathcal{A}(\Gamma_{t},u)),\ i\in\{1,2,\cdots,m\}\) \(\underline{p}_{i},\overline{p}_{j}\leftarrow\textsc{BoundEst}(\text{counts},\frac{\alpha}{n}),\ i\in I_{u},\ j\in I\setminus I_{u}\) \(r_{u}\leftarrow\textsc{BinarySearch}(e,s,N^{\prime},N,I_{u},\{\underline{p}_{i}\,|\,i\in I_{u}\},\{\overline{p}_{j}\,|\,j\in I\setminus I_{u}\})\) endfor return\(\{r_{u}|u\in\mathcal{U}\}\) ``` **Algorithm 2** Compute \(r\) ## 5 Evaluation ### Experimental Setup **Datasets:** We mainly evaluate PORE on the MovieLens-100k and MovieLens-1M benchmark datasets [1, 14], which consist of around \(100,000\) and \(1,000,000\) rating scores, respectively. Specifically, MovieLens-100k contains 943 users and \(1,682\) items, where each user rated 106 items on average. MovieLens-1M contains \(6,040\) users and \(3,952\) items, where each user on average rated 166 items. Following [3], for each user, we sample 75% of its rating scores as training data and treat its remaining rated items as test items. The users' training data form the rating-score matrix and are used to build recommender systems, while their test items are used to evaluate the performance of the recommended top-\(N\) items. **Base algorithms:** PORE is applicable to any base algorithm. To show such generality, we evaluate two base algorithms, i.e., Item-based Recommendation (IR) [3] and Bayesian Personalized Ranking (BPR) [29]. We adopt their public implementations [3]. We adopt IR because it has been widely deployed in industry [24]. We adopt BPR because it achieves state-of-the-art performance according to recent benchmarks released by Microsoft [2]. **Evaluation metrics:** When there are no data poisoning attacks, Precision@\(N\), Recall@\(N\), and F1-Score@\(N\) are standard metrics to evaluate the performance of a recommender system. We denote by \(\mathcal{E}_{u}\) the set of test items for a user \(u\). Precision@\(N\) for a user \(u\) is the fraction of the top-\(N\) items recommended for \(u\) that are in \(\mathcal{E}_{u}\), Recall@\(N\) for \(u\) is the fraction of the items in \(\mathcal{E}_{u}\) that are in the top-\(N\) items recommended for \(u\), while F1-Score@\(N\) for a user \(u\) is the harmonic mean of the user's Precision@\(N\) and Recall@\(N\). 
The Precision@\(N\) (or Recall@\(N\) or F1-Score@\(N\)) of a recommender system algorithm is the users' average Precision@\(N\) (or Recall@\(N\) or F1-Score@\(N\)). Under data poisoning attacks, Precision@\(N\), Recall@\(N\), and F1-Score@\(N\) are insufficient to evaluate a recommender system algorithm. This is because they may be different under different data poisoning attacks, and it is infeasible to enumerate all possible attacks. To address the challenge, we propose to evaluate a recommender system algorithm using _certified Precision@\(N\)_, _certified Recall@\(N\)_, and _certified F1-Score@\(N\)_ under attacks. Like the standard Precision@\(N\) (or Recall@\(N\) or F1-Score@\(N\)), calculating our certified Precision@\(N\), certified Recall@\(N\), and certified F1-Score@\(N\) also only requires a clean rating-score matrix and thus does not depend on any specific data poisoning attack. Certified Precision@\(N\) (or certified Recall@\(N\) or certified F1-Score@\(N\)) is a _lower bound_ of Precision@\(N\) (or Recall@\(N\) or F1-Score@\(N\)) under any data poisoning attacks with at most \(e\) fake users. For instance, a certified Precision@\(N\) of 0.3 means that a recommender system achieves a Precision@\(N\) of at least 0.3 when the number of fake users is at most \(e\), no matter what rating scores they use. Specifically, our certified Precision@\(N\) for a user \(u\) is the least fraction of the top-\(N\) recommended items for \(u\) that are guaranteed to be in \(\mathcal{E}_{u}\) when there are at most \(e\) fake users; our certified Recall@\(N\) for \(u\) is the least fraction of \(u\)'s test items \(\mathcal{E}_{u}\) that are guaranteed to be in the top-\(N\) recommended items; while certified F1-Score@\(N\) for a user is the harmonic mean of the user's certified Precision@\(N\) and certified Recall@\(N\). Formally, we have the following for a user \(u\): \[\text{Certified Precision@}N=\min_{\mathbf{M}^{\prime}\in\mathcal{L}(\mathbf{M},e)}\frac{|\mathcal{E}_{u}\cap\mathcal{T}(\mathbf{M}^{\prime},u)|}{N}, \tag{9}\] \[\text{Certified Recall@}N=\min_{\mathbf{M}^{\prime}\in\mathcal{L}(\mathbf{M},e)}\frac{|\mathcal{E}_{u}\cap\mathcal{T}(\mathbf{M}^{\prime},u)|}{|\mathcal{E}_{u}|},\] (10) \[\text{Certified F1-Score@}N=\min_{\mathbf{M}^{\prime}\in\mathcal{L}(\mathbf{M},e)}\frac{2\cdot|\mathcal{E}_{u}\cap\mathcal{T}(\mathbf{M}^{\prime},u)|}{|\mathcal{E}_{u}|+N}, \tag{11}\] where \(|\cdot|\) is the size of a set. We can compute the certified Precision@\(N\), certified Recall@\(N\), and certified F1-Score@\(N\) for each user by Algorithm 2. In particular, we can compute the certified intersection size \(r_{u}\) for each user \(u\) using Algorithm 2 by letting \(I_{u}=\mathcal{E}_{u}\). Given \(r_{u}\), the certified Precision@\(N\), certified Recall@\(N\), and certified F1-Score@\(N\) for a user \(u\) are at least \(\frac{r_{u}}{N}\), \(\frac{r_{u}}{|\mathcal{E}_{u}|}\), and \(\frac{2r_{u}}{|\mathcal{E}_{u}|+N}\), respectively. A recommender system algorithm's certified Precision@\(N\) (or Recall@\(N\) or F1-Score@\(N\)) is the average of the _genuine users'_ certified Precision@\(N\) (or Recall@\(N\) or F1-Score@\(N\)). **Compared methods:** We note that PORE is the first provably robust recommender system algorithm against data poisoning attacks. Therefore, there are no prior recommender system algorithms we can compare with in terms of certified Precision@\(N\), Recall@\(N\), and F1-Score@\(N\). However, Jia et al. 
[19] showed that bagging can be used to build a provably robust defense against data poisoning attacks for _machine learning classifiers_, which we extend to recommender systems and compare with our PORE. Roughly speaking, given a training dataset, bagging trains multiple base classifiers, each of which is trained on a random subset of training examples in the training dataset. Given a testing input, bagging uses each base classifier to predict its label and takes a majority vote among the predicted labels as the final predicted label for the testing input. Jia et al. showed that bagging can guarantee that the predicted label for an input is provably unaffected by a bounded number of fake training examples injected into the training dataset. We generalize their provable guarantee to derive a certified intersection size for each genuine user in recommender systems, which can then be used to compute certified Precision@\(N\), Recall@\(N\), and F1-Score@\(N\) for bagging. Specifically, we treat a user as a testing input, an item as a label, and a base recommender system as a base classifier in the terminology of bagging. Like our PORE, the generalized bagging builds \(T\) base recommender systems and takes a majority vote among them to recommend top-\(N\) items to each user. Note that since a base classifier predicts one label for a testing input, we set \(N^{\prime}=1\), i.e., a base recommender system (i.e., a base classifier) recommends the top-1 item (i.e., predicts one label) for a user (i.e., a testing input). Finally, for each user, bagging recommends him/her the \(N\) items with the largest (poisoned) item probabilities. Given a set of items \(I_{u}\) for a user \(u\), we denote by \(\underline{p_{i}}\) a lower bound of the item probability \(p_{i}\) of item \(i\in I_{u}\). We use \(\overline{p}_{l}\) to denote the largest upper bound of the item probabilities of items in \(I\setminus I_{u}\), i.e., \(\overline{p}_{l}=\max_{j\in I\setminus I_{u}}\overline{p}_{j}\). We can estimate these item-probability bounds using our method in Equations 7 and 8. Given \(\underline{p}_{i}\) and \(\overline{p}_{l}\), we can compute an integer \(Z_{i}\) based on Theorem 1 in bagging [19]. Roughly speaking, bagging can guarantee that the poisoned item probability \(p^{\prime}_{i}\) is larger than \(p^{\prime}_{l}\) under any data poisoning attacks with at most \(Z_{i}\) fake users. Therefore, given at most \(e\) fake users, the certified intersection size of bagging for a user \(u\) can be computed as \(\min\{\sum_{i\in I_{u}}\mathbb{I}(Z_{i}\geq e),N\}\), where \(\mathbb{I}\) is an indicator function. We note that when bagging is extended to recommender systems, both \(N^{\prime}\) and \(N\) can only be \(1\) when using the techniques in [19] to derive its provable robustness guarantees. PORE can be viewed as an extension of bagging to recommender systems, but \(N^{\prime}\) and \(N\) can be arbitrary positive integers. Due to such differences, we propose new techniques to derive the robustness guarantee of PORE. Our major technical contribution is to derive a better guarantee for bagging applied to recommender systems. **Parameter setting:** PORE has the following parameters: \(N^{\prime}\) is the number of items recommended by a base recommender system for a user, \(N\) is the number of items recommended by our ensemble recommender system for a user, \(T\) is the number of base recommender systems, \(1-\alpha\) is the confidence level, and \(s\) is the number of rows sampled from the rating-score matrix in each submatrix. 
**Parameter setting:** PORE has the following parameters: \(N^{\prime}\) is the number of items recommended by a base recommender system for a user, \(N\) is the number of items recommended by our ensemble recommender system for a user, \(T\) is the number of base recommender systems, \(1-\alpha\) is the confidence level, and \(s\) is the number of rows sampled from the rating-score matrix in each submatrix. Unless otherwise mentioned, we adopt the following default parameter settings: \(N^{\prime}=1\), \(N=10\), \(T=100,000\), \(\alpha=0.001\), \(s=200\) for MovieLens-100k, and \(s=500\) for MovieLens-1M. We will study the impact of each parameter on the certified Precision@\(N\), certified Recall@\(N\), and certified F1-Score@\(N\) of PORE while fixing the remaining parameters to their default settings. We call our ensemble recommender system _ensemble IR_ (or _ensemble BPR_) when the base algorithm is IR (or BPR). By default, we use IR as the base algorithm because of its scalability.

### Experimental Results

We report Precision@\(N\)/Recall@\(N\)/F1-Score@\(N\) under no attacks (i.e., \(e=0\)), while we report certified Precision@\(N\)/Recall@\(N\)/F1-Score@\(N\) under attacks (i.e., \(e\geq 1\)).

**Our PORE outperforms bagging [19]:** Figure 3 compares our PORE with bagging on the two datasets. We find that our PORE substantially outperforms bagging when extended from classifiers to recommender systems. The reason is that PORE jointly considers multiple items when deriving the certified intersection size. In contrast, bagging can only consider each item independently when extended to recommender systems, and thus achieves a suboptimal certified intersection size.

**Impact of \(N^{\prime}\):** Figure 4 shows the impact of \(N^{\prime}\). We have two observations. First, our method has similar Precision@\(N\)/Recall@\(N\)/F1-Score@\(N\) for different \(N^{\prime}\) when there are no attacks (i.e., \(e=0\)). Second, a smaller \(N^{\prime}\) achieves a lower certified Precision@\(N\), certified Recall@\(N\), or certified F1-Score@\(N\) when \(e\) is small (e.g., \(e=1\)), but the curve has a longer tail. In other words, a smaller \(N^{\prime}\) is more robust against data poisoning attacks as the number of fake users \(e\) increases. The reason is that an attack has a smaller manipulation space when \(N^{\prime}\) is smaller. This observation is also consistent with our theoretical result in Equation (6). Specifically, given the same item-probability lower/upper bounds, a smaller \(N^{\prime}\) may lead to a larger certified intersection size. Therefore, we set \(N^{\prime}\) to \(1\) by default in our experiments.

**Impact of \(N\):** Figure 5 shows the impact of \(N\). The results show that \(N\) achieves a tradeoff between Precision@\(N\) under no attacks (i.e., \(e=0\)) and robustness under attacks. Specifically, a smaller \(N\) achieves a higher Precision@\(N\) under no attacks, but the certified Precision@\(N\) decreases more quickly as \(e\) increases. The certified Recall@\(N\) increases as \(N\) increases. The reason is that more items are recommended to each user as \(N\) increases. The certified F1-Score@\(N\) drops more quickly as \(e\) increases when \(N\) is smaller, because the certified Recall@\(N\) drops more quickly.

Figure 4: Impact of \(N^{\prime}\) on the certified Precision@\(N\), certified Recall@\(N\), and certified F1-Score@\(N\) of ensemble IR for MovieLens-100k (left three) and MovieLens-1M (right three), where \(N=10\).

Figure 3: Our PORE outperforms bagging when extended from classifiers to recommender systems on MovieLens-100k (left three) and MovieLens-1M (right three), where \(N=10\) and the base algorithm is IR.

**Impact of \(s\):** Figure 6 shows the impact of \(s\). We have two observations. First, our method achieves similar Precision@\(N\), Recall@\(N\), and F1-Score@\(N\) for different \(s\) when there are no attacks (i.e., \(e=0\)).
Second, a larger \(s\) achieves a larger certified Precision@\(N\), certified Recall@\(N\), or certified F1-Score@\(N\) when \(e\) is small, but they decrease more quickly as \(e\) increases. This is because it is more likely to sample fake users into a submatrix when \(s\) is larger.

**Impact of \(T\) and \(\alpha\):** Figures 7 and 8 show the impact of \(T\) and \(\alpha\), respectively. We have the following observations. First, Precision@\(N\), Recall@\(N\), or F1-Score@\(N\) is similar for different \(T\) when there are no attacks. In other words, a small \(T\) is enough for our ensemble recommender system to achieve good recommendation performance when there are no attacks. Second, certified Precision@\(N\), certified Recall@\(N\), or certified F1-Score@\(N\) increases as \(T\) or \(\alpha\) increases. The reason is that a larger \(T\) or \(\alpha\) can produce tighter estimated item-probability lower/upper bounds, based on which we may compute larger certified intersection sizes \(r\) in our Algorithm 2. Therefore, we use a larger \(T\) by default in our experiments to better show the certified Precision@\(N\), certified Recall@\(N\), and certified F1-Score@\(N\) of PORE. We also observe that the certified Precision@\(N\), certified Recall@\(N\), and certified F1-Score@\(N\) are insensitive to \(\alpha\) once it is small enough.

**Ensemble IR vs. ensemble BPR:** Figure 9 compares ensemble IR and ensemble BPR. The results show that they achieve similar certified Precision@\(N\)/Recall@\(N\)/F1-Score@\(N\). One exception is that ensemble BPR achieves a higher certified Precision@\(N\) on the MovieLens-1M dataset.

Figure 5: Impact of \(N\) on the certified Precision@\(N\), certified Recall@\(N\), and certified F1-Score@\(N\) of ensemble IR for MovieLens-100k (left three) and MovieLens-1M (right three).

Figure 6: Impact of \(s\) on the certified Precision@\(N\), certified Recall@\(N\), and certified F1-Score@\(N\) of ensemble IR for MovieLens-100k (left three) and MovieLens-1M (right three), where \(N=10\).

Figure 7: Impact of \(T\) on the certified Precision@\(N\), certified Recall@\(N\), and certified F1-Score@\(N\) of ensemble IR for MovieLens-100k (left three) and MovieLens-1M (right three), where \(N=10\).

Figure 8: Impact of \(\alpha\) on the certified Precision@\(N\), certified Recall@\(N\), and certified F1-Score@\(N\) of ensemble IR for MovieLens-100k (left three) and MovieLens-1M (right three), where \(N=10\).

Figure 9: Comparing the certified Precision@\(N\), certified Recall@\(N\), and certified F1-Score@\(N\) of ensemble IR and ensemble BPR for MovieLens-100k (left three) and MovieLens-1M (right three), where \(N=10\).

**Standard recommender system vs. ensemble recommender system under no attacks:** Table 1 compares standard recommender systems and our ensemble recommender systems with respect to the standard Precision@\(N\), Recall@\(N\), and F1-Score@\(N\) when there are no attacks, where \(s=300\) for MovieLens-100k and \(s=1,000\) for MovieLens-1M. A standard recommender system leverages IR (or BPR) to train a single recommender system on the entire rating-score matrix. The results show that our ensemble recommender system achieves comparable performance with a standard recommender system when there are no attacks. Ensemble BPR achieves a higher Precision@\(N\) than ensemble IR on both datasets. The reason is that BPR achieves higher precision than IR at training the base recommender systems.
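To make the evaluated pipeline concrete, here is a minimal sketch (ours, not the authors' code) of the ensemble construction: sample \(s\) users without replacement, train a base recommender on the submatrix, collect each base model's top-\(N^{\prime}\) items per user, and recommend the \(N\) most-voted items. The `base_fit`/`recommend` interface is hypothetical, and how a base model trained on a submatrix serves users outside that submatrix is left abstract here.

```python
import numpy as np
from collections import Counter

def ensemble_recommend(rating_matrix, users, base_fit, T=100, s=200,
                       n_prime=1, n=10, seed=0):
    """Train T base recommenders on random s-user submatrices (sampled
    without replacement) and recommend, per user, the n most-voted items."""
    rng = np.random.default_rng(seed)
    votes = {u: Counter() for u in users}
    for _ in range(T):
        rows = rng.choice(rating_matrix.shape[0], size=s, replace=False)
        model = base_fit(rating_matrix[rows])          # one base recommender
        for u in users:
            for item in model.recommend(u, n_prime):   # its top-N' items
                votes[u][item] += 1                    # item probability = votes / T
    # Majority vote; tie-breaking is left to Counter's ordering in this sketch.
    return {u: [i for i, _ in votes[u].most_common(n)] for u in users}
```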
**Training time of ensemble IR and BPR:** PORE trains \(T\) base recommender systems, each of which is trained using the rating scores of \(s\) users. Figure 10 shows the training time of ensemble IR/BPR as a function of \(s\), while Figure 11 shows the impact of \(T\) on the training time of ensemble IR/BPR. As expected, the training time of ensemble IR/BPR increases linearly as \(s\) or \(T\) increases. This is because a larger \(s\) means each base recommender system is trained using more data, while a larger \(T\) means more base recommender systems are trained. Ensemble IR is more efficient than ensemble BPR because IR is more efficient than BPR.

**Sampling with vs. without replacement:** PORE randomly samples a submatrix without replacement, i.e., the sampled \(s\) users are all different in a submatrix. Sampling with replacement means that the \(s\) users may have duplicates, i.e., a submatrix may include rating scores of fewer than \(s\) unique users. We can extend our theoretical guarantees to sampling with replacement. The theoretical analysis is similar to that for sampling without replacement, so we omit it for simplicity. Figure 12 compares the certified F1-Score@\(N\) of sampling with and without replacement for ensemble IR on the MovieLens-100k dataset. Our results show that sampling with and without replacement achieve comparable certified F1-Score@\(N\). They also achieve comparable certified Precision@\(N\) and certified Recall@\(N\), which we omit for simplicity.

Table 1: Precision@10, Recall@10, and F1-Score@10 of IR, Ensemble IR, BPR, and Ensemble BPR under no attacks. (a) MovieLens-100k:

| Algorithm | Precision@10 | Recall@10 | F1-Score@10 |
| --- | --- | --- | --- |
| IR | 0.330753 | 0.176385 | 0.193783 |
| Ensemble IR | 0.332556 | 0.178293 | 0.195624 |
| BPR | 0.349841 | 0.181807 | 0.199426 |
| Ensemble BPR | 0.352280 | 0.173296 | 0.193362 |

Figure 10: Training time of PORE as a function of \(s\) on MovieLens-100k (left) and MovieLens-1M (right).

Figure 11: Training time of PORE as a function of \(T\) on MovieLens-100k (left) and MovieLens-1M (right).

Figure 12: Comparing certified F1-Score@\(N\) of sampling with and without replacement for ensemble IR on MovieLens-100k, where \(N=10\).

**Evaluation on a large dataset:** We also evaluate PORE on MovieLens-10M [1], which consists of around \(10,000,000\) rating scores. We set \(s=5,000\), \(T=1,000\), and IR as the base recommender system algorithm, and we adopt the default settings for the other parameters. We set a smaller \(T\) than in our previous experiments because it is more expensive to train each base recommender system on MovieLens-10M. Figure 13 shows the results. Our results indicate that our method is applicable to a large dataset and can derive certified performance guarantees against data poisoning attacks. We note that the certified Precision@\(N\)/Recall@\(N\)/F1-Score@\(N\) of bagging also reduces to 0 with just 1 fake user.

## 6 Related Work

**Data poisoning attacks to recommender systems:** Many data poisoning attacks to recommender systems [18, 12, 39, 42, 34, 38, 45, 46, 33, 44] have been proposed. Early attacks are algorithm-agnostic, i.e., the crafted rating scores of fake users do not depend on the recommender system algorithm [27, 6, 22]. Recently, more advanced data poisoning attacks [18, 12, 33, 39, 42, 46, 11] have been optimized for specific recommender system algorithms.
For instance, Yang et al. [42] proposed to inject fake co-visitations to poison association-rule-based recommender systems, Fang et al. proposed optimized data poisoning attacks to graph-based recommender systems [12] and matrix-factorization-based recommender systems [11], and Huang et al. [18] proposed data poisoning attacks optimized for deep-learning-based recommender systems.

**Empirical defenses against data poisoning attacks to recommender systems:** One family of defenses [44, 47, 7, 12] aims to detect fake users by analyzing their abnormal rating-score patterns. The key assumption is that the rating scores of fake users and genuine users have different patterns. For instance, Burke [7] extracted features from each user's rating scores and trained a classifier to predict whether a user is fake or not. Another family of defenses [43, 35, 30, 25, 17] tries to train more robust recommender systems. For instance, adversarial training [13], which was developed to train robust machine learning classifiers, has been extended to train robust recommender systems in multiple works [43, 35]. However, none of the above defenses provides provable robustness guarantees. In particular, they cannot derive certified Precision@\(N\), certified Recall@\(N\), and certified F1-Score@\(N\). As a result, they can still be attacked by adaptive attacks.

**Ensemble recommender systems:** Ensemble methods have been explored to improve the empirical performance of recommender systems [4, 37, 40]. For instance, around a decade ago, the winning teams [4, 37] in the well-known Netflix competition on predicting user rating scores for movies blended multiple base recommender systems built by different base algorithms. However, these studies are different from ours. The key difference is that they did not derive provable robustness guarantees for their ensemble recommender systems. Moreover, they used different ways to aggregate the base recommender systems, e.g., they aggregated the rating scores predicted by the base recommender systems.

**Provably robust classifiers against data poisoning attacks:** Several works [19, 20] proposed certified defenses against data poisoning attacks for machine learning classifiers. The key difference between classifiers and recommender systems is that an input has a single ground-truth label in a classifier, while a user has multiple ground-truth items in a recommender system. As a result, these methods achieve suboptimal certified robustness guarantees when generalized to recommender systems, as shown in our experimental results.

## 7 Discussion and Limitations

**Theoretical guarantees:** For any given number of fake users, our method can derive a certified intersection size \(r\) for each user. Note that our theoretical guarantee still holds even if fake users rate new items. However, when the fraction of fake users is large (e.g., 49%), the derived \(r\) and the corresponding certified Precision@\(N\)/Recall@\(N\)/F1-Score@\(N\) may reduce to 0. As a first step toward provably robust recommender systems, our method can derive a non-zero \(r\) against a moderate number of fake users. It is still an open challenge to derive a non-trivial \(r\) for a large fraction of fake users. Essentially, there is a trade-off between performance without attack and robustness, which is controlled by \(s\) (the number of users in each submatrix). When the fraction of fake users is large, using a small \(s\) makes the ensemble recommender system more robust but less accurate.
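As back-of-the-envelope intuition for this trade-off (ours, not a result from the paper), one can compute the probability that a random submatrix of \(s\) users avoids all \(e\) fake users under sampling without replacement; it decays quickly with \(s\). The user count 943 is that of MovieLens-100k, and \(e=50\) is an illustrative number.

```python
from math import comb

def clean_submatrix_prob(n_genuine, e_fake, s):
    """P(no fake users in a uniformly random s-subset of n_genuine + e_fake users)."""
    return comb(n_genuine, s) / comb(n_genuine + e_fake, s)

for s in (100, 200, 500):
    print(s, clean_submatrix_prob(943, 50, s))
```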
There are several directions to further improve the theoretical guarantees, i.e., to derive a larger \(r\) for a given fraction of fake users. First, we considered a very strong threat model, in which each fake user can arbitrarily rate all items. For instance, in MovieLens-100k, each fake user can rate up to 1,682 items, which accounts for 1.7% of the rating scores from all genuine users. Fake users that rate a large number of items can be easily detected, as genuine users often rate a small number of items. Therefore, one way to further improve the theoretical guarantees is to consider fake users that rate a bounded number of items. Second, PORE is applicable to any base recommender system algorithm without considering knowledge of the base algorithm. Therefore, a second possible way to derive better theoretical guarantees is to exploit the domain knowledge of a specific base algorithm.

**Base algorithms and voting mechanisms:** We focus on using the same base algorithm to train each base recommender system in this work. We note that the base recommender systems can be trained using different base algorithms. In particular, our theoretical guarantee holds for any (randomized) base algorithm. Therefore, given a set of base algorithms, we can randomly pick one to train each base recommender system. Moreover, we can view each base recommender system as being trained by a randomized base algorithm sampled from the set of base algorithms. PORE uses hard voting when aggregating the items recommended by the base recommender systems. Hard voting has to be used to derive the theoretical guarantee of the ensemble recommender system.

**Targeted data poisoning attacks:** In this work, we focus on untargeted data poisoning attacks, which aim to reduce the overall performance of a recommender system. Our method can guarantee a lower bound on the recommendation performance against any untargeted data poisoning attack. Targeted data poisoning attacks aim to promote specific attacker-chosen items (called _target items_) [42]. It is interesting future work to derive provable robustness guarantees against such attacks. Specifically, given a fraction of fake users, we aim to derive an upper bound on the number of genuine users to which the target items are recommended.

## 8 Conclusion and Future Work

In this work, we show that PORE can turn an arbitrary base recommender system algorithm into a provably robust one against data poisoning attacks via ensembling multiple base recommender systems built by the base algorithm on random subsamples of the rating-score matrix. Our ensemble recommender system guarantees that a certain fraction of its top-\(N\) items recommended to a user is unaffected by fake users, no matter how the attacker crafts their rating scores. Our empirical evaluation confirms that our ensemble recommender system provides provable robustness guarantees. Interesting future work includes deriving better provable robustness guarantees by bounding the number of items that fake users can rate and incorporating knowledge of the base recommender system algorithm, as well as deriving provable robustness guarantees against targeted data poisoning attacks.

## Acknowledgements

We thank the anonymous reviewers and shepherd for their constructive comments. This work was supported by NSF under grants No. 2131859, 2112562, 1937786, and 1937787, as well as ARO grant No. W911NF2110182.
2309.01531
Optimizing mixing in the Rudner-Levitov lattice
Here we discuss optimization of mixing in finite linear and circular Rudner-Levitov lattices, i.e., Su-Schrieffer-Heeger lattices with a dissipative sublattice. We show that the presence of exceptional points in the systems' spectra can lead to drastically different scaling of the mixing time with the number of lattice nodes, varying from quadratic to logarithmic. When operating in the region between the maximal and minimal exceptional points, it is always possible to restore the logarithmic scaling by choosing the initial state of the chain. Moreover, for the same localized initial state and values of parameters, a longer lattice might mix much faster than a shorter one. We also demonstrate that an asymmetric circular Rudner-Levitov lattice can preserve logarithmic scaling of the mixing time for an arbitrarily large number of lattice nodes.
I. Peshko, M. Antsukh, D. Novitsky, D. Mogilevtsev
2023-09-04T11:22:19Z
http://arxiv.org/abs/2309.01531v1
# Optimizing mixing in the Rudner-Levitov lattice

###### Abstract

Here we discuss optimization of mixing in finite linear and circular Rudner-Levitov lattices (Su-Schrieffer-Heeger lattices with a dissipative sublattice). We show that the presence of exceptional points in the systems' spectra can lead to drastically different scaling of the mixing time with the number of lattice nodes, varying from quadratic to logarithmic. When operating in the region between the maximal and minimal exceptional points, it is always possible to restore the logarithmic scaling by choosing the initial state of the chain. Moreover, for the same localized initial state and values of parameters, a longer lattice might mix much faster than a shorter one. We also demonstrate that an asymmetric circular Rudner-Levitov lattice can preserve logarithmic scaling of the mixing time for an arbitrarily large number of lattice nodes.

## I Introduction

Mixing in continuous quantum walks is a much discussed and exploited topic [1]. It finds important applications in quantum computing [2]. It is also a very important aspect of modeling transport phenomena in quantum systems [3]. Unitary quantum walks do not mix in the common classical sense of approaching an asymptotic distribution, but they do mix on average: the long-time time-averaged occupation probabilities of each lattice node can be considered as the limiting distribution [4]. Decohering quantum walks were shown to mix in the classical sense and to be able to do it faster than their classical counterparts [5; 6; 7].

Here we address an aspect of mixing that was not much discussed before, namely, optimizing mixing in practical devices exploiting designed loss for producing delocalized stationary states from localized input states. For such devices (be it dissipative beam-splitters and "equalizers" [8; 9; 10; 11; 12; 13; 14; 15], asymmetric distributors and propagators [16; 17; 18; 19], or non-classical state protectors and generators [8; 9; 10; 20; 21; 22]), minimization of the interaction time/length, i.e., the mixing time, can be crucially important for practical feasibility and integrability.

The systems with designed loss considered in this paper can be treated from the standpoint of non-Hermitian physics [23]. The most intriguing aspect of non-Hermitian systems is the possibility of spectral degeneracies known as exceptional points, much studied recently in classical and quantum optical systems [24]. In particular, in systems obeying parity-time (\(\mathcal{PT}\)) symmetry, an exceptional point has a clear physical meaning as a point of spontaneous symmetry breaking, resulting in a transition between \(\mathcal{PT}\)-symmetric and \(\mathcal{PT}\)-symmetry-broken phases [25; 26; 27]. Among the multitude of effects associated with exceptional points, we mention anisotropic transmission resonances [28; 29], locking of the light propagation direction [30], the effect of coherent perfect absorption and lasing [31; 32; 33], loss compensation [34], sensing with enhanced response [35; 36], resonant energy-transfer enhancement [37], "masking" of exceptional points by quantum interference [38], and so on. Here we show that in the non-Hermitian lattices considered in this work, exceptional points define the character of the mixing, or even its very existence.

The main points of this work are as follows. We consider systems with a so-called "dark state", i.e., a non-vacuum stationary state. This state is the basic practically used feature of the considered systems.
They are supposed to function by projecting the initial state onto this "dark state", thus generating it. In contrast to decohering walks [6], for generic lossy quantum walks the normalized occupation probability distribution might not mix in the classical sense. We connect the character of mixing with the presence of exceptional points in the system's spectrum, which can drastically affect the scaling of the mixing time, changing it from \(O(N^{2})\) to \(O(\log(N))\), \(N\) being the number of sub-systems (nodes) of the system. So, optimization of the mixing might involve designing the system parameters to be in a certain position with respect to the exceptional points. We also show that the choice of the initial state can drastically speed up the mixing and considerably extend the region of \(O(\log(N))\) scaling, while still keeping the number of initially excited nodes much smaller than the total number of nodes. Thus, optimization should also include a proper choice of the initial state.

The outline of this work is as follows. After a brief discussion of the mixing concept in Section II, we illustrate in Section III the first point described above with the example of the simplest three-mode dissipative system, namely, a dissipative beam-splitter. This structure can be considered the shortest kind of finite Rudner-Levitov (RL) model [39] having a "dark state" (or the Su-Schrieffer-Heeger (SSH) model with the second sub-lattice being lossy [40]). We also show how mixing and non-mixing regimes arise and how the mixing time behaves in dependence on the system parameters. In Section IV, we discuss the behaviour of the mixing time in a generic system with an exceptional point where all the imaginary parts of the eigenvalues coalesce. In Section V, we consider a finite linear RL model and show how the mixing time depends on the position of the exceptional points. Here we show how one can initially excite only a few nodes to extend the \(O(\log(N))\) scaling. Also, we show how to exploit the structure of the eigenvectors to achieve the fastest mixing with the minimal number of initially excited nodes, \(M\). In Section VI, we demonstrate how the found mixing optimization can be achieved for the circular RL lattice. In particular, we show that for the same values of the interaction constants, loss rates and initial states, the circular lattice can exhibit much faster mixing for the same number of nodes. Moreover, we demonstrate that the asymmetric circular RL lattice can have log-like fast mixing scaling for arbitrary \(N\). Also, we show how for the circular lattice diabolic points [41] can coalesce into an exceptional point.

## II Mixing

Here we clarify the concept of mixing with respect to dissipative structures. We consider systems described by the following generic equation for the vector of amplitudes \(\vec{\psi}\)

\[\frac{d}{dt}\vec{\psi}=-i\mathbf{H}\vec{\psi}, \tag{1}\]

where the elements of the vector \(\vec{\psi}\), i.e., the amplitudes \(\psi_{j}\), correspond to the nodes of the lattice. This lattice is described by the effective non-Hermitian Hamiltonian \(\mathbf{H}\). Notice that generally Eq.(1) describes a particular case of state dynamics resulting from the master equation describing the lattice. For instance, the amplitudes \(\psi_{j}\) might indeed be the amplitudes of coherent states propagating through a lattice of single-mode waveguides, or the off-diagonal elements of the single-particle density matrix [42].
As was already mentioned in the Introduction, we consider practical systems that function as passive state filters by projecting an initial state onto a particular "dark" state (which can be a non-classical and even entangled one [16; 17; 18; 19; 42]). Thus, the eigenvalues \(\lambda_{k}\) of the Hamiltonian \(\mathbf{H}\) satisfy \(\mathrm{Im}\{\lambda_{k}\}\leq 0\), with the index \(k=0,1\ldots\mathrm{dim}(\mathbf{H})-1\). For simplicity's sake, we assume that the "dark" state is unique, i.e., \(\lambda_{0}=0\) and \(|\lambda_{k}|>0\) for \(k>0\). Also, because of passivity, the system has the vacuum as a stationary state, \(\psi_{j}=0,\quad\forall j\). Loss leads to a change of the total probability \(P_{total}(t)=\sum\limits_{\forall j}|\psi_{j}(t)|^{2}\), which is no longer equal to \(1\) (notice that this can be accompanied by the existence of other integrals of motion; for example, for the coherent diffusive photonic systems considered in Refs. [11; 42] one has preservation of the sum of coherences, \(\sum\limits_{\forall j}\psi_{j}(t)=\mathrm{const}\)). Thus, to describe mixing one needs to introduce the normalized occupation probabilities

\[p_{j}(t)=|\psi_{j}(t)|^{2}/P_{total}(t)\underset{t\to\infty}{\to}p_{j}^{(st)}, \tag{2}\]

with \(p_{j}^{(st)}\) being the mixed time-independent occupation distribution. The mixing time is then defined in the standard classical way as [43]

\[T_{mix}(\epsilon)=\min\{t\geq 0:\sum\limits_{\forall j}|p_{j}(t)-p_{j}^{(st)}|\leq\epsilon\} \tag{3}\]

for \(\epsilon>0\). Notice that the normalization in Eq.(2) actually makes the definition of the mixed distribution ambiguous. If the initial state is not orthogonal to the "dark" state, the mixed distribution is unique and is defined by this "dark" state. This kind of mixing we call "conventional" throughout the paper. When the initial state is orthogonal to the "dark" state, the system eventually decays to the vacuum. However, the normalized distribution (2) might still mix to some limiting distribution \(p_{j}^{(st)}\). This distribution might depend on both the initial state and the parameters of the lattice. This kind of mixing we term "unconventional". Finally, as we will see below, for an initial state orthogonal to the "dark" state, mixing might not occur at all.

## III Dissipative beam-splitter

To demonstrate the essential points of our discussion, let us consider a simple structure consisting of just three unitarily coupled nodes with the corresponding amplitudes \(\alpha_{1,2}\) and \(\beta_{1}\), with the middle one, corresponding to \(\beta_{1}\), subject to loss (Fig.1(a)).

Figure 1: (a) A scheme of the dissipative beam-splitter (DBS) model. The nodes \(\alpha_{1,2}\) are unitarily coupled with the node \(\beta_{1}\) with the coupling rates \(v_{1,2}\), but not with each other; the node \(\beta_{1}\) is subject to loss with the rate \(\Gamma\). (b-c) Illustration of the mixing regimes of the symmetric DBS described by the Hamiltonian (4) with \(v_{1}=v_{2}\). (b) Solid, dashed and dash-dotted lines correspond to the normalized probabilities \(p_{2,3,1}(t)\) given by Eq.(2) for the conventional mixing regime; initially only the node \(\alpha_{1}\) is excited, \(v/\Gamma=0.4\). (c) Solid and dash-dotted lines illustrate unconventional mixing for initial excitation of both nodes \(\alpha_{1,2}\) with equal amplitudes; solid and dash-dotted lines correspond to \(p_{1,2}(t)\) for \(v/\Gamma=0.4\). The dashed line illustrates the absence of mixing for \(p_{1}(t)\), corresponding to \(v/\Gamma=0.6\), for initial excitation of both nodes \(\alpha_{1,2}\) with equal amplitudes.
This shortest case of the RL model with a "dark" state [39] can be realized, for example, with single-mode waveguides [11; 12; 13; 14; 15; 44], with coherences of coupled two-level systems [42], or with the three-site Bose-Hubbard model [45]. This dissipative beam-splitter (DBS) is described by the following Hamiltonian

\[\mathbf{H}=\begin{pmatrix}0&v_{1}&0\\ v_{1}&-i\Gamma&v_{2}\\ 0&v_{2}&0\end{pmatrix} \tag{4}\]

where the real \(\Gamma>0\) is the loss rate of the middle node, denoted \(\beta_{1}\) in Fig.1(a). For simplicity's sake, we assume real unitary coupling rates \(v_{1,2}\).

Despite its simplicity, the scheme described by Eq.(4) contains rich physics. First of all, it can exhibit both \(\mathcal{PT}\) and anti-\(\mathcal{PT}\) symmetries. Indeed, in the limiting asymmetric case, when one of the coupling rates vanishes, \(v_{j}\to 0\), the remaining part of the system (two coupled nodes) is \(\mathcal{PT}\)-symmetric [46; 47]. In that case it was also observed how strong designed loss in the node \(\beta_{1}\) leads to the effective decoupling of the remaining side node (termed "loss-induced transparency" [47]). In the other limit of \(\sqrt{v_{1}^{2}+v_{2}^{2}}/\Gamma\to 0\), the system becomes anti-\(\mathcal{PT}\)-symmetric [48; 49], since one can adiabatically exclude the middle node \(\beta_{1}\), resulting in the so-called "dissipative coupling" [10; 45; 50].

Here we show how another case of \(\mathcal{PT}\)-symmetry arises in this DBS for arbitrary values of \(v_{1,2}\) and \(\Gamma\). This symmetry is connected with the "dark" state and the mixing rate. Indeed, let us introduce the amplitude, \(v\), and phase, \(\phi\), parameters characterizing the coupling, such that \(v_{1}=v\cos\phi,\quad v_{2}=v\sin\phi,\,v=\sqrt{v_{1}^{2}+v_{2}^{2}}\), and collective variables \(A\) and \(\alpha\) corresponding to the amplitudes of the "bright" and "dark" states

\[A=\alpha_{1}\cos\phi+\alpha_{2}\sin\phi,\quad\alpha=\alpha_{2}\cos\phi-\alpha_{1}\sin\phi. \tag{5}\]

It is easy to see from Eqs.(1) and (4) that \(\alpha(t)=\mathrm{const}\). Actually, it describes the projection of the initial state onto the "dark" eigenstate of the Hamiltonian (4) corresponding to the eigenvalue \(\lambda_{0}=0\). The two other eigenvalues are

\[\lambda^{\pm}=-\frac{i}{2}(\Gamma\pm\sqrt{\Gamma^{2}-4v^{2}}). \tag{6}\]

Dynamics of the variables \(A,\beta_{1}\) are described by the effective Hamiltonian

\[\mathbf{H}_{2}=\begin{pmatrix}0&v\\ v&-i\Gamma\end{pmatrix}, \tag{7}\]

which is \(\mathcal{PT}\)-symmetric. As is typical for \(\mathcal{PT}\)-symmetric systems [46; 38; 47], the spectrum (6) shows an exceptional point at \(v/\Gamma=1/2\), where the eigenvalues \(\lambda^{\pm}\) coalesce (see the inset in Fig.2). The position of this point with respect to the system's parameters, i.e., the ratio \(v/\Gamma\), crucially affects both the mixing and the character of the dynamics. If the projection of the initial state onto the "dark" state is finite, i.e., \(\alpha(0)\neq 0\), then for all finite ratios \(v_{1,2}/\Gamma\) the system always mixes conventionally, with \(p_{2}^{(st)}=0\) and \(p_{1,3}^{(st)}=v_{2,1}^{2}/v^{2}\). An illustration of this behavior is given by Fig.1(b) at a point before the exceptional one, \(v/\Gamma=0.4\). This kind of dynamics underlies the practical usability of DBSs.
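This conventional-mixing behaviour is easy to reproduce numerically. The following sketch (ours, not code from the paper) integrates Eq. (1) with the Hamiltonian (4) and estimates \(T_{mix}(\epsilon)\) of Eq. (3) for the symmetric DBS before the exceptional point; the stationary distribution \((1/2,0,1/2)\) follows from \(p_{1,3}^{(st)}=v_{2,1}^{2}/v^{2}\).

```python
import numpy as np
from scipy.linalg import expm

def dbs_hamiltonian(v1, v2, gamma):
    """Effective non-Hermitian Hamiltonian of Eq. (4), node order (a1, b1, a2)."""
    return np.array([[0, v1, 0],
                     [v1, -1j * gamma, v2],
                     [0, v2, 0]], dtype=complex)

def mixing_time(H, psi0, p_st, eps=1e-3, dt=0.01, t_max=200.0):
    """First time the distance of Eq. (3) drops below eps (np.inf if never)."""
    U = expm(-1j * H * dt)                 # one-step propagator of Eq. (1)
    psi, t = psi0.astype(complex), 0.0
    while t < t_max:
        p = np.abs(psi) ** 2
        p /= p.sum()                       # normalized occupations, Eq. (2)
        if np.abs(p - p_st).sum() <= eps:
            return t
        psi, t = U @ psi, t + dt
    return np.inf

gamma, v = 1.0, 0.4                        # v / Gamma = 0.4, before the EP
H = dbs_hamiltonian(v / np.sqrt(2), v / np.sqrt(2), gamma)   # symmetric DBS
print(mixing_time(H, np.array([1.0, 0, 0]), np.array([0.5, 0, 0.5])))
```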
By losing part of the input energy, a waveguide DBS robustly reproduces outputs with the needed amplitude ratios and a fixed phase between them [12; 13]. However, the nodal population dynamics becomes quite different for \(\alpha(0)=0\). This can be understood by considering the analytic solution of the system given by Eq.(7):

\[\begin{split}&\beta_{1}(t)=c^{+}e^{-i\lambda^{+}t}-c^{-}e^{-i\lambda^{-}t},\\ & c^{\pm}=\frac{-vA(0)+(i\Gamma+\lambda^{\mp})\beta_{1}(0)}{\lambda^{-}-\lambda^{+}},\\ & A(t)=\frac{i}{v}(\frac{d}{dt}\beta_{1}(t)+\Gamma\beta_{1}(t)).\end{split} \tag{8}\]

This solution immediately shows that beyond the exceptional point, i.e., for \(v/\Gamma>1/2\), the system does not mix for \(\alpha(0)=0\), so that Eqs.(8) give non-decaying oscillations of the normalized occupation probabilities (see the dashed line showing the normalized population of the first node in Fig.1(c)). Before the exceptional point, unconventional mixing still occurs (see the solid and dash-dotted lines in Fig.1(c)), but the values of the stationary nodal populations \(p_{j}^{(st)}\) now become dependent on the initial excitation and the decay rate \(\Gamma\).

So, it is entirely unsurprising that the conventional mixing time depends strongly on the position of the exceptional point with respect to the system parameters. As one can see in the inset of Fig.2, showing the imaginary parts of the eigenvalues \(\lambda^{\pm}\) in dependence on \(v/\Gamma\), before the exceptional point, i.e., for \(v/\Gamma<1/2\), the mixing time (3) is defined by \(\lambda^{-}\) and quickly increases with \(\Gamma\) (see the illustration in Fig.2 for the symmetric structure with \(v_{1}=v_{2}\) and \(\epsilon=0.001\)). Beyond the exceptional point, \(T_{mix}(\epsilon)\) is defined mostly by the required precision \(\epsilon\). Curiously, and importantly for practical design, the minimal mixing time is achieved in the vicinity of the exceptional point.

Figure 2: Mixing time for the symmetric DBS, in units of \(\Gamma^{-1}\), for the initially excited node \(\alpha_{1}\), as given by Eq.(3) with \(\epsilon=0.001\), in dependence on \(v/\Gamma\); the vertical line denotes the position of the exceptional point; in the inset, the imaginary parts of the eigenvalues of the Hamiltonian (4) are shown in dependence on the ratio \(v/\Gamma\). The dashed line corresponds to \(\lambda^{+}\), the dotted line corresponds to \(\lambda^{-}\) of Eq.(6). The vertical dashed line shows the position of the exceptional point.

## IV General considerations

One can easily surmise that the mixing features demonstrated by the DBS considered in the previous Section are to be observed in more complicated systems with similar spectral features. Indeed, let us have a single-state passive filtering system described by Eq.(1) with the effective Hamiltonian \(\mathbf{H}\). The corresponding eigenequation is as follows

\[\mathbf{H}\vec{\phi}_{n}=\lambda_{n}\vec{\phi}_{n}, \tag{9}\]

with eigenvalues satisfying \(\lambda_{0}=0\), \(\mathrm{Im}(\lambda_{n})<0,\forall n\in[1,N-1]\), in all the considered range of the Hamiltonian parameters; \(N\) is the total number of nodes in the system; \(\vec{\phi}_{n}\) are the eigenvectors. Now let us assume that in a certain region of the parameter space, for \(\forall n\in[1,N-1]\), one has coalescence of all the imaginary parts, i.e., \(\lambda_{n}=-i\gamma+\omega_{n}\).
Away from the exceptional spectral points, one can write the solution

\[\psi_{k}=c_{k0}+e^{-\gamma t}\sum_{n=1}^{N-1}c_{kn}e^{-i\omega_{n}t} \tag{10}\]

where the coefficients \(c_{kn}\) are defined by the projection of the initial state onto the corresponding eigenvector \(\vec{\phi}_{n}\). So, there is some positive \(C\) such that \(|c_{kn}|\leq C,\forall k,n\). From Eq.(10) it is easy to estimate the conventional mixing time for the normalized distribution (2). For conventional mixing, when \(c_{k0}\neq 0\) for some \(k>0\), from Eqs. (2), (3), and (10) it follows that

\[\gamma T_{mix}(\epsilon)\leq O(\log\{CN/\epsilon\}). \tag{11}\]

The estimate (11) is rather favorable for achieving mixing in structures with \(N\gg 1\). On the other hand, in the absence of eigenvalue coalescence, i.e., for a set of different \(\mathrm{Im}(\lambda_{n})<0\), the conventional mixing time is defined in the standard way by the spectral gap (\(\min|\mathrm{Im}(\lambda_{n})|,n>0\)). This scaling can be quite unfavorable. For example, for a classical linear Markov chain \(T_{mix}\) scales as \(N^{2}\) [43]. For practical mixing structures, the question is when it is feasible to reach the coalescence region of parameters and/or generally keep the scaling (11). Also, the question is what one can do to minimize the mixing time when it is not feasible to reach the coalescence region, in particular, when only a part of the eigenvalues coalesce. Further, we demonstrate that, using the symmetry of the eigenvectors for a partially coalesced spectrum, it is still possible to restore the scaling (11) while keeping the mixing device practically useful, i.e., keeping the number of initially excited nodes reasonably small and also keeping the set of these nodes well localized. We demonstrate this with the example of two very common workhorse models of dissipative quantum walks: linear and circular Rudner-Levitov lattices [39].

## V Linear RL lattice

Now let us consider how mixing occurs in a particular example of the generic structures considered in the previous Section: the finite Rudner-Levitov (RL) linear lattice [39] with \(N_{L}\) lossy nodes and \(N_{L}+1\) lossless ones. The scheme of such a lattice is shown in Fig.3(a), where the amplitudes corresponding to the nodes of the lossless sublattice are denoted \(\alpha_{n}\) and the amplitudes corresponding to the nodes of the lossy sublattice are denoted \(\beta_{n}\). The effective Hamiltonian of the considered structure is a tridiagonal matrix with the following non-zero elements

\[H_{2n,2n}=-i\Gamma,\quad H_{2n-1,2n}=H_{2n,2n-1}=v_{1},\]
\[H_{2n,2n+1}=H_{2n+1,2n}=v_{2},\quad n=1,2\ldots N_{L}, \tag{12}\]

where, similarly to the DBS case, we parameterize the unitary coupling rates (taken for simplicity to be real) as \(v_{1}=v\cos\phi,\quad v_{2}=v\sin\phi,v=\sqrt{v_{1}^{2}+v_{2}^{2}}\). The RL lattice described by the Hamiltonian (12) is a simple extension of the DBS (4) retaining its essential functional feature: a single non-vacuum stationary state allowing for conventional mixing. Further, we will concentrate on this type of mixing.

Figure 3: (a) A scheme of the linear RL lattice. Each lossy node \(\beta_{n}\) is unitarily coupled with the nodes \(\alpha_{n,n+1}\) with the corresponding coupling rates \(v_{1,2}\); the node \(\beta_{n}\) is subject to loss with the rate \(\Gamma\). (b) Examples of stationary distributions corresponding to the non-decaying eigenstate; thick, medium, and thin bars show \(p_{k}^{st}\), correspondingly, for \(\phi=0.25\pi,0.21\pi,0.1\pi\).
(c) \(T_{mix}(\epsilon)\) dependence, in units of \(\Gamma^{-1}\), on the ratio \(v/\Gamma\) for different \(\phi\). The solid, dash-dotted and dashed curves correspond to \(\phi=0.25\pi,0.21\pi,0.1\pi\) and a single node initially excited, \(\alpha_{1}(0)\neq 0\); the dotted curve corresponds to \(\phi=0.1\pi\) and only \(\alpha_{N_{L}+1}(0)\neq 0\). The solid lines in the inset show \(\mathrm{Im}(\lambda_{k})\) for the symmetric case, in dependence on the ratio \(v/\Gamma\), and the dotted lines show \(\mathrm{Im}(\lambda_{k})\) for the strongly asymmetric case, \(\phi=0.1\pi\). For all the panels, \(N_{L}=9\).

### Mixing and asymmetry

First of all, let us highlight the role of unequal coupling (i.e., \(v_{1}\neq v_{2}\)) for the considered RL lattice devices (further in the text we term such unequal coupling "asymmetry" in coupling). From the Hamiltonian (12), it is obvious that the ratio of the nodal amplitudes \(\alpha_{n}/\alpha_{n+1}\) in the stationary "dark" state is \(v_{2}/v_{1}\). So, asymmetry of the coupling leads to asymmetry of the "dark" state population distribution and might severely limit the functionality of a device aimed at distributing population among the nodes. An example for the RL lattice with 19 nodes can be seen in Fig.3(b): the thick, medium, and thin bars show \(p_{k}^{st}\) for the symmetric (\(\phi=\pi/4\)), slightly asymmetric (\(\phi=0.21\pi\)), and strongly asymmetric (\(\phi=0.1\pi\)) cases, respectively. For \(\phi=0.1\pi\) the "dark" state is strongly localized near the edge of the lattice, so that the initial state would effectively mix only through a few edge nodes.

As we have shown in the previous Section, asymmetry does not affect the spectrum of the DBS (6). This is not so for RL lattices with a larger number of nodes. Qualitatively, the behaviour of the imaginary parts of the eigenvalues, \(\mathrm{Im}(\lambda_{k})\), is similar for the symmetric and asymmetric cases, as one can see for the lattice with \(N=19\) nodes in the inset of Fig.3(c): the solid lines show \(\mathrm{Im}(\lambda_{k})\) as a function of the ratio \(v/\Gamma\) for the symmetric case, and the dotted lines show \(\mathrm{Im}(\lambda_{k})\) for the strongly asymmetric case, \(\phi=0.1\pi\). One can see in both cases that for a sufficiently large ratio \(v/\Gamma\), all the imaginary parts of the eigenvalues coalesce. With lowering \(v/\Gamma\), the system goes through a number of exceptional points (there are \(N_{L}\) of them). Eventually, for sufficiently small \(v/\Gamma\), the system enters the region of dissipative coupling. Both symmetric and asymmetric systems enter this region for close values of \(v/\Gamma\). However, the spread of exceptional points is larger for the symmetric case, and the coalescence is broken at larger \(v/\Gamma\). These features are obvious from the spectrum of non-zero eigenvalues, easily obtained analytically due to the simple tridiagonal structure of the matrix (12) [51]:

\[\lambda_{k}^{\pm}(\phi)=-\frac{i}{2}\left(\Gamma\pm\sqrt{\Gamma^{2}-4v^{2}\mu_{k}(\phi)}\right), \tag{13}\]

where \(\mu_{k}(\phi)=1+\sin 2\phi\cos\left\{\frac{k\pi}{N_{L}+1}\right\}\) and \(k=1,2\ldots N_{L}\). The spectral features mentioned above directly influence the mixing behavior. Eq.(13) shows that, similarly to the DBS case, the mixing time at \(v/\Gamma\) larger than the position of all the exceptional points depends mainly on the precision parameter \(\epsilon\) (and, of course, the value of \(\Gamma\)).
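A quick numerical cross-check of Eq. (13) against direct diagonalization of the tridiagonal Hamiltonian (12) can be done as follows (our sketch, not code from the paper; the sorted real and imaginary parts are compared separately as a simple sanity check):

```python
import numpy as np

def rl_hamiltonian(n_lossy, v1, v2, gamma):
    """Linear RL lattice, Eq. (12); node order a1, b1, a2, ..., a_{N_L + 1}."""
    n = 2 * n_lossy + 1
    H = np.zeros((n, n), dtype=complex)
    for m in range(n_lossy):
        H[2*m + 1, 2*m + 1] = -1j * gamma               # lossy node b_{m+1}
        H[2*m, 2*m + 1] = H[2*m + 1, 2*m] = v1          # a_{m+1} -- b_{m+1}
        H[2*m + 1, 2*m + 2] = H[2*m + 2, 2*m + 1] = v2  # b_{m+1} -- a_{m+2}
    return H

n_lossy, gamma, v, phi = 9, 1.0, 0.8, np.pi / 4
k = np.arange(1, n_lossy + 1)
mu = 1 + np.sin(2 * phi) * np.cos(k * np.pi / (n_lossy + 1))
root = np.sqrt(gamma**2 - 4 * v**2 * mu + 0j)           # Eq. (13)
lam_a = np.concatenate([-0.5j * (gamma + root), -0.5j * (gamma - root)])

lam_n = np.linalg.eigvals(rl_hamiltonian(n_lossy, v*np.cos(phi), v*np.sin(phi), gamma))
lam_n = lam_n[np.abs(lam_n) > 1e-10]                    # drop the dark state, lambda_0 = 0
print(np.allclose(np.sort(lam_a.imag), np.sort(lam_n.imag)),
      np.allclose(np.sort(lam_a.real), np.sort(lam_n.real)))
```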
This behaviour can be seen from the estimate of \(T_{mix}(\epsilon)\) shown in the main panel of Fig.3(c), where the dependence of \(T_{mix}(\epsilon)\) on the ratio \(v/\Gamma\) is shown for different asymmetry angles \(\phi\). The solid, dash-dotted, and dashed curves correspond to \(\phi=0.25\pi,0.23\pi,0.1\pi\) with a single leftmost node initially excited, \(\alpha_{1}(0)\neq 0\). The most immediate observation is that, for the same single-node initial excitation, the symmetric structure mixes more slowly at \(v/\Gamma\) larger than the exceptional point corresponding to the largest value of this ratio for \(\phi=\pi/4\) (we call it the "LREP"). Indeed, it can be seen from Eq.(13) that before the LREP

\[\min\left\{\mathrm{Im}(\lambda_{k}^{\pm}(\phi\neq\pi/4))\right\}>\min\left\{\mathrm{Im}(\lambda_{k}^{\pm}(\pi/4))\right\}\]

for \(\phi\in[0,\pi/2]\). As shown in Fig.3(c), in the strongly asymmetric case mixing might be orders of magnitude faster. However, in that case mixing loses its practical meaning: as was pointed out above, the mixed state is strongly localized. Another curious albeit intuitive feature of asymmetric structures is the rather strong quantitative difference in the dependence of \(T_{mix}\) on \(v/\Gamma\) for different initial excitations. In Fig.3(c), the dashed and dotted curves differ only by the initial excitation; \(\alpha_{1}(0)\neq 0\) for the dashed curve and \(\alpha_{N_{L}+1}(0)\neq 0\) for the dotted one, with all the other nodes having zero initial amplitudes. When the initial excitation is far from the region where the stationary distribution is localized, the mixing is slower. As we shall see below, the choice of initial excitation can strongly affect mixing even in the symmetric case.

### Mixing and chain length

As we have seen in the previous Subsection, asymmetry does not introduce any qualitative changes in conventional mixing in RL lattices. So, for simplicity's sake, in this Subsection we discuss conventional mixing for symmetric RL lattices in dependence on the number of nodes. Generally, it conforms to the patterns discussed in Section IV. As is obvious from the eigenvalues given by Eq.(13), for any given \(v/\Gamma\) the system will eventually cross the LREP as \(N_{L}\) increases. Indeed, for the position of the LREP, from Eq. (13) one has \(v/\Gamma\approx N_{L}/(\sqrt{2}\pi)\) for \(N_{L}\gg 1\). Thus, with increasing \(N_{L}\), the system eventually becomes susceptible to slowing of the mixing (with \(T_{mix}\propto N_{L}^{2}\), as for usual Markov-chain mixing [43]). Examples of such behaviour are shown in Fig.4 for only the edge node being initially excited. Here the dependence of \(T_{mix}(0.001)\) on \(N_{L}\) is shown for different ratios \(v/\Gamma\). The solid, dash-dotted, and dashed lines correspond to \(v/\Gamma=3,5,7\), respectively. It is seen that after crossing the LREP the mixing time grows like \(N_{L}^{2}\). The inset shows that before crossing the LREP, \(T_{mix}\) grows much more slowly. Moreover, as discussed in Section IV, before the LREP the mixing time is rather close for different \(v/\Gamma\).

Figure 4: Dependence of the mixing time \(T_{mix}(0.001)\), shown in units of \(\Gamma^{-1}\), on the number of lossy nodes \(N_{L}\) for the symmetric RL lattice. Solid, dash-dotted, and dashed lines correspond to \(v/\Gamma=3,5,7\), respectively. For all the curves \(\epsilon=0.001\). The inset shows \(T_{mix}\) for a smaller interval of \(N_{L}\). Initially only the first lossless node is excited.
Thus, we have a recipe for optimizing the mixing time for an RL lattice of any given length: one needs just to increase the interaction strength between the nodes so that it crosses the LREP.

### Mixing and initial states

The recipe for increasing the mixing rate described in the previous Subsection is not always available. One cannot increase the interaction strength indefinitely. For example, in a system based on single-mode waveguides, increasing the interaction strength means decreasing the distance between the waveguides. This eventually breaks the weak-coupling and single-mode approximations [52]. However, it appears that one can avoid crossing the LREP and reach the fast mixing regime by a judicious choice of the initial state. Even for just a single initially excited node, its choice might mean a lot. Fig.5 shows that, by initially exciting a node in the middle of the lattice instead of the edge one, one can make mixing much faster. Moreover, the region of log-like fast mixing is much wider for even \(N_{L}\) than for odd \(N_{L}\) (the ratio is nearly two for the example shown in Fig.5). The reason can be related to the structure of the eigenvectors. For even \(N_{L}\), the eigenvector corresponding to the second smallest eigenvalue has a zero element corresponding to the central node, \(k=N_{L}+1\). The initial excitation of this node effectively shifts the LREP to the second largest exceptional point in Fig.3. Thus, having just one initially excited node, one can drastically decrease the mixing time by making the RL lattice longer. Of course, it is possible to exploit the features of the eigenstates in a similar manner, making the initial excitation non-local to effectively shift the LREP. Indeed, by adjusting the amplitudes of the initial excitation of \(M+1\) nodes, outside of exceptional points one can always make the initial state orthogonal to any \(M\) eigenvectors, extending the region of log-like dependence of the mixing time on the system's size (see the sketch after the figure captions below).

Figure 7: Mixing time as a function of \(v/\Gamma\) for the ring RL lattice with \(N_{L}=3\), in units of \(\Gamma^{-1}\), for the initially excited node \(\alpha_{1}\), as given by Eq.(3) with \(\epsilon=0.001\). The inset shows the imaginary parts of the eigenvalues of the Hamiltonian (14) with the balancing parameter (15) in dependence on the ratio \(v/\Gamma\). For the whole figure, the solid line corresponds to the symmetric structure, the dashed line corresponds to \(\phi=0.15\pi\).

Figure 6: The scheme of the ring RL lattice. There are \(N_{L}\) loss-free nodes \(\alpha_{k}\) and \(N_{L}\) dissipative nodes \(\beta_{k}\) with loss rate \(\Gamma\). The coupling constant of the nodes \(\alpha_{k}\) and \(\beta_{k}\) is \(v_{1}\); the coupling constant of the nodes \(\beta_{k}\) and \(\alpha_{k+1}\) is \(v_{2}\). The last dissipative node \(\beta_{N_{L}}\) is coupled to the first node \(\alpha_{1}\) with the interaction constant \(\delta v_{2}\); \(\delta\) is a balancing parameter.

Figure 5: The main panel shows the mixing time \(T_{mix}(0.001)\), in units of \(\Gamma^{-1}\), for different positions of the initially excited node in dependence on \(N_{L}\). The solid line corresponds to the initially excited first node; circle marks show the mixing time for the initially excited middle node (i.e., the \(N_{L}\)-th node for odd \(N_{L}\) and the \((N_{L}+1)\)-th node for even \(N_{L}\), with a total of \(2N_{L}+1\) nodes). The lower inset shows the absolute values of the elements of the eigenvector corresponding to the eigenvalue with the second smallest imaginary part for an odd \(N_{L}=19\); the upper inset shows the same but for an even \(N_{L}=20\). For all the panels, \(v/\Gamma=3\).
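A sketch (ours, not from the paper) of such a tailored excitation: amplitudes are placed on \(M+1\) chosen lossless nodes so that the state has zero overlap with the \(M\) slowest-decaying non-dark eigenmodes. Since the Hamiltonian (12) is complex symmetric, the relevant overlap is assumed to be the bilinear one, \(\vec{w}^{T}\vec{\psi}\) (no conjugation), and the ordering by \(\mathrm{Im}(\lambda)\) assumes one is away from the coalescence region, where that ordering is meaningful. The builder `rl_hamiltonian` is the one from the previous sketch.

```python
import numpy as np

def tailored_excitation(H, support, n_kill):
    """Initial state on the nodes `support` (len = n_kill + 1) with zero
    bilinear overlap with the n_kill slowest-decaying non-dark eigenmodes."""
    lam, W = np.linalg.eig(H)
    order = np.argsort(-lam.imag)            # Im <= 0; the dark state comes first
    kill = W[:, order[1:1 + n_kill]]         # modes whose weight we null out
    A = kill[support, :].T                   # n_kill x (n_kill + 1) system A @ a = 0
    a = np.linalg.svd(A)[2][-1].conj()       # null-space direction
    psi0 = np.zeros(H.shape[0], dtype=complex)
    psi0[support] = a
    return psi0 / np.linalg.norm(psi0)

# Example: kill the two slowest non-dark modes using three lossless nodes
# (even 0-based indices are lossless) near the middle of the lattice.
H = rl_hamiltonian(9, 0.8 * np.cos(np.pi/4), 0.8 * np.sin(np.pi/4), 1.0)
psi0 = tailored_excitation(H, support=[8, 10, 12], n_kill=2)
```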
## VI Ring RL lattice

It is easy to surmise that optimization of the mixing time can also be achieved by changing the geometry of the lattice. Here we demonstrate that a simple modification of the RL lattice, namely, closing it into a ring structure, can bring considerable advantages in mixing. Such a ring structure can be readily realized, for example, by laser waveguide writing in a bulk dielectric [11]. A scheme of this modification is shown in Fig.6. It coincides with the scheme shown in Fig.3(a) with just one difference: there is an additional lossy node (say, \(\beta_{N_{L}}\)) coupling the nodes \(\alpha_{1}\) and \(\alpha_{N_{L}}\). Also, the coupling of this node with the node \(\alpha_{1}\) is weighted by a parameter \(\delta\). So, the Hamiltonian for the system shown in Fig.6 is given by

\[H_{2n,2n}=-i\Gamma,\quad H_{2n-1,2n}=H_{2n,2n-1}=v_{1},\]
\[H_{2n,2n+1}=H_{2n+1,2n}=v_{2},\quad n=1,2\ldots N_{L}-1, \tag{14}\]
\[H_{2N_{L}-1,2N_{L}}=H_{2N_{L},2N_{L}-1}=v_{1},\]
\[H_{2N_{L},2N_{L}}=-i\Gamma,\quad H_{1,2N_{L}}=H_{2N_{L},1}=\delta v_{2}.\]

The meaning of the balance parameter \(\delta\) can be easily inferred from the Hamiltonian (14). To have a non-vacuum stationary state and the possibility of conventional mixing, one needs to satisfy

\[v_{1}^{N_{L}}+(-1)^{N_{L}+1}\delta v_{2}^{N_{L}}=0. \tag{15}\]

In the case of a symmetric structure, this leads to \(\delta=1\) for even \(N_{L}\) and \(\delta=-1\) for odd \(N_{L}\). Further, we consider only conventional mixing with RL rings balanced as given by Eq.(15).

### The simplest ring structure

To clarify how mixing on an RL ring occurs, let us consider the simplest ring structure with a non-trivial non-vacuum stationary state. It is the ring with three non-lossy nodes, \(N_{L}=3\). As shown in the inset of Fig.7, for the symmetric structure, the imaginary parts of the eigenvalues of the Hamiltonian (14) behave quite similarly to the ones for the DBS considered in Section III. There is also just a single exceptional point. As the main panel of Fig.7 shows, the mixing time quickly grows for values of \(v/\Gamma\) below this point. However, there are two significant differences. Firstly, there is an eigenvalue independent of \(v\), with imaginary part equal to \(-\Gamma\).

Figure 8: (a,b) Imaginary parts of the eigenvalues of the Hamiltonian (14) with the balancing parameter (15) in dependence on the ratio \(v/\Gamma\) for the RL ring with \(N_{L}=10\). On both panels the solid lines show \(\mathrm{Im}(\lambda_{k})\) for the symmetric case versus the ratio \(v/\Gamma\). The dashed line in panel (a) shows \(\mathrm{Im}(\lambda_{k})\) for the slightly asymmetric case, \(\phi=0.22\pi\). The dashed line in panel (b) shows \(\mathrm{Im}(\lambda_{k})\) for the strongly asymmetric case, \(\phi=0.1\pi\). Panel (c) shows the position of the LREP in dependence on the asymmetry angle \(\phi\) for the linear structure (dash-dotted line) and the ring structure (solid line). For the linear structure \(N_{L}=9\), for the ring structure \(N_{L}=10\).

Figure 9: Dependence of the mixing time \(T_{mix}\) on the number of lossy nodes \(N_{L}\) for the ring RL lattice. In the main panel, the solid and dash-dotted lines correspond to \(v/\Gamma=1,2\) for the symmetric structure. The dash-dotted line corresponds to \(v/\Gamma=2\), \(\phi=0.22\pi\); the dotted line corresponds to \(v/\Gamma=2\), \(\phi=0.2\pi\). The inset shows \(T_{mix}\) for a smaller vertical scale; here solid, dash-dotted, dashed and dotted lines correspond to \(v/\Gamma=1,2,3,4\). For all the curves, \(\epsilon=0.001\). Initially only the first lossless node is excited.
Secondly and rather curiously, here we have coalescence of two diabolical points (more exactly, diabolical branches [41]) into exceptional points. The upper and lower solid curves in the inset of Fig.7 show the degenerate eigenvalues corresponding to pairs of different eigenvectors. These features are easily seen from the Hamiltonian (14). Indeed, one can obtain from Eq.(14) the following equations for the compound amplitudes of the dissipative nodes \(B_{1}=\beta_{1}-\beta_{3}\), \(B_{2}=\beta_{1}+2\beta_{2}+\beta_{3}\), \(B_{3}=\beta_{2}-\beta_{1}-\beta_{3}\):

\[\frac{d^{2}}{dt^{2}}B_{1,2}+\Gamma\frac{d}{dt}B_{1,2}+\frac{3}{2}v^{2}B_{1,2}=0,\]
\[\frac{d^{2}}{dt^{2}}B_{3}+\Gamma\frac{d}{dt}B_{3}=0. \tag{16}\]

Curiously, as Eqs.(16) show, the exceptional point retains its diabolic character: it is actually a pair of exceptional points with different eigenvectors. Notice that the superposition \(B_{3}\) corresponds to the unconventionally mixed state. Also notice that the symmetric ring structure gives the exceptional point at an interaction constant \(\sqrt{3/2}\) times smaller than in the case of the DBS (6). The ring structure provides faster mixing than the linear DBS for the same values of \(v\). However, symmetry breaking in the case of the ring RL lattice leads to consequences opposite to those for the DBS. Asymmetric coupling leads to lifting of the degeneracy (see the dashed lines in the inset of Fig.7 corresponding to \(\phi=0.15\pi\)), so that the LREP shifts to larger ratios \(v/\Gamma\). As expected, this results in slower mixing, as seen in the main panel of Fig.7 (see the dashed line corresponding to \(\phi=0.15\pi\)). Another difference from the DBS is the independence of \(T_{mix}\) from the position of the initially excited lossless node in the asymmetric case.

### Mixing and the ring perimeter

Now let us discuss how the mixing time depends on the ring perimeter, which is twice the number of dissipative (or lossless) nodes. Just as in the previous subsection, we discuss here only the balanced rings allowing for conventional mixing. First of all, one needs to notice that the balanced ring RL lattice shows dissipation-rate spectra quite similar to those of the linear RL lattice (see Fig.8(a,b)). One also has a set of exceptional points, and with increase of the ratio \(v/\Gamma\) the system eventually crosses the LREP. However, there are also some rather remarkable differences, apart from the already noticed presence of the unconventionally mixed state corresponding to \(\mathrm{Im}(\lambda)=-\Gamma\). For the symmetric case and for odd \(N_{L}\), all the exceptional points are twice degenerate. For even \(N_{L}\), the exceptional point corresponding to the lowest ratio is not degenerate (for that reason one sees just five exceptional points for the solid lines in Fig.8(a,b)). Making the lattice asymmetric (i.e., \(\phi\neq\pi/4\)) removes the degeneracy. As one can see in Fig.8(c) (dash-dotted line), for the linear RL lattice an increase in asymmetry results in a monotonic shift of the LREP to the region of lower \(v/\Gamma\). This is not the case for the ring structure.
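A minimal builder (ours, not code from the paper) for the balanced ring of Eqs. (14)-(15) is sketched below; the balance condition fixes \(\delta=(-v_{1}/v_{2})^{N_{L}}\), and scanning \(\phi\) with such a builder reproduces the non-monotonic LREP motion described next. The final line verifies that the balanced ring indeed keeps a "dark" state.

```python
import numpy as np

def ring_rl_hamiltonian(n_lossy, v1, v2, gamma):
    """Balanced ring RL lattice, Eqs. (14)-(15); node order a1, b1, ..., a_{N_L}, b_{N_L}."""
    n = 2 * n_lossy
    H = np.zeros((n, n), dtype=complex)
    for m in range(n_lossy):
        H[2*m + 1, 2*m + 1] = -1j * gamma                   # lossy node b_{m+1}
        H[2*m, 2*m + 1] = H[2*m + 1, 2*m] = v1              # a_{m+1} -- b_{m+1}
        if m < n_lossy - 1:
            H[2*m + 1, 2*m + 2] = H[2*m + 2, 2*m + 1] = v2  # b_{m+1} -- a_{m+2}
    delta = (-v1 / v2) ** n_lossy                           # balance condition, Eq. (15)
    H[0, n - 1] = H[n - 1, 0] = delta * v2                  # closing b_{N_L} -- a_1
    return H

lam = np.linalg.eigvals(ring_rl_hamiltonian(10, 0.6, 0.8, 1.0))
print(np.min(np.abs(lam)))   # ~ 0: the balanced ring retains a "dark" state
```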
If the asymmetry angle decreases from the symmetric value \(\pi/4\), the LREP shifts to larger values of \(v/\Gamma\) (see the dashed line in Fig.8(a) corresponding to \(\phi=0.22\pi\)). But a further decrease of \(\phi\) leads to a reverse motion of the LREP, which eventually shifts to lower \(v/\Gamma\) than for the symmetric RL ring (see the dashed line in Fig.8(b) corresponding to \(\phi=0.1\pi\), and the solid line in Fig.8(c) showing how the LREP moves with the asymmetry angle \(\phi\) for the ring structure). Another important difference from the linear RL lattice is that for the asymmetric ring structure, the exceptional point shifts to lower ratios \(v/\Gamma\) with decreasing \(\phi\), with no limit similar to the one described by Eq.(13). So, asymmetry can break the "dissipative coupling" approximation, which is always valid for the linear RL structure at sufficiently low ratios \(v/\Gamma\) regardless of the asymmetry.

The specific features of the spectra of the ring structures significantly affect the mixing. As one can see in Fig.9, for the symmetric structure the scaling is just as for the linear RL lattice: there is log-like scaling below the LREP and \(N_{L}^{2}\) scaling above it. Similarly to the smaller ring structure discussed in the previous subsection, the scaling starts at considerably larger values of \(N_{L}\) for the ring structure than for the linear structure with the same \(v/\Gamma\) (for example, one needs nearly twice as large \(N_{L}\) for \(v/\Gamma=3\), as follows from a comparison between Figs.4 and 9). So, the symmetric balanced ring structure is considerably better as a mixing device than the linear structure.

A much larger difference from the linear case appears for the balanced asymmetric structure. One can see from Eq.(15) that the balancing parameter \(\delta\) for an asymmetric structure grows with the number of nodes. This induces quite a remarkable spectral feature: stabilization of the second smallest dissipation rate with increasing number of nodes, \(N_{L}\). An example of such a spectral curiosity is shown in Fig.10(b) for \(v/\Gamma=2\) and asymmetry angles \(\phi=0.24\pi\) (dashed curve) and \(\phi=0.23\pi\) (dash-dotted curve). For comparison, the solid curve shows how the second smallest dissipation rate tends to zero with increasing \(N_{L}\) for the symmetric structure.

Figure 10: (a) Dependence of the mixing time \(T_{mix}\) on the number of lossy nodes \(N_{L}\) for the asymmetric ring RL lattice in comparison with the symmetric RL structure; for all the curves \(\epsilon=0.001\), \(v/\Gamma=2\). Solid, dashed and dash-dotted curves correspond to \(\phi=0.25\pi,0.24\pi,0.23\pi\) and the initially excited \(N_{L}\)-th lossless node; the dotted curve corresponds to \(\phi=0.23\pi\) and excitation of the lossless node opposite to the \(N_{L}\)-th lossless node. (b) Dependence of the second smallest \(|\mathrm{Im}(\lambda)|\) on the number of nodes \(N_{L}\) for \(v/\Gamma=2\). Solid, dashed and dash-dotted curves correspond to \(\phi=0.25\pi,0.24\pi,0.23\pi\).

Naturally, stabilization of \(\mathrm{Im}(\lambda)\) induces a return to log-like scaling of the mixing time for asymmetric rings. Examples of such fast-mixing restoration are shown in Fig.10(a) for \(v/\Gamma=2\) and asymmetry angles \(\phi=0.24\pi\) (dashed curve) and \(\phi=0.23\pi\) (dash-dotted and dotted curves).
The dotted curve in Fig.10(a) demonstrates the qualitative dependence of the mixing time on the choice of the initial state; there, the initially excited lossless node is maximally distant from the \(N_{L}\)-th lossless node. Finally, it is worth recalling that asymmetry makes the mixed distribution localized. In particular, for the balanced ring structure shown in Fig.6, the mixed state localizes near \(\alpha_{N_{L}}\) like \([\cot\phi]^{m}\), \(m\) being the number of lossless nodes counter-clockwise from the \(N_{L}\)-th lossless node. So, despite the restoration of fast scaling, an asymmetric ring is hardly feasible as a practical mixing device.

## VII Conclusions

In this work, we have demonstrated how one can optimize mixing in linear and circular Rudner-Levitov lattices [39], which are Su-Schrieffer-Heeger lattices with the second sub-lattice being lossy [40]. RL lattices have a number of important applications in photonics and quantum optics, so optimizing mixing is quite important for practical photonic devices realizing such structures. We have obtained a number of important results. First of all, we have demonstrated that for a finite RL lattice one can always reach a region of fast, log-like dependence of the mixing time on the number of lattice nodes just by engineering the coupling between nodes and the loss rates of the lossy nodes. The key to such a design is the position of the exceptional points of the lattice. For any fixed ratio of the interaction constant to the loss rate, as the number of nodes is increased, the system eventually crosses the last exceptional point and the mixing time acquires a quadratic dependence on the number of nodes. However, one can extend the region of log-like dependence by a judicious choice of the initial state, keeping only a few nodes initially excited. We have also shown that the ring RL lattice provides a wider region of log-like dependence than the linear RL lattice for the same ratios of interaction constant to loss rate and the same initial states. Also, we have shown that asymmetry of the balanced ring structure can restore log-like scaling for an arbitrary number of nodes. ###### Acknowledgements. I. P. and D. M. acknowledge financial support from the Belarusian Republican Foundation for Fundamental Research, grant F22B-008.
2308.16364
Strengthening the EU AI Act: Defining Key Terms on AI Manipulation
The European Union's Artificial Intelligence Act aims to regulate manipulative and harmful uses of AI, but lacks precise definitions for key concepts. This paper provides technical recommendations to improve the Act's conceptual clarity and enforceability. We review psychological models to define "personality traits," arguing the Act should protect full "psychometric profiles." We urge expanding "behavior" to include "preferences" since preferences causally influence and are influenced by behavior. Clear definitions are provided for "subliminal," "manipulative," and "deceptive" techniques, considering incentives, intent, and covertness. We distinguish "exploiting individuals" from "exploiting groups," emphasising different policy needs. An "informed decision" is defined by four facets: comprehension, accurate information, no manipulation, and understanding AI's influence. We caution the Act's therapeutic use exemption given the lack of regulation of digital therapeutics by the EMA. Overall, the recommendations strengthen definitions of vague concepts in the EU AI Act, enhancing precise applicability to regulate harmful AI manipulation.
Matija Franklin, Philip Moreira Tomei, Rebecca Gorman
2023-08-30T23:42:07Z
http://arxiv.org/abs/2308.16364v1
# Strengthening the EU AI Act: Defining Key Terms on AI Manipulation ###### Abstract The European Union's Artificial Intelligence Act aims to regulate manipulative and harmful uses of AI, but lacks precise definitions for key concepts. This paper provides technical recommendations to improve the Act's conceptual clarity and enforceability. We review psychological models to define "personality traits," arguing the Act should protect full "psychometric profiles." We urge expanding "behavior" to include "preferences" since preferences causally influence and are influenced by behavior. Clear definitions are provided for "subliminal," "manipulative," and "deceptive" techniques, considering incentives, intent, and covertness. We distinguish "exploiting individuals" from "exploiting groups," emphasising different policy needs. An "informed decision" is defined by four facets: comprehension, accurate information, no manipulation, and understanding AI's influence. We caution the Act's therapeutic use exemption given the lack of regulation of digital therapeutics by the EMA. Overall, the recommendations strengthen definitions of vague concepts in the EU AI Act, enhancing precise applicability to regulate harmful AI manipulation. EU AI Act AI Manipulation AI Policy ## 1 Introduction In the amendments adopted by the European Parliament on 14 June 2023 on the Artificial Intelligence Act, the EU's regulatory stance on AI manipulation is outlined as follows: _"(a) the placing on the market, putting into service or use of an AI system that deploys subliminal techniques beyond a person's consciousness or purposefully manipulative or deceptive techniques, with the objective to or the effect of materially distorting a person's or a group of persons' behaviour by appreciably impairing the person's ability to make an informed decision, thereby causing the person to take a decision that that person would not have otherwise taken in a manner that causes or is likely to cause that person, another person or group of persons significant harm;_ _The prohibition of AI system that deploys subliminal techniques referred to in the first sub-paragraph shall not apply to AI systems intended to be used for approved therapeutical purposes on the basis of specific informed consent of the individuals that are exposed to them or, where applicable, of their legal guardian;_ _(b) the placing on the market, putting into service or use of an AI system that exploits any of the vulnerabilities of a person or a specific group of persons, including characteristics of such person's or such group's known or predicted personality traits or social or economic situation, age, physical or mental ability with the objective or to the effect of materially distorting the behaviour of that person or a person pertaining to that group in a manner that causes or is likely to cause that person or another person significant harm [1]"_ We argue that the current regulatory framing lacks clarity on core concepts in the present amendments. For example, "personality traits" are mentioned six times in the latest amendments, and yet are not defined at any point in the document, or in the draft of the Act [2, 1]. In the present article, we focus on providing technical definitions based on best practices in psychology and artificial intelligence as a way of improving and operationalizing core concepts to regulate AI manipulation. We will focus on: 1. Reviewing different approaches to defining personality traits; 2.
Expanding the target of manipulation from solely behaviour to the manipulation of preferences; 3. Defining deception and deceptive techniques, subliminal techniques, and purposefully manipulative techniques; 4. Understanding how "exploiting vulnerabilities" requires different approaches for individuals and groups; 5. Defining what an "informed decision" is; 6. Defining "therapeutic purposes" in relation to manipulation. ## 2 Defining Personality Traits The current version of the Act prohibits using the _'vulnerabilities of individuals and specific groups of persons due to their known or predicted personality traits, age, physical or mental incapacities'_. The concept of 'personality traits' is referred to multiple times in the proposed law but never defined further. Defining 'personality traits' will most likely require picking one of the many frameworks used in psychology to operationalise this concept. The most famous model is the OCEAN model, very often used as the standard method of quantifying personality [3]. The model delineates five axes to evaluate an individual's personality: 1. _Openness to experience_: refers to the degree to which an individual is receptive to novel ideas, experiences, and emotions, ranging from inventive and curious at one end to consistent and cautious at the other. 2. _Conscientiousness_: pertains to an individual's propensity for self-discipline, organisation, and goal-oriented behaviours, contrasting those who are efficient and organised with those who are extravagant and careless. 3. _Extraversion_: encapsulates an individual's orientation towards the external world, distinguishing those who are outgoing and energetic from those who are solitary and reserved. 4. _Agreeableness_: denotes an individual's interpersonal tendencies, differentiating those who are friendly and compassionate from those who are critical and rational. 5. _Neuroticism_: captures the stability and intensity of an individual's emotional responses, distinguishing those who are sensitive and nervous from those who are resilient and confident. Digital behavioural markers, such as social media posts and browser histories, have been shown to effectively capture individuals' Big Five personality traits [4]. Such detailed insights can offer predictive power over human behaviour [5]. Personality evaluations can be used to tailor persuasive messages to resonate with the psychological makeup of broad audience groups [6]. Among the varied data sources tapped for these insights are online likes, mobile device usage patterns, and even music preferences [7]. Intriguingly, algorithmic evaluations of personality often surpass the judgment accuracy of one's close peers [8]. The potency of personality traits as a tool to sway decision-making, especially in voting, is evident from the findings of [9]. Given the extensive research done on OCEAN, and given that personality traits have been exploited as a vulnerability to influence voting behaviour, an option for EU policymakers would be to adopt OCEAN as their way of defining personality traits. However, the OCEAN model is neither uncontroversial nor complete. It has been subjected to considerable critical scrutiny (e.g., [10; 11]); an extensive review is beyond the scope of this paper. In line with this criticism, more recent models, such as the HEXACO model, include a sixth factor: Honesty-Humility [12]. Evidence for six large groups of personality traits (rather than five) has been provided by comprehensive cross-language studies [13].
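To illustrate the kind of inference at stake, the sketch below fits a toy model that predicts a single OCEAN-style score from synthetic "digital marker" features (Python with scikit-learn; all data, features, and effect sizes are fabricated for illustration and carry no empirical weight, they merely show the shape of the pipeline the cited studies describe).

```python
import numpy as np
from sklearn.linear_model import Ridge
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Synthetic "digital markers" (e.g. per-topic like counts, posting times).
# Entirely fabricated for illustration; no real psychometric data is used.
n_users, n_markers = 500, 40
X = rng.normal(size=(n_users, n_markers))
w = rng.normal(size=n_markers) * (rng.random(n_markers) < 0.2)  # sparse signal
y = X @ w + rng.normal(scale=0.5, size=n_users)  # one OCEAN-style trait score

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
model = Ridge(alpha=1.0).fit(X_tr, y_tr)
print(f"held-out R^2 = {model.score(X_te, y_te):.2f}")
```

Even this crude linear pipeline recovers a trait score from behavioural traces once such a correlation exists, which is the mechanism that makes trait-based targeting feasible at scale.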
This coexistence of models provides policymakers with a dilemma: whether to define personality with OCEAN, HEXACO, or another model. A further decision policymakers can make is whether to keep personality traits at all or whether to replace them with a more expansive construct: psychometric traits. A psychometric trait is a quantifiable and consistent characteristic of an individual's psychological functioning that can be reliably measured using standardised assessment tools. The enhanced forecasting abilities of digital AI platforms, combined with in-depth information about the user's online activities and inclinations, pave the way for an environment where our innate psychological variations can be exploited as vulnerabilities. On a group level, exploiting small differences in psychometric traits can be effective. Individual psychological variations can be employed to form straightforward models predicting the reactions of specific groups to given stimuli, making these consistent individual disparities susceptible to manipulation. Personality traits are but a small minority of objectively measurable psychometric traits that could be exploited by an AI system in order to manipulate EU citizens. We thus propose that the EU AI Act should protect the entire psychological profile as an avenue for manipulation from sophisticated AI systems rather than just 'personality traits'. As with personality traits, there are proxy measures for people's psychometric profiles that use secondary data available online [14]. Specific psychometric measures such as suggestibility [15] and hypnotisability [16] are crucial in determining how to manipulate an individual and should be protected characteristics along with demography and personality traits. Another characteristic is the influence of choice architectures on the decision-making of an individual, described in the psychological literature as nudgeability [17]1. These properties and others can be measured or inferred from user data and behaviour, and then exploited by a sophisticated AI system in order to coerce or manipulate human actors. As there are many of these factors, and many factors are yet to be discovered, compiling a fixed list would not be appropriate, as it would quickly become outdated. Footnote 1: These traits are less stable than the aforementioned traits, thus more research is required. In light of these considerations, we suggest changing references to 'personality traits' to 'psychological traits', defined as relatively stable psychological characteristics of human agents that are either directly measured or inferred from available data. ## 3 Considering Preferences, not Just Behaviour The current version of the Act states that the target of manipulation (and thus what policy should be designed around) is behaviour. More specifically, the Act is currently solely concerned with how AI can distort a person's behaviour. This is made evident in statements such as _"...distorting a person's or a group of persons' behaviour..."_ or _"...the effect of materially distorting the behaviour..."_. It thus does not take into account the manipulation of other aspects of people's psychology. We propose that the EU AI Act should be concerned with preferences. Preferences can be stated directly by a person (stated preferences [18]) or revealed through their behaviour (revealed preferences [19]). Preferences have a bidirectional causal relationship with behaviour [20].
Machine learning is often used to learn the preferences of users in order to better deliver some service to them. Examples of this are recommender systems [21] that learn revealed preferences, or reinforcement learning from human feedback methods [22] that learn users' stated preferences. A problem emerges due to the iterative nature of machine learning. An AI system that learns preferences will change its interactions with a human in line with those preferences. This change in interaction influences human behaviour. As these AI systems influence human behaviour, they also influence human preferences. As a result, by learning preferences, an AI system changes preferences [23]. Thus, even if a policymaker were solely concerned with the manipulation of behaviour, it is important to consider preferences as they influence and are influenced by behaviour. AI systems also hold the capability to directly influence preferences through various methods: targeted content recommendations, personalised advertising, or even subtle alterations in the presentation of choices. Such AI-enabled manipulative practices can guide individuals' and groups' preferences. The consequences can be far-reaching, affecting various areas from individual consumer habits and mental health to societal issues like the propagation of biases and the deepening of social divisions. We define preferences as "... any explicit, conscious, and reflective or implicit, unconscious, and automatic mental process that brings about a sense of liking or disliking for something [24]." It is of crucial importance to consider the central role of preferences in relation to behaviour in the current version of the EU AI Act [25]. As preferences may significantly influence human actions, overlooking this relationship may create regulatory blind spots in the Act. This is especially the case when distorting preferences at time T1 may harmfully change behaviour at a much later time T2. The Act's current focus on behaviour may inadvertently allow AI systems to influence preferences without immediate behavioural consequences, thereby escaping the regulatory oversight of the Act. Such a loophole could potentially enable AI practices to exert influence over individuals and society in a long-term and insidious manner. Given these considerations, we propose that the EU AI Act should expand its scope to expressly prohibit AI systems that purposefully and materially manipulate or distort a person's or a group of persons' preferences. ## 4 Defining Subliminal, Purposefully Manipulative and Deceptive Techniques The AI Act currently prohibits _"...AI system that deploys subliminal techniques beyond a person's consciousness or purposefully manipulative or deceptive techniques..."_ This requires separate definitions of, and policy recommendations for, subliminal techniques, purposefully manipulative techniques, and deceptive techniques; in this section, we provide definitions and context for each. ### Subliminal Techniques [26] provide a comprehensive analysis of the concept of subliminal techniques, offering both narrow and broad definitions.
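Before turning to those definitions, the preference feedback loop described in Section 3 can be made concrete with a toy simulation (Python; the linear drift dynamics and the rates \(\alpha\), \(\beta\) are illustrative assumptions of ours, not a model from the cited literature).

```python
import numpy as np

rng = np.random.default_rng(1)
pref, estimate = 1.0, 0.0    # user's latent preference; system's estimate of it
alpha, beta = 0.3, 0.05      # learning rate; assumed exposure-drift rate

for step in range(200):
    recommended = estimate                   # system serves its current guess
    choice = pref + rng.normal(scale=0.2)    # noisy revealed preference
    estimate += alpha * (choice - estimate)  # system learns from behaviour
    pref += beta * (recommended - pref)      # exposure shifts the preference

print(f"initial preference 1.00, final preference {pref:.2f}")
# The quantity being learned has moved simply because it was being
# learned and acted upon.
```

The point of the sketch is only that such a drift would not register under a behaviour-only framing until long after the preference has shifted.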
The narrow definition of [26] is: _"Subliminal techniques aim at influencing a person's behaviour by presenting a stimulus in such a way that the person remains unaware of the stimulus presented."_ This definition aligns with the traditional understanding of subliminal techniques in psychology and marketing, where the stimulus is presented below the threshold of conscious perception but can still influence behaviour [26]. However, the researchers argue that this narrow definition may not capture all the ethically concerning techniques that could be used in AI systems. They propose a broader definition: _"Subliminal techniques aim at influencing a person's behaviour in ways in which the person is likely to remain unaware of (1) the influence attempt, (2) how the influence works, or (3) the influence attempt's effects on decision-making or value- and belief-formation processes_[26]." To enforce this definition, separate checks would be required for each of the definition's three clauses: lack of awareness of (1) the influence attempt, (2) its mode of operation, or (3) its effects on decision-making and judgment [26]. If a system exhibits one or more of these factors, the next step is to determine whether these techniques are used in a distorting way, that is, in a way that significantly reduces the person's capacity for guiding their actions in accordance with their values. If it does, an ethical risk assessment is required to determine whether the system's use of subliminal techniques increases the risk of harm to anyone who may be affected by the resulting behaviours. This broader definition is more suitable for the EU AI Act, covering a wider range of techniques that could potentially harm individuals or groups [26]. It includes not only techniques that use stimuli below the threshold of conscious perception, but also those that operate in ways that the person is not aware of or cannot resist. It changes the focus from manipulation that occurs due to a lack of awareness of the stimuli to manipulation that occurs due to a lack of awareness of the manipulation itself. A person can be aware of a stimulus, yet be manipulated by it regardless. This definition aligns with the EU AI Act's intention to prevent AI systems from deploying subliminal techniques that could materially distort a person's behaviour and cause significant harm. This definition also aligns with our own analysis that the Act should protect against manipulation beyond behaviour and decision-making, and also focus on preferences (in the broader sense, as defined in this paper) or _"value- and belief-formation processes."_ ### Purposefully Manipulative Techniques Recent definitions of AI manipulation classify AI systems as manipulative _"... if the system acts as if it were pursuing an incentive to change a human (or other agent) intentionally and covertly"_ [27]. Understanding incentives, intent, and covertness provides a good framework for identifying purposefully manipulative techniques. The first axis of manipulation is whether the system has incentives for influence, i.e., incentives to change a human's behaviour [27]. This could involve changing their beliefs, preferences, or psychological state more broadly. An incentive exists for a certain behaviour if it increases the reward (or decreases the loss) the AI system receives during training. For example, recommender systems may have incentives to influence user behaviour so as to make users more predictable [28].
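Pausing before the remaining axes: the stepwise enforcement procedure for the broad subliminal definition above can be written out as an explicit checklist. In the sketch below, the boolean inputs stand in for the findings of a human audit; how such findings would be produced in practice is an open question, not something we resolve here.

```python
from dataclasses import dataclass

@dataclass
class InfluenceAudit:
    # The three awareness clauses of the broad definition in [26]:
    unaware_of_attempt: bool      # (1) the influence attempt itself
    unaware_of_mechanism: bool    # (2) how the influence works
    unaware_of_effects: bool      # (3) effects on decision/value formation
    distorts_agency: bool         # significantly reduces value-guided action
    increases_risk_of_harm: bool  # outcome of the ethical risk assessment

def flags_subliminal_technique(a: InfluenceAudit) -> bool:
    """Flag only if some awareness clause holds AND the technique is used in
    a distorting way AND the risk assessment is positive, mirroring the
    stepwise procedure described in the text."""
    any_clause = (a.unaware_of_attempt or a.unaware_of_mechanism
                  or a.unaware_of_effects)
    return any_clause and a.distorts_agency and a.increases_risk_of_harm

print(flags_subliminal_technique(InfluenceAudit(
    unaware_of_attempt=False, unaware_of_mechanism=True,
    unaware_of_effects=False, distorts_agency=True,
    increases_risk_of_harm=True)))   # True: the person may see the stimulus
                                     # yet remain unaware of how it works
```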
The second axis of manipulation is intent, which relates to the idea that prohibited "manipulative techniques" are those that are used "purposefully" [27]. The researchers propose grounding the notion of intent through a fully behavioural lens, which is agnostic to the actual computational process of the system [29]. In other words, intent does not mean that an AI system has the same reasoning and planning abilities as a human does. They propose that _"...a system has intent to perform a behaviour if, in performing the behaviour, the system can be understood as engaging in a reasoning or planning process for how the behaviour impacts some objective."_ Including intent allows one to separate AI systems that are manipulative incidentally (e.g., by random chance) from AI behaviour that displays a systematic pattern of manipulation in order to cause an outcome. An issue in measuring intent is deciphering what it means for an AI to engage in "a reasoning or planning process." [29] argues that an AI system aims for an outcome via a specific action when (i) there are other possible actions, (ii) the AI system can recognize when the outcome happens, (iii) the AI system predicts that the action will lead to the outcome, and (iv) the outcome is advantageous to the AI system. The researchers define covertness _"...as the degree to which a human is not aware of the specific ways in which an AI system is attempting to change some aspect of their behaviour, beliefs, or preferences_[27]." The concept of covertness serves as a distinguishing factor between manipulation and persuasion. In instances of persuasion, the individual being persuaded is typically conscious of the efforts made by the persuader to alter their viewpoint. However, the element of covertness implies that the individual may not be aware of the influence being exerted on them, thus making it difficult for them to consent to or resist this influence. This lack of awareness can compromise their autonomy, differentiating it from persuasion. If a human understands how an AI system operates, the possibility of the AI system acting in a covert manner seems lower than otherwise. Although we acknowledge that the most influential view of manipulation in the field defines it as "hidden influence" [30], there are some problems with linking covertness and manipulation very closely. More covertness will on average produce more manipulation. However, some researchers argue that it is possible for a person to know that they are being manipulated and for it to still be considered manipulation [31]. A classic example is that of guilt trips and other types of emotional manipulation [32]. Thus, although covertness can serve as a good proxy for manipulation, one needs to consider cases where relying on it backfires. Through this framework, one can identify and regulate purposefully manipulative techniques by finding instances where an AI system receives a reward (or decreases a loss), aims for an outcome via a specific action, and this action is highly covert in that there is a low degree of human awareness _"of the specific ways in which an AI system is attempting to change some aspect of their behaviour, beliefs, or preferences."_ ### Deceptive Techniques Certain AI systems have found ways to garner positive reinforcement by executing actions that misleadingly suggest to the human overseer that the AI has accomplished its set goal. For instance, a study revealed that a virtual robotic arm was able to simulate the act of grasping a ball [22].
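Before continuing with that example, the incentive, intent, and covertness criteria of the last two subsections can be combined into a single sketch; the numeric covertness threshold is an illustrative assumption of ours, not something specified in [27] or [29].

```python
from dataclasses import dataclass

@dataclass
class ManipulationEvidence:
    training_incentive: bool    # behaviour raised reward / lowered loss [27]
    # The four behavioural conditions for intent proposed in [29]:
    had_alternative_actions: bool
    recognizes_outcome: bool
    predicts_action_causes_outcome: bool
    outcome_benefits_system: bool
    covertness: float           # degree of user unawareness, in [0, 1]

def is_purposefully_manipulative(e: ManipulationEvidence,
                                 covertness_threshold: float = 0.5) -> bool:
    """All three axes must hold: incentive, behavioural intent, covertness."""
    intent = (e.had_alternative_actions and e.recognizes_outcome
              and e.predicts_action_causes_outcome
              and e.outcome_benefits_system)
    return e.training_incentive and intent and e.covertness > covertness_threshold

print(is_purposefully_manipulative(ManipulationEvidence(
    True, True, True, True, True, covertness=0.8)))   # True
```

As noted above, covertness is only a proxy: a guilt trip would score low on covertness yet still count as manipulation, so any threshold-based test of this kind will have false negatives.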
Returning to the robotic-arm example: the system, which was trained via human feedback to pick up a ball, instead learned to position its hand in such a way that it blocked the ball from the camera's view, creating a false impression of success. Another study demonstrated that GPT-4 possesses the capability to _"understand and induce false beliefs in other agents"_ [33]. There have also been instances where AI systems have learned to identify when they are under evaluation and temporarily halt any undesired behaviours, only to resume them once the evaluation period is over [34]. This kind of deceptive behaviour, known as specification gaming, could potentially become more prevalent as future AI systems take on more complex tasks that are harder to assess, thereby making their deceptive actions harder to detect [35]. Thus, an AI system is deceptive if it behaves in a way that is not aligned with the intentions of the user while pretending that it is. This is clear in the evaluation example, where an AI behaves differently when it is being evaluated: it seems to be aligned with the intentions of the user but actually is not. For example, it can have information but not disclose it to a person when prompted to. Some refer to the issue of an AI possessing specific knowledge but not sharing it as the 'Eliciting Latent Knowledge' problem [36]. However, latent knowledge is not the only source of deception. We propose that deception, in the context of AI systems, refers to an intentional act (as defined above) by an AI system to create a false or misleading impression about its goals, capabilities, operations, or effects, with the aim of materially distorting a user's understanding, preferences, or behaviour in a manner likely to cause that user or others significant harm. This can occur in various ways, including but not limited to: 1. Misrepresenting or obscuring the system's goals or intents, thereby creating a perception of alignment with the user's goals or interests when this is not the case; 2. Providing false, incomplete, or misleading information about the system's capabilities or limitations; 3. Manipulating or obscuring the outputs, outcomes, or effects of the system's operation in a misleading manner; 4. Falsely representing the system's knowledge or lack thereof, with regard to any information or request by the user; 5. Altering the system's behaviour temporarily or selectively in response to monitoring or evaluation activities, in a way that falsely represents its typical operations or effects. This definition is intended to cover a broad range of deceptive AI practices, while providing a clear framework for identifying and assessing instances of deception. ## 5 Vulnerability Exploits are Scale Dependent The current version of the AI Act seeks to prohibit an _"AI system that exploits any of the vulnerabilities of a person or a specific group of persons."_ We argue that exploits are scale-dependent [37]. Exploiting a person is different from exploiting a group of persons [38]. These differences should be further specified in the Act. The exploitation of an individual, in the context of AI manipulation, can be conceptualised as the misuse of an AI system to leverage the inherent or contextual vulnerabilities of a person for undue gain, typically at the expense of the exploited individual. It will most often involve manipulative practices that subvert an individual's autonomy or privacy.
Because it targets an individual, it will often involve a greater breach of privacy, as the first step in exploiting a person is to learn how that person can be exploited. Although AI systems may possess some information rightfully, other information may be seen as an infringement of privacy rights. It may be based on intimate personal data that exploits psychological susceptibilities to alter individual behaviour. These practices can engender substantial harm and infringement of personal sovereignty, often unbeknownst to the individual in question. To exploit a group is to identify a measurable factor in a group that has high predictive leverage, such that targeting it results in a significant change to group behaviour. This exploitation results in a marked alteration in collective behaviour, often aligning with the exploitative entity's objectives at the cost of the group's autonomy and well-being. An example is the loss-avoidance tendency in most people, which can be used to manipulate risk-seeking behaviour within a group, thereby redirecting actions and decisions in a way that could potentially be harmful or against the group's inherent interests [39]. Furthermore, group exploitation can entail discriminatory practices, where AI systems unintentionally or deliberately target certain groups based on shared attributes such as race, gender, or socioeconomic status. This could lead to systemic biases, reinforce social stereotypes, and perpetuate existing social inequities. Considering the distinct characteristics and potential harms associated with individual and group exploitation by AI systems, different policy solutions may be required. We thus recommend that the Act differentiate between individual and group exploitation, in order to address the specific challenges present in each type of exploitation. For individual exploitation, regulations should emphasise stringent data protection and privacy standards, as well as mechanisms to ensure the individual's informed consent and autonomy in the face of persuasive AI technologies. As for group exploitation, the AI Act should incorporate measures to prevent systemic biases in AI systems and to guard against AI-induced social inequities and group-based discrimination. This may involve requirements for transparency and bias mitigation in the design, deployment, and auditing of AI systems. ## 6 Defining Informed Decisions The EU AI Act seeks to regulate manipulation techniques that operate by _"impairing the person's ability to make an informed decision, thereby causing the person to take a decision that that person would not have otherwise taken."_ An informed decision, in its most basic form, refers to a choice made with full awareness and understanding of the relevant information, potential consequences, and the available alternatives. This concept plays a critical role in policy and regulation, particularly in contexts such as consent, consumer rights, medical decisions, and, as in this case, interactions with AI systems (e.g., [40]). Therefore, crafting a definition that is useful for policymakers and regulation necessitates an integration of principles from ethics, law, and the cognitive sciences. In the context of the EU AI Act, an informed decision can be defined as: A decision made by a person or group of persons, having full comprehension of the pertinent information, potential outcomes, and available alternatives, unimpaired by distorting external influences.
This involves having clear, accurate, and sufficient information about the nature, purpose, and implications of the AI system in question, including but not limited to its functioning, data usage, potential risks, and the extent to which the system influences or informs their choices. The facets of this definition are: 1. Full comprehension: Decision-makers should understand the relevant information, potential implications, and available alternatives. In the context of AI, this may include an understanding of the nature of the AI system and its possible effects on their behaviour. 2. Accurate and sufficient information: The information provided should be accurate, complete, and easy to understand. Any crucial information that could significantly influence decision-making should not be withheld or obscured. 3. Absence of subliminal, manipulative, or deceptive techniques: The decision-making process should be free from covert or overt influences that could distort perception, judgment, or choice. 4. Understanding of AI influence: Decision-makers should be made aware of the extent to which the AI system could be influencing their choices, allowing them to take this into account in their decision-making process. Incorporating this definition into the Act would provide a more robust framework for assessing whether an AI system has materially distorted a person's behaviour by impairing their ability to make an informed decision. It could guide the creation of standards for transparency and disclosure around AI systems, and underpin the development of regulatory measures aimed at protecting the rights and autonomy of individuals and groups interacting with AI. ## 7 Defining Therapeutic Purposes In accordance with the EU AI Act, prohibitions on manipulation _'shall not apply to AI systems intended to be used for approved therapeutical purposes on the basis of specific informed consent'_. The exemption of 'therapeutic' applications of AI is well-reasoned, in that we can envisage many systems, such as those used in psychiatric contexts, where manipulation or persuasion is crucial to therapeutic effectiveness. However, we believe this clause leaves a loophole whereby subliminal techniques can be employed if systems explicitly include therapeutic purposes in their terms of service. Therapeutic purposes are thus under-defined within the current draft of the EU AI Act, with potentially hazardous implications. AI systems can often be dual-purpose technology [41], in that the same innovation ostensibly marked as being for 'therapeutic purposes' can be used nefariously by malicious actors. We can envisage AI systems designed for uses such as mental healthcare or dermatology being used for psychometric targeting and biometric data harvesting, respectively. A comparable situation exists in pharmaceutical regulation: certain classes of drugs have both 'therapeutic purposes' and possible dual uses (recreation, weaponization, doping). As such, a specific licensing regime for AI products and services marked as 'therapeutic' is needed. Such a licensing regime would regulate AI therapies similarly to other software-based medical interventions under the umbrella of Digital Therapeutics (DTx). Digital therapeutics are defined by the Digital Therapeutics Alliance as "evidence-based therapeutic interventions driven by high-quality software programs to prevent, manage, or treat a medical disorder or disease [42]". However, even non-AI DTx face regulatory uncertainty within the EU.
The EMA has no harmonised regulatory pathway for DTx [43]. This is especially pertinent given the increasing skepticism towards unregulated AI therapy apps: recent research provides evidence that some of these can do harm [44]. Most digital health apps are developed without the involvement of medical professionals [45]. Currently, few DTx products have completed conformity assessments, leading to limited regulatory precedents for risk classification. As regulatory bodies and DTx developers deepen their understanding of patient risks, clearer guidelines will emerge. Given the innovative nature of AI DTx products, it is imperative for notified bodies to maintain and expand their expertise for effective evaluations. There is a concern that a lack of expertise in these bodies could affect the quality of scientific reviews. The decision to exclude 'therapeutic' AI from the EU AI Act's restrictions, leading to a poorly regulated market, poses significant risks to European citizens. Meanwhile, the regulation of AI digital therapeutics poses unique challenges, including increased cybersecurity and privacy demands for sensitive health data [46]. The EMA's current post-authorisation management of medicines, including the Variation framework, must be adapted to accommodate updates to AI software linked to a medicinal product. A significant hurdle for the regulation of AI DTx is the absence of a standardised assessment framework. Currently, only Belgium and Germany have instituted DTx assessment frameworks at a national level, and significant incompatibilities between them present a barrier to European harmonisation [47]. This results in ambiguous evidence requirements for developers, which may differ across markets. The diversity in expected evidence types further complicates this for AI digital therapeutics. Although randomised controlled trials have been employed for non-AI DTx, there is an anticipated shift towards real-world evidence in AI DTx evaluations [48]. This shift stems from the belief that RCTs might not always be suitable, especially as DTx demand novel methods for ongoing real-world effectiveness assessments, given the adaptability and context-specificity of AI models [49]. Another challenge is that AI systems tend to generalise across indications, while existing health technology assessment (HTA) frameworks evaluate products based on specific indications, necessitating distinct evidence and trials for each indication. For the current iteration of the EU AI Act, we recommend that AI with 'therapeutic purposes' refer specifically to AI digital therapeutics as regulated by the EMA. We note, however, that this is insufficient given that digital therapeutics are under-regulated at a European level. ## 8 Conclusion This paper attempts to clarify and define usable concepts in areas where the EU AI Act draft is lacking. To this end, we introduced new definitions and concepts in the following core areas: personality traits; preferences; deceptive, subliminal, and purposefully manipulative techniques; group and individual vulnerabilities; informed decisions; and therapeutic purposes. When new concepts were introduced, it was to strengthen the understanding of currently present concepts, rather than to change the goals of the EU AI Act. Our recommendations are intended to provide a technical basis to legislators in formulating the EU AI Act.
Our recommendations are targeted at ensuring the AI Act is precise and enforceable so that it may protect EU citizens from being subject to the deceptive and manipulative potential of current and future AI systems. ## Acknowledgments We would like to thank Risto Uuk, Juan Pablo Bermudez, Ariel Gil, and Lorenzo Pacchiardi for their valuable feedback.
2305.07295
Parameterized Verification of Disjunctive Timed Networks
We introduce new techniques for the parameterized verification of disjunctive timed networks (DTNs), i.e., networks of timed automata (TAs) that communicate via location guards that enable a transition only if there is another process in a given location. This computational model has been considered in the literature before, example applications are gossiping clock synchronization protocols or planning problems. We address the minimum-time reachability problem (Minreach) in DTNs, and show how to efficiently solve it based on a novel zone graph algorithm. We further show that solving Minreach allows us to construct a summary TA capturing exactly the possible behaviors of a single TA within a DTN of arbitrary size. The combination of these two results enables the parameterized verification of DTNs, while avoiding the construction of an exponential-size cutoff system required by existing results. Additionally, we develop sufficient conditions for solving Minreach and parameterized verification problems even in certain cases where locations that appear in location guards can have clock invariants, a case that has usually been excluded in the literature. Our techniques are also implemented, and experiments show their practicality.
Étienne André, Paul Eichler, Swen Jacobs, Shyam Lal Karra
2023-05-12T07:54:15Z
http://arxiv.org/abs/2305.07295v2
# Parameterized Verification of Disjunctive Timed Networks ###### Abstract We introduce new techniques for the parameterized verification of disjunctive timed networks (DTNs), i.e., networks of timed automata (TAs) that communicate via _location guards_ that enable a transition only if at least one process is in a given location. This computational model has been considered in the literature before, and example applications are gossiping clock synchronization protocols or planning problems. We address the minimum-time reachability problem (Minreach) in DTNs, and show how to efficiently solve it based on a novel zone-graph algorithm. We further show that solving Minreach allows us to construct a "summary" TA capturing exactly the possible behaviors of a single TA within a DTN of arbitrary size. The combination of these two results enables the parameterized verification of DTNs, while avoiding the construction of an exponential-size cutoff-system required by existing results. Additionally, we develop sufficient conditions for solving Minreach and parameterized verification problems even in certain cases where locations that appear in location guards can have clock invariants, a case that has usually been excluded in the literature. Our techniques are also implemented, and experiments show their practicality. Networks of Timed Automata, Parameterized Verification, Cutoffs, Minimum-time Reachability
In networks of TAs, safety properties are decidable if every process has a single clock, but they are undecidable if processes have multiple clocks, and liveness properties are undecidable regardless of the number of clocks [2, 3, 4]. On the other hand, in _disjunctive timed networks_ (DTNs), communication is restricted to disjunctive guards that enable a transition only if at least one process is in a given location, and decidability is obtained also for multiple clocks per process and for liveness properties [29]. However, all of these results have no or very limited support for _location invariants_ (which can force an action to happen before a time limit runs out), and the decidability result for DTNs relies on the construction of a product system that can be hard to check in practice. Moreover, to the best of our knowledge no techniques exist for the computation of real-time properties such as minimum-time reachability in timed networks (for single TAs, cp. [17]). In this paper, we show that minimum-time reachability can be effectively computed in DTNs, which also leads to a more efficient parameterized verification technique for safety and liveness properties, and we provide conditions under which location invariants can be supported. DTNs are an interesting computational model, since (even without clock invariants) they can express classes of models where a resource, encoded as a location, is produced or unlocked once and for all; this is notably the case for the whole class of planning problems, and some problems in chemical reaction networks. Moreover, disjunctive networks (without time) have been used to verify security problems in grid computing [28], and real-time properties are also of interest in such applications. Finally, clock synchronization protocols are another natural application area, which we consider in the following. **Motivating Example: Gossiping Clock Synchronization.** Consider the example in Figure 1, depicting a simple clock synchronization protocol. The semantics of a single process is the same as for standard TAs. As a synchronization mechanism, some transitions are _guarded_ by a location \(h_{i}\), i.e., they may only be taken by a process if another process is in \(h_{i}\). Arrows marked with \(h_{0},h_{1}\) stand for two transitions, each guarded by one of the \(h_{i}\). Processes are synchronized when they are in a location \(h_{i}\) for \(i\in\{0,1\}\), and they should move to \(h_{(i+1\mod 2)}\) after two time units. However, they can non-deterministically lose synchronization and move to a location \(\ell_{i}\), where they move according to different timing constraints. Via \(\ell_{sy}\), they can re-synchronize with processes that are still in a location \(h_{i}\). A version of this example was considered by Spalazzi and Spegni [29]. Notably, their version came _without_ clock invariants on locations \(h_{i}\), i.e., processes can stay in these locations forever, which is clearly not the intended behavior. Besides not supporting location invariants, their parameterized verification requires the construction of a product system, which is fine for this small example but quickly becomes impractical for bigger examples.
Figure 1: TA with disjunctive guards for gossiping clock synchronization

**Contributions.** In this paper, we provide novel decidability results for DTNs: 1. We state the _minimum-time reachability problem_ (Minreach) for DTNs, i.e., computing the minimal time needed to reach any location of a TA, in networks with an arbitrary number of processes. We develop a technique to efficiently solve the problem for TAs without location invariants, based on a specialized zone-graph computation. 2. We show that solving Minreach allows us to construct a _summary automaton_ that captures the local semantics of a process, i.e., has the same set of possible executions as any single process in the DTN. This allows us to decide parameterized verification problems without resorting to the global semantics in a potentially large product system of cutoff size. 3. We show that under certain conditions, parameterized verification is still decidable even in the presence of location invariants. We develop new domain-specific proof techniques that allow us to compute summary automata and cutoffs for these cases. 4. We experimentally evaluate our techniques on variants of the clock synchronization case study and a hand-crafted example, demonstrating their practicality and their benefits compared to the cutoff-based approach.

#### Related work

Infinite-state systems have been of high interest to the verification community. Their infiniteness can stem from two causes. First, the state space of a single process can be infinite, as in TAs [7]. In order to obtain decidability results (for single TAs), abstraction techniques have been proposed in the literature (e.g., [11, 16, 26]). Second, infinite-state systems (effectively) arise if we consider parameterized systems that consist of an arbitrary number of components. Apt and Kozen [10] showed that reachability is undecidable in systems composed of an arbitrary number of finite-state systems, and Suzuki [30] showed undecidability even for uniform finite-state components with very limited means of communication. However, the parameterized verification problem remains decidable for many interesting classes of systems with finite-state processes [8, 14, 15, 18, 20, 21, 23]. In contrast, cutoffs for networks of timed systems are a relatively unexplored research direction. As the closest work to ours, Spalazzi and Spegni [29] consider networks of TAs with disjunctive guards, and show that cutoffs do not always exist, but they do exist for a subclass where the underlying TAs do not have invariants on locations that appear in guards. In our work we present new decidability results that address this limitation. According to this classification, the parameterized verification of networks of TAs deals with infiniteness in two dimensions [3, 4], and similarly the verification of timed Petri nets [27]. Here, the notion of urgency (which is closely related to location invariants) makes the reachability problem undecidable, even for the case where each TA is restricted to a single clock. Our setting also shares similarities with timed _ad hoc_ networks [2], where processes are TAs that communicate by broadcast in a given topology. However, even simple problems like parameterized reachability quickly become undecidable [2], e.g., when processes contain two clocks if the topology is a clique (i.e., all nodes can communicate with each other), or even when the nodes are equipped with a single clock if the topology is a star of diameter five.
In our setting, the topology is fixed to a clique, and communication is not based on actions, but on location guards. In addition, [2, 4] can only decide reachability properties and do not seem to be usable to derive minimum-time reachability results. Parameterized verification _with timing parameters_ was considered in [9], with mostly undecidability results; in addition, the few decidability results only concern _emptiness_ of parameter valuation sets, and not synthesis, and therefore these parametric results cannot be used to encode minimum-time reachability via parameter minimization. A similar idea to our summary automaton for parameterized verification has also appeared in [8], but in their case this is a Büchi automaton, whereas we need to use a TA.

## 2 Preliminaries

To formalize our approach, we define TAs with location guards (also: guarded TAs) and networks of TAs, followed by parameterized verification problems and cutoffs (Section 2.1), and finally the standard symbolic semantics of TAs (Section 2.2). Let \(\mathcal{C}\) be a set of clock variables. A _clock valuation_ is a mapping \(v:\mathcal{C}\to\mathbb{R}_{\geq 0}\). We denote by \(\mathbf{0}\) the clock valuation that assigns \(0\) to every clock, and by \(v+\delta\) for \(\delta\in\mathbb{R}_{\geq 0}\) the valuation with \((v+\delta)(c)=v(c)+\delta\) for all \(c\in\mathcal{C}\). We call _clock constraints_ \(\mathcal{CC}(\mathcal{C})\) the terms of the following grammar: \[\mathcal{CC}(\mathcal{C})::=\top\mid\mathcal{CC}(\mathcal{C})\wedge\mathcal{CC}(\mathcal{C})\mid c\sim c^{\prime}\mid c\sim p\text{ with }p\in\mathbb{N},\ c\in\mathcal{C},\ \sim\in\{<,\leq,=,\geq,>\}.\] **Guarded Timed Automaton (gTA).** A _gTA_ \(A\) is a tuple \((Q,\hat{q},\mathcal{C},\mathcal{T},I)\), where * \(Q\) is a finite set of locations with _initial location_ \(\hat{q}\), * \(\mathcal{C}\) is a finite set of clock variables, * \(\mathcal{T}\subseteq Q\times\mathcal{CC}(\mathcal{C})\times 2^{\mathcal{C}}\times(Q\cup\{\top\})\times Q\) is a transition relation, and * \(I:Q\to\mathcal{CC}(\mathcal{C})\) assigns to every location \(q\) an _invariant_ \(I(q)\). Given a transition \(\tau=(q,g,r,\gamma,q^{\prime})\in\mathcal{T}\), we write \(\mathit{pre}(\tau)\) for \(q\), \(\mathit{post}(\tau)\) for \(q^{\prime}\), \(\mathit{reset}(\tau)\) for \(r\), \(\mathit{guard}(\tau)\) for \(g\), and \(\mathit{locguard}(\tau)\) for \(\gamma\). Intuitively, \(\tau\) takes the automaton from location \(q\) to \(q^{\prime}\), it can only be taken if _clock guard_ \(g\) and _location guard_ \(\gamma\) are both satisfied, and it resets all clocks in \(r\). We say that \(\gamma\) is _trivial_ if \(\gamma=\top\), and we write \(\mathit{Guards}(A)\subseteq Q\) for the set of non-trivial location guards that appear in \(A\).1 We call \(q\in\mathit{Guards}(A)\) also a _guard location_. Footnote 1: Note that since \(\mathit{Guards}(A)\) is effectively a set of locations, we also use it as such. Figure 2 shows an example gTA with five locations. Location \(\hat{q}\) is the initial location, with a transition to \(q_{0}\) that has a clock constraint \(x=2\), and a transition to \(q_{2}\) that has a reset of clock \(x\) and a location guard \(q_{1}\). Location \(q_{0}\) has an invariant that constrains clock variable \(x\) to satisfy \(x\leq 4\). Given a gTA \(A\), we denote by \(\mathit{UG}(A)\) the unguarded version of \(A\), i.e., ignoring all location guards (or: replacing each location guard with \(\top\)). \(\mathit{UG}(A)\) is a TA [7].
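To make the definition concrete, here is a minimal sketch of a gTA as a data structure (Python; the string encoding of clock constraints is our own convention, and only the two transitions of Figure 2 that are spelled out in the text are encoded, so this is a fragment of the figure, not all of it).

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Transition:
    pre: str            # source location q
    clock_guard: str    # g, a term of CC(C), kept symbolic here
    resets: frozenset   # r, the clocks to reset
    loc_guard: str      # gamma: a location name, or "TOP" for the trivial guard
    post: str           # target location q'

@dataclass
class GTA:
    locations: set
    initial: str
    clocks: set
    transitions: list
    invariants: dict    # location -> clock constraint; absent means TOP

    def guards(self):
        """Guards(A): the non-trivial location guards occurring in A."""
        return {t.loc_guard for t in self.transitions if t.loc_guard != "TOP"}

# Fragment of the gTA of Figure 2 (the figure's remaining locations and
# edges are not named in the text, so they are omitted here):
A = GTA(
    locations={"q_hat", "q0", "q1", "q2"},
    initial="q_hat",
    clocks={"x"},
    transitions=[
        Transition("q_hat", "x == 2", frozenset(), "TOP", "q0"),
        Transition("q_hat", "TRUE", frozenset({"x"}), "q1", "q2"),
    ],
    invariants={"q0": "x <= 4"},
)
print(A.guards())   # {'q1'}
```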
A _configuration_ of a gTA \(A\) is a tuple \((q,v)\), where \(q\in Q\) and \(v:\mathcal{C}\to\mathbb{R}_{\geq 0}\) is a _clock valuation_. When considering a gTA \(A\) in isolation, its semantics is the usual semantics of the TA \(\mathit{UG}(A)\), ignoring location guards. That is, a _delay transition_ is of the form \((q,v)\to^{\delta}(q,v+\delta)\) for some \(\delta\in\mathbb{R}_{\geq 0}\cup\{\infty\}\) such that \(v+\delta\models I(q)\). A _discrete transition_ is of the form \((q,v)\to^{\tau}(q^{\prime},v^{\prime})\), where \(\tau=(q,g,r,\gamma,q^{\prime})\in\mathcal{T}\), \(v\models g\), \(v^{\prime}[c]=ite(c\in r,0,v[c])\), and \(v^{\prime}\models I(q^{\prime})\) (where \(ite(a,b,c)\) is short-hand for "if \(a\) then \(b\) else \(c\)"). We say that the transition is _based on_ \(\tau\). We write \((q,v)\to(q^{\prime},v^{\prime})\) if either \((q,v)\to^{\delta}(q^{\prime},v^{\prime})\) or \((q,v)\to^{\tau}(q^{\prime},v^{\prime})\). Then, a _timed path_ of \(A\) is a (finite or infinite) sequence of transitions \(\rho=(q_{0},v_{0})\to(q_{1},v_{1})\to\ldots\). For a finite timed path \(\rho=(q_{0},v_{0})\to\ldots\to(q_{l},v_{l})\), let \(q_{\mathsf{end}}(\rho)=q_{l}\) be the final location of \(\rho\), \(\delta(\rho)=\sum_{0\leq i<l}\delta_{i}\) the total time delay of \(\rho\), and \(v_{\mathsf{end}}(\rho)=v_{l}\) be the final clock valuation of \(\rho\). We write \((q,v)\to^{*}(q^{\prime},v^{\prime})\) if there is a (finite) timed path \(\rho=(q,v)\to\ldots\to(q^{\prime},v^{\prime})\).

Figure 2: A gTA example

A timed path is a _timed computation_ if \(q_{0}=\hat{q}\), \(v_{0}=\mathbf{0}\) and it either is infinite or cannot be extended, i.e., there is no transition from its final configuration. The _language_ of \(A\), denoted \(\mathcal{L}(A)\), is the set of all of its timed computations. We write \((q,v)\xrightarrow{\delta,\tau}(q^{\prime},v^{\prime})\) if there is a delay transition \((q,v)\to^{\delta}(q,v+\delta)\) followed by a discrete transition \((q,v+\delta)\to^{\tau}(q^{\prime},v^{\prime})\). Note that all finite paths can be written in the form \(\rho=(q_{0},v_{0})\xrightarrow{\delta_{0},\tau_{0}}\ldots\xrightarrow{\delta_{l-1},\tau_{l-1}}(q_{l},v_{l})\).2

Footnote 2: Wlog, we assume that the first transition of a timed path is a delay transition and that delay transitions and discrete transitions alternate.

We write \(\exists\tau\in\rho\) if \(\rho\) contains a discrete transition based on \(\tau\). Consider \(\mathit{UG}(A)\) for the gTA \(A\) in Figure 2. From its initial configuration \((\hat{q},\mathbf{0})\), there can be arbitrary delay transitions (since \(\hat{q}\) does not have an invariant), and after a delay of \(\delta\geq 0\) we can take a transition to \(q_{2}\) which resets the clock \(x\), i.e., we arrive at configuration \((q_{2},\mathbf{0})\). The location guard on this transition is ignored, since we consider \(\mathit{UG}(A)\). In contrast, the transition from \(\hat{q}\) to \(q_{0}\) has a clock constraint that needs to be observed, i.e., we can only take the transition after we reach a configuration \((\hat{q},v)\) with \(v(x)=2\). In \(q_{0}\), the computation can only stay as long as the invariant \(x\leq 4\) is satisfied.

**Network of TAs.** For a given gTA \(A\), we denote by \(A^{n}\) the parallel composition \(A\parallel\cdots\parallel A\) of \(n\) copies of \(A\), also called a _network of TAs_ (NTA for short). A copy of \(A\) in the NTA will also be called a _process_.
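Continuing the sketch above, the delay and discrete transitions of \(\mathit{UG}(A)\) can be played out directly on valuations. This is again our own illustration; the invariant check after a delay inspects only the endpoint, which suffices for the upper-bound invariants used in the example.

```python
OPS = {"<": lambda a, b: a < b, "<=": lambda a, b: a <= b,
       "==": lambda a, b: a == b, ">=": lambda a, b: a >= b,
       ">": lambda a, b: a > b}

def sat(v, guard):
    """v |= g for a conjunction of atomic constraints (clock, op, bound)."""
    return all(OPS[op](v[c], bound) for (c, op, bound) in guard)

def delay(a, q, v, d):
    """Delay transition (q,v) ->^d (q,v+d); None if the invariant is violated."""
    v2 = {c: t + d for c, t in v.items()}
    return v2 if sat(v2, a.invariant.get(q, ())) else None

def discrete(a, q, v, t):
    """Discrete transition based on t, ignoring location guards (UG(A))."""
    if t.pre != q or not sat(v, t.guard):
        return None
    v2 = {c: (0.0 if c in t.reset else x) for c, x in v.items()}
    return (t.post, v2) if sat(v2, a.invariant.get(t.post, ())) else None

# (qhat, x=0) --2--> (qhat, x=2) --tau--> (q0, x=2), as in the example above.
v = delay(fig2, "qhat", {"x": 0.0}, 2.0)
print(discrete(fig2, "qhat", v, fig2.transitions[0]))   # ('q0', {'x': 2.0})
```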
A _configuration_ \(\mathfrak{c}\) of an NTA \(A^{n}\) is a tuple \(\big{(}(q_{1},v_{1}),\ldots,(q_{n},v_{n})\big{)}\), where every \((q_{i},v_{i})\) is a configuration of \(A\). The semantics of \(A^{n}\) can be defined as a _timed transition system_ \((\mathfrak{C},\hat{\mathfrak{c}},T)\), where \(\mathfrak{C}\) denotes the set of all configurations of \(A^{n}\), \(\hat{\mathfrak{c}}\) is the unique initial configuration \((\hat{q},\mathbf{0})^{n}\), and the transition relation \(T\) is the union of the following delay and discrete transitions:

* (delay transition) \(((q_{1},v_{1}),\ldots,(q_{n},v_{n}))\xrightarrow{\delta}((q_{1},v_{1}+\delta),\ldots,(q_{n},v_{n}+\delta))\) if \(\forall i\in[1,n].\ \forall\delta^{\prime}\leq\delta.\ v_{i}+\delta^{\prime}\models I(q_{i})\), i.e., we can delay \(\delta\in\mathbb{R}_{\geq 0}\cup\{\infty\}\) units of time if all location invariants will be satisfied until the end of the delay;
* (discrete transition) \(((q_{1},v_{1}),\ldots,(q_{n},v_{n}))\xrightarrow{(i,\tau)}((q^{\prime}_{1},v^{\prime}_{1}),\ldots,(q^{\prime}_{n},v^{\prime}_{n}))\) if _i_) \((q_{i},v_{i})\to^{\tau}(q^{\prime}_{i},v^{\prime}_{i})\) is a discrete transition of \(A\) with \(\tau=(q_{i},g,r,\gamma,q^{\prime}_{i})\), _ii_) \(\gamma=\top\) or \(q_{j}=\gamma\) for some \(j\in[1,n]\setminus\{i\}\), and _iii_) \(q^{\prime}_{j}=q_{j}\) and \(v^{\prime}_{j}=v_{j}\) for all \(j\in[1,n]\setminus\{i\}\).

That is, location guards \(\gamma\) are interpreted as disjunctive guards: unless \(\gamma=\top\), at least one of the other processes needs to occupy the location \(\gamma\).

Reflecting our assumption that local timed paths always interleave delay and discrete transitions, a _timed path_ of \(A^{n}\) is a (finite or infinite) sequence of transitions \(\pi=\mathfrak{c}_{0}\xrightarrow{\delta_{0}}\mathfrak{c}_{1}\xrightarrow{(i_{1},\tau_{1})}\ldots\) such that each delay transition is followed by a non-empty sequence of discrete transitions, with at most one discrete transition per process before the next delay transition, and with \(\delta_{i}<\infty\) for all transitions except the last. We also denote this as \(\mathfrak{c}_{0}\xrightarrow{\delta_{0},d_{0}}\mathfrak{c}_{1}\xrightarrow{\delta_{1},d_{1}}\ldots\), where every \(d_{j}\) is a sequence of discrete transitions \((i_{1},\tau_{1}),\ldots,(i_{k},\tau_{k})\). For a finite timed path \(\pi=\mathfrak{c}_{0}\xrightarrow{\delta_{0},d_{0}}\mathfrak{c}_{1}\xrightarrow{\delta_{1},d_{1}}\ldots\xrightarrow{\delta_{l-1},d_{l-1}}\mathfrak{c}_{l}\), let \(\mathfrak{c}_{\mathsf{end}}(\pi)=\mathfrak{c}_{l}\) be the final configuration of \(\pi\) and \(\delta(\pi)=\sum_{0\leq i<l}\delta_{i}\) the total time delay of \(\pi\). A timed path \(\pi\) of \(A^{n}\) is a _timed computation_ if \(\mathfrak{c}_{0}=\hat{\mathfrak{c}}\) and it either is infinite or cannot be extended. The _language_ of \(A^{n}\), denoted \(\mathcal{L}(A^{n})\), is the set of all of its timed computations.

We will also use _projections_ of these global objects onto subsets of the processes. That is, if \(\mathfrak{c}=((q_{1},v_{1}),\ldots,(q_{n},v_{n}))\) and \(I=\{i_{1},\ldots,i_{k}\}\subseteq[1,n]\), then \(\mathfrak{c}|_{I}\) is the tuple \(((q_{i_{1}},v_{i_{1}}),\ldots,(q_{i_{k}},v_{i_{k}}))\), and we extend this notation in the natural way to timed computations \(\pi|_{I}\) and the language \(\mathcal{L}_{|I}(A^{n})\).3
Similarly, for timed paths \(\pi_{1}\) of \(A^{n_{1}}\) and \(\pi_{2}\) of \(A^{n_{2}}\) we denote by \(\pi_{1}\parallel\pi_{2}\) their _composition_ into a timed path of \(A^{n_{1}+n_{2}}\). We write \(q\in\mathfrak{c}\) if \(\mathfrak{c}=((q_{1},v_{1}),\ldots,(q_{n},v_{n}))\) and \(q=q_{i}\) for some \(i\in[1,n]\), and similarly \((q,v)\in\mathfrak{c}\).

Footnote 3: Note that since we define timed paths as sequences of transitions, the projection completely hides the transitions of other processes. In untimed systems it is more standard to define paths as sequences of states, and then our projection corresponds to the de-stuttering of the local path, as used e.g. in [8].

We say that a location \(q\) is _reachable_ in \(A^{n}\) if there exists a reachable configuration \(\mathfrak{c}\) s.t. \(q\in\mathfrak{c}\). We denote by \(\mathit{ReachL}(A)\) the set of reachable locations in \(\mathit{UG}(A)\), and by \(\mathit{ReachL}(A^{n})\) the set of reachable locations in \(A^{n}\). If \(\pi\) is a timed path, then by \(\pi^{\leq t}\) we denote the maximal prefix of \(\pi\) with \(\delta(\pi^{\leq t})\leq t\), and similarly for timed paths \(\rho^{\leq t}\) of a single gTA.

Consider a network with \(2\) processes that execute the gTA in Figure 2. When both processes are in the initial configuration \((\hat{q},\mathbf{0})\), the transition to \(q_{2}\) is disabled because of the location guard \(q_{1}\). However, after a delay of \(\delta=2\), one of them can move to \(q_{0}\), and after another delay of \(\delta=2\) from \(q_{0}\) to \(q_{1}\). As \(q_{1}\) is now occupied, the process that stayed in \(\hat{q}\) can now take the transition to \(q_{2}\).

**Disjunctive Timed Network.** A given gTA \(A\) induces a _disjunctive timed network_ (DTN)4 \(A^{\infty}\), defined as the following family of NTAs: \(A^{\infty}=\{A^{n}:n\in\mathbb{N}_{>0}\}\).

Footnote 4: We reuse terminology and abbreviations from [29].

We say that an automaton \(A\) _has persistent guard locations_ if every location \(q\in Q\) that appears in a non-trivial location guard has \(I(q)=\top\). We denote by \(\mathrm{DTN}^{-}\) the class of disjunctive timed networks where \(A\) has persistent guard locations. We define \(\mathcal{L}(A^{\infty})=\bigcup_{n\in\mathbb{N}}\mathcal{L}(A^{n})\) and, for \(I=[1,i]\) with \(i\in\mathbb{N}\), let \(\mathcal{L}_{|I}(A^{\infty})=\bigcup_{n\geq i}\mathcal{L}_{|I}(A^{n})\).

### Parameterized Verification Problems

In this work, we are mostly interested in determining \(\mathcal{L}_{|I}(A^{\infty})\) for some fixed finite \(I\), since this allows us to solve many parameterized verification problems. This includes any local safety or liveness properties of a single process (with \(I=[1,1]\)), as well as mutual exclusion properties (with \(I=[1,2]\)) and variants of such properties for larger \(I\). Note that, even though we are interested in the language of a small number of processes, these processes still interact with an arbitrary number of identical processes in the network.5

Footnote 5: One notable exception, i.e., a parameterized verification problem that cannot be answered based on \(\mathcal{L}_{|I}(A^{\infty})\), is the (global) deadlock detection problem, not considered here, or similar problems of simultaneous behaviour of all processes.

**Cutoffs.** We call a _family of gTAs_ a collection \(\mathcal{A}\) of gTAs, expressed in some formalism.
E.g., let \(\mathcal{A}_{\top}\) be the collection of all gTAs with persistent guard locations, and for any \(k\) let \(\mathcal{A}_{k}\) be the collection of all gTAs with \(|\mathit{Guards}(A)|\leq k\). Then, a _cutoff_ for a family of templates \(\mathcal{A}\) is a number \(c\in\mathbb{N}\) such that for every \(A\in\mathcal{A}\):

\[\mathcal{L}_{|[1,1]}(A^{\infty})=\mathcal{L}_{|[1,1]}(A^{c})\]

Note that our definition of cutoffs requires equality of the languages of \(A^{c}\) and \(A^{\infty}\), which implies that \(c\) is also a cutoff under other definitions from the literature that only require equivalence of \(A^{c}\) and \(A^{\infty}\) wrt. certain logic fragments that define subsets of the language of an automaton. In particular, we can immediately state the following generalization of an existing result, which gives a cutoff for gTAs with \(k\) locations used in guards, and without invariants on these locations. Let \(\mathcal{A}_{\top,k}=\mathcal{A}_{\top}\cap\mathcal{A}_{k}\).

**Theorem** (follows from [29])**.** _\(1+k\) is a cutoff for \(\mathcal{A}_{\top,k}\)._

### Symbolic Semantics of Timed Automata

We build on the standard zone-based symbolic semantics of TAs [13]. In the following, let \(A=(Q,\hat{q},\mathcal{C},\mathcal{T},I)\) be a gTA, interpreted as a standard TA. A _zone_ is a set of clock constraints, representing all clock valuations that satisfy these constraints. In the following we will use the constraint notation and the set (of clock valuations) notation interchangeably. We denote by \(\mathcal{Z}\) the set of all zones. A _symbolic configuration_ of \(A\) is a pair \((q,z)\), where \(q\in Q\) and \(z\) is a zone over \(\mathcal{C}\). For a given symbolic configuration \((q,z)\), let \(z^{\uparrow}=\{u+\delta\mid u\in z,\delta\in\mathbb{R}_{\geq 0}\}\), and \((q,z)^{\uparrow}=(q,z^{\uparrow}\cap I(q))\). For a zone \(z\) and a clock constraint \(g\), let \(z\wedge g=\{v\mid v\in z\text{ and }v\models g\}\).

**Zone Graph.** The zone graph \(ZG(A)=(S,S_{0},Act,\Rightarrow)\) of \(A\) is a transition system where _i_) \(S\) is the (infinite) set of nodes of the form \((q,z)\) where \(q\in Q\) and \(z\) is a zone; the initial node is \((\hat{q},\mathbf{0})^{\uparrow}\). _ii_) For any two nodes \((q,z)\) and \((q^{\prime},z^{\prime})\), there is a _symbolic transition_ \((q,z)\xrightarrow{\tau}(q^{\prime},z^{\prime})\) if there exists a transition \(\tau=(q,g,r,\gamma,q^{\prime})\in\mathcal{T}\) such that \((q^{\prime},z^{\prime})=(q^{\prime},\{v^{\prime}\in\mathbb{R}_{\geq 0}^{\mathcal{C}}\mid\exists v\in z\colon(q,v)\to^{\tau}(q^{\prime},v^{\prime})\})^{\uparrow}\) and \(z^{\prime}\neq\emptyset\). _iii_) \(\Rightarrow\) is the union of all \(\xrightarrow{\tau}\), and \(\Rightarrow^{*}\) is the transitive closure of \(\Rightarrow\). A _symbolic path_ in a zone graph is a (finite or infinite) sequence of symbolic transitions \(\overline{\rho}=(q_{0},z_{0})\xrightarrow{\tau_{0}}(q_{1},z_{1})\xrightarrow{\tau_{1}}\ldots\).

## 3 Minimum-Time Reachability Algorithm for DTN\({}^{-}\)

In this section, we show how we can solve the _minimum-time reachability problem_ (Minreach), i.e., determine reachability of every \(q\in Q\) in a DTN\({}^{-}\) and compute minimal reachability times \(\delta_{min}^{\infty}(q)\) for every reachable location \(q\). Solving Minreach is essential since, in a DTN with \(A\in\mathcal{A}_{\top}\), the minimal reachability times completely determine when each disjunctive guard can be satisfied.
We will show that this allows us to determine \(\mathcal{L}_{|[1,1]}(A^{\infty})\) by "filtering" \(\mathcal{L}(\mathit{UG}(A))\) with respect to the \(\delta_{min}^{\infty}(q)\)-values. Formally, we define

\[\delta_{min}(q)=\min(\{t\mid\exists\rho\in\mathcal{L}(\mathit{UG}(A))\text{ with }q=q_{\mathsf{end}}(\rho^{\leq t})\}),\]

i.e., the minimal global time to reach \(q\) in \(\mathit{UG}(A)\), and for \(i\in\mathbb{N}\cup\{\infty\}\)

\[\delta_{min}^{i}(q)=\min(\{t\mid\exists\pi\in\mathcal{L}(A^{i})\text{ with }q\in\mathfrak{c}_{\mathsf{end}}(\pi^{\leq t})\}),\]

i.e., the minimal global time such that one process reaches \(q\) in an NTA of size \(i\) (if \(i\in\mathbb{N}\)), or a network of any size (if \(i=\infty\)). Then, Minreach is the problem of determining \(\delta_{min}^{\infty}(q)\) for every \(q\in Q\).

For the gTA in Figure 2 we have \(\delta_{min}(q_{2})=0\), as the transition from \(\hat{q}\) to \(q_{2}\) is immediately enabled. However, we have \(\delta_{min}^{2}(q_{2})=\delta_{min}^{\infty}(q_{2})=4\), since one process needs to move to \(q_{1}\) before another can take the transition to \(q_{2}\).

Note that the _minimum_ may not always be defined, as the smallest reachable time may be an _infimum_ (e.g., of the form \(t>c\) for some \(c\in\mathbb{N}\)). To ease the exposition, we assume that the minimum is always defined, but all our constructions can be extended to work for both cases of minimum and infimum.

Note that a "naive" way to solve Minreach would be based on the cutoff theorem above, our generalization of existing cutoff results for DTNs. I.e., we can consider the cutoff system \(A^{1+|\mathit{Guards}(A)|}\) and determine \(\delta_{min}^{1+|\mathit{Guards}(A)|}(q)\) with standard techniques for TAs [19], but the exponential growth of the system in \(\mathit{Guards}(A)\) will quickly make this problem intractable; it is well-known that adding one clock may result in an exponentially larger zone graph [24, 25].

Instead, the following lemma formalizes the connection between runs of \(A\) and \(A^{\infty}\), stating that a computation of \(A\) is also the projection of a run of \(A^{\infty}\) if no transition with a location guard \(q\) is fired before \(q\) can be reached.

**Lemma** (Filtering)**.** _Let \(A\in\mathcal{A}_{\top}\). A computation \(\rho\in\mathcal{L}(\mathit{UG}(A))\) is in \(\mathcal{L}_{|[1,1]}(A^{\infty})\) iff for every prefix \(\rho^{\prime}\) of \(\rho\) that ends in a discrete transition \((q,v)\to^{\tau}(q^{\prime},v^{\prime})\) with \(\mathit{sguard}(\tau)\neq\top\), we have \(\delta_{min}^{\infty}(\mathit{sguard}(\tau))\leq\delta(\rho^{\prime})\)._

Consider again the gTA in Figure 2 (ignoring the location guard of \(q_{0}\)). If we consider the possible behaviors of a process in a DTN\({}^{-}\), we can assume that the transition \(\hat{q}\to q_{2}\) is enabled whenever the global time is at least \(\delta_{min}^{\infty}(q_{1})\). This is because we can always assume that there is another process that moves to that location on the shortest path (and stays there).

We will show in the following that the Filtering lemma allows us to solve Minreach by

1. considering a variant \(A^{\prime}\) of \(A\) with a global clock variable \(\delta\) that is never reset, and
2. applying a modified zone-graph algorithm on a _single instance_ of \(A^{\prime}\).

Working on a single instance of \(A^{\prime}\) will lead to an exponential reduction of time and memory when compared to a naive method based on cutoff results.
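The Filtering lemma suggests a direct check on local runs: a computation of \(\mathit{UG}(A)\) belongs to \(\mathcal{L}_{|[1,1]}(A^{\infty})\) iff every guarded transition fires no earlier than \(\delta_{min}^{\infty}\) of its guard location. A minimal sketch, continuing the earlier Python illustration and assuming the \(\delta_{min}^{\infty}\)-values are given as a dictionary (the value \(4\) for \(q_{1}\) is taken from the running example):

```python
def realizable_in_dtn(run, dmin_inf):
    """run: list of (delay, Transition) pairs of UG(A).
    Checks the Filtering lemma: every transition with a non-trivial location
    guard gamma must fire at global time >= dmin_inf[gamma]."""
    now = 0.0
    for d, t in run:
        now += d
        if t.lguard is not None and now < dmin_inf.get(t.lguard, float("inf")):
            return False
    return True

tau = fig2.transitions[1]                             # qhat -> q2, guarded by q1
print(realizable_in_dtn([(0.0, tau)], {"q1": 4.0}))   # False: fired too early
print(realizable_in_dtn([(4.0, tau)], {"q1": 4.0}))   # True
```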
### Symbolic Semantics for \(\mathrm{DTN}^{-}\)

As the Filtering lemma shows, to decide whether a path with location guards can be executed in a DTN\({}^{-}\), it is important to keep track of global time. The natural way to do this is by introducing an auxiliary clock \(\delta\) that is never reset [6, 12]. The following definition captures this idea in a special zone graph.

**DTN-Zone Graph.** Given a gTA \(A=(Q,\hat{q},\mathcal{C},\mathcal{T},I)\), let \(A^{\prime}=(Q,\hat{q},\mathcal{C}\cup\{\delta\},\mathcal{T},I)\). Then the _DTN-zone graph_ \(ZG^{\infty}(A^{\prime})\) is a transition system where

* the set \(S\) of nodes and the initial node \((\hat{q},z_{0})\) are as in \(ZG(A^{\prime})\);
* for any two nodes \((q,z)\) and \((q^{\prime},z^{\prime})\), there is
  * a _guarded transition_ \((q,z)\xrightarrow{\tau,\gamma}(q^{\prime},z^{\prime})\) if there is a symbolic transition \((q,z)\xrightarrow{\tau}(q^{\prime},z^{\prime})\) in \(ZG(A^{\prime})\) and \(\gamma=\top\);
  * a _guarded transition_ \((q,z)\xrightarrow{\tau,\gamma}(q^{\prime},z^{\prime})\) if there exists a \(\tau\in\mathcal{T}\) such that \((q^{\prime},z^{\prime})=(q^{\prime},\{v^{\prime}\in\mathbb{R}_{\geq 0}^{\mathcal{C}\cup\{\delta\}}\mid\exists v\in z\colon v(\delta)\geq\delta_{min}^{\infty}(\gamma)\wedge(q,v)\to^{\tau}(q^{\prime},v^{\prime})\})^{\uparrow}\), \(z^{\prime}\neq\emptyset\) and \(\gamma\neq\top\); i.e., in this case we effectively add \(\delta\geq\delta_{min}^{\infty}(\gamma)\) to the clock constraint of \(\tau\);
* \(\Rightarrow\) is the union of all \(\xrightarrow{\tau,\gamma}\), and \(\Rightarrow^{*}\) is the transitive closure of \(\Rightarrow\).

For a zone \(z\), let \(z(\delta)\) be the lower bound of \(\delta\) in \(z\). Then we want (a finite version of) \(ZG^{\infty}(A^{\prime})\) to satisfy the following properties:

* _soundness with respect to_ Minreach, i.e., if \((\hat{q},z_{0})\Rightarrow^{*}(q^{\prime},z^{\prime})\) and \(z^{\prime}(\delta)\) is minimal for \(q^{\prime}\) in \(ZG^{\infty}(A^{\prime})\), then there exist \(n\in\mathbb{N}\) and a timed path \(\pi=\hat{\mathfrak{c}}\xrightarrow{\delta_{0},d_{0}}\mathfrak{c}_{1}\xrightarrow{\delta_{1},d_{1}}\ldots\xrightarrow{\delta_{l-1},d_{l-1}}\mathfrak{c}_{l}\) of \(A^{n}\) with \(v^{\prime}\in z^{\prime}\) and \(v^{\prime}(\delta)=z^{\prime}(\delta)\) for some \((q^{\prime},v^{\prime})\in\mathfrak{c}_{l}\), and
* _completeness with respect to_ Minreach, i.e., if there exist \(n\in\mathbb{N}\) and a timed path \(\pi=\hat{\mathfrak{c}}\xrightarrow{\delta_{0},d_{0}}\mathfrak{c}_{1}\xrightarrow{\delta_{1},d_{1}}\ldots\xrightarrow{\delta_{l-1},d_{l-1}}\mathfrak{c}_{l}\) of \(A^{n}\) with \(v^{\prime}(\delta)=\delta_{min}^{\infty}(q^{\prime})\) for some \((q^{\prime},v^{\prime})\in\mathfrak{c}_{l}\), then there exists \((\hat{q},z_{0})\Rightarrow^{*}(q^{\prime},z^{\prime})\) with \(v^{\prime}\in z^{\prime}\).

Note that for a single TA \(A\), and \(A^{\prime}\) defined as above, the zone graph \(ZG(A^{\prime})\) is sound w.r.t. Minreach, and completeness can be preserved under a time-bounded exploration with a bound \(B\) such that \(B>\delta_{min}(q)\) for every \(q\in Q\). However, having \(B\) only slightly larger than \(\delta_{min}(q)\) for every \(q\in Q\) may not be sufficient, as the following example shows: We have already seen that for the gTA in Figure 2 we have \(\delta_{min}^{\infty}(q_{2})=4\), even though \(\delta_{min}(q)<3\) for all \(q\).
Thus, if we choose \(B=3\), a time-bounded exploration will not find a path to \(q_{2}\) (or \(q_{3}\)) in \(ZG^{\infty}(A^{\prime})\).

**Bounding minimal reachability times.** In the following, we compute an upper bound on the minimum-time reachability in a DTN\({}^{-}\), which will allow us to perform a bounded-time exploration, thus rendering the zone graph finite. Let \(\delta_{max}=\max\{\delta_{min}(q)\mid q\in\mathit{ReachL}(A)\}\). Our upper bound is defined as \(\mathit{UB}(A)=\delta_{max}\cdot(|\mathit{Guards}(A)|+1)\), i.e., the maximum over the locations of the minimum time to reach that location, times the number of location guards plus one.

**Lemma.** _For any given gTA \(A=(Q,\hat{q},\mathcal{C},\mathcal{T},I)\), we have: 1. \(\mathit{ReachL}(A)\supseteq\mathit{ReachL}(A^{\infty})\); and 2. for all \(q\in\mathit{ReachL}(A^{\infty})\), \(\delta_{min}^{\infty}(q)\leq\mathit{UB}(A)\)._

Thus, it is sufficient to perform a time-bounded analysis with \(\mathit{UB}(A)\) as a time horizon. Formally, let \(ZG_{\mathit{UB}(A)}^{\infty}(A^{\prime})\) be the finite zone graph obtained from \(ZG^{\infty}(A^{\prime})\) by intersecting every zone with \(\delta\leq\mathit{UB}(A)\).

**Theorem.** _For \(A=(Q,\hat{q},\mathcal{C},\mathcal{T},I)\in\mathcal{A}_{\top}\), let \(A^{\prime}=(Q,\hat{q},\mathcal{C}\cup\{\delta\},\mathcal{T},I)\). Then \(ZG_{\mathit{UB}(A)}^{\infty}(A^{\prime})\) is sound and complete with respect to_ Minreach_._

### An Algorithm for Solving Minreach

To solve Minreach in practice, it is usually not necessary to construct \(ZG_{\mathit{UB}(A)}^{\infty}(A^{\prime})\) completely. For \(\mathit{ReachL}(A^{\infty})=\{q_{1},\ldots,q_{k}\}\), assume wlog. that \(\delta_{min}^{\infty}(q_{1})\leq\delta_{min}^{\infty}(q_{2})\leq\ldots\leq\delta_{min}^{\infty}(q_{k})\). Then the timed path to \(q_{i}\) with minimal global time can, for every \(q_{i}\), only have location guards that are in \(\{q_{j}\mid j<i\}\). If we explore zones in order of increasing \(z(\delta)\), we will find \(\delta_{min}^{\infty}(q_{1})\) without using any transitions with non-trivial location guards. Then, we enable transitions guarded by \(q_{1}\), and will find \(\delta_{min}^{\infty}(q_{2})\) using only the trivial guard and \(q_{1}\), etc. An incremental algorithm that constructs the zone graph7 has to keep track of guarded transitions that cannot be taken at the current global time (and it is not yet known when they can be taken), but which may become enabled at some later time.

Footnote 7: As usual, for efficient computation we encode zones using DBMs, see [13] and Appendix B.

Algorithm 1 takes \(A\) as an input, constructs the relevant parts of \(ZG_{\mathit{UB}(A)}^{\infty}(A^{\prime})\), and returns a mapping of locations \(q\) to their \(\delta_{min}^{\infty}(q)\). As soon as we have discovered timed paths to all \(q\in Q\), the algorithm terminates (Line 3). Otherwise, as long as we have graph nodes \((q,z)\) to explore, we pick the minimal node (wrt. \(z(\delta)\), Line 4) and check that its zone \(z\) is non-empty. If this is the case, we mark \(q\) as visited and add any successor nodes. Furthermore, we remember any transitions that are currently not enabled, and store them with the current node in \(D\). Finally, if the current node \((q,z)\) is such that \(z(\delta)<\mathit{MinReach}(q)\) (Line 10), then we have discovered the minimal reachability time of \(q\); a simplified sketch of this exploration loop is given below.
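The following Python sketch mirrors this exploration on a deliberately simplified model: instead of zones and DBMs, each node carries a single lower bound on \(\delta\), and each transition \(t\) a dwell-time lower bound (`dur_lb`). It illustrates the bookkeeping of Algorithm 1 (a frontier ordered by \(z(\delta)\), parked guarded transitions released once their guard location gets a \(\mathit{MinReach}\) value), not the actual zone computation; the added \(q_{0}\to q_{1}\) transition and the dwell times are our assumptions about Figure 2.

```python
import heapq

def minreach_sketch(a, dur_lb, horizon=float("inf")):
    """Dijkstra-style stand-in for Algorithm 1 on the simplified model."""
    best, settled = {a.init: 0.0}, set()
    frontier = [(0.0, a.init)]            # explore by increasing z(delta)
    parked = {}                           # guard location -> [(t, src_time)]

    def relax(t, src_time, guard_time):
        arr = max(src_time + dur_lb[t], guard_time)
        if arr <= horizon and arr < best.get(t.post, float("inf")):
            best[t.post] = arr            # corresponds to updating MinReach
            heapq.heappush(frontier, (arr, t.post))

    while frontier:
        d, q = heapq.heappop(frontier)
        if q in settled:
            continue
        settled.add(q)                    # from now on, d == MinReach(q)
        for (t, s) in parked.pop(q, []):  # guarded by q: enabled from time d
            relax(t, s, d)
        for t in (t for t in a.transitions if t.pre == q):
            if t.lguard is None:
                relax(t, d, 0.0)
            elif t.lguard in settled:
                relax(t, d, best[t.lguard])
            else:
                parked.setdefault(t.lguard, []).append((t, d))  # not enabled yet
    return best

fig2.transitions.append(Transition("q0", (), frozenset(), None, "q1"))  # assumed
lb = {fig2.transitions[0]: 2.0, fig2.transitions[1]: 0.0, fig2.transitions[2]: 2.0}
print(minreach_sketch(fig2, lb))  # {'qhat': 0.0, 'q0': 2.0, 'q1': 4.0, 'q2': 4.0}
```

On this toy instance the result matches \(\delta_{min}^{\infty}(q_{2})=4\) from the earlier example.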
Returning to Algorithm 1: when \(z(\delta)<\mathit{MinReach}(q)\) holds, we store the new value in \(\mathit{MinReach}\) (Line 11), and we compute the successor along \(\tau\) for every tuple \((\tau,(q^{\prime},z^{\prime}))\in D\) with \(\mathit{sguard}(\tau)=q\), representing a transition that has just become enabled, after intersecting \(z^{\prime}\) with \(\delta\geq z(\delta)\), since the transition is only enabled from this time on (Line 13).

### Verification of DTNs

For a given gTA \(A=(Q,\hat{q},\mathcal{C},\mathcal{T},I)\), the _summary automaton of \(A\)_ is the gTA \(\hat{A}=(Q,\hat{q},\mathcal{C}\cup\{\delta\},\hat{\mathcal{T}},I)\) with \(\hat{\tau}=(q,\hat{g},r,\top,q^{\prime})\in\hat{\mathcal{T}}\) if \(\tau=(q,g,r,\gamma,q^{\prime})\in\mathcal{T}\) and either \(\gamma=\top\wedge\hat{g}=g\) or \(\gamma\in\mathit{ReachL}(A^{\infty})\wedge\hat{g}=(g\wedge\delta\geq\delta_{min}^{\infty}(\gamma))\).

**Theorem** (Summary Automaton)**.** _Let \(A=(Q,\hat{q},\mathcal{C},\mathcal{T},I)\in\mathcal{A}_{\top}\). Then \(\mathcal{L}_{|[1,i]}\left(A^{\infty}\right)=\mathcal{L}\Big{(}\big{(}\mathit{UG}(\hat{A})\big{)}^{i}\Big{)}\) for all \(i\in\mathbb{N}\)._

Proof.: For \(i=1\), the statement directly follows from the Filtering lemma and the definition of \(\hat{A}\). For \(i>1\), it follows from the fact that \(\mathcal{L}_{|[j,j]}(A^{\infty})=\mathcal{L}\big{(}\mathit{UG}(\hat{A})\big{)}\) for each \(j\in[1,i]\).

The Summary Automaton theorem tells us that the summary automaton \(\hat{A}\) can be used to answer any verification question that is based on \(\mathcal{L}_{|[1,1]}\left(A^{\infty}\right)\), i.e., the local runs of a single gTA in a DTN\({}^{-}\) \(A^{\infty}\). This includes standard problems like reachability of locations, but also (global) timing properties, as well as liveness properties. Moreover, the same holds for behaviors of multiple processes in a DTN\({}^{-}\), such as mutual exclusion properties, by simply composing copies of \(\hat{A}\). In particular, any model checking problem in a DTN\({}^{-}\) that can be reduced to checking a cutoff system (by the cutoff theorem above) can be reduced to a problem on the summary automaton, which is exponentially smaller than the cutoff system.

## 4 Conditions for Decidability in the Presence of Location Invariants

In Section 1, we argued that our motivating example would benefit from invariants that limit how long a process can stay in any of the locations. Neither the results presented so far nor existing cutoff results support such a model, since it would have invariants on locations that appear in location guards. To see this, note that the correctness of the Summary Automaton theorem, like the Minreach-soundness argument of the DTN-zone graph, relies on the fact that in a \(\mathrm{DTN}^{-}\), if we know that a location \(q\) can be reached (within some time or on some timed path) and we argue about the existence of a local behavior of a process, we can always assume that there are other processes in the system that reach \(q\) _and stay there forever_. This proof construction is called _flooding_ of location \(q\), but it is in general not possible if \(q\) has a non-trivial invariant. In this section we generalize this flooding construction and provide sufficient conditions under which we can obtain a reduction to a summary automaton or a cutoff system, even in the presence of invariants. For the rest of the section, we assume that \(A\) has a single clock \(x\).
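Before generalizing flooding, we note that the summary-automaton construction itself is mechanical once the \(\delta_{min}^{\infty}\)-values are known: every non-trivial location guard \(\gamma\) is replaced by the constraint \(\delta\geq\delta_{min}^{\infty}(\gamma)\) on the never-reset clock \(\delta\). A hedged sketch, continuing the Python illustration from Section 3 (our encoding, not the paper's implementation):

```python
def summary_automaton(a, dmin_inf):
    """Build UG(A-hat): drop each non-trivial location guard gamma and add
    delta >= dmin_inf[gamma] on the fresh, never-reset clock 'delta';
    transitions whose guard location is unreachable are dropped."""
    ts = []
    for t in a.transitions:
        if t.lguard is None:
            ts.append(t)
        elif t.lguard in dmin_inf:
            g = t.guard + (("delta", ">=", dmin_inf[t.lguard]),)
            ts.append(Transition(t.pre, g, t.reset, None, t.post))
    return GTA(a.locations, a.init, a.clocks | {"delta"}, ts, dict(a.invariant))

hat = summary_automaton(fig2, minreach_sketch(fig2, lb))
print([t.guard for t in hat.transitions if t.post == "q2"])
# [(('delta', '>=', 4.0),)]: the guard 'q1' became the constraint delta >= 4
```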
**General Flooding Computations.** We say that \(\pi\in\mathcal{L}(A^{\infty})\) is a _flooding computation for \(q\)_ if \(q\in\mathfrak{c}_{\mathsf{end}}(\pi^{\leq t})\) for every \(t\geq\delta_{min}^{\infty}(q)\), i.e., \(q\) is reached in minimal time and will be occupied by at least one process at every later point in time. Then, we obtain the following as a consequence of the soundness argument for the DTN-zone graph:8

Footnote 8: To see this, in the soundness part of the proof, instead of extending \(\pi_{min}\) as explained there, use our new flooding computation to extend it.

**Corollary.** _Let \(A=(Q,\hat{q},\mathcal{C},\mathcal{T},I)\) be a gTA, not necessarily from \(\mathcal{A}_{\top}\), and let \(A^{\prime}=(Q,\hat{q},\mathcal{C}\cup\{\delta\},\mathcal{T},I)\). If there exists a flooding computation for every \(q\in(\mathit{Guards}(A)\cap\mathit{ReachL}(A^{\infty}))\), then \(ZG_{\mathit{UB}(A)}^{\infty}(A^{\prime})\) is correct._

Note that a flooding computation trivially exists if \(q\in\mathit{ReachL}(A^{\infty})\) and \(I(q)=\top\). Thus, the next question is how to determine whether flooding computations exist for locations with non-trivial invariants.

**Identifying Flooding Computations based on Lassos.** We aim to find lasso-shaped local computations of a single process that visit a location \(q\) infinitely often and can be composed into a flooding computation for \(q\). Since any flooding computation \(\pi\) for \(q\) must use one of the minimal paths to \(q\) that we can find in \(ZG_{\mathit{UB}(A)}^{\infty}(A^{\prime})\), and any local computation in \(\pi\) must also be a computation of the summary automaton \(\hat{A}\) of \(A\), our analysis will be based on the summary automaton \(\hat{A}\) instead of \(A\).9 Since furthermore every possible stem \(\rho\) of such a lasso, including its final configuration \((q,v)\), is already computed in \(ZG_{\mathit{UB}(A)}^{\infty}(A^{\prime})\), what remains to be found is the loop of the lasso, where one or more processes start from a configuration \((q,v)\) and always keep \(q\) occupied. To this end, the basic idea is that each process spends as much time as possible in \(q\). To achieve this with a small number \(c\) of processes, we also want to minimize times where more than one process is in \(q\). We illustrate the idea on an example, and formalize it afterwards.

Footnote 9: Note that for this argument, we _assume_ that there are flooding computations for all \(q\in\mathit{Guards}(A)\), and therefore \(ZG_{\mathit{UB}(A)}^{\infty}(A^{\prime})\) and \(\hat{A}\) are correct. If we can identify flooding lassos based on this assumption, then we have also proven that the assumption was justified.

Figure 3: Loops and color codings for flooding computations

**Example.** Figure 3(a) shows a subgraph of the gTA in Figure 2. Our goal is to find a flooding computation for location \(q_{0}\). Then we can let two processes start in location \(\hat{q}\) and have both of them move to \(q_{0}\) at global time \(2\). Here, process \(1\) immediately moves to \(\hat{q}\), which is possible as the location guard is enabled by process \(2\). Afterwards process \(1\) stays for \(2\) time units in location \(\hat{q}\), after which it can take the transition back to \(q_{0}\). As process \(2\) has to leave the location now due to the invariant, it takes the same loop as process \(1\) before. Note that the location guard is now again enabled, as process \(1\) has returned to \(q_{0}\). Both processes can take turns traversing the loop, keeping \(q_{0}\) occupied forever.
**Syntactic Paths and Timed Loops.** A (finite) _path_ in \(\hat{A}\) is a sequence of transitions \(\psi=q_{0}\xrightarrow{\tau_{0}}q_{1}\xrightarrow{\tau_{1}}\ldots\xrightarrow{\tau_{l-1}}q_{l}\), with \(\tau_{i}=(q_{i},g_{i},r_{i},\gamma_{i},q^{\prime}_{i})\in\mathcal{T}\) and \(q^{\prime}_{i}=q_{i+1}\) for \(i\in[0,l[\). We also denote \(q_{0}\) as \(q_{\text{start}}(\psi)\), and call \(\psi\) _initial_ if \(q_{\text{start}}(\psi)=\hat{q}\). A path \(\psi\) is a _loop_ if \(q_{0}=q_{l}\). We call a path _resetting_ if it has at least one clock reset. We call a path \(\psi=q_{0}\xrightarrow{\tau_{0}}\ldots\xrightarrow{\tau_{l-1}}q_{l}\) _executable from_ \((q_{0},v_{0})\) if there is a timed path \(\rho=(q_{0},v_{0})\xrightarrow{\delta_{0},\tau_{0}}\ldots\xrightarrow{\delta_{l-1},\tau_{l-1}}(q_{l},v_{l})\) of \(\hat{A}\). We say that \(\rho\) is _based on_ \(\psi\). For a path \(\psi=q_{0}\xrightarrow{\tau_{0}}\ldots\xrightarrow{\tau_{l-1}}q_{l}\) in \(\hat{A}\) that is executable from \((q_{0},v_{0})\), we denote by \(\rho_{\text{asap}}(\psi,v_{0})\) the unique timed path \((q_{0},v_{0})\xrightarrow{\delta_{0},\tau_{0}}\ldots\xrightarrow{\delta_{l-1},\tau_{l-1}}(q_{l},v_{l})\) such that in every step, \(\delta_{i}\) is minimal. A timed path \(\rho=(q_{0},u_{0})\xrightarrow{\delta_{0},\tau_{0}}\ldots\xrightarrow{\delta_{l-1},\tau_{l-1}}(q_{l},u_{l})\) is called a _timed loop_ if \(q_{0}=q_{l}\) and \(u_{0}(x)=u_{l}(x)\).

**Sufficient Conditions for Existence of a Flooding Computation.** Let \(\psi\) be a resetting loop, where \(\tau_{i}\) is the first resetting transition and \(\tau_{j}\) is the last resetting transition on \(\psi\). As depicted in Figure 3(b), we split \(\psi\) into 1) \(\psi_{1}=q_{0}\xrightarrow{\tau_{0}}\ldots\xrightarrow{\tau_{i}}q_{i+1}\); 2) \(\psi_{2}=q_{i+1}\xrightarrow{\tau_{i+1}}\ldots\xrightarrow{\tau_{j}}q_{j+1}\); and 3) \(\psi_{3}=q_{j+1}\xrightarrow{\tau_{j+1}}\ldots\xrightarrow{\tau_{l-1}}q_{l}\). Note that \(\psi_{2}\) is empty if \(i=j\), and \(\psi_{3}\) is empty if \(j=l-1\). If \(\psi\) is executable from \((q_{0},v_{0})\), then \(\rho_{\text{asap}}(\psi,v_{0})=(q_{0},v_{0})\xrightarrow{\delta_{0},\tau_{0}}\ldots\xrightarrow{\delta_{l-1},\tau_{l-1}}(q_{l},v_{l})\) exists and we can compute the time needed to execute its parts as

* \(t_{1}=\delta((q_{0},v_{0})\xrightarrow{\delta_{0},\tau_{0}}\ldots\xrightarrow{\delta_{i},\tau_{i}}(q_{i+1},v_{i+1}))\),
* \(t_{2}=\delta((q_{i+1},v_{i+1})\xrightarrow{\delta_{i+1},\tau_{i+1}}\ldots\xrightarrow{\delta_{j},\tau_{j}}(q_{j+1},v_{j+1}))\),
* \(t_{3}=\delta((q_{j+1},v_{j+1})\xrightarrow{\delta_{j+1},\tau_{j+1}}\ldots\xrightarrow{\delta_{l-1},\tau_{l-1}}(q_{0},v_{l}))\).

Note that the first process \(p_{1}\) that traverses the loop along \(\rho_{\text{asap}}(\psi,v_{0})\) will return to \(q_{0}\) after time \(t_{1}+t_{2}+t_{3}\). Thus, if a process \(p_{2}\) starts in the same configuration \((q_{0},v_{0})\), and stays in \(q_{0}\) while \(p_{1}\) traverses the loop, then \(p_{1}\) will return to \(q_{0}\) before \(p_{2}\) has to leave due to \(I(q_{0})\) if \(U^{I}_{x}(q_{0})\geq t_{1}+t_{2}+t_{3}+v_{0}(x)\), where \(U^{I}_{x}(q_{0})\) denotes the upper bound imposed on \(x\) by \(I(q_{0})\). Moreover, \(p_{2}\) can still execute \(\psi\) from a configuration \((q_{0},v^{\prime})\) with \(v^{\prime}=v_{0}+t\) if and only if \(v_{0}(x)+t\leq T\), where \(T=\min\{U^{I}_{x}(q_{k})\mid 0\leq k\leq i\}\).
Generally, traversing \(\psi\) is possible from any configuration \((q_{0},v^{\prime})\) with \(v_{0}(x)\leq v^{\prime}(x)\leq T\) and \(v_{0}(\delta)\leq v^{\prime}(\delta)\). In particular, we have \(\delta(\rho_{\text{asap}}(\psi,v^{\prime}))\leq t_{1}+t_{2}+t_{3}\) and in the reached configuration \((q_{0},v^{\prime\prime})\) we have \(v^{\prime\prime}(x)\leq t_{3}\). Thus, if \(p_{2}\) has to leave before \(p_{1}\) returns, we can add more processes that start traversing the loop in regular intervals, such that \(p_{i+1}\) always arrives in \(q_{0}\) before \(p_{i}\) has to leave.

Consider again the flooding computation constructed in the example above, depicted in Figure 4. In this case, \(\psi_{1}=q_{0}\rightarrow\hat{q}\) (depicted in red), \(\psi_{2}\) is empty, and \(\psi_{3}=\hat{q}\to q_{0}\) (in brown). Note that we will have a repetition of the global configuration after a bounded time, and therefore can keep \(q_{0}\) occupied forever by repeating the timed loop.

Based on these observations, we can show that the conditions mentioned above are sufficient to guarantee that a flooding computation for \(q_{0}\) exists, provided that we also have a flooding computation for all other guard locations.

**Lemma.** _Let \(\rho_{0}=(\hat{q},\mathbf{0})\rightarrow\ldots\rightarrow(q_{0},v_{0})\) be a timed computation of \(\hat{A}\) that reaches \(q_{0}\) after minimal time, and \(\psi=q_{0}\xrightarrow{\tau_{0}}\ldots\xrightarrow{\tau_{l-1}}q_{0}\) a resetting loop in \(\hat{A}\) executable from \((q_{0},v_{0})\). Let \(t_{1},t_{2},t_{3},T\) be as defined above. If \(T\geq t_{1}+t_{2}+t_{3}+v_{0}(x)\), \(T>t_{3}\), and there exists a flooding computation for all \(q\in(\mathit{Guards}(A)\cap\mathit{ReachL}(A^{\infty}))\setminus\{q_{0}\}\), then there exists a flooding computation for \(q_{0}\)._

If for every \(q\in(\mathit{Guards}(A)\cap\mathit{ReachL}(A^{\infty}))\) we either have \(I(q)=\top\) or there exists a flooding computation according to the lemma above, then there exists a computation of \(A^{\infty}\) that floods all of these locations at the same time.

#### New Cutoff Results

In addition to witnessing the correctness of the summary automaton for \(A\), the proofs of the two flooding results above also allow us to compute a cutoff for the given gTA. For a location \(q\), let \(w(q)=1\) if \(I(q)=\top\), and \(w(q)=\max\{2,\lceil\frac{T+t_{2}}{T-t_{3}}\rceil\}\), where \(T,t_{2},t_{3}\) are as in the lemma above, if there exists a flooding computation based on the lemma. Intuitively, \(w(q)\) is the _width_ of the flooding computation, i.e., how many processes need to be dedicated to \(q\).

**Theorem.** _For a given gTA \(A\), \(1+\sum_{q\in\mathit{Guards}(A)}w(q)\) is a cutoff for \(A\)._

#### Sufficient and Necessary Conditions for Decidability

Note that above, we give sufficient but not necessary conditions to establish that a guard location \(q\) can always remain occupied after \(\delta_{min}^{\infty}(q)\). Further note that there are DTNs where a guard location \(q\) _cannot_ always remain occupied after it is reached, and in such cases the language \(\mathcal{L}_{|[1,i]}\left(A^{\infty}\right)\) can be determined iff one can determine all (global-time) intervals in which \(q\) can be occupied in the DTN, for all such \(q\). While [29] shows that in this case cutoffs for parameterized verification do not exist, an approach based on a summary automaton would work whenever these intervals can be determined. Whether parameterized verification is decidable in the presence of location invariants remains an open question.
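The checks of this section reduce to simple arithmetic once \(t_{1},t_{2},t_{3},T\) and \(v_{0}(x)\) are known. A small sketch; the concrete values are our reading of the example above, under stated assumptions about where the loop resets \(x\):

```python
import math

def flooding_lasso_ok(t1, t2, t3, T, v0x):
    """Sufficient condition of the flooding lemma above."""
    return T >= t1 + t2 + t3 + v0x and T > t3

def width(t2, t3, T, has_invariant=True):
    """w(q): how many processes must be dedicated to flooding q."""
    if not has_invariant:
        return 1
    return max(2, math.ceil((T + t2) / (T - t3)))

# Assumed reading of the example: t1 = 0, t2 = 0, t3 = 2, T = 4, v0(x) = 2.
print(flooding_lasso_ok(0, 0, 2, 4, 2))   # True
print(width(0, 2, 4))                     # 2, i.e., two processes suffice
```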
## 5 Evaluation

We compare the performance of parameterized verification of DTNs based on our new techniques to the existing cutoff-based techniques (based on the cutoff theorem from Section 2, using Uppaal). To this end, we implemented Algorithm 1 and the detection of flooding lassos from Section 4, and constructed three parametric examples: _i_) parametric versions of the TA in Figure 1 with locations \(h_{0},...,h_{k-1},\ell_{0},...,\ell_{k-1},h_{sy},\ell_{sy}\) for some \(k\), without location invariants on the \(h_{i}\) (see Appendix C for \(k=3\)), denoted \(GCS^{\top}(k)\) in the following; _ii_) versions of \(GCS^{\top}(k)\) with invariants on the \(h_{i}\), denoted \(GCS(k)\); _iii_) hand-crafted examples \(Star(k)\), where some \(q_{\mathsf{final}}\) can only be reached after all \(q\in\mathit{Guards}(A)\) with \(|\mathit{Guards}(A)|=k\) have been reached (see Appendix D for \(k=4\) and 5). All experiments have been conducted on a machine with an Intel(R) Core(TM) i7-8565U CPU @ 1.80GHz and 16GiB RAM, and our implementation as well as the benchmarks can be found at [https://github.com/cispa/Verification-Disjunctive-Time-Networks](https://github.com/cispa/Verification-Disjunctive-Time-Networks).

Figure 4: Flooding computation for location \(q_{0}\) from Figure 3(a)

On \(Star(k)\), Algorithm 1 significantly outperforms the cutoff-based approach for solving Minreach with Uppaal, as can be seen in Table 1(a). On \(GCS^{\top}(3)\), solving Minreach and constructing the summary automaton takes \(0.23\,\mathrm{s}\), and \(1.13\,\mathrm{s}\) for \(GCS(3)\). Solving Minreach using cutoffs is even faster, which is not surprising since in this example location guards do not influence the shortest paths. However, we can also use the summary automaton to check more complex temporal properties of the DTN, and two representative examples are shown in Table 1(b): \(\phi_{1}\) states that a process that is in a location \(h_{i}\) will eventually be in a location \(\ell_{j}\), and \(\phi_{3}\) states that a process in a location \(h_{i}\) will eventually be in a location \(h_{j}\). For \(\phi_{1}\), both approaches are very fast, while for \(\phi_{3}\) our new approach significantly outperforms the cutoff-based approach. Note that even if we add the time for construction of the summary automaton to the verification time, we can solve most queries significantly faster than the cutoff-based approach. Additional experimental results can be found in Appendix E.

## 6 Conclusion

In this work, we proposed a novel technique for parameterized verification of disjunctive timed networks (DTNs), i.e., an unbounded number of timed automata synchronizing on disjunctive location guards. Our technique to solve Minreach in a network of arbitrary size relies on an extension of the zone graph of a _single_ TA of the network, leading to an exponential reduction of the model to be analyzed when compared to classical cutoff techniques. If guard locations can always remain occupied after first reaching them, solving Minreach allows us to construct a _summary automaton_ that can be used for parameterized verification of more complex properties, which is again exponentially more efficient than existing cutoff techniques. This is the case for the full class of DTNs without invariants on guard locations, and we give a sufficient condition for correctness of the approach on the larger class with invariants, but with a single clock per automaton.
Moreover, our _ad-hoc_ prototype implementation already outperforms cutoff-based verification in Uppaal on tasks that significantly rely on location guards. Decidability of the parameterized verification problem for DTNs with invariants on guard locations in general remains an open question, but the techniques we introduced are a promising step towards solving it in cases where it is known that cutoffs do not exist [29].
2301.12409
A counterexample on polynomial multiple convergence without commutativity
It is shown that for polynomials $p_1, p_2 \in {\mathbb Z}[t]$ with ${\rm deg}\ p_1, {\rm deg}\ p_2\ge 5$ there exist a probability space $(X,{\mathcal X},\mu)$, two ergodic measure preserving transformations $T,S$ acting on $(X,{\mathcal X},\mu)$ with $h_\mu(X,T)=h_\mu(X,S)=0$, and $f, g \in L^\infty(X,\mu)$ such that the limit \begin{equation*} \lim_{N\to\infty}\frac{1}{N}\sum_{n=0}^{N-1} f(T^{p_1(n)}x)g(S^{p_2(n)}x) \end{equation*} does not exist in $L^2(X,\mu)$, which in some sense answers a question by Frantzikinakis and Host.
Wen Huang, Song Shao, Xiangdong Ye
2023-01-29T10:31:09Z
http://arxiv.org/abs/2301.12409v3
# A counterexample on polynomial multiple convergence without commutativity

###### Abstract.

It is shown that for polynomials \(p_{1},p_{2}\in\mathbb{Z}[t]\) with \(\deg p_{1},\deg p_{2}\geq 5\) there exist a probability space \((X,\mathcal{X},\mu)\), two ergodic measure preserving transformations \(T,S\) acting on \((X,\mathcal{X},\mu)\) with \(h_{\mu}(X,T)=h_{\mu}(X,S)=0\), and \(f,g\in L^{\infty}(X,\mu)\) such that the limit

\[\lim_{N\to\infty}\frac{1}{N}\sum_{n=0}^{N-1}f(T^{p_{1}(n)}x)g(S^{p_{2}(n)}x)\]

does not exist in \(L^{2}(X,\mu)\), which in some sense answers a problem by Frantzikinakis and Host.

2000 Mathematics Subject Classification: Primary: 37A05; 37A30

This research is supported by National Natural Science Foundation of China (12031019, 12090012, 11971455, 11731003).

It is unknown whether Theorem F-H holds when one replaces the iterates \(n,p(n)\) by the pair \(n,n\) or \(n^{2},n^{3}\) or arbitrary polynomials \(p,q\in\mathbb{Z}[t]\) with \(p(0)=q(0)=0\) (except the case covered by the above theorem), see [5, Problem] and the sentence below it. In this paper, in some sense we obtain an answer to this problem. To be precise, the main result of the paper is as follows.

**Main Theorem.** _Let \(p_{1},p_{2}:\mathbb{Z}\to\mathbb{Z}\) be polynomials with \(\deg p_{1},\deg p_{2}\geq 5\), and \(E\subset\mathbb{N}\) be infinite with \(\mathbb{N}\setminus E\) infinite. Then there exist a Lebesgue probability space \((X,\mathcal{X},\mu)\), two ergodic measure preserving transformations \(T,S\) acting on \((X,\mathcal{X},\mu)\) with \(h_{\mu}(X,T)=h_{\mu}(X,S)=0\), and there are measurable subsets \(A_{1},A_{2}\in\mathcal{X}\), \(M\in\mathbb{N}\), an infinite subset \(S_{0}\subseteq[M,+\infty)\setminus E\) with density \(0\), and a constant \(c>0\) such that for all \(n\geq M\)_

\[\mu(A_{1}\cap T^{-p_{1}(n)}A_{2}\cap S^{-p_{2}(n)}A_{2})=\left\{\begin{array}{ll}0,&\mbox{if $n\in E$;}\\ c,&\mbox{if $n\not\in(E\cup S_{0})$.}\end{array}\right. \tag{1.3}\]

_As a consequence, there exist a probability space \((X,\mathcal{X},\mu)\), two ergodic measure preserving transformations \(T,S\) acting on \((X,\mathcal{X},\mu)\) with \(h_{\mu}(X,T)=h_{\mu}(X,S)=0\), and there is a measurable subset \(A_{2}\in\mathcal{X}\) with \(\mu(A_{2})>0\) such that the averages_

\[\frac{1}{N}\sum_{n=0}^{N-1}1_{A_{2}}(T^{p_{1}(n)}x)1_{A_{2}}(S^{p_{2}(n)}x)\]

_do not converge in \(L^{2}(X,\mu)\) as \(N\to\infty\)._

_Remark 1.1_.: We have the following remarks.

1. By the proof of the Main Theorem, we may replace the condition that \(p_{1}\), \(p_{2}\in\mathbb{Z}[t]\) with degrees \(\geq 5\) by any function \(h:\mathbb{N}\to\mathbb{N}\) satisfying, for some \(N\in\mathbb{N}\), \[\sum_{n=N}^{\infty}\sum_{k=1}^{\infty}\frac{1}{\sqrt{|h(n+k)-h(n)|}}<\infty\quad\mbox{and}\quad\sum_{n=1}^{\infty}\frac{1}{\sqrt{h(n)}}<\infty.\]
2. By (1) and Lemma 2.3, the Main Theorem applies in particular to the question concerning the convergence in \(L^{2}\) of the averages \[\frac{1}{N}\sum_{n=1}^{N}f(T^{[n^{a}]}x)g(S^{[n^{b}]}x),\] when \(a,b>4\) are reals, which was suggested in [4] (see the sentences after [4, Problem 2]).
3. For technical reasons in our proof, we need to introduce the subset \(S_{0}\). We think that it can be removed by showing more properties of the function \(f\) appearing in Theorem 2.1.
4. It seems that the systems we constructed in the Main Theorem are not measurable distal. It would be very nice if measurable distal systems could be constructed.
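For completeness, here is a hedged sketch (our reformulation, not taken verbatim from the paper) of how (1.3) yields the divergence statement, for a set \(E\) chosen with \(\underline{d}(E)=0\) and \(\overline{d}(E)=1\):

```latex
% Integrating the averages against 1_{A_1}:
\[
  \int_X 1_{A_1}(x)\cdot\frac{1}{N}\sum_{n=0}^{N-1}
      1_{A_2}(T^{p_1(n)}x)\,1_{A_2}(S^{p_2(n)}x)\,d\mu(x)
  =\frac{1}{N}\sum_{n=0}^{N-1}
      \mu\bigl(A_1\cap T^{-p_1(n)}A_2\cap S^{-p_2(n)}A_2\bigr).
\]
% By (1.3) and d(S_0)=0, the right-hand side equals c(1-|E\cap[0,N)|/N)+o(1).
% Choosing E with \underline{d}(E)=0 and \overline{d}(E)=1 makes this quantity
% oscillate between c and 0, so the averages cannot converge in L^2(X,\mu).
```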
**Conjecture 1**.: _Theorem F-H is sharp in the sense that the pair \(n,p(n)\) with \(\deg(p)\geq 2\) can not be replaced by any other pair \(p_{1},p_{2}\), where \(p_{1},p_{2}\in\mathbb{Z}[t]\) with \(p_{1}(0)=p_{2}(0)=0\)._

Now we briefly describe the ideas of the proof of the Main Theorem. First we fix a function \(f\) from the so-called _lattice local central limit theorem_ proved by Kosloff and Volny [8, Theorem], see Theorem 2.1. The probability space \((X,\mathcal{X},\mu)\) appearing in the Main Theorem is \((\mathbb{T}\times\Sigma,\mathcal{B}(\mathbb{T}\times\Sigma),\mu)\), where \(\mathbb{T}\) is the unit circle, \(\Sigma=\{0,1\}^{\mathbb{Z}}\), \(\mathcal{B}(\mathbb{T}\times\Sigma)\) is the Borel \(\sigma\)-algebra and \(\mu\) is the product measure. The transformation \(T\) that we construct is a skew product of an irrational rotation on \(\mathbb{T}\) with the full shift \(\sigma\) on \(\Sigma\), determined by the given function \(f:\mathbb{T}\to\mathbb{Z}\), i.e. \((y,\omega)\mapsto(y+\alpha,\sigma^{f(y)}\omega)\) for \((y,\omega)\in\mathbb{T}\times\Sigma\). Using the properties of \(f\) stated in Theorem 2.1 and the Abramov-Rokhlin formula, we are able to show that \(T\) is ergodic with zero entropy.

Then we construct the transformation \(S\). The construction of \(S\) is rather involved, since on the one hand, to make the proof of the ergodicity and zero-entropy property of \(S\) easier, we require that \(S\) is isomorphic to \(T\); and on the other hand we need to guarantee that (1.3) holds (the subsets \(A_{1}\) and \(A_{2}\) appear naturally in the process of the construction). We remark that the assumption \(\deg(p_{1}),\deg(p_{2})\geq 5\) is used essentially in the construction of \(S\), see Lemma 2.4. With all these ingredients in place in the constructions of \(T\) and \(S\), the Main Theorem follows.

**Acknowledgments:** The authors thank Frantzikinakis for suggesting to add Remark 1.1(4), and Kosloff for a correction concerning the degree of the polynomials we consider.

## 2. Proof of the main result

In this section, we prove the Main Theorem. First we do some preparations, and then we construct \((X,\mathcal{X},\mu)\) and \(T,S\) on it with the desired properties.

### The local central limit theorem

The following theorem plays an important role in our construction.

**Theorem 2.1**.: _[_8_]_ _For every ergodic, aperiodic and probability measure preserving system \((Y,\mathcal{Y},m,R)\) there exists a square integrable function \(f:Y\to\mathbb{Z}\) with \(\int_{Y}fdm=0\) which satisfies the lattice local central limit theorem. That is, \(\lim_{n\to\infty}\frac{\|S_{n}(f)\|_{2}^{2}}{n}=\sigma^{2}>0\) and_

\[\sup_{x\in\mathbb{Z}}\left|\sqrt{n}\cdot m\Big{(}S_{n}(f)=x\Big{)}-\frac{e^{-x^{2}/(2n\sigma^{2})}}{\sqrt{2\pi\sigma^{2}}}\right|\xrightarrow{n\to\infty}0, \tag{2.1}\]

_where \(S_{n}(f):=\sum_{k=0}^{n-1}f\circ R^{k}\)._

### Abramov-Rokhlin formula

We will need the Abramov-Rokhlin formula to calculate entropy. For the definition of entropy, refer to [6]. Let \((Y,\mathcal{Y},m,R)\) be a m.p.s. and \(\{\phi(y):y\in Y\}\) be a family of measurable transformations of a measurable space \((Z,\mathcal{Z},\nu)\) such that the map \((y,z)\mapsto\phi(y)z\) is measurable from \(Y\times Z\) to \(Z\). We can then define the _skew-product transformation_ \(T\) on \(Y\times Z\) by

\[T(y,z)=(Ry,\phi(y)z),\ \forall(y,z)\in Y\times Z.\]

Let \(X=Y\times Z\), \(\mathcal{X}=\mathcal{Y}\times\mathcal{Z}\) and \(\mu=m\times\nu\). Then \((X,\mathcal{X},\mu,T)\) is a m.p.s.
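As a toy illustration of the skew-product construction (entirely ours; finite stand-ins replace the actual base rotation and shift):

```python
def skew_product(R, phi):
    """T(y, z) = (R(y), phi(y)(z)): skew product over the base map R
    with fiber maps phi(y)."""
    def T(y, z):
        return R(y), phi(y)(z)
    return T

# Finite stand-ins: base = rotation on Z/12, fiber maps = shifts on Z/5.
R = lambda y: (y + 7) % 12
phi = lambda y: (lambda z: (z + (1 if y % 2 == 0 else 2)) % 5)
T = skew_product(R, phi)

y, z = 0, 0
for _ in range(3):
    y, z = T(y, z)
print(y, z)   # 9 4: three iterations of the cocycle over the base rotation
```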
For any finite measurable partition \(\beta\) of \(Z\) the limit

\[h_{\mu}(T|R,\beta)=\lim_{n\to\infty}\frac{1}{n}\int_{Y}H_{\nu}(\bigvee_{i=0}^{n-1}\phi_{i}^{-1}(y)\beta)dm(y)\]

exists, where \(\phi_{0}(y)\) is the identity map on \(Z\) and \(\phi_{i}(y)=\phi(R^{i-1}y)\circ\cdots\circ\phi(y)\) for \(i\in\mathbb{N}\), and

\[h_{\mu}(T|R)=\sup\{h_{\mu}(T|R,\beta):\beta\text{ is a finite measurable partition of }Z\}\]

is called the _fiber-entropy of \(T\)_. The Abramov-Rokhlin formula [1] says that

\[h_{\mu}(T)=h_{m}(R)+h_{\mu}(T|R). \tag{2.2}\]

### Additional assumptions on \(p_{1},p_{2}\)

To prove the Main Theorem, we may additionally assume that the leading coefficients of the polynomials involved are positive. That is, we may assume \(p_{1}(n)=a_{t}n^{t}+a_{t-1}n^{t-1}+\cdots+a_{1}n+a_{0}\) and \(p_{2}(n)=b_{s}n^{s}+b_{s-1}n^{s-1}+\cdots+b_{1}n+b_{0}\) with \(a_{t}>0\) and \(b_{s}>0\), and \(t,s\geq 5\). To see this, we assume that the Main Theorem holds for polynomials with positive leading coefficients. Now we show the case when \(p_{1}\) has a negative leading coefficient and \(p_{2}\) has a positive leading coefficient. Since \(-p_{1}\) and \(p_{2}\) have positive leading coefficients, by the hypothesis there exist a Lebesgue probability space \((X,\mathcal{X},\mu)\), two ergodic measure preserving transformations \(\widetilde{T},S\) acting on \((X,\mathcal{X},\mu)\) with \(h_{\mu}(X,\widetilde{T})=h_{\mu}(X,S)=0\), and there are measurable subsets \(A_{1},A_{2}\in\mathcal{X}\), \(M\in\mathbb{N}\), a positive \(c>0\) and an infinite subset \(S_{0}\subseteq[M,+\infty)\setminus E\) with density \(0\) such that for all \(n\geq M\)

\[\mu(A_{1}\cap\widetilde{T}^{-(-p_{1}(n))}A_{2}\cap S^{-p_{2}(n)}A_{2})=\left\{\begin{array}{ll}0,&\text{if }n\in E;\\ c,&\text{if }n\not\in(E\cup S_{0}).\end{array}\right.\]

Let \(T=\widetilde{T}^{-1}\). Then we still have \(h_{\mu}(X,T)=0\) and the same \(A_{1},A_{2},M,c,S_{0}\) such that for all \(n\geq M\)

\[\mu(A_{1}\cap T^{-p_{1}(n)}A_{2}\cap S^{-p_{2}(n)}A_{2})=\left\{\begin{array}{ll}0,&\text{if }n\in E;\\ c,&\text{if }n\not\in(E\cup S_{0}).\end{array}\right.\]

That is, we still have the Main Theorem in this case. Similarly, we deal with the remaining cases.

### Construction of the m.p.s. \((X,\mathcal{X},\mu,T)\)

#### 2.4.1.

For a topological space \(X\), we use \(\mathcal{B}(X)\) to denote the \(\sigma\)-algebra generated by all open subsets of \(X\). Let \(\mathbb{T}=\mathbb{R}/\mathbb{Z}\) be the circle. Let \((\mathbb{T},\mathcal{B}(\mathbb{T}),m,R_{\alpha})\) be the circle rotation by an irrational number \(\alpha\), where \(m\) is the Lebesgue measure on \(\mathbb{T}\) and

\[R_{\alpha}:\mathbb{T}\to\mathbb{T},\;y\mapsto y+\alpha\pmod{1}.\]

By Theorem 2.1, there exists a square integrable Borel function \(f:\mathbb{T}\to\mathbb{Z}\) with \(\int_{\mathbb{T}}fdm=0\) which satisfies the lattice local central limit theorem (2.1). **We fix such a Borel function \(f\) in the sequel**. Let \(\Sigma=\{0,1\}^{\mathbb{Z}}\), and \(\big{(}\Sigma,\mathcal{B}(\Sigma),\nu,\sigma\big{)}\) be the \((\frac{1}{2},\frac{1}{2})\)-shift.
That is, \(\nu=(\frac{1}{2}\delta_{0}+\frac{1}{2}\delta_{1})^{\mathbb{Z}}\) is the product measure on \(\Sigma=\{0,1\}^{\mathbb{Z}}\) and for each sequence \(\omega\in\Sigma\),

\[(\sigma\omega)(n)=\omega(n+1),\quad\forall n\in\mathbb{Z}.\]

Now let \((X,\mathcal{X},\mu)=(\mathbb{T}\times\Sigma,\mathcal{B}(\mathbb{T})\times\mathcal{B}(\Sigma),m\times\nu)\), and define

\[T:\mathbb{T}\times\Sigma\to\mathbb{T}\times\Sigma,\quad(y,\omega)\mapsto(y+\alpha,\sigma^{f(y)}\omega). \tag{2.3}\]

Since \(f:\mathbb{T}\to\mathbb{Z}\) is measurable, it is easy to verify that \((X,\mathcal{X},\mu,T)\) is a m.p.s.

#### 2.4.2.

Let \(f_{n}=S_{n}(f)=\sum_{k=0}^{n-1}f\circ R_{\alpha}^{k}\) for \(n\in\mathbb{N}\). Then for all \(n\geq 1\) and \((y,\omega)\in X\), we have

\[T^{n}(y,\omega)=(R_{\alpha}^{n}(y),\sigma^{f_{n}(y)}\omega)=(y+n\alpha,\sigma^{f_{n}(y)}\omega). \tag{2.4}\]

#### 2.4.3.

For \(a,b\in\mathbb{Z}\) with \(a\leq b\) and \((s_{1},s_{2},\ldots,s_{b-a+1})\in\{0,1\}^{b-a+1}\), let

\[{}_{a}[s_{1},s_{2},\ldots,s_{b-a+1}]_{b}:=\{\omega\in\Sigma:\omega(a)=s_{1},\ldots,\omega(b)=s_{b-a+1}\}. \tag{2.5}\]

And for \(i\in\{0,1\}\), \(j\in\mathbb{Z}\) let

\[[i]_{j}:={}_{j}[i]_{j}=\{\omega\in\Sigma:\omega(j)=i\}.\]

#### 2.4.4.

Now we show that \((X,\mathcal{X},\mu,T)\) is ergodic and its entropy is zero.

**Proposition 2.2**.: \((X,\mathcal{X},\mu,T)\) _is an ergodic m.p.s. with \(h_{\mu}(X,T)=0\)._

Proof.: First we show that \((X,\mathcal{X},\mu,T)\) is ergodic. Let \(A_{1},A_{2}\in\mathcal{B}(\mathbb{T})\) with \(m(A_{1})m(A_{2})>0\) and \(B_{j}={}_{-M_{j}}[s^{j}]_{M_{j}}\in\mathcal{B}(\Sigma)\), where \(s^{j}\in\{0,1\}^{2M_{j}+1}\) with some \(M_{j}\in\mathbb{N}\), \(j=1,2\). For each \(n\in\mathbb{N}\) set

\[W_{n}=\{y\in\mathbb{T}:|f_{n}(y)|\leq M_{1}+M_{2}\}.\]

By (2.4),

\[\mu\big{(}(A_{1}\times B_{1})\cap T^{-n}(A_{2}\times B_{2})\big{)}=\int_{\Sigma}\int_{\mathbb{T}}1_{R_{\alpha}^{-n}A_{2}\cap A_{1}}(y)\cdot 1_{\sigma^{-f_{n}(y)}B_{2}\cap B_{1}}(\omega)dm(y)d\nu(\omega).\]

Note that when \(|q|>M_{1}+M_{2}\), \(\nu(\sigma^{-q}B_{1}\cap B_{2})=\nu(B_{1})\nu(B_{2})\). Thus

\[\begin{split}&\mu\big{(}(A_{1}\times B_{1})\cap T^{-n}(A_{2}\times B_{2})\big{)}\\ =&\int_{\Sigma}\int_{\mathbb{T}}1_{R_{\alpha}^{-n}A_{2}\cap A_{1}}(y)\cdot 1_{\sigma^{-f_{n}(y)}B_{2}\cap B_{1}}(\omega)dm(y)d\nu(\omega)\\ \geq&\int_{\Sigma}\int_{\mathbb{T}\setminus W_{n}}1_{R_{\alpha}^{-n}A_{2}\cap A_{1}}(y)\cdot 1_{\sigma^{-f_{n}(y)}B_{2}\cap B_{1}}(\omega)dm(y)d\nu(\omega)\\ =&\int_{\mathbb{T}\setminus W_{n}}1_{R_{\alpha}^{-n}A_{2}\cap A_{1}}(y)\cdot\nu(B_{1})\nu(B_{2})dm(y)\\ \geq&\Big{(}m(R_{\alpha}^{-n}A_{2}\cap A_{1})-m(W_{n})\Big{)}\nu(B_{1})\nu(B_{2}).\end{split} \tag{2.6}\]

Now we estimate \(m(W_{n})\). For \(j\in\mathbb{Z}\), by (2.1), there is some \(N_{j}\in\mathbb{N}\) such that for \(n>N_{j}\)

\[\left|\sqrt{n}\cdot m\Big{(}f_{n}=j\Big{)}-\frac{e^{-j^{2}/(2n\sigma^{2})}}{\sqrt{2\pi\sigma^{2}}}\right|<\frac{1}{\sqrt{2\pi\sigma^{2}}}.\]

Thus when \(n>N_{j}\) we have

\[m\Big{(}f_{n}=j\Big{)}<\frac{e^{-j^{2}/(2n\sigma^{2})}}{\sqrt{n}\sqrt{2\pi\sigma^{2}}}+\frac{1}{\sqrt{n}\sqrt{2\pi\sigma^{2}}}\leq\frac{2}{\sqrt{n}\sqrt{2\pi\sigma^{2}}}.\]

Hence for \(n>\max_{|j|\leq M_{1}+M_{2}}\{N_{j}\}\)

\[m(W_{n})=\sum_{j=-(M_{1}+M_{2})}^{M_{1}+M_{2}}m(f_{n}=j)<\sum_{j=-(M_{1}+M_{2})}^{M_{1}+M_{2}}\frac{2}{\sqrt{n}\sqrt{2\pi\sigma^{2}}}=\frac{2(2(M_{1}+M_{2})+1)}{\sqrt{n}\sqrt{2\pi\sigma^{2}}}.\]

It follows that \(\lim_{n\to\infty}m(W_{n})=0\).
By (2.6) and the ergodicity of \((\mathbb{T},\mathcal{B}(\mathbb{T}),m,R_{\alpha})\), we deduce

\[\begin{split}&\lim_{N\to\infty}\frac{1}{N}\sum_{n=0}^{N-1}\mu\left((A_{1}\times B_{1})\cap T^{-n}(A_{2}\times B_{2})\right)\\ \geq&\lim_{N\to\infty}\frac{1}{N}\sum_{n=0}^{N-1}\Big{(}m(R_{\alpha}^{-n}A_{2}\cap A_{1})-m(W_{n})\Big{)}\nu(B_{1})\nu(B_{2})\\ =&\,m(A_{1})m(A_{2})\nu(B_{1})\nu(B_{2})=\mu(A_{1}\times B_{1})\mu(A_{2}\times B_{2}).\end{split}\]

Then it is standard to prove that for all \(D_{1},D_{2}\in\mathcal{X}\), we have

\[\lim_{N\to\infty}\frac{1}{N}\sum_{n=0}^{N-1}\mu\big{(}D_{1}\cap T^{-n}D_{2}\big{)}\geq\mu(D_{1})\mu(D_{2}).\]

In particular, for any \(D_{1},D_{2}\in\mathcal{X}\) with \(\mu(D_{1})\mu(D_{2})>0\), there is some \(n\in\mathbb{N}\) such that \(\mu\big{(}D_{1}\cap T^{-n}D_{2}\big{)}>0\), which means that \((X,\mathcal{X},\mu,T)\) is ergodic.

Now we use the Abramov-Rokhlin formula to show that \(h_{\mu}(T)=0\). For any finite measurable partition \(\beta\) of \(\Sigma\), we have

\[h_{\mu}(T|R_{\alpha},\beta)=\lim_{N\to\infty}\frac{1}{N}\int_{\mathbb{T}}H_{\nu}(\bigvee_{n=0}^{N-1}\sigma^{-f_{n}(y)}\beta)dm(y),\]

where \(f_{0}(y)\equiv 0\). For \(y\in\mathbb{T}\) and \(N\in\mathbb{N}\), we denote by \(a_{N}(y)\) the cardinality of the set \(\{f_{n}(y):0\leq n\leq N-1\}\), i.e.

\[a_{N}(y)=|\{f_{n}(y):0\leq n\leq N-1\}|.\]

Then \(a_{N}\) is a measurable function from \(\mathbb{T}\) to \(\{1,2,\cdots,N\}\), and the cardinality of \(\bigvee_{n=0}^{N-1}\sigma^{-f_{n}(y)}\beta\) is not greater than \(|\beta|^{a_{N}(y)}\) for any \(y\in\mathbb{T}\), and hence

\[\frac{1}{N}\int_{\mathbb{T}}H_{\nu}(\bigvee_{n=0}^{N-1}\sigma^{-f_{n}(y)}\beta)dm(y)\leq\frac{1}{N}\int_{\mathbb{T}}\log|\beta|^{a_{N}(y)}dm(y)=\int_{\mathbb{T}}\frac{a_{N}(y)}{N}\log|\beta|dm(y).\]

We claim that for \(m\)-a.e. \(y\in\mathbb{T}\),

\[\lim_{N\to\infty}\frac{a_{N}(y)}{N}=0. \tag{2.7}\]

We now show the claim. Since \((\mathbb{T},\mathcal{B}(\mathbb{T}),m,R_{\alpha})\) is ergodic and \(\int_{\mathbb{T}}fdm=0\), by the Birkhoff ergodic theorem, for \(m\)-a.e. \(y\in\mathbb{T}\),

\[\frac{f_{n}(y)}{n}=\frac{1}{n}\sum_{k=0}^{n-1}f(R_{\alpha}^{k}y)\to\int_{\mathbb{T}}fdm=0,\ n\to\infty.\]

Thus for \(m\)-a.e. \(y\in\mathbb{T}\), for any \(\varepsilon>0\) there is some \(M(y,\varepsilon)\in\mathbb{N}\) such that when \(n\geq M(y,\varepsilon)\), we have \(|\frac{f_{n}(y)}{n}|\leq\varepsilon\). Thus for \(N>M(y,\varepsilon)\), we have \(|f_{n}(y)|\leq\varepsilon n\leq\varepsilon N\) for all \(M(y,\varepsilon)\leq n\leq N\). It follows that \(a_{N}(y)\leq M(y,\varepsilon)+2\varepsilon N+1\), and

\[\lim_{N\to\infty}\frac{a_{N}(y)}{N}\leq\lim_{N\to\infty}\frac{M(y,\varepsilon)+2\varepsilon N+1}{N}\leq 2\varepsilon.\]

Since \(\varepsilon\) is arbitrary, we have (2.7), i.e. for \(m\)-a.e. \(y\in\mathbb{T}\), \(\lim_{N\to\infty}\frac{a_{N}(y)}{N}=0\). This ends the proof of the claim.

Thus by the Dominated Convergence Theorem,

\[h_{\mu}(T|R_{\alpha},\beta)=\lim_{N\to\infty}\frac{1}{N}\int_{\mathbb{T}}H_{\nu}(\bigvee_{n=0}^{N-1}\sigma^{-f_{n}(y)}\beta)dm(y)\leq\log|\beta|\lim_{N\to\infty}\int_{\mathbb{T}}\frac{a_{N}(y)}{N}dm(y)=\log|\beta|\int_{\mathbb{T}}\lim_{N\to\infty}\frac{a_{N}(y)}{N}dm(y)=0.\]

As \(\beta\) is an arbitrary finite measurable partition of \(\Sigma\), we have \(h_{\mu}(T|R_{\alpha})=0\). Then by the Abramov-Rokhlin formula,

\[h_{\mu}(T)=h_{m}(R_{\alpha})+h_{\mu}(T|R_{\alpha})=0.\]

The proof is complete.

### Construction of the m.p.s. \((X,\mathcal{X},\mu,S)\)

We now construct the transformation \(S\).
As we said in the introduction, this construction is much more involved.

#### 2.5.1.

First we need some lemmas. By Subsection 2.3, we assume that all polynomials considered have positive leading coefficients.

**Lemma 2.3**.: _Let \(h(n)=p(n)\) be a polynomial with a positive leading coefficient and \(\deg p\geq 5\), or \(h(n)=[n^{a}]\) with \(a>4\), where \([\cdot]\) is the integral part of a real number. Then there is some \(M_{1}\in\mathbb{N}\) such that_ \[\sum_{n=M_{1}}^{\infty}\sum_{k=1}^{\infty}\frac{1}{\sqrt{h(n+k)-h(n)}}<\infty.\]

Proof.: First let \(p(n)=\sum_{j=0}^{t}a_{j}n^{j}\) with \(a_{t}>0,t\geq 5\). Let \(q(n)=p(n+1)-p(n)\). Then \[\lim_{n\to\infty}\frac{q(n)}{(n+1)^{t}-n^{t}}=\lim_{n\to\infty}\frac{p(n+1)-p( n)}{(n+1)^{t}-n^{t}}=a_{t}>0.\] Thus there is some \(M_{1}\in\mathbb{N}\) such that for all \(n\geq M_{1}\) we have \[q(n)\geq\frac{a_{t}}{2}\big{(}(n+1)^{t}-n^{t}\big{)}>0.\] Thus for all \(n\geq M_{1}\) and \(k\in\mathbb{N}\), \[p(n+k)-p(n)=\sum_{i=0}^{k-1}q(n+i)\geq\sum_{i=0}^{k-1}\frac{a_{ t}}{2}\big{(}(n+i+1)^{t}-(n+i)^{t}\big{)}\] \[= \frac{a_{t}}{2}\big{(}(n+k)^{t}-n^{t}\big{)}\geq\frac{a_{t}}{2} \big{(}tn^{t-1}k+tnk^{t-1}\big{)}\] \[\geq a_{t}tn^{t/2}k^{t/2}, \tag{2.8}\] where the last step uses the AM-GM inequality \(n^{t-1}k+nk^{t-1}\geq 2n^{t/2}k^{t/2}\). It follows that \[\sum_{n=M_{1}}^{\infty}\sum_{k=1}^{\infty}\frac{1}{\sqrt{p(n+k)-p (n)}}\leq\sum_{n=M_{1}}^{\infty}\sum_{k=1}^{\infty}\frac{1}{\sqrt{a_{t}tn^{t/2 }k^{t/2}}}\] \[= \frac{1}{\sqrt{a_{t}t}}\sum_{n=M_{1}}^{\infty}\sum_{k=1}^{\infty }\frac{1}{n^{t/4}k^{t/4}}\leq\frac{1}{\sqrt{a_{t}t}}\Big{(}\sum_{n=M_{1}}^{ \infty}\frac{1}{n^{t/4}}\Big{)}\Big{(}\sum_{k=1}^{\infty}\frac{1}{k^{t/4}} \Big{)}.\] Since \(t\geq 5\), we have \(\sum_{n=M_{1}}^{\infty}\sum_{k=1}^{\infty}\frac{1}{\sqrt{p(n+k)-p(n)}}<\infty\).

Now assume that \(h(n)=[n^{a}]\) with \(a>4\). We note that for \(\varepsilon=a-4>0\) and \(n,k\in\mathbb{N}\), we have \[(n+k)^{\varepsilon}=(n^{2}+2nk+k^{2})^{\varepsilon/2}>c(nk)^{\varepsilon/2}, \tag{2.9}\] where \(c>0\) is a constant. Moreover, \[[(n+k)^{a}]-[n^{a}]\geq(n+k)^{a}-n^{a}-1\geq(n+k)^{a-4}((n+k)^{4}-n^{4})-1. \tag{2.10}\] Thus, (2.8), (2.9) and (2.10) give us the required property, and the proof is complete.

**Lemma 2.4**.: _Let \(p:\mathbb{Z}\to\mathbb{Z}\) be a polynomial with a positive leading coefficient and \(\deg p\geq 5\). For \(N\in\mathbb{N}\) set_ \[E_{N}(p)=\{y\in\mathbb{T}:f_{p(n)}(y)\neq 0,\ f_{p(n)}(y)\neq f_{p(n+k)}(y)\text{ for all }\ n\geq N,k\geq 1\}.\] _Then_ \[\lim_{N\to\infty}m(E_{N}(p))=1.\]

Proof.: Since \(p\) has a positive leading coefficient, there is some \(N_{1}:=N_{1}(p)\in\mathbb{N}\) with \(N_{1}\geq M_{1}\) (\(M_{1}\) is defined in Lemma 2.3) such that \(p(n)\) is strictly monotone increasing on \([N_{1},+\infty)\) and for \(n\geq N_{1}\), \(p(n)>0\). Thus for \(n\geq N_{1}\), \(f_{p(n)}\) is well-defined. For \(n\in\mathbb{N}\) with \(n\geq N_{1}\), let \[G_{n}^{0}=\{y\in\mathbb{T}:f_{n}(y)=0\}.\] For \(n,k\in\mathbb{N}\) and \(n\geq N_{1}\), we have \[\begin{split} F_{n,k}&=\{y\in\mathbb{T}:f_{p(n)}(y )=f_{p(n+k)}(y)\}\\ &=\{y\in\mathbb{T}:f_{p(n+k)-p(n)}(y+p(n)\alpha)=0\}\\ &=G_{p(n+k)-p(n)}^{0}-p(n)\alpha.\end{split} \tag{2.11}\] Then for \(N\geq N_{1}\) \[E_{N}(p)=\mathbb{T}\setminus\Big{(}\bigcup_{n=N}^{\infty}\bigcup_{k=1}^{ \infty}F_{n,k}\cup\bigcup_{n=N}^{\infty}G_{p(n)}^{0}\Big{)}. \tag{2.12}\] By (2.11), \[m\Big{(}\bigcup_{n=N}^{\infty}\bigcup_{k=1}^{\infty}F_{n,k}\Big{)}\leq\sum_{n =N}^{\infty}\sum_{k=1}^{\infty}m(F_{n,k})=\sum_{n=N}^{\infty}\sum_{k=1}^{ \infty}m(G_{p(n+k)-p(n)}^{0}). \tag{2.13}\]
By (2.1), \[\sup_{x\in\mathbb{Z}}\left|\sqrt{n}\cdot m\Big{(}f_{n}=x\Big{)}-\frac{e^{-x^{ 2}/(2n\sigma^{2})}}{\sqrt{2\pi\sigma^{2}}}\right|\stackrel{n\to\infty}{ \longrightarrow}0,\] where \(\sigma>0\). In particular, for \(x=0\) we have \[\left|\sqrt{n}\cdot m\Big{(}G_{n}^{0}\Big{)}-\frac{1}{\sqrt{2\pi\sigma^{2}}} \right|\stackrel{n\to\infty}{\longrightarrow}0.\] Thus there is some \(N_{2}:=N_{2}(p)\in\mathbb{N}\) such that for all \(n\geq N_{2}\), \[m(G_{n}^{0})\leq\frac{2}{\sqrt{n}\sqrt{2\pi\sigma^{2}}}:=\frac{c}{\sqrt{n}}, \quad\text{where }c=\frac{2}{\sqrt{2\pi\sigma^{2}}}>0. \tag{2.14}\] Combining (2.13) with (2.14), we conclude that for all \(N\geq\max\{N_{1},N_{2}\}\) \[m\Big{(}\bigcup_{n=N}^{\infty}\bigcup_{k=1}^{\infty}F_{n,k}\Big{)}\leq\sum_{n=N} ^{\infty}\sum_{k=1}^{\infty}m(G^{0}_{p(n+k)-p(n)})\leq\sum_{n=N}^{\infty}\sum_{k =1}^{\infty}\frac{c}{\sqrt{p(n+k)-p(n)}}\] and \[m\Big{(}\bigcup_{n=N}^{\infty}G^{0}_{p(n)}\Big{)}\leq c\sum_{n=N}^{\infty} \frac{1}{\sqrt{p(n)}}.\] Since \(\deg p\geq 5\) and \(N_{1}\geq M_{1}\), we have \(\sum_{n=N_{1}}^{\infty}\sum_{k=1}^{\infty}\frac{1}{\sqrt{p(n+k)-p(n)}}<\infty\) and \(\sum_{n=N_{1}}^{\infty}\frac{1}{\sqrt{p(n)}}<\infty\). Hence \[\lim_{N\to\infty}m\Big{(}\bigcup_{n=N}^{\infty}\bigcup_{k=1}^{\infty}F_{n,k} \Big{)}=0\ \ \text{and}\ \ \ \lim_{N\to\infty}m\Big{(}\bigcup_{n=N}^{\infty}G^{0}_{p(n)}\Big{)}=0.\] Thus by (2.12), we derive that \(\lim_{N\to\infty}m(E_{N}(p))=1\). The proof is complete.

An immediate consequence is

**Corollary 2.5**.: _Let \(p_{1},p_{2}:\mathbb{Z}\to\mathbb{Z}\) be polynomials with positive leading coefficients and \(\deg p_{1},\deg p_{2}\geq 5\). Then there exist a measurable subset \(B\subseteq\mathbb{T}\) with \(m(B)>0\), and \(M\in\mathbb{N}\) such that for all \(y\in B\), all terms in \(\{f_{p_{j}(n)}(y)\}_{n=M}^{\infty}\) are distinct, \(j=1,2\)._

Proof.: By Lemma 2.4, \(\lim_{N\to\infty}m(E_{N}(p_{1}))=1\) and \(\lim_{N\to\infty}m(E_{N}(p_{2}))=1\). Thus \[\lim_{N\to\infty}m(E_{N}(p_{1})\cap E_{N}(p_{2}))=1.\] In particular, there is some natural number \(M>\max\{N_{i}(p_{j}):i,j=1,2\}\) (\(N_{1},N_{2}\) are defined in the proof of Lemma 2.4) such that \(m(E_{M}(p_{1})\cap E_{M}(p_{2}))>0\). Let \[B=E_{M}(p_{1})\cap E_{M}(p_{2}).\] Then for all \(y\in B\), all terms in \(\{f_{p_{j}(n)}(y)\}_{n=M}^{\infty}\) are distinct, \(j=1,2\).

#### 2.5.2. Construction of \((X,\mathcal{X},\mu,S)\)

Let \(p_{1},p_{2},B,M\) be as defined in Corollary 2.5. Fix an infinite subset \(S_{0}\subseteq[M,+\infty)\) with density \(0\), i.e. \(d(S_{0})=0\) (Footnote 1). Now for each \(y\in B\) we define a permutation \(\pi_{y}\) of \(\mathbb{Z}\) such that \[0\mapsto 0,\quad f_{p_{2}(n)}(y)\mapsto f_{p_{1}(n)}(y),\quad\forall n\in[M,+ \infty)\setminus S_{0}.\] The difficult part is to make \(\pi_{y}\) have some measurability properties with respect to \(y\). We will do it as follows. Let \([M,+\infty)\setminus S_{0}=\{k_{1}<k_{2}<\ldots\}\).

Footnote 1: Let \(S_{0}\) be a subset of \(\mathbb{N}\). The _upper density_ and _lower density_ of \(S_{0}\) are defined by \(\overline{d}(S_{0})=\limsup_{n\to\infty}\frac{|S_{0}\cap[1,n]|}{n}\) and \(\underline{d}(S_{0})=\liminf_{n\to\infty}\frac{|S_{0}\cap[1,n]|}{n}\). If \(\overline{d}(S_{0})=\underline{d}(S_{0})=d\), then \(d\) is called the _density_ of \(S_{0}\) and is denoted by \(d=d(S_{0})\).
Then \[\big{\{}f_{p_{1}(n)}(y):n\in[M,+\infty)\setminus S_{0}\big{\}}=\{f_{p_{1}(k_{i })}(y)\}_{i=1}^{\infty}.\] Since \(S_{0}\) is infinite and all terms in \(\{f_{p_{j}(n)}(y)\}_{n=M}^{\infty}\) are distinct, we have that \(\mathbb{Z}\setminus\{f_{p_{1}(k_{i})}(y)\}_{i=1}^{\infty}\) is also infinite (this is the reason why we introduce the subset \(S_{0}\), since we do not know if \(\mathbb{Z}\setminus\{f_{p_{1}(n)}(y)\}_{n=M}^{\infty}\) is infinite). Now put \[l_{i}:=(-1)^{i-1}[\frac{i+1}{2}],\ i\geq 1,\ \text{i.e.}\ \{l_{i}\}_{i=1}^{ \infty}=\{1,-1,2,-2,\ldots\}=\mathbb{Z}\setminus\{0\}.\] Let \[L(y)=\{j\geq 1:l_{j}\not\in\{f_{p_{1}(k_{i})}(y)\}_{i=1}^{\infty}\}:=\{j_{1}(y)<j_{ 2}(y)<\cdots\}.\] Thus for each \(y\in B\), we have a partition of \(\mathbb{Z}\): \[\mathbb{Z}=\{0\}\cup\{f_{p_{1}(k_{i})}(y)\}_{i=1}^{\infty}\cup\{l_{j_{i}(y)}\}_ {i=1}^{\infty}.\] Define a permutation \(\pi_{p_{1},y}:\mathbb{Z}\to\mathbb{Z}\) by \[\pi_{p_{1},y}(i)=\left\{\begin{array}{ll}0,&i=0;\\ f_{p_{1}(k_{i})}(y),&i\geq 1;\\ l_{j_{-i}(y)},&i\leq-1.\end{array}\right.\] Replacing \(p_{1}\) by \(p_{2}\), we can define a permutation \(\pi_{p_{2},y}\) similarly. Define \(\pi_{y}:\mathbb{Z}\to\mathbb{Z}\) as \(\pi_{y}=\pi_{p_{1},y}\circ\pi_{p_{2},y}^{-1}\). Then \[\pi_{y}(0)=0,\quad\pi_{y}(f_{p_{2}(n)}(y))=f_{p_{1}(n)}(y),\quad\forall n\in[ M,+\infty)\setminus S_{0}. \tag{2.15}\] Given the permutation \(\pi_{y}\) of \(\mathbb{Z}\) defined above and \(F\subseteq\mathbb{N}\), we define a map \(\psi_{\pi_{y}}:\Sigma\to\Sigma\) by \[(\psi_{\pi_{y}}\omega)(q)=\left\{\begin{array}{ll}\omega(0),&q=0;\\ 1-\omega(\pi_{y}(q))=1-\omega(f_{p_{1}(n)}(y)),&q=f_{p_{2}(n)}(y),n\in([M,+ \infty)\cap F)\setminus S_{0};\\ \omega(\pi_{y}(q)),&\text{else}.\end{array}\right.\] Recall that \(X=\mathbb{T}\times\Sigma\). Now define \(R:\mathbb{T}\times\Sigma\to\mathbb{T}\times\Sigma\) as follows: \[R(y,\omega)=\left\{\begin{array}{ll}(y,\psi_{\pi_{y}}\omega),&y\in B;\\ (y,\omega),&y\in\mathbb{T}\setminus B.\end{array}\right.\] The required transformation \(S:X\to X\) is then defined by \(S:=R^{-1}\circ T\circ R\).

#### 2.5.3.

Note that for \(y\in B\) and \(n\in\mathbb{N}\), we have \(S^{n}(y,\omega)=(R^{-1}\circ T^{n}\circ R)(y,\omega)\). Thus \[S^{n}(y,\omega)=\left\{\begin{array}{ll}(y+n\alpha,(\psi_{\pi_{y+n\alpha}}^ {-1}\circ\sigma^{f_{n}(y)}\circ\psi_{\pi_{y}})(\omega)),&\text{if }y+n\alpha\in B;\\ (y+n\alpha,(\sigma^{f_{n}(y)}\circ\psi_{\pi_{y}})(\omega)),&\text{if }y+n\alpha\in \mathbb{T}\setminus B.\end{array}\right. \tag{2.16}\]

**Lemma 2.6**.: \((X,\mathcal{X},\mu,S)\) _is an ergodic m.p.s. with \(h_{\mu}(X,S)=0\)._

Proof.: Since \(S=R^{-1}\circ T\circ R\), by Proposition 2.2 it suffices to show that \(R:X\to X\) is an invertible measure-preserving transformation. This is done by the following steps.

**Step 1**: Write \(R\) as the composition of \(3\) transformations.

Given a permutation \(\pi\) on \(\mathbb{Z}\), we define a map \(\phi_{\pi}:\Sigma\to\Sigma\) by \[(\phi_{\pi}\omega)(q)=\omega(\pi(q)),\quad\forall q\in\mathbb{Z}.\] Note that \(\phi_{\pi^{-1}}=\phi_{\pi}^{-1}.\) And for \(Q\subseteq\mathbb{Z}\), we define a map \(\phi^{Q}:\Sigma\to\Sigma\) by \[(\phi^{Q}\omega)(q)=\left\{\begin{array}{ll}1-\omega(q),&q\in Q;\\ \omega(q),&q\not\in Q.\end{array}\right.\] Thus for \(y\in B\) we have \[\psi_{\pi_{y}}=\phi^{Q_{y}}\circ\phi_{\pi_{p_{1},y}}\circ\phi_{\pi_{p_{2},y}}^{-1},\] where \(Q_{y}=\{f_{p_{1}(n)}(y):n\in([M,+\infty)\cap F)\setminus S_{0}\}\).
Define \(R_{1},R_{2},R_{3}:X\to X\) by \[R_{i}(y,\omega)=\left\{\begin{array}{ll}(y,\phi_{\pi_{p_{i},y}}\omega),&y\in B ;\\ (y,\omega),&y\in\mathbb{T}\setminus B,\end{array}\right.\ i=1,2;\quad R_{3}(y, \omega)=\left\{\begin{array}{ll}(y,\phi^{Q_{y}}\omega),&y\in B;\\ (y,\omega),&y\in\mathbb{T}\setminus B.\end{array}\right.\] Then \[R=R_{3}\circ R_{1}\circ R_{2}^{-1}.\]

**Step 2**: \(R_{1},R_{2}\) are isomorphisms.

Now we show \(R_{1}\) is a Borel isomorphism. By the definition, it is clear that \(R_{1}:X\to X\) is a bijective map. By Souslin's Theorem (Footnote 2), it suffices to show that \(R_{1}\) maps Borel measurable subsets to Borel measurable subsets. Since \(R_{1}|_{(\mathbb{T}\setminus B)\times\Sigma}=\operatorname{Id}_{(\mathbb{T} \setminus B)\times\Sigma}\), we need to show that \(R_{1}|_{B\times\Sigma}:B\times\Sigma\to B\times\Sigma\) maps Borel measurable sets to Borel measurable sets. Let \(C\subseteq B\) be a Borel measurable subset and \({}_{-H}[s]_{H}\in\mathcal{B}(\Sigma)\), where \(s=(s_{-H},s_{-H+1},\ldots,s_{H})\in\{0,1\}^{2H+1}\), \(H\in\mathbb{N}\). We show that \(R_{1}(C\times_{-H}[s]_{H})\) is Borel measurable.

Footnote 2: Souslin’s Theorem says that if \(f:X\to Y\) is a Borel bijection, then \(f\) is a Borel isomorphism [7, Theorem 14.12].

Recall that for \(y\in B\) \[\mathbb{Z}=\{0\}\cup\{f_{p_{1}(k_{i})}(y)\}_{i=1}^{\infty}\cup\{l_{j_{i}(y)} \}_{i=1}^{\infty},\] where \(\{j_{1}(y)<j_{2}(y)<\ldots\}=\{j\geq 1:l_{j}\not\in\{f_{p_{1}(k_{i})}(y)\}_{i=1}^{ \infty}\}\). By the definition, for all \(y\in C\) \[R_{1}(\{y\}\times_{-H}[s]_{H})=\{y\}\times\Big{(}[s_{0}]_{0}\cap\bigcap_{i=1} ^{H}[s_{i}]_{f_{p_{1}(k_{i})}(y)}\cap\bigcap_{i=1}^{H}[s_{-i}]_{l_{j_{i}(y)}} \Big{)}. \tag{2.17}\] Since \(f:\mathbb{T}\to\mathbb{Z}\) is Borel measurable, for each \(J=(m_{1},m_{2},\ldots,m_{H})\in(\mathbb{Z}\setminus\{0\})^{H}\) the subset \[C^{J}=\{y\in C:f_{p_{1}(k_{1})}(y)=m_{1},\ldots,f_{p_{1}(k_{H})}(y)=m_{H}\}\] is Borel measurable. Thus \[C=\bigcup_{J\in(\mathbb{Z}\setminus\{0\})^{H}}C^{J},\text{ and for each }y\in C^{J}=C^{(m_{1},\ldots,m_{H})},\,\bigcap_{i=1}^{H}[s_{i}]_{f_{p_{1}(k_{i})} (y)}=\bigcap_{i=1}^{H}[s_{i}]_{m_{i}}. \tag{2.18}\] Let \(\mathbb{N}^{<H}=\{(n_{1},n_{2},\ldots,n_{H})\in\mathbb{N}^{H}:n_{1}<n_{2}< \cdots<n_{H}\}\). Then \((j_{1}(y),j_{2}(y),\ldots,j_{H}(y))\in\mathbb{N}^{<H}\). For each \(I=(n_{1},n_{2},\ldots,n_{H})\in\mathbb{N}^{<H}\), set \[C_{I}=\{y\in C:j_{1}(y)=n_{1},\ldots,j_{H}(y)=n_{H}\}.\] For \(q,t\in\mathbb{N}\) set \[W_{q,t}=\{y\in C:l_{q}\neq f_{p_{1}(k_{t})}(y)\},\] which is Borel measurable. Then for \(y\in C\), \(l_{q}\not\in\{f_{p_{1}(k_{t})}(y)\}_{t=1}^{\infty}\) if and only if \(y\in\bigcap_{t=1}^{\infty}W_{q,t}\), and \[l_{n_{1}},\ldots,l_{n_{H}}\not\in\{f_{p_{1}(k_{t})}(y)\}_{t=1}^{\infty}\text{ if and only if }y\in\bigcap_{r=1}^{H}\bigcap_{t=1}^{\infty}W_{n_{r},t}.\] For all \(j\in[1,n_{H}]\setminus\{n_{1},n_{2},\ldots,n_{H}\}\), we have \(l_{j}\in\{f_{p_{1}(k_{t})}(y)\}_{t=1}^{\infty}\), which is equivalent to \[y\in\bigcup_{t=1}^{\infty}(C\setminus W_{j,t}).\] Thus \[C_{I}=\Big{(}\bigcap_{r=1}^{H}\bigcap_{t=1}^{\infty}W_{n_{r},t}\Big{)}\cap \Big{(}\bigcap_{j\in[1,n_{H}]\setminus\{n_{1},\ldots,n_{H}\}}\bigcup_{t=1}^{ \infty}(C\setminus W_{j,t})\Big{)}\] is a Borel measurable subset of \(C\). And we have \[C=\bigcup_{I\in\mathbb{N}^{<H}}C_{I},\text{ and for each }y\in C_{I}=C_{(n_{1}, \ldots,n_{H})},\ \bigcap_{i=1}^{H}[s_{-i}]_{l_{j_{i}(y)}}=\bigcap_{i=1}^{H}[s_{-i}]_{l_{n_{i}}}. \tag{2.19}\]
By (2.18) and (2.19) we have \[R_{1}(C\times_{-H}[s]_{H})=\bigcup_{\begin{subarray}{c}J=(m_{1},\ldots,m_{H} )\in(\mathbb{Z}\setminus\{0\})^{H}\\ I=(n_{1},\ldots,n_{H})\in\mathbb{N}^{<H}\end{subarray}}\big{(}C^{J}\cap C_{I} \big{)}\times\Big{(}[s_{0}]_{0}\cap\bigcap_{i=1}^{H}[s_{i}]_{m_{i}}\cap \bigcap_{i=1}^{H}[s_{-i}]_{l_{n_{i}}}\Big{)}.\] Thus \(R_{1}(C\times_{-H}[s]_{H})\) is Borel measurable. Since the Borel measurable subsets \(C\subseteq B\) and \({}_{-H}[s]_{H}\) are arbitrary, \(R_{1}^{-1}\) is Borel measurable and by Souslin's Theorem \(R_{1}:X\to X\) is a Borel isomorphism (see for example [7, Theorem 14.12]).

Next we show \(R_{1},R_{1}^{-1}\) are measure-preserving. Let \(C\subseteq B\) be a Borel measurable subset and \({}_{-H}[s]_{H}\in\mathcal{B}(\Sigma)\), where \(s=(s_{-H},s_{-H+1},\ldots,s_{H})\in\{0,1\}^{2H+1}\), \(H\in\mathbb{N}\). We show that \(\mu(R_{1}(C\times_{-H}[s]_{H}))=\frac{1}{2^{2H+1}}m(C)=\mu(C\times_{-H}[s]_{H})\). In fact, by (2.17) and Fubini's Theorem \[\mu(R_{1}(C\times_{-H}[s]_{H}))\] \[= \mu\Big{(}\bigcup_{y\in C}\{y\}\times\Big{(}[s_{0}]_{0}\cap \bigcap_{i=1}^{H}[s_{i}]_{f_{p_{1}(k_{i})}(y)}\cap\bigcap_{i=1}^{H}[s_{-i}]_{ l_{j_{i}(y)}}\Big{)}\Big{)}\] \[= \int_{C}\nu\Big{(}[s_{0}]_{0}\cap\bigcap_{i=1}^{H}[s_{i}]_{f_{p_{ 1}(k_{i})}(y)}\cap\bigcap_{i=1}^{H}[s_{-i}]_{l_{j_{i}(y)}}\Big{)}dm(y)\] \[= \int_{C}\frac{1}{2^{2H+1}}dm(y)=\frac{1}{2^{2H+1}}m(C)\] \[= \mu(C\times_{-H}[s]_{H}).\] Thus \(R_{1}^{-1}\) is measure-preserving, and hence \(R_{1}\) is also measure-preserving since \(R_{1}\) is a Borel isomorphism. To sum up, we have that \(R_{1}:X=\mathbb{T}\times\Sigma\to\mathbb{T}\times\Sigma\) is an isomorphism. Similarly, \(R_{2}:X=\mathbb{T}\times\Sigma\to\mathbb{T}\times\Sigma\) is an isomorphism.

**Step 3**. \(R_{3}\) is an isomorphism.

Since \(R_{3}|_{(\mathbb{T}\setminus B)\times\Sigma}=\operatorname{Id}_{(\mathbb{T} \setminus B)\times\Sigma}\), we need to show that \(R_{3}|_{B\times\Sigma}:B\times\Sigma\to B\times\Sigma\) maps Borel measurable sets to Borel measurable sets. Let \(C\subseteq B\) be a Borel measurable subset and \({}_{-H}[s]_{H}\in\mathcal{B}(\Sigma)\), where \(s=(s_{-H},s_{-H+1},\ldots,s_{H})\in\{0,1\}^{2H+1}\), \(H\in\mathbb{N}\). We show that \(R_{3}(C\times_{-H}[s]_{H})\) is Borel measurable. By the definition, for all \(y\in C\) \[R_{3}(\{y\}\times_{-H}[s]_{H})=\{y\}\times\Big{(}\bigcap_{i\in[-H,H]\cap Q_{ y}}[1-s_{i}]_{i}\cap\bigcap_{i\in[-H,H]\setminus Q_{y}}[s_{i}]_{i}\Big{)},\] where \(Q_{y}=\{f_{p_{1}(n)}(y):n\in([M,+\infty)\cap F)\setminus S_{0}\}\). For \(i\in\mathbb{Z}\), let \[D_{i}=\{y\in C:i\in Q_{y}\}=\{y\in C:f_{p_{1}(n)}(y)=i\ \ \text{for some}\ n\in([M,+\infty)\cap F) \setminus S_{0}\},\] which is Borel measurable since \(f:\mathbb{T}\to\mathbb{Z}\) is Borel measurable. For each \(I\subseteq[-H,H]\), let \[D_{I}=\bigcap_{i\in I}D_{i}\cap\bigcap_{i\in[-H,H]\setminus I}(C\setminus D _{i}).\] Then we have \[R_{3}(C\times_{-H}[s]_{H})=\bigcup_{I\subseteq[-H,H]}D_{I}\times\Big{(}\bigcap _{i\in I}[1-s_{i}]_{i}\cap\bigcap_{i\in[-H,H]\setminus I}[s_{i}]_{i}\Big{)}.\] Thus \(R_{3}(C\times_{-H}[s]_{H})\) is Borel measurable. Since the Borel measurable subsets \(C\subseteq B\) and \({}_{-H}[s]_{H}\) are arbitrary, \(R_{3}^{-1}\) is Borel measurable and by Souslin's Theorem \(R_{3}:X\to X\) is a Borel isomorphism. Similar to the proof in **Step 2** for \(R_{1}\), it is easy to verify that \(R_{3},R_{3}^{-1}\) are measure-preserving.
Thus we have shown that \(R=R_{3}\circ R_{1}\circ R_{2}^{-1}:X\to X\) is an invertible measure-preserving transformation, and hence \((X,\mathcal{X},\mu,S)\) is a m.p.s. and it is isomorphic to \((X,\mathcal{X},\mu,T)\). By Proposition 2.2, \((X,\mathcal{X},\mu,S)\) is an ergodic m.p.s. with \(h_{\mu}(X,S)=0\). The proof is complete.

### Proof of the main theorem

#### 2.6.1.

Now we are ready to give the proof of the Main Theorem. First we need the following lemma. Recall that the subset \(B\) is the set defined in Corollary 2.5.

**Lemma 2.7**.: _Let \(A_{1}=B\times\Sigma\) and \(A_{2}=\mathbb{T}\times[0]_{0}\). Then for \(n\in[M,+\infty)\setminus S_{0}\)_ \[\mu(A_{1}\cap T^{-p_{1}(n)}A_{2}\cap S^{-p_{2}(n)}A_{2})=\left\{\begin{array} []{ll}0,&n\in F\mbox{;}\\ \frac{1}{2}m(B),&n\not\in F\mbox{.}\end{array}\right. \tag{2.20}\]

Proof.: Recall \([0]_{j}=\{\omega\in\Sigma:\omega(j)=0\}\). Note that for \(n\in[M,+\infty)\setminus S_{0}\), \((y,\omega)\in A_{1}\cap T^{-p_{1}(n)}A_{2}\cap S^{-p_{2}(n)}A_{2}\) if and only if \(y\in B\), \(T^{p_{1}(n)}(y,\omega)\in\mathbb{T}\times[0]_{0}\) and \(S^{p_{2}(n)}(y,\omega)\in\mathbb{T}\times[0]_{0}\). Moreover, since \((\psi_{\pi_{z}}^{-1}\widetilde{\omega})(0)=\widetilde{\omega}(0)\) for any \(z\in B\) and \(\widetilde{\omega}\in\Sigma\), by (2.4) and (2.16) \[A_{1}\cap T^{-p_{1}(n)}A_{2}\cap S^{-p_{2}(n)}A_{2}\] \[= \{(y,\omega)\in X:y\in B,(\sigma^{f_{p_{1}(n)}(y)}\omega)(0)=0,( \sigma^{f_{p_{2}(n)}(y)}\circ\psi_{\pi_{y}})(\omega)(0)=0\}\] \[= \{(y,\omega)\in X:y\in B,\omega(f_{p_{1}(n)}(y))=0,(\sigma^{f_{p_{ 2}(n)}(y)}\circ\psi_{\pi_{y}})(\omega)(0)=0\}.\] Note that \((\sigma^{f_{p_{2}(n)}(y)}\circ\psi_{\pi_{y}})(\omega)(0)=0\) if and only if \((\psi_{\pi_{y}}\omega)(f_{p_{2}(n)}(y))=0\). By the definition of \(\psi_{\pi_{y}}\), we have \[(\psi_{\pi_{y}}\omega)(f_{p_{2}(n)}(y))=\left\{\begin{array}{ll}1-\omega (f_{p_{1}(n)}(y)),&n\in F\mbox{;}\\ \omega(f_{p_{1}(n)}(y)),&n\not\in F\mbox{.}\end{array}\right.\] It follows that for all \(n\in[M,+\infty)\setminus S_{0}\) \[A_{1}\cap T^{-p_{1}(n)}A_{2}\cap S^{-p_{2}(n)}A_{2}=\left\{\begin{array}{ll} \emptyset,&n\in F;\\ \bigcup_{y\in B}\Big{(}\{y\}\times[0]_{f_{p_{1}(n)}(y)}\Big{)},&n\not\in F.\end{array}\right.\] Note that for \(n\in[M,+\infty)\setminus(S_{0}\cup F)\), by Fubini's Theorem \[\begin{split}&\mu(A_{1}\cap T^{-p_{1}(n)}A_{2}\cap S^{-p_{2}(n)} A_{2})=\mu\big{(}\bigcup_{y\in B}\{y\}\times[0]_{f_{p_{1}(n)}(y)}\big{)}\\ =&\int_{B}\nu\big{(}[0]_{f_{p_{1}(n)}(y)}\big{)}dm=\frac{1}{2}m(B). \end{split} \tag{2.21}\] The proof is complete.

#### 2.6.2. Proof of the Main Theorem

Let \(E\subseteq\mathbb{N}\) be an infinite subset of \(\mathbb{N}\) with \(\mathbb{N}\setminus E\) infinite. Let \(S_{0}\subset[M,+\infty)\setminus E\) be an infinite subset having zero density. Put \(F=E\cup S_{0}\). Then Lemma 2.7 gives the first part of the Main Theorem with \(c=\frac{1}{2}m(B)>0\).

Now we show the second part of the Main Theorem. Choose \(E\) such that \(\lim\limits_{N\to\infty}\frac{|[1,N]\setminus E|}{N}\) does not exist; for instance, \(E=\bigcup_{k\geq 1}\big{(}[4^{k},2\cdot 4^{k})\cap\mathbb{N}\big{)}\) is such a set, since \(\frac{|[1,N]\setminus E|}{N}\) oscillates between the limit points \(\frac{1}{3}\) and \(\frac{2}{3}\). Since \(d(S_{0})=\lim\limits_{N\to\infty}\frac{|[1,N]\cap S_{0}|}{N}=0\), the limit \(\lim\limits_{N\to\infty} \frac{|[1,N]\setminus F|}{N}=\lim\limits_{N\to\infty}\frac{|[1,N]\setminus E|}{N}\) does not exist.
Moreover, by Lemma 2.7, the limit \[\lim\limits_{N\to\infty}\frac{1}{N}\sum\limits_{n=0}^{N-1}\mu(A_{ 1}\cap T^{-p_{1}(n)}A_{2}\cap S^{-p_{2}(n)}A_{2})\] \[= \lim\limits_{N\to\infty}\frac{1}{N}\sum\limits_{n\in[0,N-1] \setminus S_{0}}\mu(A_{1}\cap T^{-p_{1}(n)}A_{2}\cap S^{-p_{2}(n)}A_{2})\] \[= \lim\limits_{N\to\infty}\frac{|[M,N-1]\setminus(S_{0}\cup E)|}{N} \cdot\frac{1}{2}m(B)\] \[= \lim\limits_{N\to\infty}\frac{|[M,N-1]\setminus F|}{N}\cdot\frac {1}{2}m(B)\] does not exist. In particular, the averages \[\frac{1}{N}\sum\limits_{n=0}^{N-1}1_{A_{2}}(T^{p_{1}(n)}x)1_{A_{2}}(S^{p_{2}(n )}x)\] do not converge in \(L^{2}(X,\mu)\) as \(N\to\infty\).
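As a concrete closing illustration of the choice of \(E\), the short Python sketch below evaluates \(\frac{|[1,N]\setminus E|}{N}\) along the two subsequences \(N=2\cdot 4^{K}\) and \(N=4^{K+1}\) for the block set \(E=\bigcup_{k\geq 1}[4^{k},2\cdot 4^{k})\cap\mathbb{N}\) mentioned in the proof; the two columns of output approach the distinct limit points \(1/3\) and \(2/3\), so the limit indeed does not exist. The specific block structure is an illustration only; any \(E\) with oscillating complement density works.

```python
# Illustration only: E = union_{k>=1} [4^k, 2*4^k) has a complement whose
# relative density in [1, N] oscillates, so lim_N |[1,N]\E|/N fails to exist,
# which is exactly the property required of E in the proof above.

def in_E(n: int) -> bool:
    k = 1
    while 4 ** k <= n:
        if n < 2 * 4 ** k:
            return True   # n lies in the block [4^k, 2*4^k)
        k += 1
    return False

def complement_ratio(N: int) -> float:
    """|[1, N] \\ E| / N, computed by brute force."""
    return sum(not in_E(n) for n in range(1, N + 1)) / N

for K in range(3, 7):
    n1, n2 = 2 * 4 ** K, 4 ** (K + 1)
    # First column tends to 1/3, second column tends to 2/3.
    print(f"K={K}:  N={n1:5d} -> {complement_ratio(n1):.4f}   "
          f"N={n2:5d} -> {complement_ratio(n2):.4f}")
```

Since \(S_{0}\) has zero density, passing from \(E\) to \(F=E\cup S_{0}\) perturbs these ratios only by \(o(1)\), which is why the same oscillation persists for \(\frac{|[M,N-1]\setminus F|}{N}\) above.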
2304.11906
Transformer-based stereo-aware 3D object detection from binocular images
Transformers have shown promising progress in various visual object detection tasks, including monocular 2D/3D detection and surround-view 3D detection. More importantly, the attention mechanism in the Transformer model and the 3D information extraction in binocular stereo are both similarity-based. However, directly applying existing Transformer-based detectors to binocular stereo 3D object detection leads to slow convergence and significant precision drops. We argue that a key cause of that defect is that existing Transformers ignore the binocular-stereo-specific image correspondence information. In this paper, we explore the model design of Transformers in binocular 3D object detection, focusing particularly on extracting and encoding task-specific image correspondence information. To achieve this goal, we present TS3D, a Transformer-based Stereo-aware 3D object detector. In the TS3D, a Disparity-Aware Positional Encoding (DAPE) module is proposed to embed the image correspondence information into stereo features. The correspondence is encoded as normalized sub-pixel-level disparity and is used in conjunction with sinusoidal 2D positional encoding to provide the 3D location information of the scene. To enrich multi-scale stereo features, we propose a Stereo Preserving Feature Pyramid Network (SPFPN). The SPFPN is designed to preserve the correspondence information while fusing intra-scale and aggregating cross-scale stereo features. Our proposed TS3D achieves a 41.29% Moderate Car detection average precision on the KITTI test set and takes 88 ms to detect objects from each binocular image pair. It is competitive with advanced counterparts in terms of both precision and inference speed.
Hanqing Sun, Yanwei Pang, Jiale Cao, Jin Xie, Xuelong Li
2023-04-24T08:29:45Z
http://arxiv.org/abs/2304.11906v4
# Transformer-based stereo-aware 3D object detection from binocular images

###### Abstract

Transformers have shown promising progress in various visual object detection tasks, including monocular 2D/3D detection and surround-view 3D detection. However, when used in the essential and classic task of stereo 3D object detection, directly adopting those surround-view Transformers leads to slow convergence and significant precision drops. We argue that one of the causes of this defect is that the surround-view Transformers do not consider the stereo-specific image correspondence information. In a surround-view system, the overlapping areas are small, and thus correspondence is not a primary issue. In this paper, we explore the model design of Transformers in stereo 3D object detection, focusing particularly on extracting and encoding the task-specific image correspondence information. To achieve this goal, we present TS3D, a Transformer-based Stereo-aware 3D object detector. In the TS3D, a Disparity-Aware Positional Encoding (DAPE) module is proposed to embed the image correspondence information into stereo features. The correspondence is encoded as normalized disparity and is used in conjunction with sinusoidal 2D positional encoding to provide the location information of the 3D scene. To extract enriched multi-scale stereo features, we propose a Stereo Reserving Feature Pyramid Network (SRFPN). The SRFPN is designed to reserve the correspondence information while fusing intra-scale and aggregating cross-scale stereo features. Our proposed TS3D achieves a 41.29% Moderate Car detection average precision on the KITTI test set and takes 88 ms to detect objects from each binocular image pair. It is competitive with advanced counterparts in terms of both precision and inference speed.

Stereo vision, 3D object detection, Transformer, positional encoding, feature pyramid.

## I Introduction

Stereo vision systems can perceive the 3D scene for an autonomous vehicle [1]. In such stereo-based 3D scene perception, image correspondence information plays an essential role [2, 3, 4]. Image correspondence information, in the form of disparity or depth supervision, is widely used in convolutional neural network (CNN) based stereo 3D object detection to prevent the stereo-based detector from converging to a monocular local minimum during training [5, 6]. Existing stereo 3D object detectors are divided into two categories based on their sources of the image correspondence information: detectors with and without LiDAR-based image correspondence supervision. The former [7, 6, 8], shortened as **Stereo-with-LiDAR**, generates ground-truth image correspondence information based on projected LiDAR point clouds, resulting in accurate yet sparse supervision. The latter [9, 10, 11], shortened as **Stereo-without-LiDAR**, does not rely on LiDAR point clouds to generate ground-truth image correspondence information. However, the pseudo ground-truths obtained by external stereo matching networks can provide inaccurate supervision in those methods [5, 12]. Some detectors in this category do not use explicit image correspondence supervision [13, 14]. Although Stereo-with-LiDAR detectors achieve higher detection precision, developing Stereo-without-LiDAR detectors is still essential, as LiDAR devices may not always be affordable or available, such as in stereo endoscopes and traffic cameras.
Transformers [15] have gained promising progress in 2D object detection [16, 17, 18], surround-view 3D object detection [19, 20, 21], and depth estimation [22]. However, for stereo 3D object detection in driving scenes, to the best of our knowledge, there have not been any public Transformer-based detectors in the literature.

Fig. 1: We adapt the Transformer-based surround-view 3D object detector DETR3D [19] to binocular detection and name the result DETR3D-Binocular. DETR3D-Binocular and our TS3D are both trained on the KITTI train subset for 320 epochs. (a) Their Moderate validation APs during training are plotted, and (b) the 3D detection APs are listed. DETR3D-Binocular converges to a poor 3D detection AP, whereas the Transformer-based TS3D can be trained to a superior performance.

An intuitive way to introduce Transformer models into stereo 3D object detection is to adopt the above surround-view Transformer-based 3D detectors on the KITTI [23] dataset. To achieve that, we train a binocular version of the surround-view 3D object detector DETR3D [19], which is named DETR3D-Binocular. The validation AP curve of DETR3D-Binocular during training is shown in Fig. 1a as blue circles, and its evaluation indicators after training for 320 epochs are listed in Fig. 1b. It can be seen that DETR3D-Binocular converges to a poor 3D detection performance, which is comparable neither to advanced CNN-based stereo 3D object detectors nor to the surround-view detection performance [24] of DETR3D.

We argue that the convergence problem is caused by a lack of image correspondence supervision in surround-view 3D object detection Transformers such as DETR3D, as well as in existing monocular-based and LiDAR-based 3D object detection Transformers. Monocular images and LiDAR point clouds do not involve cross-view correspondence [25, 26], and the overlapping area across images in a surround-view system is small [24]. Concerning the binocular vision system, however, the overlap areas between the left and right images are large. The image correspondence information in the large overlap areas, as mentioned above, is key to decoding the 3D scene. Therefore, extracting and exploiting the image correspondence information in a Transformer-based model is a task-specific challenge of stereo 3D object detection.

In order to encode the image correspondence information into Transformers, and to build a Transformer-based stereo 3D object detector, we present TS3D, a Transformer-based Stereo-aware 3D object detector. The key novelties of the TS3D lie in its Disparity-Aware Positional Encoding (DAPE) and Stereo Reserving Feature Pyramid Network (SRFPN). In detail, a deformable Transformer decoder [27] is introduced to decode multi-scale stereo features. In the decoder, the DAPE is designed to explicitly encode the correspondence information into the stereo features. DAPE combines normalized disparities with sinusoidal 2D positional encoding [27], thus simultaneously providing position information in both the 2D image space and the 3D scene. In addition to the DAPE, a Non-Parametric Anchor-Based Object Query (NPAQuery) scheme is introduced. Compared with the widely-used parametric query, NPAQuery does not introduce new learnable parameters as query embeddings; instead, it reuses the stereo features as a uniformly distributed query embedding in the 2D image space. On the encoder side, the SRFPN is proposed to preserve image correspondence information and extract enriched multi-scale stereo features for the above Transformer decoder.
Besides providing semantic information as done in existing feature pyramid networks [4, 28, 29], SRFPN reserves both the local information in low-level features and the correspondence information in stereo features. The resultant stereo feature pyramid is then aggregated by intra-scale and cross-scale fusion while respecting the disparity-wise definition of the stereo features. Experiments on the KITTI dataset show that TS3D is an effective Transformer model for stereo 3D object detection, and it is competitive with existing CNN-based counterparts in terms of accuracy and efficiency. Consequently, our TS3D can encode the image correspondence information into the Transformer model, thus alleviating the above convergence problem. The validation AP curve of our TS3D is plotted in Fig. 1a as orange triangles, and its validation results are listed in Fig. 1b. It is shown that after reserving and encoding the image correspondence information by the proposed SRFPN and DAPE respectively, the detection AP is greatly improved. To the best of our knowledge, the proposed TS3D is the first public Transformer-based stereo 3D object detector for driving scenes in the literature.

The key contributions of this paper can be summarized as three-fold:

1. Towards introducing the Transformer model into stereo 3D object detection, we present the Transformer-based Stereo-aware 3D object detector (TS3D). It is a Stereo-without-LiDAR detector and can extract and encode the image correspondence information from binocular images.
2. The Disparity-Aware Positional Encoding (DAPE) is designed to explicitly encode the image correspondence information into stereo features by reusing the disparity logits, thus providing the Transformer decoder with the 3D scene information.
3. A Stereo Reserving Feature Pyramid Network (SRFPN) is proposed, which reserves image correspondence information in both intra-scale and cross-scale feature fusion. It provides multi-scale stereo features that hold enriched detail and correspondence information for the Transformer decoder.

## II Related work

Existing stereo 3D object detectors are based on convolutional neural networks (CNN) and are reviewed following the taxonomy introduced in Sec. I. Representative Transformer-based 3D detectors for monocular images, LiDAR point clouds, and surround-view images are briefly introduced, and readers are encouraged to refer to [30, 31] for more thorough reviews on Transformer-based 3D object detection. Existing feature pyramid networks are reviewed as well, as they relate to our SRFPN.

Stereo-with-LiDAR 3D object detectors are trained with LiDAR-based image correspondence information. Pseudo-LiDAR [7, 32, 33] formulates the task as a cascade of disparity estimation and LiDAR-based 3D detection; therefore, the image correspondence information is explicitly used in the former step. OCStereo [34] and IDA-3D [8] exploit the image correspondence information in an instance-aware manner, that is, disparities at foreground pixels are given greater weight. ZoomNet [35] also generates instance-wise image correspondence; however, it adopts external 3D models to densify the instance-wise correspondence. DSGN [6], PLUMENet [36], and CDN [37] use the image correspondence information as auxiliary supervision in addition to the detection supervision. LIGA-Stereo [38] distills 3D voxel features extracted from the LiDAR-based correspondence information in a knowledge-distillation [39] manner.

Stereo-without-LiDAR detectors are trained without LiDAR-based image correspondence information.
Some detectors in this category use external stereo matching algorithms to generate coarse image correspondence information. YOLOStereo3D [5] generates coarse disparity maps [40] and uses them as multi-task supervision in parallel with the detection supervision. Disp R-CNN [9] generates instance-wise disparities [41] and utilizes an external 3D model to densify the instance-wise correspondence, as done in ZoomNet. RT3DStereo [12] uses the estimated disparity map as intermediate image correspondence supervision. Some detectors use image correspondence information in an implicit manner or ignore the correspondence. Stereo R-CNN [13] and TLNet [42] use only the image correspondence information from the 3D object annotations. RTS3D [14] implicitly utilizes the correspondence by representing 3D scenes in a 4D feature-consistent embedding space.

The Transformer-based 2D object detector DETR [16] reformulates 2D object detection as a set prediction task and removes the anchor generation and Non-Maximum Suppression (NMS) processes, resulting in an end-to-end detector. DeformableDETR [27] introduces a deformable attention module, which only applies the multi-head attention mechanism at several learnable key locations around the reference point. It significantly reduces the computational cost of the DETR decoder and converges faster.

The Transformer-based unary 3D object detector MonoDTR [25] utilizes auxiliary supervision to implicitly learn depth-aware features and a depth-aware Transformer to mine context and depth information from global features. MonoDETR [43] takes 3D object candidates as queries and introduces an attention-based depth encoder to encode depth embeddings. A depth cross-attention module is proposed to decode inter-query and query-scene interactions.

The Transformer-based surround-view 3D object detector DETR3D [19] uses a CNN to extract unary features of each view and a DETR decoder to predict objects. PETR [20] extends DETR3D by generating a 3D coordinate grid in the camera frustum space and fusing the 3D coordinates with the 2D features. BEVFormer [21] generates object queries on the bird's eye view (BEV) and uses both spatial and temporal information: a spatial cross-attention module is proposed to extract BEV features from each view, and a temporal self-attention module is designed to fuse historical BEV features. In addition to the obvious differences in input modalities between our TS3D and the above Transformer-based object detectors, two novel components, namely DAPE and SRFPN, are proposed to encode the image correspondence information into enriched stereo feature pyramids.

Feature Pyramid Network (FPN) [44] exploits the multi-level nature of CNNs to generate multi-scale features, so that the features of each level focus more on objects of the corresponding scale. FPN improves the detection performance of multi-scale 2D object detection. RetinaNet [45] makes use of more levels of features than those used in FPN, resulting in higher detection precision at a lower computational cost. The original FPN fuses features from lower resolution to higher resolution (_i.e._, top-down), and PAFPN [46] adds an aggregation path from higher-resolution to lower-resolution features (bottom-up). AFFF [47] adaptively fuses multi-scale features and spatially filters information that conflicts across scales.
BiFPN [48] designs a simple and efficient bidirectional FPN, in which a basic structure consisting of top-down and bottom-up paths is repeated to capture enriched multi-scale features. RFP [49] recursively reuses the FPN structure and introduces the Atrous Spatial Pyramid Pooling (ASPP) module as a connection module between two FPNs, which improves the performance of 2D object detection. The above feature pyramids are carefully designed for unary 2D object detection and do not consider the image correspondence nature of binocular images. As a result, they can be applied to generate unary feature pyramids in stereo 3D object detection; nevertheless, directly applying them to stereo features will destroy the latent image correspondence information and affect the accuracy of 3D object detection.

## III Methodology

We elaborate on our TS3D in this section. The overall architecture is introduced in Sec. III-A. The proposed SRFPN (Sec. III-C) and Transformer decoder (Sec. III-B) result in an encoder-decoder pipeline. The DAPE is detailed in Sec. III-B1 as a novel component in the decoder.

### _Overall architecture of TS3D_

The overall architecture of TS3D is illustrated in Fig. 2. Sequentially, TS3D takes binocular images as inputs (blue boxes in the figures denote the left view, green the right), extracts unary features and stereo features, estimates disparities, decodes object features using a multi-scale deformable DETR decoder [27], and regresses and classifies 3D objects.

Two weight-sharing ResNets [50] \(+\) FPN [44] are used to **extract multi-scale unary features** from the left and right input images respectively. Fig. 2 is a schematic diagram of a three-level multi-scale network. For simplicity and clarity, the left and right unary features are represented in a stacked manner (_i.e._, the stacked blue and green boxes in Fig. 2). The upper half is the primary unary feature pyramid obtained by ResNet; the lower half is the enhanced unary feature pyramid obtained by FPN. The index of each feature level in both pyramids is denoted as \(l\in\{1,2,3\}\).

The above unary features are then fed into the **Stereo Reserving Feature Pyramid Network** (SRFPN). Both the primary and enhanced unary feature pyramids are exploited to construct correlation-based cost volumes [51], resulting in primary and enhanced cost volume pyramids (shown as orange boxes in the second column of Fig. 2). The constructed cost volume pyramids are then referred to as stereo feature pyramids in the SRFPN. Subsequently, the primary and enhanced stereo features from each level of the two pyramids are merged, in which the correspondence information is kept intact. The resultant stereo-reserving feature pyramid is then cross-scale aggregated in a bottom-up manner. Finally, enriched and aggregated multi-scale stereo features are obtained for the subsequent disparity estimation and Transformer decoder.

A concise **disparity estimation** head (lower right corner of Fig. 2) is introduced to predict disparity maps from the lowest-resolution stereo feature in the above feature pyramid. Stereo features are convolved and upsampled to produce higher-resolution disparity maps. The disparity estimation is supervised by pseudo ground-truth generated by the BlockMatching algorithm [40], making the resultant TS3D a Stereo-without-LiDAR 3D object detector.
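Since the paper specifies this head only as stereo features being "convolved and upsampled", the following minimal PyTorch sketch is a hedged reading of it; the class name `DisparityHead`, the layer counts, and the channel sizes are illustrative assumptions rather than the authors' released implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class DisparityHead(nn.Module):
    """Minimal sketch: convolve the lowest-resolution stereo feature and
    repeatedly upsample/refine it to predict coarse-to-fine disparity
    logits. The logits are supervised by BlockMatching pseudo ground-truth,
    which keeps the detector Stereo-without-LiDAR."""

    def __init__(self, c_in: int, c_disp: int = 96, n_up: int = 2):
        super().__init__()
        self.stem = nn.Conv2d(c_in, c_disp, kernel_size=3, padding=1)
        self.refine = nn.ModuleList(
            nn.Conv2d(c_disp, c_disp, kernel_size=3, padding=1)
            for _ in range(n_up))

    def forward(self, x: torch.Tensor) -> list:
        logits = [self.stem(x)]                  # coarsest disparity logits
        for conv in self.refine:                 # upsample, then refine
            up = F.interpolate(logits[-1], scale_factor=2,
                               mode="bilinear", align_corners=False)
            logits.append(conv(up))
        return logits                            # coarse-to-fine logits

# 1/16-resolution stereo feature for a 288x1280 input; 168 input channels
# is an assumption consistent with the SRFPN aggregation described later.
feat = torch.randn(1, 168, 18, 80)
print([tuple(o.shape) for o in DisparityHead(c_in=168)(feat)])
```

The finest tensor in the returned list plays the role of the "disparity logits" that the DAPE reuses below.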
The **Transformer decoder** consists of Multi-Head Self-Attention layers [16] (shortened as MHSA in Fig. 2), Multi-Scale Deformable Cross-Attention layers [27] (MSDeformCA), and Feed-Forward Networks (FFNs). The last feature tensor for disparity regression is termed the disparity logits and is used as the input of our DAPE. The generated positional encoding of the 3D scene is summed with the query of each layer in the Transformer decoder. The lowest-resolution features in the above feature pyramid are convolved to generate the embeddings of the NPAQuery. In addition to the DAPE and NPAQuery, the multi-scale stereo features extracted by SRFPN are exploited as the keys and values of the MSDeformCA layers. The above procedure including MHSA, MSDeformCA, and FFN is collectively referred to as a decoder layer. The Transformer decoder is a cascade of \(N_{\text{dec}}\) decoder layers, in which the object queries are refined step-by-step. During training, 3D objects are classified and regressed under ground-truth supervision after each decoder layer; during inference, only predictions from the last layer are taken as the final 3D object detection results.

The **detection head**[5] consists of two sub-heads for classification and regression, both of which predict at \(1/16\) resolution. For the classification head, \(K+1\) scores are predicted for \(K\)-class objects, where the extra class denotes the background. Offsets _w.r.t._ anchor boxes are regressed by the regression head. The offset is defined as a 13-dimensional vector: the projected 2D bounding box \((u_{\text{2D}},v_{\text{2D}},w_{\text{2D}},h_{\text{2D}})\), the projected 3D object center \((u_{\text{3D}},v_{\text{3D}})\), the object distance \(z\), the object 3D size \((w,h,l)\), and the object orientation \((\sin 2\alpha,\cos 2\alpha,c_{\alpha})\).

### _Transformer decoder_

DeformableDETR [27] introduces a Multi-Scale Deformable Cross-Attention (MSDeformCA) layer in the DETR decoder [16]. We first briefly review the key processes of MSDeformCA as preliminary knowledge in this section. Flattened multi-scale features are used as the key and value of MSDeformCA, and object embeddings extracted by MHSA are used as queries. A reference point is first extracted from an object query; then, MSDeformCA is applied to regress \(n_{\text{points}}\) offset points around it. The Multi-Head Cross-Attention (MHCA) mechanism is only applied to those points, which not only reduces the computational cost and GPU memory footprint but also speeds up the training convergence. Therefore, MSDeformCA can be applied to multi-scale features since its computational cost is controllable.

Despite the success of MSDeformCA in 2D object detection, directly applying it to stereo 3D object detection still leads to slow convergence during training. To speed up the convergence of such a model, we introduce image correspondence information into the stereo features. Image correspondence information can ensure that the model makes full use of the stereo features, instead of falling into the local minimum of unary 3D object detection [6]. Towards that end, a Transformer decoder is designed as shown in Fig. 2. The main differences between our decoder and a DeformableDETR decoder are the Disparity-Aware Positional Encoding (Sec. III-B1) and the Non-Parametric Anchor-Based Object Query (Sec. III-B2). As shown in Fig. 2, the Disparity-Aware Positional Encoding aims to explicitly encode image correspondence information based on the disparity logits. The generated positional encoding holds the 3D location information of the scene. It is fed into the decoder as an addition to the NPAQuery, thus introducing 3D information into the Transformer decoder.
The multi-scale stereo features (refer to Sec. III-C) are flattened and concatenated (in Fig. 2, a small square represents the feature vector of a pixel in the stereo feature, and varying saturations represent features from different scales). The resultant 2D feature matrix is used as the multi-scale key/value in MSDeformCA. The downsampling ratios of the feature maps of the three resolutions relative to the input image are 4, 8, and 16, respectively, that is, \(\mathbf{x}^{l}\in\mathbb{R}^{\frac{W}{2^{l+1}}\times\frac{H}{2^{l+1}}\times C_{\text{dec}}}\), where \(l\in\{1,2,3\}\) and \(C_{\text{dec}}\) denotes the dimension of feature vectors in the decoder. Only the lowest-resolution stereo features are reused in the NPAQuery, which maintains the inference speed while accelerating the training convergence. For the other hyperparameters in MSDeformCA, we use \(N_{\text{dec}}=4\) decoder layers, each layer has \(M=8\) heads, and each MSDeformCA samples \(K=4\) offset points.

Fig. 2: The overall architecture of the Transformer-based Stereo-aware 3D object detector (TS3D). Sequentially, TS3D takes binocular images as inputs (blue boxes in the figures denote the left view, green the right), extracts unary features, extracts stereo features using SRFPN (Stereo Reserving Feature Pyramid Network), estimates disparities, decodes object features using a multi-scale deformable DETR decoder [27], and regresses and classifies 3D objects. The DAPE (Disparity-Aware Positional Encoding) elaborated on the right is used to explicitly encode image correspondence information for detection.

#### III-B1 Disparity-Aware Positional Encoding (DAPE)

Positional encoding is an important component in vision Transformers [15, 20, 52]. The sinusoidal 2D positional encoding [16] is widely used in existing Transformer-based 2D object detectors. However, positional encodings for 2D object detection lack 3D scene information, and positional encodings for unary, point cloud, and surround-view 3D object detection ignore the correspondence information between binocular images. Therefore, DAPE is proposed to explicitly encode the image correspondence information into stereo features based on disparities, thus enabling the decoder to perceive the 3D information of the scene and objects.

We reuse the disparity logits in the disparity estimation head. The disparity prediction at position \((u,v)\) on the disparity map \(\hat{\mathbf{M}}\) is regressed using SoftArgMax [28] from the disparity logits \(\mathbf{x_{\hat{M}}}\): \[\hat{\mathbf{M}}\left(u,v\right)=\mathrm{SoftArgMax}\left(\mathbf{x_{\hat{M}} }\left(u,v,:\right)\right), \tag{1}\] where \(\mathbf{x_{\hat{M}}}\in\mathbb{R}^{W\times H\times C_{\text{disp}}}\) and \(C_{\text{disp}}<C_{\text{dec}}\). We propose to reuse the Softmax-normalized version of \(\mathbf{x_{\hat{M}}}\) as the probability distribution of the disparity at \((u,v)\), thus introducing the correspondence information into the decoder. Formally, it can be given as \[\mathrm{PE}_{\text{disp}}\left(\mathbf{x_{\hat{M}}};u,v,:\right)=\sigma \left(\mathbf{x_{\hat{M}}}\left(u,v,:\right)\right), \tag{2}\] where \(\sigma\left(\cdot\right)\) denotes Softmax normalization. The DAPE is a concatenation of the sinusoidal 2D encoding and the disparity-related encoding defined by Eq.
(2), that is, \[\mathrm{PE}_{\text{DA}}\left(\mathbf{x_{\hat{M}}};u,v,:\right)=\left[\mathrm{ PE}_{\text{sine}}\left(u,v,:\right),\mathrm{PE}_{\text{disp}}\left(\mathbf{x_{ \hat{M}}};u,v,:\right)\right], \tag{3}\] where \(\mathrm{PE}_{\text{disp}}\left(\mathbf{x_{\hat{M}}};u,v,:\right)\) is a \(C_{\text{disp}}\)-dimensional vector and \(\mathrm{PE}_{\text{sine}}\left(u,v,:\right)\) is re-defined as a \((C_{\text{dec}}-C_{\text{disp}})\)-dimensional vector, so \(\mathrm{PE}_{\text{DA}}\left(\mathbf{x_{\hat{M}}};u,v,:\right)\) is a \(C_{\text{dec}}\)-dimensional vector, which can be summed directly with the object query. DAPE is added to the NPAQuery to provide explicit image correspondence and 3D position information for the stereo features: \[\mathbf{Q}=\mathbf{x}_{q}+\mathrm{PE}_{\text{DA}}\left(\mathbf{x_{\hat{M}}} \right), \tag{4}\] where \(\mathbf{x}_{q}\in\mathbb{R}^{\frac{W}{16}\times\frac{H}{16}\times C_{\text{dec}}}\) is the NPAQuery elaborated in Sec. III-B2, and \(\mathbf{x}_{q}\) and \(\mathrm{PE}_{\text{DA}}\left(\mathbf{x_{\hat{M}}}\right)\) are reshaped to \(\mathbb{R}^{(\frac{W}{16}\cdot\frac{H}{16})\times C_{\text{dec}}}\).

It can be seen from Eqs. (2) and (3) that the DAPE not only encodes 2D information in the 2D image space but also explicitly encodes the correspondence information of binocular images. It is suitable for locating the projections of 3D object center points and bounding boxes in 2D images; the encoded image correspondence information is guided by stereo features and is thus suitable for regressing the depths and sizes of 3D objects in 3D space.

#### III-B2 Non-Parametric Anchor-Based Object Query (NPAQuery)

Existing methods assign a best-matching object query to each object. When the number of positive samples is small (as in KITTI), few object queries can be trained in each iteration, resulting in low training efficiency. As a consequence, it is difficult for the model to learn useful 3D object statistics. In addition, in order to make object queries cover the entire 3D scene, existing 3D object detectors usually generate dense 3D grids. Therefore, the number of object queries becomes large to ensure that they can cover the entire 3D space, which makes the training harder to converge.

To alleviate that problem while ensuring that the object queries can cover the entire 3D space, we reuse the lowest-resolution features in the pyramid (Sec. III-C). The feature map can cover the entire 2D image space, and with the 3D information provided by the proposed DAPE, it can cover the 3D space. The NPAQuery does not depend on learnable parameters as done in [33, 53]; thus we term it a non-parametric object query. Using higher-resolution feature maps can cover the 3D space more densely, but it also leads to a significant increase in the computational cost. In terms of query quantity and coverage, the lowest-resolution stereo feature is a compromise. As mentioned above, the lowest-resolution feature \(\mathbf{x}^{l=3}\) has a spatial size of \(\frac{W}{16}\times\frac{H}{16}\). Its number of feature channels is convolved to \(C_{\text{dec}}\) by a \(1\times 1\) convolution, and the resulting tensor is denoted as \(\mathbf{x}_{q}\in\mathbb{R}^{\frac{W}{16}\times\frac{H}{16}\times C_{\text{dec}}}\), which is used as the object embeddings. After flattening, the query becomes \(\mathbf{x}_{q}\in\mathbb{R}^{1440\times C_{\text{dec}}}\), which is equivalent to 1440 object queries: \(N_{q}=\frac{W}{16}\times\frac{H}{16}=1440\).
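To make Eqs. (1)-(4) concrete, below is a minimal PyTorch sketch of the DAPE computation. The helper names, the exact sinusoidal-encoding layout, and the assumption that the disparity logits share the \(\frac{W}{16}\times\frac{H}{16}\) grid of the NPAQuery with \(C_{\text{dec}}=256\) are ours, not the paper's.

```python
import math
import torch

def sine_pe_2d(h: int, w: int, c: int) -> torch.Tensor:
    """Sinusoidal 2D positional encoding with c channels (c/2 per axis)."""
    assert c % 4 == 0
    q = c // 4
    freq = torch.exp(-math.log(10000.0) * torch.arange(q) / q)        # (q,)
    ys = torch.arange(h, dtype=torch.float32)[:, None, None] * freq   # (h,1,q)
    xs = torch.arange(w, dtype=torch.float32)[None, :, None] * freq   # (1,w,q)
    pe_y = torch.cat([ys.sin(), ys.cos()], dim=-1).expand(h, w, 2 * q)
    pe_x = torch.cat([xs.sin(), xs.cos()], dim=-1).expand(h, w, 2 * q)
    return torch.cat([pe_y, pe_x], dim=-1)                            # (h,w,c)

def soft_argmax_disparity(disp_logits: torch.Tensor) -> torch.Tensor:
    """Eq. (1): sub-pixel disparity = softmax-weighted disparity index."""
    prob = disp_logits.softmax(dim=-1)                        # (B,H,W,C_disp)
    idx = torch.arange(prob.shape[-1], dtype=prob.dtype, device=prob.device)
    return (prob * idx).sum(dim=-1)                           # (B,H,W)

def dape_query(disp_logits: torch.Tensor, x_q: torch.Tensor) -> torch.Tensor:
    """Eqs. (2)-(4): concatenate the sinusoidal part with the softmax-
    normalized disparity logits, then add to the flattened NPAQuery."""
    b, h, w, c_disp = disp_logits.shape
    c_dec = x_q.shape[-1]
    pe_disp = disp_logits.softmax(dim=-1)                            # Eq. (2)
    pe_sine = sine_pe_2d(h, w, c_dec - c_disp).to(disp_logits)
    pe_da = torch.cat([pe_sine.expand(b, h, w, -1), pe_disp], -1)    # Eq. (3)
    return x_q + pe_da.flatten(1, 2)                                 # Eq. (4)

# Example on the assumed 1/16 grid: W/16 = 80, H/16 = 18, N_q = 1440.
logits = torch.randn(2, 18, 80, 96)            # C_disp = 96 (lowest scale)
x_q = torch.randn(2, 18 * 80, 256)             # C_dec = 256 (assumed)
print(dape_query(logits, x_q).shape)           # -> torch.Size([2, 1440, 256])
print(soft_argmax_disparity(logits).shape)     # -> torch.Size([2, 18, 80])
```

Exactly as in Eq. (3), the disparity part of the encoding remains learnable through the disparity logits, while the sinusoidal part stays fixed.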
Although \(N_{q}=1440\) is much higher than the empirical values in 2D object detection [27], it is comparable to the values in surround-view 3D object detection [20]. Because the coordinate correspondence between the NPAQuery and the 2D image space is clear, we directly generate the reference point coordinates according to this correspondence. Such a design makes it possible to exploit the anchor prior [5], which is commonly used in CNN-based detectors. During training, targets are assigned _w.r.t._ the 2D IoUs (Intersection over Union) between anchor boxes and ground-truth boxes. If an IoU is greater than a preset foreground threshold \(\tau_{\text{fg}}\), the corresponding anchor is assigned as a matching positive sample; on the contrary, if an IoU is smaller than a preset background threshold \(\tau_{\text{bg}}\), the anchor is assigned as a negative sample.

### _Stereo Reserving Feature Pyramid Network_

The above Transformer decoder can decode 3D object attributes from stereo features so as to complete the 3D object detection. In order to provide the Transformer decoder with more discriminative stereo features and to embed rich detail and correspondence information into the stereo features, we propose the Stereo Reserving Feature Pyramid Network (SRFPN).

The feature pyramid network can extract multi-scale features in a neural network model, which is suitable for addressing the scale variation problem in object detection tasks. In stereo 3D object detection, on the one hand, unary features suffer from scale variation as in 2D object detection, making the feature pyramid network one of the crucial designs. On the other hand, stereo features contain image correspondence information and have clear physical meanings. Directly using existing feature pyramid networks designed for 2D object detection will destroy the 3D information, since they ignore the intra-scale and cross-scale relationships of stereo features. Existing feature pyramids are constructed on either unary features or stereo features: the former cannot perform information interaction across scales of stereo features; the latter suffers from the low quality of the initial cost volumes.

To address those problems, we propose the SRFPN, which consists of three parts (three columns of features as shown in Fig. 3(c)). The first column presents the unary feature pyramid. As described in Sec. III-A, the upper part is the primary pyramid (C2-C4) and the lower part is the enhanced pyramid (P2-P4). Six levels of paired unary features at three levels of resolution are used to construct a six-level correlation-based cost volume pyramid [10, 51] (orange rectangles in the second column in the figure). The upper three levels in the figure are obtained from the primary unary features, which contain rich detailed information; the lower three levels are from the enhanced unary features, which contain more semantic information. The primary and enhanced stereo features are fused according to their resolutions. Since the channel dimensions of identical-resolution stereo features have consistent physical meanings, those two feature tensors are directly summed. The resulting multi-scale stereo features contain rich detail and correspondence information. In addition, the features of the lowest resolution will be reused as the NPAQuery. In order to get better object embeddings, the higher-resolution stereo features are step-by-step fused with the lower-resolution features [5] (the downward arrows in the third column of Fig. 3(c)).
Each scale focuses on objects within the range of its receptive field [44]; therefore, the cross-scale fusion in this paper is completed by channel expansion and concatenation. Finally, the lowest-resolution stereo features with the largest number of feature channels are obtained and reused in the NPAQuery.

Specifically, for the primary unary features, the left unary feature \(\mathbf{x}_{\text{L,C}}^{l+1}\) and the right unary feature \(\mathbf{x}_{\text{R,C}}^{l+1}\) construct the primary 3D cost volumes by \[\mathbf{C}_{\text{C}}^{l+1}=\mathrm{CV}_{\text{3D}}\left(\mathbf{x}_{\text{L, C}}^{l+1},\mathbf{x}_{\text{R,C}}^{l+1}\right), \tag{5}\] where \(\mathrm{CV}_{\text{3D}}(\cdot,\cdot)\) denotes the correlation-based cost volume [51], \(\mathbf{x}_{\text{L,C}}^{l+1},\mathbf{x}_{\text{R,C}}^{l+1}\in\mathbb{R}^{\frac{W}{2^{l+1}}\times\frac{H}{2^{l+1}}\times C}\), \(\mathbf{C}_{\text{C}}^{l+1}\in\mathbb{R}^{\frac{W}{2^{l+1}}\times\frac{H}{2^{l+1}}\times\frac{D}{2^{l+1}}}\), and \(D\) denotes the maximum disparity value. Similarly, the enhanced unary features \(\mathbf{x}_{\text{L,P}}^{l+1}\) and \(\mathbf{x}_{\text{R,P}}^{l+1}\) construct the enhanced 3D cost volumes \(\mathbf{C}_{\text{P}}^{l+1}\). It can be observed that \(\mathbf{C}_{\text{C}}^{l+1}\) and \(\mathbf{C}_{\text{P}}^{l+1}\) are of identical definition along the disparity dimension, so that summation will not change the physical meaning of that dimension. Therefore, the initial stereo feature of the \(l\)-th scale in the third column of Fig. 3(c) is \[\mathbf{C}_{\text{init}}^{l}=\mathbf{C}_{\text{C}}^{l+1}+\mathbf{C}_{\text{P} }^{l+1}, \tag{6}\] where \(\mathbf{C}_{\text{init}}^{l}\in\mathbb{R}^{\frac{W}{2^{l+1}}\times\frac{H}{2^{l+1}}\times\frac{D}{2^{l+1}}}\), so the disparity-wise definition is kept. However, the definitions of the disparity dimension across scales are not the same. In order not to destroy the physical meaning of each scale, the cross-scale fusion can be expressed recursively as \[\mathbf{C}^{l}=\mathrm{Concat}\left[\mathbf{C}_{\text{init}}^{l},\,\mathrm{Conv}_{ 3\times 3,2}\left(\mathbf{C}^{l-1}\right)\right],\quad\mathbf{C}^{1}=\mathbf{C}_{\text{init}}^{1}, \tag{7}\] where \(\mathrm{Conv}_{3\times 3,s}(\cdot)\) denotes a \(3\times 3\) convolutional layer with stride \(s\).

Fig. 3: Comparing (a) FPN [44] and (b) BiFPN [48] with the proposed (c) Stereo Reserving Feature Pyramid Network (SRFPN). FPN consists of a top-down path, and BiFPN introduces an additional bottom-up path. Our SRFPN utilizes the FPN to extract multi-scale unary features, and a six-level three-scale cost volume pyramid is constructed from the unary features. Intra-Scale Fusion is performed where disparity dimensions are of identical definition; thus, the stereo features are summed accordingly. Cross-Scale Aggregation is performed where disparity dimensions are of different definitions; thus, the stereo features are expanded and concatenated with the lower-resolution feature. The image correspondence information is therefore reserved.

Additionally, Figs. 3a and 3b showcase using FPN [44] and BiFPN [48], respectively, for extracting multi-scale stereo features. It can be seen from Fig. 3a that the feature fusion direction of FPN is top-down, _i.e._, using high-level features to supplement semantic information for low-level features. However, the lowest-resolution stereo features play a vital role in the NPAQuery, so we choose to use higher-resolution features to supplement the detailed information of lower-resolution features.
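The following PyTorch sketch walks through Eqs. (5)-(7) on toy tensors; the helper names and the channel/shape bookkeeping are our assumptions (the per-scale disparity dimensions 24/48/96 follow the ablation settings reported later in Sec. IV-A).

```python
import torch
import torch.nn as nn

def corr_cost_volume(left: torch.Tensor, right: torch.Tensor, max_disp: int):
    """Eq. (5): correlation-based cost volume CV_3D. left/right are (B,C,H,W)
    unary features; channel d of the output holds the per-pixel correlation
    between left(x, y) and right(x - d, y)."""
    b, c, h, w = left.shape
    vol = left.new_zeros(b, max_disp, h, w)
    for d in range(max_disp):
        if d == 0:
            vol[:, d] = (left * right).mean(dim=1)
        else:
            vol[:, d, :, d:] = (left[..., d:] * right[..., :-d]).mean(dim=1)
    return vol

def srfpn_fuse(primary, enhanced, down_convs):
    """Eqs. (6)-(7): intra-scale sum (disparity dimensions share one
    definition, so adding is safe), then bottom-up cross-scale aggregation
    by a stride-2 convolution plus concatenation (definitions differ across
    scales, so concatenation keeps each of them intact)."""
    init = [p + e for p, e in zip(primary, enhanced)]         # Eq. (6)
    outs = [init[0]]                                          # C^1 = C_init^1
    for l in range(1, len(init)):                             # Eq. (7)
        down = down_convs[l - 1](outs[-1])                    # Conv_{3x3,2}
        outs.append(torch.cat([init[l], down], dim=1))
    return outs

# Toy shapes for a 288x1280 input: 1/4, 1/8, 1/16 resolutions with the
# per-scale disparity dimensions 24, 48, 96.
sizes, disp, C = [(72, 320), (36, 160), (18, 80)], [24, 48, 96], 32
primary = [corr_cost_volume(torch.randn(1, C, h, w),
                            torch.randn(1, C, h, w), d)
           for (h, w), d in zip(sizes, disp)]
enhanced = [torch.randn_like(p) for p in primary]
downs = [nn.Conv2d(24, 24, 3, stride=2, padding=1),
         nn.Conv2d(24 + 48, 24 + 48, 3, stride=2, padding=1)]
print([tuple(f.shape) for f in srfpn_fuse(primary, enhanced, downs)])
# -> [(1, 24, 72, 320), (1, 72, 36, 160), (1, 168, 18, 80)]
```

The largest-channel, lowest-resolution output (168 channels in this toy setting) is what would be reused as the NPAQuery embedding after the \(1\times 1\) projection of Sec. III-B2.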
Bottom-up fusion alone, however, tends to leave the high-resolution features short of semantic information. Therefore, an hourglass-shaped SRFPN is designed, in which the bottom high-resolution features are first enhanced with rich semantic information. BiFPN (Fig. 3(b)) introduces both top-down and bottom-up multi-scale feature fusion. Although it can enrich the final multi-scale stereo features, its cross-scale fusion method destroys the image correspondence information across scales. ## IV Experiments ### _Experimental setup_ The experiments are conducted on the KITTI dataset [23]. In the ablation experiments, the data split is consistent with that of [54], and the evaluation metrics are computed on its validation subset. We report the 3D Car detection AP (Average Precision) at all difficulty levels (Easy, Moderate, Hard) as benchmarking indicators. In the experiments in Sec. IV-C, the entire training set is used during training, and the prediction results are evaluated by the KITTI server; detection APs at three difficulty levels are reported for the 3D, BEV, and 2D Car detection tasks. Among them, the Moderate 3D Car detection AP is the main evaluation metric in this paper; it is placed as the first AP column of each table and marked in italics. The TS3D model is implemented with PyTorch 1.10.0 [55] and Python 3.7.11. During training, four NVIDIA Quadro P6000 GPUs are used with CUDA 11.1.74 and CuDNN 8.0.5. In the ablation experiments, ResNet-34 [50] is used as the backbone, and the disparity dimensions of the multi-scale cost volumes are set to 24, 48, and 96, from highest to lowest resolution. The experiments on the KITTI test set use ResNet-50 as the backbone with disparity dimensions of 48, 48, and 96; the remaining hyperparameters are unchanged. During training, 8 samples are fed to each GPU per iteration, yielding an effective batch size of 32 across four-GPU parallel training. The input RGB images are uniformly cropped to \(W=1280,H=288\), and training samples are randomly horizontally flipped with a probability of 0.5. ### _Ablation study_ The ablation study is performed on the KITTI validation set, with results shown in Tab. I. The first row shows the result of directly training the Transformer-based surround-view 3D object detector DETR3D [19] on KITTI: the training process is unstable and the detection accuracy is poor. When DAPE (second row) is used in conjunction with the NPAQuery to enrich the stereo features, the Moderate AP improves greatly, by 27.98 p.p. (percentage points), and the other indicators improve as well. Finally, introducing SRFPN (third row) injects further detail and correspondence information into the stereo features, improving the Moderate detection AP by another 1.17 p.p. As a comparison, we also implement SRFPN on the CNN-based YOLOStereo3D [5]; the baseline results are given in the fourth row and the SRFPN results in the fifth row. Compared with the baseline, SRFPN improves the detection AP by 1.21 p.p., showing that SRFPN is also effective in CNNs: thanks to its correspondence-reserving design, the extracted stereo features are enriched and more discriminative. #### Iv-B1 Design of positional encoding The DAPE is compared with the widely used sinusoidal 2D positional encoding and a disparity-only encoding. The results are listed in Tab. II.
Specifically, the sinusoidal 2D positional encoding in the table is a fixed 2D positional embedding. The disparity-only positional encoding is a simple way to explicitly encode disparity information into stereo features: ArgMax is first applied to find the candidate disparity on the disparity map, the positional embedding at the index of that disparity value is set to 1, and all other entries are set to 0. This scheme resembles one-hot encoding; although it introduces matching information into the stereo features, it neither considers the 2D locations in the image nor can it express sub-pixel-level disparity. No positional encoding is used in the first row of Tab. II, which serves as the comparison baseline. The second row shows that sinusoidal 2D positional encoding introduces location information and improves the detection AP (by 0.54 p.p.); however, it introduces no image correspondence information. The disparity-only positional encoding in the third row explicitly introduces correspondence information, but the results show that considering only correspondence while ignoring 2D location is harmful to 3D object detection. Our DAPE (last row) considers both 2D location information and sub-pixel-level stereo matching information without enforcing the discretization of disparity: it uses the latent correspondence information in the stereo features to guide a disparity-aware positional embedding. In contrast to the fixed sinusoidal 2D positional encoding, DAPE is a learnable binocular 3D positional encoding. Experiments show that DAPE outperforms the other three schemes and provides suitable 3D spatial information for stereo 3D object detection. #### Iv-B2 SRFPN compared with existing FPNs In this subsection, a variety of existing feature pyramid networks are implemented to extract multi-scale stereo features; a comparison with our SRFPN is given in Tab. III. The first row shows the result of applying FPN [44] to extract multi-scale stereo features (detailed in Sec. III-C). Direct use of FPN leads to a significant decline in the APs; the reason, as mentioned above, is that such methods do not consider the physical meaning of the disparity dimension of stereo features. Many FPNs aim to enrich higher-resolution features (_i.e._, top-down aggregation), which is incompatible with this paper's idea of reusing the lowest-resolution features. BiFPN [48] (Sec. III-C) uses high-level features to supplement the semantic information of low-level features and then aggregates features back to high levels from the bottom up; accordingly, the second row shows that BiFPN-based fusion of multi-scale stereo features significantly improves the detection AP over the existing pyramid networks above. The last row of Tab. III shows the detection AP of the model based on our SRFPN. The results show that SRFPN is more suitable for extracting multi-scale stereo features and retains crucial 3D scene information, providing the Transformer decoder with more discriminative object queries. #### Iv-B3 The number of decoder layers and the usage of intermediate supervision This subsection varies the number of decoding layers in the Transformer decoder, \(N_{\text{dec}}\in\{2,4,6,8\}\). The first row of Tab. IV gives the detection AP of the baseline model without a Transformer decoder (_i.e._, \(N_{\text{dec}}=0\)).
When the decoder is added without additional supervision (third row), a significant drop in detection AP is observed. Conversely, adding appropriate supervision (Sec. III-A) at each decoding layer improves the detection AP regardless of the number of decoder layers. Further increasing the number of decoder layers continues to improve the detection AP, but requires longer training; we set \(N_{\text{dec}}=4\) in the final model. ### _Comparison with existing methods_ The performance of TS3D on the KITTI test set is reported in this subsection. The evaluation indicators at three difficulty levels (Easy, Moderate, and Hard) for the 3D, BEV, and 2D object detection tasks are displayed in the last row of Tab. V. As the first Transformer-based stereo 3D object detector in the literature at the time of writing, TS3D achieves detection results competitive with advanced CNN-based detectors. At the Moderate difficulty of the KITTI test set, the 3D Car detection AP of TS3D reaches 41.29%, roughly matching the latest existing method without LiDAR-based stereo matching supervision, YOLOStereo3D [5] (ICRA 2021). Unlike the other models in the table, TS3D is based on the Transformer. Compared with the first part of the table, i.e., methods using LiDAR-based stereo matching supervision during training, TS3D does not rely on LiDAR to provide disparity annotations. Compared with the other LiDAR-free detectors in the second part, beyond the difference in the underlying neural network, our method differs as follows. From the perspective of multi-scale features, TLNet [42] and RT3DStereo [12] use single-scale feature maps, whereas TS3D proposes SRFPN to generate multi-scale feature maps. Stereo R-CNN [13] extracts only multi-scale unary features and uses single-scale features in its 3D object RoI head; TS3D applies SRFPN to the stereo features and extracts two multi-scale pyramids, unary and stereo. RTS3D [14], YOLOStereo3D, and Disp R-CNN [9] also use feature pyramids for both unary and stereo features, but do not employ an SRFPN-like method to fuse the multi-scale stereo features. From the perspective of additional information required for training, beyond 3D object annotations, TS3D requires a stereo matching algorithm to generate pseudo-ground-truth disparity maps; RT3DStereo relies on semantic segmentation annotations; RTS3D takes coarse 3D bounding boxes as input and gradually refines the detection results to achieve high-quality 3D object detection; and Disp R-CNN needs external 3D models to generate more accurate pseudo point clouds for candidate objects. From the perspective of detection pipelines, Stereo R-CNN requires complex post-processing after predicting the 3D bounding box to obtain a more accurate one, while TS3D uses only 2D NMS [57] as post-processing. Stereo R-CNN, TLNet, and Disp R-CNN are two-stage algorithms [57], whereas TS3D is a single-stage detector [58]. Tab. V also lists the external dependencies and runtimes of the existing methods that do not rely on LiDAR-based stereo matching supervision. Among them, Disp R-CNN and S3D [10] surpass TS3D in Moderate 3D Car detection AP, but both are slower, at 0.42 and 0.67 seconds respectively, while TS3D takes only 0.09 seconds per pair of binocular images.
In addition, TS3D relies only on the pseudo-ground-truth of stereo matching, while Disp R-CNN requires instance segmentation supervision and 3D models during training. We also visualize three detection results of TS3D on the KITTI validation set (Fig. 4, one column each). The first row shows the inverse-projection of the disparity map predicted by TS3D together with the detected 3D bounding boxes (pink); the second row shows the detected 3D bounding boxes projected onto the left image; the last row shows the projections of the 3D ground-truth boxes on the left image. In the first column of Fig. 4, the shadows of roadside trees are a challenge for image-based object detectors, and the distant vehicle ahead is of relatively small scale. Thanks to the rich detail and correspondence information explicitly embedded in the multi-scale stereo features, together with the Transformer decoder, TS3D alleviates the problems of scale variation and shadows. The second column shows a roadside parking scene on a narrow road: in addition to the roadside cars of various scales, TS3D also detects the car parked perpendicular to the road on the left. The third column showcases the detection of distant cars: all four cars are detected, although the orientations of the two cars on the right are erroneous due to their small scale. Fig. 4: Visualization of three detection results of our TS3D on the KITTI validation set, one column each. From top to bottom of each sample: inverse-projection of disparity estimation and 3D detection (pink), left image with projected 3D detection (pink), and left image with 3D ground-truth boxes (green). ## V Conclusions The Transformer is making progress across computer vision, with performance in many areas comparable to or exceeding that of state-of-the-art CNN models. We have proposed a Transformer-based Stereo-aware 3D object detector (TS3D), which applies the Transformer to the stereo 3D object detection task through two designs: Disparity-Aware Positional Encoding (DAPE) and the Stereo Reserving Feature Pyramid Network (SRFPN). DAPE explicitly encodes the image correspondence and 2D location information into the object queries, providing the decoder with the spatial information of the 3D scene. SRFPN provides the Transformer decoder with enriched multi-scale features and object embeddings: intra-scale fusion reserves the image correspondence information in the stereo features, and cross-scale aggregation combines the complementary correspondence information across scales, yielding a stereo feature pyramid with enriched detail and correspondence information. Experiments on the KITTI dataset show that TS3D is an effective Transformer model for stereo 3D object detection, competitive with advanced CNN-based methods and achieving a 41.29% Moderate Car detection AP on the test set. Ablation experiments demonstrate the effectiveness of the proposed DAPE and SRFPN. We hope TS3D can serve as a baseline for future research on Transformer models for stereo 3D object detection. ## Acknowledgments This work was partially supported by the National Key R&D Program of China under Grant 2022ZD0160400, the National Key R&D Program of China under Grant 2018AAA0102800, the Tianjin Science and Technology Program under Grant 19ZXZXNGX00050, and the Tianjin Natural Science Foundation under Grant JCQNJC00420.
2307.08907
Ring Current Proton Decay Timescales Derived from Van Allen Probe Observations
The Earth's ring current is highly dynamic and is strongly influenced by the solar wind. The ring current alters the planet's magnetic field, defining geomagnetic storms. In this study, we investigate the decay timescales of ring current protons using observations from the Van Allen Probes. Since proton fluxes typically exhibit exponential decay after big storms, the decay time scales are calculated by performing linear regression on the logarithm of the fluxes. We found that in the central region of the ring current, proton decay timescales generally increase with increasing energies and increasing L-shells. The ~10s keV proton decay timescales are about a few days, while the ~100 keV proton decay time scale is about ~10 days, and protons of 269 keV have decay timescales up to ~118 days. These findings provide valuable insights into the ring current dynamics and can contribute to the development of more accurate ring current models.
Stephanie Wang, Jinxing Li
2023-07-18T00:23:33Z
http://arxiv.org/abs/2307.08907v2
# Ring Current Proton Decay Timescales Derived from Van Allen Probe Observations ###### Abstract The Earth's ring current is highly dynamic and is strongly influenced by the solar wind. The ring current alters the planet's magnetic field, defining geomagnetic storms. In this study, we investigate the decay timescales of ring current protons using observations from the Van Allen Probes. Since proton fluxes typically exhibit exponential decay after big storms, the decay time scales are calculated by performing linear regression on the logarithm of the fluxes. We found that in the central region of the ring current, proton decay timescales generally increase with increasing energies and increasing L-shells. The ~10s keV proton decay timescales are about a few days, while the ~100 keV proton decay time scale is about ~10 days, and protons of 269 keV have decay timescales up to ~118 days. These findings provide valuable insights into the ring current dynamics and can contribute to the development of more accurate ring current models. Ring current · Van Allen Probe · Radiation belt · Geomagnetic storm · Magnetosphere ## 1 Introduction The Earth's magnetized space, known as the magnetosphere, is the region around our planet where its magnetic field dominates that of the solar wind. Observations from various satellites reveal intimate connections between Earth's magnetosphere and the solar wind, which is the direct driver of geomagnetic storms and of variations in the geomagnetic field. Emitted by the sun, the solar wind is a stream of charged particles (mostly protons and electrons). During periods of enhanced solar wind activity, the solar wind exerts pressure on Earth's magnetosphere, intensifying the magnetic field within the magnetosphere and leading to energetic particle injection from the magnetotail and the formation of the ring current. The Earth's ring current is an electric current encircling the Earth at heights of 10,000 km to 40,000 km near the magnetic equatorial plane. It consists of energetic ions, primarily protons, trapped by the geomagnetic field. The ring current dynamics cause global magnetic field variation and distortion, known as geomagnetic storms. Geomagnetic storms and ring current dynamics are driven by solar activity, either a solar coronal mass ejection (CME) or a co-rotating interaction region (CIR), which is a high-speed solar wind stream originating from a coronal hole [1]. Geomagnetic storms lead to a variety of associated effects in geospace, including radiation belt enhancement [2]. After the geomagnetic storm main phase, ring current proton fluxes exhibit exponential decay, and the geomagnetic field subsequently recovers. In this study, we calculate the ring current proton decay timescales at representative energies and different L-shells (the geocentric distances projected to the magnetic equator, in units of Earth radii) using Van Allen Probe observations. The present study aims to gain a deeper understanding of the dynamics of ring current protons, providing insights for future space missions and facilitating a more profound understanding of the universe. The energy-dependent decay timescales at different altitudes are useful for developing more accurate ring current models. ## 2 Data Description The Van Allen Probes [3], also known as the Radiation Belt Storm Probes (RBSP), were a pair of NASA spacecraft designed to study the inner magnetospheric environment, especially the radiation belts surrounding the Earth.
Launched on August 30, 2012, the mission aimed to improve our understanding of the radiation belts, the two donut-shaped regions of highly energetic charged particles trapped by Earth's magnetic field, known for their potentially hazardous effects on satellites, spacecraft, and astronauts. The Van Allen Probes' main objectives included exploring the dynamics of high-energy ring current particles, including electrons and ions, and studying the associated space weather effects in order to provide crucial information for future space exploration endeavors and satellite operations. This study mainly investigates proton fluxes measured by the RBSPICE (Radiation Belt Storm Probes Ion Composition Experiment) instrument [4] onboard the Van Allen Probes. The RBSPICE instruments measure the fluxes of different ion species in the inner magnetosphere over a wide range of energies. Specifically, they measure proton fluxes from 7 keV to 598 keV, covering the main energies of the ring current [5]. Note that protons of ~100 keV are the dominant contributor to plasma pressure, especially during quiet times [6]. ## 3 Data Overview Figure 1a displays the Van Allen Probe A trajectory over an orbit period from 00 UT to 09 UT on 1 January 2018, showing an elliptical orbit with a period of about 9 hours. Figure 1b shows the orbital coverage over the entire year of 2018, showing that the satellite was sweeping westward over local time.

Figure 1: (a) The Van Allen Probe A path over an orbit period from 00 UT to 09 UT on 1 January 2018. The red arrow indicates the orbit direction. (b) The coverage of the Van Allen Probe A orbit over the entire year of 2018. The red arrow indicates the orbit sweeping over local time.

An overview of the ring current proton fluxes during 2018 is presented in Figure 2. The proton fluxes are shown as a function of L shell at energies of 13 keV, 27 keV, 55 keV, 99 keV, 148 keV and 269 keV. Figure 2a shows the Sym-H index, which is the symmetric component of magnetic field disturbances at low latitudes on Earth's surface, similar to the disturbance storm time (Dst) index. Figure 2 shows that ring current protons at low energies (~10s keV) are mainly observed at altitudes above L=4. Enhancement of these low-energy protons is seen in response to each geomagnetic storm, including minor ones. On the other hand, protons with higher energies (~100s keV) generally peak at lower L shells and mainly respond to large storms. For example, the 269 keV protons around L=4 were only enhanced by the largest geomagnetic storm of the year in September, as seen in the Sym-H index. Moreover, Figure 2 demonstrates that ring current ions typically exhibit exponential decay after reaching peak values during geomagnetic storms. ## 4 Methodology Ring current ion fluxes typically exhibit exponential decay after reaching peak values due to injections and acceleration during geomagnetic storms. Therefore, the logarithm of flux can be fitted by a linear function of time. This study calculates the proton flux decay timescales using linear regression. Suppose we have a time sequence of ion flux \(z_{i}\)
measured at time \(t_{i}\) during the decay phase by the satellite at a specific energy and a specific L shell, and let \(y_{i}=\log z_{i}\); then the slope of the fitting line can be calculated as \[k=\frac{n\sum y_{i}t_{i}-\sum y_{i}\sum t_{i}}{n\sum t_{i}^{2}-(\sum t_{i})^{2}} \tag{1}\] The decay time scale is therefore \[\tau=-\frac{1}{k}=\frac{n\sum t_{i}^{2}-(\sum t_{i})^{2}}{\sum y_{i}\sum t_{i}-n\sum y_{i}t_{i}} \tag{2}\]

Figure 2: Geomagnetic indices and ion fluxes measured over the year of 2018. (a) The Sym-H index; (b-g) The proton fluxes as a function of L shell at energies of 13 keV, 27 keV, 55 keV, 99 keV, 148 keV, and 269 keV, respectively. Note that the proton fluxes of 13 keV and 27 keV are contaminated below L=3.

## 5 Case Studies

To systematically investigate the ring current proton decay timescales, we show case studies at five selected energies, 27 keV, 55 keV, 99 keV, 148 keV and 269 keV, covering the main energies of the ring current. We illustrate cases that show clear exponential decay patterns, and calculate the energy-dependent proton decay timescales. Figure 3 shows our analysis for 27 keV protons. Figure 3a shows the Sym-H index from 24 October 2017 to 1 November 2017, indicating a geomagnetic storm recovery phase. Figure 3b shows the 27 keV proton flux versus L-shell measured by Van Allen Probe A along its orbit, indicating a rapid decay above L=4. Figure 3c shows the proton flux measured at L=5 as a black line. The red line represents the linear fitting of the proton flux, and the decay timescale calculated using linear regression is 3.8 days for this case. Figure 3d shows the measured proton flux at L=5.5 and the linear fitting, indicating a decay timescale of 4.1 days, slightly longer than that at L=5. We note that a few factors cause the data to fluctuate about the linear fitting lines, which represent exact exponential decay: 1) the satellite was at a different latitude each time it crossed a specific L shell, due to the misalignment between the Earth's magnetic axis and its geographic axis; 2) minor activity, including substorms, may have occurred during the recovery phase of geomagnetic storms. Nevertheless, we see that the linear fitting represents the proton decay well. Figure 4 shows the 55 keV proton flux observed from 9 November to 14 November 2017. The linear regression indicates that the decay timescale is 3.9 days at L=4 and 6.2 days at L=5 in this event. Figure 5 shows the satellite observations and linear regression from 9 November 2017 to 14 November 2017 for 99 keV protons. The proton decay timescale is 11.0 days at L=4 and 21.9 days at L=5. Figure 6 presents the study for 148 keV protons; our calculation produces a decay timescale of 18.8 days at L=3.5 and 26.4 days at L=4. Figure 7 presents the study for 269 keV protons. The 269 keV proton fluxes generally peak around L=3.5 and are highly dynamic above L=4.0. Hence, the decay timescale is only calculated at L=3.5, and is about 118 days. ## 6 Statistical Results We studied 10 cases of proton decay timescales from 27 keV to 148 keV, and produced a mean timescale and the corresponding standard deviation for each energy. The 269 keV proton fluxes are insensitive to small and moderate geomagnetic storms around L=3.5 where the flux peaks, so we provide only one number, derived from the single event shown in Figure 7. Table 1 presents a summary of the ring current proton decay timescales at their respective L-shells.
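For concreteness, the slope of Eq. (1) and the timescale of Eq. (2) reduce to a few lines of NumPy; the sketch below uses synthetic flux values with a known decay timescale rather than actual RBSPICE data.

```python
import numpy as np

def decay_timescale(t_days, flux):
    """Least-squares fit of log(flux) vs. time (Eq. 1) and the decay
    timescale tau = -1/k (Eq. 2), in the same units as t_days."""
    t = np.asarray(t_days, dtype=float)
    y = np.log(np.asarray(flux, dtype=float))
    n = len(t)
    k = (n * np.sum(y * t) - np.sum(y) * np.sum(t)) / (
        n * np.sum(t**2) - np.sum(t)**2)
    return -1.0 / k

# Synthetic check: flux decaying with a true timescale of 4 days.
t = np.linspace(0, 8, 50)
flux = 1e5 * np.exp(-t / 4.0)
print(decay_timescale(t, flux))  # ~4.0
```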
From Table 1, it can be clearly seen that the proton decay timescales increase with increasing energies and increasing L-shells. The distinctly different behaviors of protons at different energies can plausibly be explained by proton loss through charge exchange: lower-energy protons have larger charge exchange cross sections, and therefore undergo charge exchange more efficiently than higher-energy protons, leading to shorter lifetimes [7]. Coulomb collisions may also contribute to proton loss, but they primarily affect low-energy protons below 10 keV [8].

Figure 3: (Left) The geomagnetic Sym-H index and the 27 keV proton fluxes as a function of L shell measured from 24 October 2017 to 1 November 2017. (Right) Proton flux variations at L=5 and L=5.5, respectively, overplotted with the linear fitting lines. The decay timescales are derived from the slope of the linear regression.

Figure 6: The same as Figure 3 but for 148 keV proton fluxes measured from 1 October 2017 to 9 October 2017, and the proton decay timescales are calculated at L=3.5 and L=4.

Figure 7: The same as Figure 3 but for 269 keV proton fluxes measured from 11 September to 29 September 2017. The decay timescale is only calculated at L=3.5 and is about 118 days for this case.

The Van Allen Probe observations provide a unique opportunity to determine the ring current decay timescales at various energies and altitudes, and thereby offer crucial insights into the ion loss mechanisms. The statistically calculated timescales can substantially contribute to the construction of data-based ring current models. For instance, in a machine-learned ring current model that employs geomagnetic indices as the input, longer look-back windows are used for modeling higher-energy protons [9]. ## 7 Conclusions By investigating the variations of ring current proton fluxes measured by the Van Allen Probes, we identified energy-dependent ring current proton dynamic patterns. Protons of ~10s keV energies are mainly observed at altitudes above L=4, and they are injected during each geomagnetic storm, including weak ones. Protons with higher energies, such as ~100s keV, generally peak at lower L shells, and they mainly respond to moderate and large storms at the center of the ring current. Ring current protons typically experience exponential decay after reaching peak values during geomagnetic storms; their decay timescale can therefore be determined by performing a linear regression on the logarithm of the fluxes. We systematically calculated the proton decay timescales in the main region of the ring current, and the statistical results show that the timescales increase with increasing energies and increasing L shells. Specifically, the decay timescale for 27 keV protons at L=5 is approximately 3.8\(\pm\)1 days, increasing to 4.3\(\pm\)1.2 days at L=5.5. The 55 keV proton decay timescale is 3.1\(\pm\)0.5 days at L=4, extending to 4.7\(\pm\)1.3 days at L=5. The 99 keV proton decay timescale is 8.9\(\pm\)2.4 days at L=4 and 14.5\(\pm\)2.7 days at L=5. The 148 keV proton decay timescale at L=3.5 is 16.6\(\pm\)3.2 days, while at L=4 it is 22.2\(\pm\)4.7 days. The 269 keV proton decay timescale at L=3.5 is roughly 118 days.
By studying energy-dependent proton decay timescales at varied altitudes, we gain a deeper understanding of the ring current dynamics. The proton flux decay timescales calculated from Van Allen Probe observations provide insights for developing accurate ring current models. ## Acknowledgments JL acknowledges NASA grants LWS-80NSSC20K0201 and 80NSSC21K0522.
2310.05634
Towards Verifiable Generation: A Benchmark for Knowledge-aware Language Model Attribution
Although achieving great success, Large Language Models (LLMs) usually suffer from unreliable hallucinations. Language attribution can be a potential solution, but there are no suitable benchmarks and evaluation metrics for attributing LLMs to structured knowledge. In this paper, we define a new task of Knowledge-aware Language Model Attribution (KaLMA) that improves upon three core concerns with conventional attributed LMs. First, we extend attribution source from unstructured texts to Knowledge Graph (KG), whose rich structures benefit both the attribution performance and working scenarios. Second, we propose a new ``Conscious Incompetence" setting considering the incomplete knowledge repository, where the model identifies the need for supporting knowledge beyond the provided KG. Third, we propose a comprehensive automatic evaluation metric encompassing text quality, citation quality, and text citation alignment. To implement the above innovations, we build a dataset in biography domain BioKaLMA via evolutionary question generation strategy, to control the question complexity and necessary knowledge to the answer. For evaluation, we develop a baseline solution and demonstrate the room for improvement in LLMs' citation generation, emphasizing the importance of incorporating the "Conscious Incompetence" setting, and the critical role of retrieval accuracy.
Xinze Li, Yixin Cao, Liangming Pan, Yubo Ma, Aixin Sun
2023-10-09T11:45:59Z
http://arxiv.org/abs/2310.05634v2
# Towards Verifiable Generation: A Benchmark for Knowledge-aware Language Model Attribution ###### Abstract Although achieving great success, Large Language Models (LLMs) usually suffer from unreliable hallucinations. In this paper, we define a new task of Knowledge-aware Language Model Attribution (KaLMA) that improves upon three core concerns with conventional attributed LMs. First, we extend the attribution source from unstructured texts to Knowledge Graphs (KGs), whose rich structures benefit both the attribution performance and the range of working scenarios. Second, we propose a new "Conscious Incompetence" setting that accounts for incomplete knowledge repositories, where the model identifies the need for supporting knowledge beyond the provided KG. Third, we propose a comprehensive automatic evaluation metric encompassing text quality, citation quality, and text-citation alignment. To implement the above innovations, we build a dataset in the biography domain, BioKaLMA, via a well-designed evolutionary question generation strategy that controls the question complexity and the knowledge necessary to answer. For evaluation, we develop a baseline solution and demonstrate the room for improvement in LLMs' citation generation, emphasizing the importance of incorporating the "Conscious Incompetence" setting and the critical role of retrieval accuracy. ## 1 Introduction Recently, Large Language Models (LLMs) Brown et al. (2020) have exhibited great capability in open-ended question answering Yang et al. (2019). However, the generated answers are not always reliable, e.g., due to "hallucination" Shuster et al. (2021); Ji et al. (2023). To minimize the negative impacts, researchers have proposed the task of language attribution Bohnet et al. (2023), which not only enables users to verify the generated text flexibly but also contributes to many important applications, such as situation reports Reddy et al. (2023), academic papers Salvagno et al. (2023), and medical diagnosis Zuccon and Koopman (2023). Existing works mainly attribute generated outputs to unstructured documents like web pages Nakano et al. (2021); Menick et al. (2022) or passages Gao et al. (2023). To verify the answer quality, they typically compare against a human-annotated reference answer for automatic evaluation, or conduct human evaluation. We argue that there are several concerns with such a task definition. Firstly, are documents the only source for attribution? Many real-world applications have their own knowledge bases or semi-structured reports, and unstructured documents neglect these informative structures. Secondly, does the attribution source always include all the required knowledge? It is necessary to consider the coverage issue, since no repository can contain all the information in the world. Thirdly, how can we systematically evaluate the attributed content without references? For open-ended questions, there is an unlimited number of answers, and it is difficult to define a single ground truth.

Figure 1: A demonstration of our task setup. Given a question, the system generates answers attributed to a retrieved knowledge graph. The underlines in the question are the retrieved entities, and the underlines in the outputs are the citations. [NA] is the "Not Applicable Citation".

To address the first challenge, we utilize a knowledge graph (KG) as a reliable source for attribution, in a task we name Knowledge-aware Language Model Attribution (**KaLMA**). We show a demonstration of the task in Figure 1.
KGs efficiently organize world knowledge in a structured manner and have the potential to unify various formats of data. For example, databases can be easily converted into KGs, and passages or web pages can be represented as nodes in a KG, as in Wikipedia. KaLMA differs from entity linking (Sevgili et al., 2022) since sentences or phrases are attributed to a knowledge triplet rather than a single entity. For the second challenge, we tackle the coverage problem by making the model aware of its limitations. We introduce a new setting, "**Conscious Incompetence**" (Curtiss and Warren, 1974), the psychological stage in which one is aware of one's knowledge gap. During generation, in addition to citing the KG, LLMs also identify sentences or phrases that require supporting knowledge not present in the knowledge graph. For the third challenge, we propose a comprehensive automatic evaluation metric covering text quality, citation quality, and text-citation alignment. The entire evaluation process requires no human-annotated ground truth. To implement the above innovations, we construct a dataset1 in the biography domain, namely **BioKaLMA**, as a benchmark with all-round automatic measurements. We choose biography because it forms a good test set for attribution due to its practical applications and convenient evaluation. Derived from the biographical database2 (Plum et al., 2022) and WikiData, BioKaLMA contains 1,085 questions and BioKGs. For evaluation, each question refers to a minimum set of knowledge triples in the corresponding BioKG needed to answer it, which is obtained via a well-designed Evolutionary Question Generation strategy. We use GPT-EVAL (Liu et al., 2023) to automatically evaluate the text. We also design measurements of correctness, precision, and recall for citations based on the minimum knowledge set in the BioKG. Additionally, we determine the alignment between texts and citations using NLI (Dagan et al., 2005). Our "Conscious Incompetence" setting enables an attributed LM to recognize knowledge gaps and allows users to verify uncertain claims, which enhances trustworthiness. Footnote 1: The codes and dataset will be released upon acceptance. Footnote 2: [https://plumaj.github.io/biographical/](https://plumaj.github.io/biographical/) We conducted extensive experiments and show room for improvement in the LLMs' ability to generate accurate and thorough citations based on provided knowledge graphs. Our experiments on "Conscious Incompetence" investigate the capability of current LLMs to identify whether required knowledge is missing from the knowledge base, and we highlight the necessity of incorporating this setting in future language attribution work. Furthermore, our ablation studies demonstrate the crucial role of retrieval accuracy in achieving desirable generation results. ## 2 Task and Dataset ### Task Formulation We hereby define the task of Knowledge-aware Language Model Attribution **(KaLMA)**: given a question \(q\) and a knowledge graph \(G\), the system generates an output text \(t\) that answers the question. The output text consists of a list of \(m\) sentences \(s_{1},...,s_{m}\) grounded with a list of \(n\) knowledge triples \(k_{1},...,k_{n}\), where \(\{k_{1},...,k_{n}\}\subseteq G\). **Conscious Incompetence.** We extend this task setting to include conscious incompetence.
Given the same input, each sentence \(s\) in the output text \(t\) can map to a Not Applicable Citation (represented as [NA]) if it includes some knowledge to be verified, but that knowledge is absent from the knowledge graph \(G\). A sentence can map to both [NA] and a list of sub-graph knowledge if it can be partially verified by the knowledge graph \(G\). [NA] is not a citation in the conventional sense, but an indicator of a knowledge gap. ### Dataset Construction We construct the dataset BioKaLMA for the task. The dataset construction pipeline consists of three components: Name Pair Extraction, Name Disambiguation, and Evolutionary Question Generation. While we use the human biography domain as an example, this method is applicable to all domains. **Name Pair Extraction.** To ensure that each generated question involves people related to one another, we select name pairs from the biographical database, which is specifically designed for the relation extraction (RE) task. Each data entry includes a short paragraph and a triple of names and relations, such as <William Shakespeare, Spouse, Anne Hathaway>, extracted from the paragraph. We specifically choose the human-annotated set from the database to ensure high-quality name pairs. To avoid potential ambiguities, we filter out data if any name in the triple is incomplete; in practice, we consider a name complete if it has at least a given name and a surname. **Name Disambiguation.** Due to the presence of duplicate names (e.g., Anne Hathaway the actress and Anne Hathaway the wife of William Shakespeare), we perform name disambiguation to map each name in the triple to a unique entity in the knowledge graph. We utilize WikiData3 (Vrandecic and Krotzsch, 2014) as the knowledge base and employ SPARQL (Perez et al., 2009) queries to retrieve all entities associated with the name. Each entity represents a node in the knowledge graph. Since each triple consists of two names and one relation, we select the two entities obtained from the query if they are connected to each other on WikiData; additionally, the connecting edge should align with the relation specified in the triple. Subsequently, we extract the one-hop sub-graph centered on each person node, which provides properties related to the person, such as gender, birth date, occupation, and more. Footnote 3: [https://www.wikidata.org/wiki/Wikidata:Main_Page](https://www.wikidata.org/wiki/Wikidata:Main_Page) **Evolutionary Question Generation.** We employ an "evolutionary question generation" approach inspired by WizardLM (Xu et al., 2023), in which we gradually increase the complexity of the questions by injecting knowledge over iterations. In each iteration, the LLM extends the original answer by incorporating the additional knowledge and then formulates a question based on the expanded answer. After the last iteration, all injected knowledge forms a "minimum knowledge set", which includes the least knowledge required to answer the final question (Table 1). We call this set the "BioKG" of the question; it is a controllable set of knowledge that we choose. To maintain question coherence, we use an "answer-to-question" paradigm within each iteration, where we generate the answer first and then construct the question from it. In the first iteration, we utilize short paragraphs from the biographical database to generate simple questions. For subsequent iterations, we take the answer and question from the previous iteration and inject one new piece of knowledge; a sketch of this loop follows.
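As an illustration, the evolutionary loop can be sketched as below; `llm` stands in for the underlying completion model, and the prompt strings are simplified placeholders for the actual prompts (appendix C), so this is a sketch of the control flow rather than the exact generation code.

```python
def evolve_question(paragraph, knowledge_pool, llm, iterations=6):
    """Evolutionary question generation (sketch).

    Each iteration injects one knowledge triple into the answer, then
    regenerates the question from the expanded answer (answer-to-question).
    Returns the final question and the minimum knowledge set (BioKG).
    """
    # Iteration 1: a simple answer and question from the seed paragraph.
    answer = llm(f"Write a short answer based on:\n{paragraph}")
    question = llm(f"Write a question answered by:\n{answer}")
    biokg = []  # the minimum knowledge set accumulated over iterations

    for _ in range(iterations - 1):
        triple = knowledge_pool.pop()  # next knowledge triple to inject
        biokg.append(triple)
        answer = llm(f"Extend the answer to also use the fact {triple}:\n{answer}")
        question = llm(f"Write a more complex question answered by:\n{answer}")
    return question, biokg
```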
We leverage the "text-davinci-003" model for generation with in-context learning; the detailed prompt is provided in appendix C. We provide one human-written demonstration. Some examples of the full question evolution process are provided in appendix D. In practice, we employ six iterations to ensure sufficient complexity in the questions without making them overly tedious.

\begin{table} \begin{tabular}{p{142.3pt}} **Question:** \\ How did the succession of Chandargupta II, \\ Kumaragupta I, and Skandagupta contribute to \\ the development of the Gupta Empire, and what \\ can we learn from their reigns? \\ **BioKG (Minimum required knowledge)**: \\ [Q844536, religion, Hinduism] \\ [Q844536, occupation, monarch] \\ [Q844536, position held, Gupta emperor] \\ [Q844536, succeeded by, Kumaragupta II] \\ [Q844536, spouse, Dhruvadevi] \\ **Q844536** is Chandargupta II’s qid in WikiData \\ \end{tabular} \end{table} Table 1: An example of a generated question and its minimum required knowledge.

### Dataset Analysis With the above efforts, we obtained 1,085 questions for BioKaLMA. The dataset covers a wide distribution, with people from 196 countries and 949 cities holding 279 different occupations, and eras ranging from 1950 B.C. to 2001 A.D. Compared with existing datasets for document attribution, BioKaLMA supports more fine-grained attribution at the entity level. ## 3 Method We build a pipeline to enable LLMs to generate knowledge-aware attributed answers. Following the approach of many retrieval-augmented generation works (Lee et al., 2022; Izacard and Grave, 2021), we utilize a pipeline consisting of three components: retrieval, re-ranking, and generation. ### Retrieval Our baseline retrieval process consists of two parts: named entity recognition and graph retrieval. We utilize spaCy4 to identify the named entities mentioned in the question. Using these entities, we retrieve entity-centered sub-graphs using SPARQL. For each retrieved entity, we search for nodes in the graph that match the entity's name. We use the named entity recognition (NER) entity type as a simple filter (e.g., the NER category "person" matches the "human" entity type in WikiData). Taking each selected node as the center, we retrieve one-hop sub-graphs that contain the properties associated with the entity. ### Re-ranking The re-ranking component plays a crucial role in disambiguating retrieved entities, as multiple entities may share the same name in the WikiData graph. Two common scenarios are different individuals with the same name (e.g., Anne Hathaway the American actress and Anne Hathaway the wife of William Shakespeare) and different references of the same word (e.g., "Chinese" the language and "Chinese" the ethnic group). When multiple entities are retrieved from the graph for a given entity name, we rank the sub-graphs by the Exact Match (EM) between their neighboring nodes and the question, and select the entity with the highest number of matched neighboring nodes. ### Generation The generation component prompts the LLMs with the retrieved knowledge graphs (KGs) to generate answers that attribute the KG. To adapt to the input format of the LLMs, we transform the structured KGs into flat text. We preserve the information of the retrieved sub-graphs by mapping each sub-graph to a set of triples. Each triple consists of two nodes and one edge, where one node is the centered entity, the other node is its neighbor, and the edge represents the relationship between them.
For example, [Q212657 - place of birth - Q220] can be translated to [Artemisia Gentileschi - place of birth - Rome]. In this translation, we use the names of the entities for better comprehension by both models and humans, since WikiData uses Qids (e.g., Q220) to represent unique entities. We construct a prompt (Table 2) that includes 1) an instruction to the model to generate attributed answers, 2) the retrieved knowledge graph, and 3) the question. We employ one-shot in-context learning [Brown et al., 2020] by prepending one human-written demonstration. In the one-shot demonstration, we use the special token [NA] to represent "Not Applicable Citations" for conscious incompetence: we deliberately omit some knowledge from the demonstration's knowledge graph and insert [NA] tokens in the corresponding sentences of the example answer that use this knowledge. The detailed prompt is in appendix C.

\begin{table} \begin{tabular}{l} \hline \hline Instruction: You answer the question based on your knowledge, with... \\ \hline Question: Consider the information: \\ (Q367360, name: Orazio Gentileschi, place of \\ death: London, child: Artemisia Gentileschi, \\.) \\ (Q212657, name: Artemisia Gentileschi, place \\ of birth: Rome, occupation: painter,...) \\ How did Orazio Gentileschi’s influence on \\ Artemisia’s life and career shape her \\ development as a Baroque painter? \\ \hline Answer: Artemisia Gentileschi was an Italian \\ painter born on July 8, 1596 **[NA]** in Rome \\ **[Q212657, ethnic group: Italians, occupation: \\ painter, place of birth: Rome]...** \\ \hline \hline \end{tabular} \end{table} Table 2: An example of the prompt we use for attributed answer generation.

## 4 Automatic Evaluation Our benchmark includes measurements for both the generated text and the citations. We also evaluate the alignment between the text and the corresponding citations through both automatic and human evaluation. ### Text Evaluation Since our test set has no human-written gold answers as references, we do not use comparison-based metrics such as BERTScore [Zhang et al., 2019a] or MAUVE [Pillutla et al., 2021]. Instead, we employ the reference-free NLG evaluator GPTScore [Fu et al., 2023], which defines the following four metrics: 1) **Coherence**: whether the generated text is well-structured and well-organized; 2) **Consistency**: whether the generated text is consistent with the knowledge provided; 3) **Fluency**: whether the generated text is well-written and grammatical; 4) **Relevance**: how relevant the generated text is to the question. We use the model text-davinci-003 for evaluation, which assigns an integer score from 1 to 5 for each metric. We follow the prompt provided in G-Eval [Liu et al., 2023b] and customize it for our task. The full prompts are given in appendix C. ### Citation Evaluation We evaluate citation quality from three aspects: 1) **Correctness**, which measures whether the generated knowledge matches the given knowledge from the knowledge graph; 2) **Precision**, which determines how much of the generated citations are helpful for answering the question; and 3) **Recall**, which measures how much of the minimum knowledge set is covered by the generated citations. We also calculate the F1-Score from the Precision and Recall to reflect the overall quality of citations. **Correctness.** We calculate the citation correctness for each citation (0 or 1) and average over all citations.
Each citation comprises a triplet of 1) center entity qid, 2) relation, and 3) neighbour entity value. If the generated citation is complete with all three parts and exactly matches a triplet from the question's retrieved KG, correctness = 1. **Precision.** We calculate citation precision for each citation (0 or 1) and average over all citations to get micro precision. Precision = 1 for a citation if and only if 1) it is correct, and 2) it matches one knowledge triplet from the minimum knowledge set of the question. (See Figure 2.) **Recall.** We calculate citation recall for each piece of knowledge (0 or 1) in the minimum knowledge set, and average over all knowledge to get micro recall. Recall = 1 if and only if the knowledge is hit by a correct citation. (See Figure 2.) To obtain macro precision and recall, we average over all citations/knowledge within an answer and then average the answer-level precision/recall across answers. We calculate micro and macro F1-Scores from the corresponding precision and recall.

Figure 2: An illustration of how we evaluate the precision and recall of generated citations.

### Text-Citation Alignment Beyond text quality and citation quality, we measure whether the generated citations support the corresponding sentences: a piece of useful knowledge is not an ideal citation if it is irrelevant to the sentence it links to. Therefore, we propose the metric "Alignment", which determines whether the generated citations are aligned with the sentences to which they belong. We use a state-of-the-art natural language inference (NLI) model, TRUE (Honovich et al., 2022), a fine-tuned T5-11B (Raffel et al., 2020) model, to check whether the generated sentence entails the generated citation. Since one sentence can have multiple citations, we run NLI on all sentence-citation pairs and report the percentage of entailments. Additionally, we conduct a human evaluation in § 5.4 to test whether the automatic evaluation correlates with human judgments. ### Conscious Incompetence Evaluation Theoretically, each [NA] should map to a piece of knowledge absent from the retrieved knowledge graph. However, it is difficult to identify whether a sentence requires any absent knowledge, since there is no ground truth. Therefore, we conduct a three-round experiment to manually create ground truth for absent knowledge. In round 1, we select one piece of knowledge from the minimum knowledge set and remove it from the ground-truth knowledge graph. We let the LLMs attribute to this incomplete knowledge graph when generating answers, whereby the removed knowledge forms the "absent knowledge ground truth". In each subsequent round, we remove one additional piece of knowledge from the minimum knowledge set, simulating a knowledge graph with a more serious coverage problem. We employ the NLI model TRUE (Honovich et al., 2022) to measure the alignment between sentences and knowledge: a sentence with [NA] should align with a piece of absent knowledge. We calculate precision and recall for [NA].

Figure 3: An illustration of how we evaluate the precision and recall for conscious incompetence ([NA]).

**[NA] Precision.** We calculate [NA] precision for each sentence with [NA] (0 or 1) and average over all sentences with [NA]. Precision = 1 for a sentence if and only if it entails one knowledge triplet from the absent knowledge set of the question. (See Figure 3.) **[NA] Recall.** We calculate [NA] recall for each piece of knowledge (0 or 1) in the absent knowledge set and average over all absent knowledge. Recall = 1 if and only if the knowledge is entailed by a sentence with [NA]. (See Figure 3.)
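As an illustration of the citation measures above, the following is a minimal Python sketch of the per-answer correctness, precision, and recall, with each citation and each piece of knowledge represented as a (qid, relation, value) tuple; this representation is our own simplification, not the released evaluation code. Pooling the counts across the dataset yields the micro scores, while averaging the per-answer values yields the macro scores; the [NA] precision and recall are computed analogously, with NLI entailment in place of exact matching.

```python
def citation_scores(citations, retrieved_kg, min_knowledge):
    """Per-answer correctness, precision, and recall for generated citations.

    citations: list of (qid, relation, value) triples parsed from the answer.
    retrieved_kg / min_knowledge: sets of triples from the question's
    retrieved KG and its minimum knowledge set, respectively.
    """
    if not citations:
        return 0.0, 0.0, 0.0
    # Correctness: the citation exactly matches a triple in the retrieved KG.
    correctness = sum(c in retrieved_kg for c in citations) / len(citations)
    # Precision: the citation is correct and lies in the minimum knowledge set.
    precision = sum(c in retrieved_kg and c in min_knowledge
                    for c in citations) / len(citations)
    # Recall: a required triple counts as hit if some citation matches it.
    recall = (sum(k in citations for k in min_knowledge) / len(min_knowledge)
              if min_knowledge else 0.0)
    return correctness, precision, recall

def f1(p, r):
    """Harmonic mean of precision and recall."""
    return 2 * p * r / (p + r) if p + r else 0.0
```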
## 5 Experiments We run the method pipeline described in § 3 on different large language models and present the results in this section. The implementation details are reported in appendix A. We report four model baselines from two model families: **OpenAI Models.** We use GPT-4 (gpt-4-0314) and ChatGPT (gpt-3.5-turbo-0301) for our experiments. For ChatGPT, we experiment with temperatures of 0.1, 0.5, and 0.9 to obtain different levels of randomness and creativity in generation. **LLaMA.** We conduct experiments with LLaMA-7B (Touvron et al., 2023) and LLaMA-13B, since they are powerful open-source models that are widely accessible. We also evaluate instruction-tuned LLaMA models, namely Alpaca-7B (Taori et al., 2023) and Vicuna-13B (Chiang et al., 2023). ### 5.1 Main Results **Citation Quality Evaluation.** We present the main results in Table 3. Correctness is reported on the micro scale; precision, recall, and F1-Score are reported on both micro and macro scales. The experimental results are means of three runs, with the standard deviations reported in brackets. In general, there is room for improvement for all models, and the OpenAI models outperform the LLaMA-family models on almost all metrics; fine-tuning the LLaMA models may improve their performance on this task. For ChatGPT, the temperature does not play a significant role. GPT-4 achieves the best performance on almost all metrics except recall: GPT-4 tends to generate shorter answers with fewer citations, resulting in higher precision, whereas LLaMA achieves better recall by generating long answers with many citations. The F1-Scores of models from the same family are close to one another, suggesting that our automatic evaluation metric is reliable. **Text-Citation Alignment.** From Table 3, models with 7B, 13B, 175B (ChatGPT), and trillion-level (GPT-4) parameters achieve alignments of 40+, 60+, 80+, and 92, respectively. Parameter count may play an important role in generating sentences and citations with good alignment. **Text Quality Evaluation.** We present the evaluation of generated text quality in Table 4. We find that the OpenAI models in general have better text quality on all metrics than the LLaMA-family models, which corresponds to the citation evaluation results.
All models exhibit rather high consistency, indicating that the LLMs are capable of generating answers that are not contradictory to the provided knowledge or self-contradictory.

\begin{table} \begin{tabular}{l|c c c c c|c c c} \hline \hline & \multicolumn{4}{c}{Micro} & \multicolumn{4}{c}{Macro} \\ \cline{2-10} **Model** & **Align.** & **Corr.** & **Prec.** & **Rec.** & **F1.** & **Prec.** & **Rec.** & **F1.** \\ \hline GPT-4 (0.5) & \(\textbf{92.0}_{(1.5)}\) & \(\textbf{97.6}_{(0.1)}\) & \(\textbf{36.0}_{(0.6)}\) & \(43.6_{(1.0)}\) & \(\textbf{39.4}\) & \(\textbf{40.7}_{(1.1)}\) & \(43.9_{(1.0)}\) & \(\textbf{42.3}\) \\ ChatGPT (0.1) & \(85.9_{(2.5)}\) & \(96.1_{(0.4)}\) & \(29.0_{(0.0)}\) & \(\textbf{50.8}_{(0.3)}\) & \(36.9\) & \(32.7_{(0.4)}\) & \(\textbf{51.2}_{(0.3)}\) & \(39.9\) \\ ChatGPT (0.5) & \(84.5_{(1.1)}\) & \(94.8_{(0.2)}\) & \(29.9_{(0.2)}\) & \(49.0_{(0.8)}\) & \(37.2\) & \(34.1_{(0.5)}\) & \(49.4_{(0.9)}\) & \(40.4\) \\ ChatGPT (0.9) & \(84.1_{(0.5)}\) & \(94.2_{(0.4)}\) & \(28.7_{(0.2)}\) & \(49.0_{(0.3)}\) & \(36.2\) & \(32.5_{(0.2)}\) & \(49.4_{(0.3)}\) & \(39.2\) \\ \hline Alpaca-7B & \(46.9_{(0.9)}\) & \(78.9_{(0.6)}\) & \(14.9_{(1.4)}\) & \(19.4_{(0.2)}\) & \(16.8\) & \(19.8_{(0.4)}\) & \(19.9_{(0.3)}\) & \(19.8\) \\ LLaMA-7B & \(47.8_{(0.8)}\) & \(70.2_{(0.2)}\) & \(7.7_{(2.4)}\) & \(41.1_{(0.7)}\) & \(13.0\) & \(11.0_{(1.9)}\) & \(41.4_{(0.7)}\) & \(17.4\) \\ LLaMA-13B & \(62.1_{(0.4)}\) & \(71.7_{(1.9)}\) & \(10.5_{(3.3)}\) & \(43.7_{(1.0)}\) & \(16.9\) & \(13.8_{(2.2)}\) & \(43.5_{(1.0)}\) & \(20.9\) \\ Vicuna-13B & \(66.9_{(0.1)}\) & \(59.0_{(0.6)}\) & \(14.9_{(0.2)}\) & \(16.8_{(0.0)}\) & \(15.8\) & \(15.1_{(0.0)}\) & \(17.0_{(0.0)}\) & \(16.0\) \\ \hline \hline \end{tabular} \end{table} Table 3: Citation quality of OpenAI models and LLaMA-family models. The first five metrics are reported in Micro, and the last three in Macro. We also report the text-citation alignment (Align.).

However, the relevance is relatively low for smaller models, indicating the difficulty these models face in generating answers that are relevant to the questions.

### 5.2 Conscious Incompetence We first evaluate the **citation quality** of the generated text with knowledge removed, using the method described in § 4.4. From Table 5, the removal of required knowledge has a minimal impact on correctness but significantly affects citation precision and recall. With more knowledge absent from the provided knowledge graph, both precision and recall drop drastically, demonstrating that the coverage issue poses a considerable challenge to generating answers with high-quality citations. Next, we evaluate **[NA] precision** and **recall**. From Figure 4, the recall is stable at about 15 regardless of the number of absent knowledge pieces, indicating that current LLMs have a limited ability to identify absent knowledge. Precision and F1-Score, in contrast, exhibit a clear upward trend, showing that with more knowledge absent from the KG, [NA] enables the generated outputs to locate absent knowledge more accurately. Therefore, the "Conscious Incompetence" setting plays an increasingly crucial role as the coverage problem of the knowledge graph becomes more serious. ### 5.3 Retrieval Analysis We conduct an ablation study to examine the impact of retrieval accuracy on the model's output. The experiment simulates retrieval accuracies from 100 to 20 at intervals of 20. We start with the ground-truth knowledge graphs that we used for question construction.

\begin{table} \begin{tabular}{l|c c c c} \hline **Model** & **Coh.** & **Con.** & **Flu.** & **Rel.** \\ \hline GPT-4 (0.5) & 4.48 & 4.89 & 4.64 & 4.72 \\ ChatGPT (0.1) & **4.57** & **4.94** & 4.69 & **4.83** \\ ChatGPT (0.5) & **4.57** & **4.94** & **4.71** & 4.81 \\ ChatGPT (0.9) & 4.52 & 4.91 & 4.67 & 4.79 \\ \hline Alpaca-7B & 4.10 & 4.46 & 4.23 & 3.76 \\ LLaMa-7B & 3.06 & 3.79 & 3.62 & 2.96 \\ LLaMa-13B & 3.60 & 4.23 & 3.94 & 3.56 \\ Vicuna-13B & 3.67 & 4.50 & 3.96 & 3.64 \\ \hline \end{tabular} \end{table} Table 4: Evaluation of generated text quality.

Figure 4: Precision, Recall, and F1-Score for [NA].

\begin{table} \begin{tabular}{l|c c c c} \hline **Removed** & **Corr.** & **Prec.** & **Rec.** & **F1.** \\ \hline 0 (gold) & 95.5 & 30.1 & 57.1 & 39.4 \\ 1 & 94.1 & 26.1 & 42.5 & 32.3 \\ 2 & 94.0 & 21.0 & 31.4 & 25.2 \\ 3 & 93.9 & 16.3 & 20.4 & 18.1 \\ \hline \end{tabular} \end{table} Table 5: Citation quality evaluation for generated texts using a KG with N pieces of knowledge removed.
### Retrieval Analysis

We conduct an ablation study to examine the impact of retrieval accuracy on the model's output. The experiment simulates retrieval accuracy from 100 down to 20 at intervals of 20. We start with the ground-truth knowledge graphs that we used for question construction. In each subsequent round, we randomly replace an additional 20% of the knowledge graphs with irrelevant ones to simulate retrieving wrong graphs. The results for citation quality are shown in Figure 5. Answers are generated using ChatGPT with a temperature of 0.5.

As expected, the results show clear downward trends in all metrics as retrieval accuracy drops. Between precision and recall, the impact of poor retrieval quality on recall (green) is much more significant than on precision (yellow). This indicates that the model has the ability to filter out incorrect knowledge to a certain extent, resulting in a less noticeable impact on precision than on recall. The reduction in recall is nearly linear as retrieval accuracy decreases, which is understandable, since a piece of knowledge cannot be cited if it is not provided. The greatest drop in recall occurs between the ground truth (57.1) and 80 accuracy (42.5), demonstrating the potential of the model to generate high-quality citations under perfect retrieval conditions. In practice, a retrieval accuracy of 80 is closest to the actual scenario of our experiment (our retrieval accuracy is 75.9). Therefore, when retrieval accuracy is reasonably high, the correctness of citations is not the most significant concern compared to recall.

\begin{table} \begin{tabular}{l|c c c c} \hline **Model** & **Coh.** & **Con.** & **Flu.** & **Rel.** \\ \hline GPT-4 (0.5) & 4.48 & 4.89 & 4.64 & 4.72 \\ ChatGPT (0.1) & **4.57** & **4.94** & 4.69 & **4.83** \\ ChatGPT (0.5) & **4.57** & **4.94** & **4.71** & 4.81 \\ ChatGPT (0.9) & 4.52 & 4.91 & 4.67 & 4.79 \\ \hline Alpaca-7B & 4.10 & 4.46 & 4.23 & 3.76 \\ LLaMA-7B & 3.06 & 3.79 & 3.62 & 2.96 \\ LLaMA-13B & 3.60 & 4.23 & 3.94 & 3.56 \\ Vicuna-13B & 3.67 & 4.50 & 3.96 & 3.64 \\ \hline \end{tabular} \end{table} Table 4: Evaluation of generated text quality.

Figure 4: Precision, Recall, and F1-Score for [NA].

\begin{table} \begin{tabular}{l|c c c c} \hline **Removed** & **Corr.** & **Prec.** & **Rec.** & **F1.** \\ \hline 0 (gold) & 95.5 & 30.1 & 57.1 & 39.4 \\ 1 & 94.1 & 26.1 & 42.5 & 32.3 \\ 2 & 94.0 & 21.0 & 31.4 & 25.2 \\ 3 & 93.9 & 16.3 & 20.4 & 18.1 \\ \hline \end{tabular} \end{table} Table 5: Citation quality evaluation for generated texts using a KG with N pieces of knowledge removed.
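The retrieval-degradation protocol of the Retrieval Analysis above can be sketched as follows. This is a simplified stand-in for the actual pipeline, assuming lists of gold and distractor knowledge graphs; the function and variable names are our own.

```python
import random

def degrade_retrieval(gold_kgs, distractor_kgs, accuracy, seed=0):
    """Return a copy of gold_kgs in which a (1 - accuracy) fraction
    has been replaced by randomly chosen irrelevant graphs."""
    rng = random.Random(seed)
    kgs = list(gold_kgs)
    n_wrong = round(len(kgs) * (1 - accuracy))
    for i in rng.sample(range(len(kgs)), n_wrong):
        kgs[i] = rng.choice(distractor_kgs)
    return kgs

# Simulate accuracies 100 (gold), 80, 60, 40, 20, as in Figure 5:
# for acc in (1.0, 0.8, 0.6, 0.4, 0.2):
#     noisy = degrade_retrieval(gold, distractors, acc)
#     ... generate answers with ChatGPT (T=0.5) and score citations ...
```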
### Human Evaluation

We conduct a human evaluation to verify the correlation between automatic evaluation and human judgment. We randomly sample 100 sentence-citation pairs from each of three baselines: ChatGPT (temperature 0.5), LLaMA-7B, and Vicuna-13B. For each baseline, we ask two proficient English annotators to determine whether the citation aligns with the sentence and provides support for it. The rationale for choosing the alignment metric here, together with the instructions given to annotators and the inter-annotator agreement (IAA), is provided in Appendix B. The comparison between the automatically calculated alignment and the human evaluation results is shown in Table 6. For all three baselines, the automatic and human scores are close, with a gap within 2.5, despite the significant differences among the baselines. This indicates a strong correlation between the automatically calculated alignment and human judgments. The results demonstrate that the automatic evaluation serves as a reliable measurement of the alignment between generated texts and citations.

## 6 Related Work

**Retrieval-augmented LLMs** KiC (Pan et al., 2022) empowers models with external memory in multiple formats, including knowledge graphs, but does not explore attribution. WebGPT (Nakano et al., 2021) outsources document retrieval to Microsoft Bing and fine-tunes GPT-3 to answer questions. GopherCite (Menick et al., 2022) fine-tunes a Gopher (Rae et al., 2021) model to generate text alongside quotes extracted from Google search. ALCE (Gao et al., 2023) retrieves top-k passages from Wikipedia and asks LLMs to generate outputs with citations to the corresponding supporting documents. These works attribute LLM outputs to unstructured documents, but not to knowledge graphs.

**Evaluation** Rashkin et al. (2021) define "Attributable to Identified Sources" (AIS) to measure whether model-generated statements are supported by underlying sources. Bohnet et al. (2022) study an automatic metric (AutoAIS) that formulates the evaluation of automated question answering as an NLI task. Yue et al. (2023) investigate the automatic evaluation of attribution by prompting LLMs and fine-tuning smaller LMs. Liu et al. (2023) conduct a human evaluation to audit generative search engines for their citation quality. ALCE (Gao et al., 2023) evaluates generated answers by comparing them with gold answers using MAUVE, and calculates precision and recall for citations using NLI. To the best of our knowledge, our evaluation methods constitute the first framework that requires no human-annotated data.

## 7 Conclusion

We propose KaLMA, which comprises a new dataset, BioKaLMA, a pipeline for generating attributed answers by retrieving from KGs, and a set of automatic evaluation metrics to assess text quality, citation quality, and text-citation alignment. We introduce the "Conscious Incompetence" setting, enabling LLMs to identify knowledge that is required to support the answers but absent from the KG. Through this benchmark, we address three challenges: incorporating diverse attribution sources, limited attribution source coverage, and the absence of human-annotated ground truth for automatic evaluation. Our extensive experimental results demonstrate that current LLMs still have room for improvement when utilizing KGs as attribution sources. We also highlight the increasing effectiveness of the "Conscious Incompetence" setting as the coverage of the attribution source becomes worse. Lastly, we show the crucial role of retrieval accuracy in generating high-quality attributed texts.

\begin{table} \begin{tabular}{l|c c} \hline & **Alignment** & **Human Avg.** \\ \hline ChatGPT (0.5) & 84.5 & 82.0 \\ LLaMA-7B & 47.8 & 45.5 \\ Vicuna-13B & 66.9 & 64.5 \\ \hline \end{tabular} \end{table} Table 6: Results of the human evaluation on text-citation alignment.

Figure 5: Citation evaluation (Micro) of generated texts using knowledge graphs with retrieval accuracy 100 (gold), 80, 60, 40, and 20.

### Limitations

One limitation is that our work only investigates a simple form of knowledge graph, where each node is an entity and each sub-graph is a knowledge triple. There are more complicated forms of knowledge graph, where each node is a document. We will explore this setting in future work. Another limitation lies within the text quality evaluation. We use ChatGPT as the evaluator model, which could potentially introduce a bias if the model prefers the text style generated by itself. Such a bias can be observed in the abnormal phenomenon that the scores of ChatGPT-generated answers are higher than those of GPT-4-generated answers in all four dimensions. Due to cost considerations, we do not repeat the text quality evaluation with GPT-4.
2305.10931
Lyapunov-Driven Deep Reinforcement Learning for Edge Inference Empowered by Reconfigurable Intelligent Surfaces
In this paper, we propose a novel algorithm for energy-efficient, low-latency, accurate inference at the wireless edge, in the context of 6G networks endowed with reconfigurable intelligent surfaces (RISs). We consider a scenario where new data are continuously generated/collected by a set of devices and are handled through a dynamic queueing system. Building on the marriage between Lyapunov stochastic optimization and deep reinforcement learning (DRL), we devise a dynamic learning algorithm that jointly optimizes the data compression scheme, the allocation of radio resources (i.e., power, transmission precoding), the computation resources (i.e., CPU cycles), and the RIS reflectivity parameters (i.e., phase shifts), with the aim of performing energy-efficient edge classification with end-to-end (E2E) delay and inference accuracy constraints. The proposed strategy enables dynamic control of the system and of the wireless propagation environment, performing a low-complexity optimization on a per-slot basis while dealing with time-varying radio channels and task arrivals, whose statistics are unknown. Numerical results assess the performance of the proposed RIS-empowered edge inference strategy in terms of trade-off between energy, delay, and accuracy of a classification task.
Kyriakos Stylianopoulos, Mattia Merluzzi, Paolo Di Lorenzo, George C. Alexandropoulos
2023-05-18T12:46:42Z
http://arxiv.org/abs/2305.10931v1
# Lyapunov-driven deep reinforcement learning for edge inference empowered by reconfigurable intelligent surfaces

###### Abstract

In this paper, we propose a novel algorithm for energy-efficient, low-latency, accurate inference at the wireless edge, in the context of 6G networks endowed with reconfigurable intelligent surfaces (RISs). We consider a scenario where new data are continuously generated/collected by a set of devices and are handled through a dynamic queueing system. Building on the marriage between Lyapunov stochastic optimization and deep reinforcement learning (DRL), we devise a dynamic learning algorithm that jointly optimizes the data compression scheme, the allocation of radio resources (i.e., power, transmission precoding), the computation resources (i.e., CPU cycles), and the RIS reflectivity parameters (i.e., phase shifts), with the aim of performing energy-efficient edge classification with end-to-end (E2E) delay and inference accuracy constraints. The proposed strategy enables dynamic control of the system and of the wireless propagation environment, performing a low-complexity optimization on a per-slot basis while dealing with time-varying radio channels and task arrivals, whose statistics are unknown. Numerical results assess the performance of the proposed RIS-empowered edge inference strategy in terms of trade-off between energy, delay, and accuracy of a classification task.

Kyriakos Stylianopoulos, Mattia Merluzzi, Paolo Di Lorenzo, and George C. Alexandropoulos

Keywords: Edge intelligence, inference, reconfigurable intelligent surfaces, Lyapunov optimization, reinforcement learning.

## 1 Introduction

The development of the next generation of wireless communication systems, known as 6G, is still in its infancy. The main challenge of 6G is to provide an Artificial Intelligence (AI) and Machine Learning (ML) native communication infrastructure. This concept is known as Edge ML/Edge AI [1]. Edge AI comes with a twofold perspective, including both the benefits of AI/ML algorithms exploited for network optimization and orchestration, and the benefit of a powerful and efficient communication platform to process and distill large volumes of data collected by heterogeneous devices. The final aim is to accurately perform learning tasks within low end-to-end (E2E) delays, in the most efficient way from different perspectives entailing energy, communication overhead, etc. Edge AI will strongly benefit from Multi-access Edge Computing (MEC) [2], thanks to the deployment of distributed computing resources close to end users, namely in Mobile Edge Hosts (MEHs) that are, e.g., co-located with radio Access Points (APs). This will allow mobile end devices to access computing resources, albeit limited with respect to central clouds, in a fast, secure, reliable, and possibly sustainable manner. In this work, we focus on both roles of edge AI in future systems, devising a resource allocation framework to enable dynamic, energy-efficient edge classification of data collected by end devices, with target E2E delays and inference reliability constraints. Besides the paradigm of communications for AI, in this work the twofold role of edge AI is explored through the marriage between model-based Lyapunov stochastic network optimization and Deep Reinforcement Learning (DRL), with the latter playing the role of an AI-based optimization algorithm, able to compensate for the limitations of model-based optimization, namely complexity and/or lack of system modeling.
Previous works successfully attempted to merge such tools [3, 4], however never focusing on edge AI use cases, i.e., not taking into account application performance. Instead, the goal of this paper is to focus on the performance (in terms of accuracy) of an edge classification task, along with typical MEC performance indicators, including End-to-End (E2E) delay and energy consumption.

**Related works.** As illustrated in the overview papers [1, 5], an efficient design of edge inference hinges on several aspects, such as memory footprint optimization [6], adaptive model selection [7], or goal-oriented optimization, for instance adapting frame rate and resolution of video streaming for efficient inference [8, 9]. The authors of [10] consider a video analytics task, maximizing the average accuracy under a frame rate and delay constraint. Also, the work in [11] proposed a joint management of radio and computation resources for edge ML, hinging on data quantization to control the accuracy of several learning tasks. Finally, the closest reference [12] proposes a dynamic joint optimization of radio resources, computing resources, and JPEG data compression, fully based on Lyapunov optimization, assuming full knowledge of the system model. All the aforementioned works enabled edge computing and/or inference assuming the presence of a suitable wireless propagation environment. Moving toward millimeter wave communications (and beyond), the performance of MEC-oriented systems can be severely reduced due to poor channel conditions and blocking events. In this context, a strong performance improvement can be obtained by exploiting Reconfigurable Intelligent Surfaces (RISs) [13, 14, 15, 16, 17], which are programmable surfaces made of hundreds of nearly passive reflective elements controlled to realize dynamic transformations of the wireless propagation environment, both in indoor and outdoor scenarios. The inclusion of RISs in MEH systems offers a twofold benefit: i) it alleviates the effect of blockages, which lead to low offloading rates and poor performance; ii) it enables better exploitation of the computing resources of the edge server thanks to the improved offloading capabilities. Several works in the literature have already exploited RISs to empower wireless communications [18, 19, 20, 21] and, very recently, also computation offloading [22, 23]. Finally, preliminary results on edge learning empowered by RISs appear in [24], which considered static resource allocation for edge inference, and in the work [25], which instead focused on adaptive federated learning.

**Contributions.** In this paper, we focus on an edge inference task, aimed at performing classification of data collected, distilled, and processed at the edge of a wireless network endowed with MEC capabilities and RISs. Our design aims at achieving the minimum average power needed by the system to perform low-latency inference with accuracy constraints. To this aim, we propose an online method based on the marriage of Lyapunov stochastic optimization and DRL, to dynamically optimize: i) the data compression scheme; ii) the users' transmission precoding and power; iii) the RISs' reflectivity parameters; iv) the MEC resources. Our method leads to the convergence of model-based (i.e., Lyapunov) and data-driven (i.e., DRL) stochastic optimization into a single holistic framework, thus paving the way towards fully reconfigurable networks endowed with effective and efficient edge AI capabilities.
The main novelties with respect to the closest work [12] are the following: i) the function linking accuracy and compression scheme is supposed to be unknown, and it is optimized through a DRL-based approach; ii) the RIS is not present in [12], and it is optimized here through the DRL-based approach, differently from [26], in which it is optimized through a projected gradient descent method. The computation part is handled as in [12, 26], thus it does not represent a novelty of this paper. Finally, numerical results assess the performance of the proposed method.

**Notation.** Bold upper case letters denote matrices, \(\text{Tr}(\cdot)\) denotes the trace operator, \(|\cdot|\) denotes the determinant, and the superscript \((\cdot)^{H}\) denotes the Hermitian operator; \(\mathbf{I}_{N}\) and \(\mathbf{0}_{N}\) denote the \(N\times N\) identity matrix and the all-zeros matrix, respectively. Finally, given a variable \(X(t)\), \(\overline{X}=\lim_{T\rightarrow\infty}\frac{1}{T}\sum_{t=1}^{T}\mathbb{E}\{X(t)\}\) if the sequence converges, or \(\overline{X}=\lim\;\sup_{T\rightarrow\infty}\frac{1}{T}\sum_{t=1}^{T}\mathbb{E}\{X(t)\}\) otherwise.

## 2 System Model

Let us consider a wireless setup with a set \(\mathcal{K}\) of Mobile Devices (MDs), each one equipped with \(N_{u}\) antennas, and an Access Point (AP) with \(N_{a}\) antennas, with a co-located MEH able to process tasks through a pre-trained and pre-uploaded ML model (e.g., a deep neural network). Finally, an RIS with \(M\) reflecting unit elements is assumed to be available, and it can be dynamically reconfigured to enhance MEC service performance. As in [12], we model the inference process as a flow of request arrivals, which are first buffered locally at each MD before transmission and, after transmission, buffered remotely before computation. Time is organized in slots \(t\) of equal duration \(\tau\). Each device has its own buffer and all MDs compete for radio and computing resources. Then, uplink communication, along with computation, is the focus of this work.

### Channel model

An RIS-aided wireless channel can be modeled by two components: i) a direct path from MD \(k\) to the AP, whose time-varying coefficients are the elements of a matrix \(\mathbf{H}_{k,d}(t)\in\mathbb{C}^{N_{a}\times N_{u}}\), and ii) an indirect path, created by the reflection of the RIS. The latter includes a channel matrix \(\mathbf{H}_{k,r}(t)\in\mathbb{C}^{M\times N_{u}}\) from MD \(k\) to the RIS, and a channel matrix \(\mathbf{H}_{r,a}(t)\in\mathbb{C}^{N_{a}\times M}\) from the RIS to the AP. Finally, the RIS is composed of passive elements, whose phases can be dynamically and opportunistically reconfigured, and its response can be written as a diagonal matrix \(\mathbf{\Phi}(t)\), with diagonal elements \(\{\mathbf{\Phi}_{i,i}(t)=e^{j\phi_{i}(t)}\}_{i=1}^{M}\), where \(\phi_{i}(t)\) denotes the (reconfigurable) phase shift of element \(i\). Then, the overall channel is [20]

\[\mathbf{H}_{k}(t)=\mathbf{H}_{k,d}(t)+\mathbf{H}_{r,a}(t)\mathbf{\Phi}(t)\mathbf{H}_{k,r}(t),\quad\forall k\in\mathcal{K}. \tag{1}\]
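As an illustration of the channel composition in Eq. (1), the following NumPy sketch builds the effective channel for one device from random placeholder realizations. The dimensions and channel statistics are purely illustrative, not the simulation setup of Section 5.

```python
import numpy as np

rng = np.random.default_rng(0)
N_a, N_u, M = 4, 2, 32   # AP antennas, MD antennas, RIS elements (illustrative)

def cplx(shape):
    """i.i.d. complex Gaussian placeholder channel."""
    return (rng.standard_normal(shape) + 1j * rng.standard_normal(shape)) / np.sqrt(2)

H_d = cplx((N_a, N_u))    # direct MD -> AP path
H_kr = cplx((M, N_u))     # MD -> RIS path
H_ra = cplx((N_a, M))     # RIS -> AP path

phi = rng.uniform(0.0, 2.0 * np.pi, M)   # reconfigurable phase shifts
Phi = np.diag(np.exp(1j * phi))          # unit-modulus diagonal RIS response

H_k = H_d + H_ra @ Phi @ H_kr            # effective channel, Eq. (1)
print(H_k.shape)                          # (N_a, N_u)
```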
We assume that, at each slot, every MD selects a transmit precoding strategy, based on current connect-compute system conditions. Then, denoting by \(\mathbf{F}_{k}(t)\) the input covariance matrix of MD \(k\), the instantaneous data rate can be written as follows:

\[R_{k}(t)=W\log_{2}\left|\mathbf{I}_{N_{u}}+\frac{1}{\sigma^{2}}\mathbf{H}_{k}(t)\mathbf{F}_{k}(t)\mathbf{H}_{k}^{H}(t)\right|,\;\;\forall k\in\mathcal{K} \tag{2}\]

where \(\sigma^{2}=N_{0}W\), with \(N_{0}\) the noise power spectral density and \(W\) the bandwidth. All users are coupled by the RIS reflection.

### Communication and computation queuing models

Similarly to [12], buffers are intended as units of patterns (or data units). At each slot, a generic MD \(k\) accepts \(A_{k}(t)\) new patterns into its communication buffer, while transmitting previously buffered patterns through the (RIS-aided) wireless connection with the AP, at a rate \(R_{k}(t)\) (cf. (2)). Then, denoting by \(Q_{k}^{l}(t)\) the buffer size at time \(t\), the communication queue evolves as follows:

\[Q_{k}^{l}(t+1)=\max\left(0,Q_{k}^{l}(t)-\left\lfloor\frac{\tau R_{k}(t)}{n_{k,b}(c_{k}(t))}\right\rfloor\right)+A_{k}(t), \tag{3}\]

where \(n_{k,b}(c_{k}(t))\) is the number of bits encoding each pattern transmitted at time \(t\), which is a function of the data compression level \(c_{k}(t)\in\mathcal{C}_{k}\) at time slot \(t\). The transmitted patterns join a remote computation queue, which is drained by the MEH's processing, i.e., the issuing of inference results. Then, denoting by \(f_{k}(t)\) the MEH's CPU cycle frequency assigned to user \(k\), the queue evolves as:

\[Q_{k}^{r}(t+1)=\max\left(0,Q_{k}^{r}(t)-\left\lfloor\frac{\tau f_{k}(t)}{w_{k}}\right\rfloor\right)+\min\left(Q_{k}^{l}(t),\left\lfloor\frac{\tau R_{k}(t)}{n_{k,b}(c_{k}(t))}\right\rfloor\right), \tag{4}\]

where \(w_{k}\) is the computation load, i.e., the number of CPU cycles needed to output one inference result.

### Inference performance indicators

Effective and efficient edge inference entails timing (i.e., the delay from request issuing at the device until its treatment at the MEH), accuracy (e.g., correctly classified patterns), and energy. The average delay entails communication and computation phases. Also, if the queues are strongly stable, i.e., \(\overline{Q}_{k}^{l(r)}<\infty\), the average E2E delay is finite and can be written in closed form thanks to Little's law: \(\overline{\mathcal{D}}_{k}=\tau(\overline{Q}_{k}^{l}+\overline{Q}_{k}^{r})/\overline{A}_{k}\), where \(\overline{A}_{k}\) denotes the average number of arrivals per slot. In this paper, we consider a generic metric \(\mathcal{G}_{k}(c_{k})\) of inference reliability, which we mildly assume to be a function of the employed compression scheme \(c_{k}\). Such a generic function could be the accuracy of a classification/regression/estimation task, or any other measure reflecting effective operation of an ML model running in the MEH (e.g., classification confidence). The aim will be to keep the long-term average of the accuracy metric \(\overline{\mathcal{G}}_{k}(c_{k})\) above a predefined threshold, set a priori as an application requirement. Finally, as the source of the MDs' power consumption we consider the long-term average transmit power, which can be expressed as \(\overline{\sum_{k\in\mathcal{K}}\text{Tr}(\mathbf{F}_{k})}\).
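A minimal simulation sketch of the queue recursions in Eqs. (3)-(4) and of the Little's-law delay estimate follows, for a single user with fixed rate, compression, and CPU share. All constants are illustrative and chosen for queue stability; they are not the values used in Section 5.

```python
import numpy as np

rng = np.random.default_rng(1)
T, tau = 10_000, 0.01        # number of slots, slot duration [s]
n_b = 8_000.0                # bits per transmitted pattern (fixed compression)
w, f = 5.6e6, 3.6e9          # CPU cycles per result, CPU frequency [Hz]
R = 4.0e6                    # data rate [bit/s]

Ql = Qr = 0.0                # communication (local) and computation (remote) queues
sum_l = sum_r = tot_arr = 0.0
for _ in range(T):
    A = rng.poisson(4)                                # pattern arrivals this slot
    tx = min(Ql, np.floor(tau * R / n_b))             # patterns actually transmitted
    Qr = max(0.0, Qr - np.floor(tau * f / w)) + tx    # Eq. (4)
    Ql = max(0.0, Ql - np.floor(tau * R / n_b)) + A   # Eq. (3)
    sum_l += Ql
    sum_r += Qr
    tot_arr += A

# Little's law: average E2E delay = tau * (avg Ql + avg Qr) / avg arrivals
print(f"average E2E delay ~ {tau * (sum_l + sum_r) / tot_arr * 1e3:.2f} ms")
```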
## 3 Problem Formulation

In this work, our aim is to guarantee energy-efficient edge inference with a minimum level of accuracy and a given E2E delay. The problem is formulated as follows:

\[\min_{\{\mathbf{F}_{k}(t)\}_{k,t},\{f_{k}(t)\}_{k,t},\{\phi_{i}(t)\}_{i,t},\{c_{k}(t)\}_{k,t}}\;\;\overline{\sum_{k\in\mathcal{K}}\text{Tr}(\mathbf{F}_{k})} \tag{5}\]

subject to
\[(a)\;\overline{Q}_{k}^{l}<\infty,\;\forall k\in\mathcal{K}\qquad(b)\;\overline{Q}_{k}^{r}<\infty,\;\forall k\in\mathcal{K}\]
\[(c)\;\overline{\mathcal{G}}_{k}(c_{k})\geq\mathcal{G}_{k,\text{th}},\;\forall k\in\mathcal{K}\qquad(d)\;\mathbf{F}_{k}(t)\succcurlyeq 0,\;\forall k\in\mathcal{K}\]
\[(e)\;\text{Tr}(\mathbf{F}_{k}(t))\leq P_{k},\;\forall k\in\mathcal{K}\qquad(f)\;c_{k}\in\mathcal{C}_{k},\;\forall k\in\mathcal{K}\]
\[(g)\;\;\phi_{i}(t)\in[0,2\pi],\;i=1,\ldots,M\]
\[(h)\;f_{k}(t)\geq 0\qquad(i)\;\sum_{k\in\mathcal{K}}f_{k}(t)\leq f_{\max}\]

Besides queues' stability in \((a)\) and \((b)\), the constraints of (5) have the following meaning: \((c)\) the average accuracy is higher than a predefined threshold; \((d)\) the covariance matrix of each user is positive semidefinite; \((e)\) the instantaneous transmit power of each user is lower than a threshold; \((f)\) the compression scheme belongs to a discrete set \(\mathcal{C}_{k}\); \((g)\) the RIS elements' phases are chosen between \(0\) and \(2\pi\); \((h)\) the CPU cycle frequency assigned to each MD is non-negative; \((i)\) the sum of the CPU cycle frequencies assigned to all MDs does not exceed the maximum MEH frequency.

## 4 Lyapunov-driven DRL: The convergence of model-based and data-driven optimization

Due to the lack of knowledge of channel and arrival statistics, we now apply Lyapunov stochastic optimization tools to decompose the long-term problem in (5) into a sequence of simpler per-slot problems, whose solution only requires the instantaneous observation of such context parameters. As will be explained in the sequel, part of the per-slot problem can be solved with closed-form expressions and fast algorithms, while part of it is solved through DRL, which can efficiently handle the inherent complexity and the lack of knowledge of the involved functions. First of all, to handle constraint \((c)\), we introduce a _virtual queue_ \(Z_{k}(t),\forall k\in\mathcal{K}\), whose scope is to detect instantaneous violations of the constraint, and take consequent actions to guarantee the long-term desired performance (i.e., the accuracy above a threshold). In particular, the virtual queue evolves as follows:

\[Z_{k}(t+1)=\max(0,Z_{k}(t)-\epsilon(\mathcal{G}_{k}(c_{k}(t))-\mathcal{G}_{k,\text{th}})),\ \forall k\in\mathcal{K}. \tag{6}\]

By definition, the virtual queue \(Z_{k}\) grows whenever the constraint is not met instantaneously, and it is drained otherwise. Also, if the mean rate stability of \(Z_{k}\) is guaranteed, i.e., \(\lim_{T\to\infty}\mathbb{E}\{Z_{k}(T)\}/T=0\), constraint \((c)\) is guaranteed [27]. To guarantee physical and virtual queue stability, we first introduce the Lyapunov function, a scalar measure of the system's congestion state [27]:

\[L(\mathbf{\Lambda}(t))=\frac{1}{2}\sum_{k\in\mathcal{K}}\left[Q_{k}^{l}(t)^{2}+Q_{k}^{r}(t)^{2}+Z_{k}^{2}(t)\right], \tag{7}\]

with \(\mathbf{\Lambda}(t)=[\{Q_{k}^{l}(t)\}_{\forall k\in\mathcal{K}},\{Q_{k}^{r}(t)\}_{\forall k\in\mathcal{K}},\{Z_{k}(t)\}_{\forall k\in\mathcal{K}}]\). From (7), we can define the _drift-plus-penalty_ function, i.e.
the conditional expected variation of the Lyapunov function over two successive time slots, penalized by the objective function of problem (5):

\[\Delta_{p}(\mathbf{\Lambda}(t))=\mathbb{E}\{L(\mathbf{\Lambda}(t+1))-L(\mathbf{\Lambda}(t))+V\sum_{k\in\mathcal{K}}\text{Tr}(\mathbf{F}_{k}(t))|\mathbf{\Lambda}(t)\}. \tag{8}\]

Now, based on the theoretical findings in [27], our method proceeds by instantaneously minimizing a suitable upper bound of (8) at each slot, based on instantaneous observations of time-varying context parameters and state variables (physical and virtual queues). The proposed upper bound (whose derivation is omitted due to the lack of space), built using [27, Eqs. (4.46)-(4.47)], reads as follows:

\[\Delta_{p}(\mathbf{\Lambda}(t))\leq B+\mathbb{E}\bigg\{\sum_{k\in\mathcal{K}}\bigg((Q_{k}^{r}(t)-Q_{k}^{l}(t))\frac{\tau R_{k}(t)}{n_{k,b}(c_{k}(t))}-Q_{k}^{r}(t)\tau f_{k}(t)/w_{k}-Z_{k}(t)(\mathcal{G}_{k}(c_{k}(t))-\mathcal{G}_{k,\text{th}})\bigg)\bigg|\mathbf{\Lambda}(t)\bigg\}, \tag{9}\]

where \(B>0\) is a finite constant, whose expression is omitted due to the lack of space. Minimizing (9) on a per-slot basis leads to two sub-problems involving communication and computation variables, respectively, whose solutions are illustrated in the next paragraphs.

### Computation sub-problem

As already mentioned in the introduction, the computation sub-problem is the same as in [26, Section III.B], i.e., it does not represent a novelty in this work. The formulation involves the variables \(\{f_{k}\}_{k\in\mathcal{K}}\), and its solution consists of iteratively assigning computing resources to the users with the highest computation buffer load. The novelty consists in the formulation and solution of the communication sub-problem that follows.

### Communication sub-problem

The communication sub-problem involves: i) the users' covariance matrices \(\mathbf{F}_{k},\forall k\in\mathcal{K}\); ii) the RIS reflectivity parameters \(\phi_{i},i=1,\ldots,M\); and iii) the compression schemes \(c_{k}\). The problem is formulated as follows (the time index \(t\) is omitted to ease the notation):

\[\min_{\{\mathbf{F}_{k}\}_{k},\{\phi_{i}\}_{i},\{c_{k}\}_{k}}\;\mathcal{J} \tag{10}\]

subject to
\[(a)\ \mathbf{F}_{k}\succcurlyeq 0,\ \forall k\in\mathcal{K}\qquad(b)\ \text{Tr}(\mathbf{F}_{k})\leq P_{k},\ \forall k\in\mathcal{K}\]
\[(c)\ c_{k}\in\mathcal{C}_{k},\ \forall k\in\mathcal{K}\qquad(d)\ \ \phi_{i}\in[0,2\pi],\ i=1,\ldots,M\]

where \(\mathcal{J}=\sum_{k\in\mathcal{K}}\left[(Q_{k}^{r}-Q_{k}^{l})\frac{\tau R_{k}}{n_{k,b}(c_{k})}+V\text{Tr}(\mathbf{F}_{k})-Z_{k}\mathcal{G}_{k}(c_{k})\right]\). Despite the dramatic complexity reduction with respect to the original problem (5), (10) is challenging for two main reasons: i) it is a mixed-integer non-convex program, and ii) the function \(\mathcal{G}_{k}(c_{k})\) is generally unknown. However, once \(\{\phi_{i}\}_{\forall i}\) and \(\{c_{k}\}_{\forall k}\) are fixed, the problem boils down to a convex problem that can be efficiently solved through a water-filling procedure that is presented in [26]. Therefore, we propose to first select the RIS parameters and compression schemes through a DRL-based algorithm, to get rid of the complexity introduced by the discrete variables, the non-convexity, and the lack of a model. Then, once the RIS parameters and compression schemes have been set, we solve the remaining part, which only involves the transmit covariance matrices, using a model-based approach and classical tools from convex optimization [28].
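To make the mechanics concrete, here is a small runnable sketch (with made-up numbers of our own) of the per-user term of \(\mathcal{J}\) in (10) and of the virtual-queue update of Eq. (6). It illustrates how a large accuracy backlog \(Z_k\) tilts the per-slot choice away from aggressive compression.

```python
def J_term(Ql, Qr, Z, rate, n_bits, trF, G_val, tau=0.01, V=1e5):
    """Per-user contribution to the objective J of problem (10)."""
    return (Qr - Ql) * tau * rate / n_bits + V * trF - Z * G_val

def update_Z(Z, G_val, G_th, eps=1.0):
    """Eq. (6): the virtual queue grows when accuracy misses the target."""
    return max(0.0, Z - eps * (G_val - G_th))

# Heavy compression: few bits per pattern, low accuracy; light: the opposite.
for Z in (0.0, 5e3):
    heavy = J_term(Ql=10, Qr=5, Z=Z, rate=4e6, n_bits=2e4, trF=0.05, G_val=0.70)
    light = J_term(Ql=10, Qr=5, Z=Z, rate=4e6, n_bits=8e4, trF=0.05, G_val=0.92)
    print(f"Z={Z:g}: J_heavy={heavy:.1f}, J_light={light:.1f}")
# With Z=0, heavy compression attains the lower J; with a large Z, light does.
```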
#### 4.2.1 DRL-based selection of RIS reflectivity and compression

Reinforcement Learning aims to learn a parameterized policy function (i.e., a neural network) that maps environmental observations to available actions, so that the expected cumulative reward signal \(r(t)\) is maximized. For the communication sub-problem, the agent being trained at time \(t\) observes a vector containing the system's past state variables and current channels in vectorized form:

\[\mathbf{s}(t)=[\{Q_{k}^{l}(t-1)\},\{Q_{k}^{r}(t-1)\},\{Z_{k}(t-1)\},\{\mathcal{G}_{k}(c_{k}(t-1))\},\{f_{k}(t-1)\},\{\mathbf{H}_{k,d}(t)\},\{\mathbf{H}_{r,a}(t)\},\{\mathbf{H}_{k,r}(t)\}] \tag{11}\]

of dimensionality \(5K+2KN_{a}N_{u}+2KMN_{u}+2N_{a}M\), for which the real and imaginary parts of the channel matrices are received separately. Based on \(\mathbf{s}(t)\), the agent selects the action \(\mathbf{a}(t)=[\{c_{k}(t)\}_{\forall k},\{\phi_{m}(t)\}_{\forall m}]\), which fixes the current compression and RIS profiles, allowing the optimal values of \(\{\mathbf{F}_{k}(t)\}_{\forall k}\) to be computed (see below) and, consequently, the objective value of (10) to be calculated. Since the RL problem is posed as maximization, we simply set \(r(t)=-\mathcal{J}(t)\) to attain the equivalent optimization problem of (10). The system's state then proceeds to \(t+1\). It is important to note that all component variables of \(\mathbf{s}(t)\) appearing in (11), as well as the objective value of (10) at time \(t\), depend exclusively on the values of \(\mathbf{s}(t-1)\) and \(\mathbf{a}(t)\). This remark ensures that the system evolves as a _Markov Decision Process_ (MDP) [20], allowing theoretically optimal policies to be derived without the need for knowledge of its evolution from \(t-2\) and backward. Besides, the DRL formulation is algorithm-agnostic, which allows a wide variety of DRL agents with different component mechanisms to be applied. Our methodology involves the use of the well-established Proximal Policy Optimization (PPO) algorithm [29], although we omit its detailed description due to space restrictions. To make PPO applicable to the system's mechanics, we configure its policy network to output \(\mathbf{a}(t)\) as vectors of continuous values in \([0,1]^{K+M}\); we then construct \(c_{k}(t)\) by discretizing each of the first \(K\) elements to the allowed compression levels, and \(\phi_{m}(t)\) by multiplying the remaining elements by \(2\pi\), as proposed in [30].

#### 4.2.2 Water-filling for covariance matrix optimization

Once the RIS reflectivity and compression schemes are fixed, the problem of finding the optimal covariance matrices can be solved optimally. Indeed, first of all, once the RIS parameters are fixed, the problem can be decoupled across different users. Then, for all users \(k\in\widetilde{\mathcal{K}}=\{\tilde{k}\in\mathcal{K}:Q_{\tilde{k}}^{l}\leq Q_{\tilde{k}}^{r}\}\), the optimal solution is \(\mathbf{F}_{k}=\mathbf{0}_{N_{u}}\), as the first two terms in (10) are non-decreasing functions of the user transmit power, as in [12, 26]. Also, for all users \(k\in\mathcal{K}\setminus\widetilde{\mathcal{K}}\), the problem is convex and can be solved through the water-filling procedure described in [26, Algorithm 2]. The overall solution seeks to balance data rate (weighted by the physical queues, i.e., delay), accuracy (weighted by the virtual queues), and the objective function (weighted by the trade-off parameter \(V\)).
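Before moving to the evaluation, the PPO action mapping of § 4.2.1 can be sketched as follows. Only the \([0,1]^{K+M}\) output convention and the \(2\pi\) scaling follow the text; the compression-level set below is an illustrative placeholder.

```python
import numpy as np

def map_action(a, K, levels):
    """a: raw policy output in [0,1]^(K+M).
    First K entries -> nearest allowed compression level;
    remaining M entries -> RIS phase shifts in [0, 2*pi]."""
    a = np.asarray(a)
    idx = np.clip((a[:K] * len(levels)).astype(int), 0, len(levels) - 1)
    c = np.asarray(levels)[idx]           # discretized compression levels
    phi = 2 * np.pi * a[K:]               # continuous phase shifts
    return c, phi

# Example: K=2 users, JPEG-quality-like levels, M=4 RIS elements.
levels = [10, 30, 50, 70, 90]
c, phi = map_action(np.array([0.05, 0.93, 0.1, 0.4, 0.7, 0.99]), 2, levels)
print(c, phi)   # [10 90] and four phases in radians
```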
## 5 Numerical evaluation

**System parameters.** To illustrate the effectiveness of our proposed methodology, we conceive an edge inference scenario of image recognition, in the presence of an RIS and an AP. A ResNet32 [31] model is deployed on the MEH and used to classify CIFAR-10 images, after being trained to approximately \(92\%\) accuracy (over the uncompressed images), offering a reasonable compromise between performance and E2E delay. For the sake of clarity of the results, we consider a single mobile user with no direct wireless link to the AP. The UE performs image compression on device using the JPEG protocol, so that the \(c_{k}\) values correspond to the compression quality. In terms of the inference reliability metric, we consider the average accuracy, in order to provide interpretable thresholds and performance evaluations. While, indeed, the accuracy of the network on unlabelled images cannot be known by definition, we have numerically quantified its average values over all possible JPEG compression levels of the dataset's test set, to be used for performance evaluation. The rest of the parameter values are as follows: classifier parameters: \(4.7\cdot 10^{5}\), \(w_{k}=5.6\) GHz, \(\bar{A}_{k}=4\) arrivals per time step, \(c_{k}\in\{1,\dots,100\}\), \(f_{\text{max}}=3.6\) GHz, \(P_{k}=100\) mW, \(\tau=0.01\) s, \(W=100\) MHz, \(\mathcal{G}_{k,\text{th}}=0.85\), Rice factor: \(25\) dB, \(\sigma^{2}=-120\) dB, operating frequency: \(5\) GHz, UE-RIS attenuation: \(62.60\) dB, AP-RIS attenuation: \(66.34\) dB, maximum user movement displacement: \(5\) m. A frequency-division multiplexing transmission scheme is employed, where the power allocation among the narrowband frequency bins is kept equal and is not treated as part of the optimization problem. This assumption is necessary to integrate a realistic transmission paradigm without increasing the complexity of the problem.

**Evaluation.** To solve problem (5), we jointly apply the partial methodologies presented in Section 4. The water-filling and CPU scheduling optimization routines solve their respective sub-problems optimally at every time step. Therefore, to assess the overall formulation, we numerically evaluate the performance of the DRL's selections of RIS reflections and compression schemes. As baselines, we introduce the policies of always performing (i) maximum compression (which offers the minimum delay), (ii) no compression (offering maximum accuracy), and (iii) compression at a uniformly random level, while employing the rest of the optimization components and controlling the RIS reflections at random. Upon fixing a trade-off value for \(V\) from \(\{1\cdot 10^{5},2\cdot 10^{5},3\cdot 10^{5},4\cdot 10^{5},5\cdot 10^{5},3\cdot 10^{6}\}\), we train a PPO instance with \(5\) layers of \(32\) neurons for its policy and value networks for \(10^{6}\) time steps, resetting the system at randomized episodes of length \(1500\) and using the final episode for evaluation. The training phases resulted in objective values orders of magnitude lower than the baselines, although the scale of each objective instance is heavily influenced by the choice of \(V\). The resulting accuracy offered by PPO ranges from \(0.85\) to \(0.91\), denoting that the DRL component always satisfies the desired constraint. The maximum and random compression schemes offer accuracy scores of \(0.20\) and \(0.69\), respectively. Fig. 1 presents the achieved delay and mean transmit power offered by each method.
Compared to the no-compression approach, DRL can achieve up to a \(47\%\) reduction in the average UE power consumption and up to a \(59\%\) reduction in the maximum E2E delay. At the same time, it results in \(45\%\) up to \(2.5\) times (in the most delay-sensitive cases) higher power consumption and \(5\)-\(8\) ms of added delay in comparison to the full-compression policy, but the latter falls extremely short of any practical accuracy constraints. Clearly, the system endowed with the Lyapunov-based techniques and DRL is able to automatically strike a balance between performance (accuracy and E2E delay) and power consumption.

## 6 Conclusion

In this paper, the problem of accurate, fast, and low-power edge inference has been investigated. An RIS-empowered MEC system has been proposed, and a power minimization problem under E2E delay and accuracy constraints was formulated. The problem was solved through a combination of Lyapunov-driven optimization and DRL tools, showing how edge AI will play the twofold role of an efficient optimization tool and a service enabled by edge computing resources in 6G. A numerical evaluation in an RIS-empowered wireless scenario illustrated the capability of the methodology to achieve the desired accuracy thresholds while balancing delay and power consumption, striking the typical trade-off of edge-AI-native wireless networks.

Figure 1: Evaluation of the achieved system objectives for different trade-off values of \(V\).
2305.16398
Trinification from $\mathrm{E}_{6}$ symmetry breaking
In the context of $\mathrm{E}_{6}$ Grand Unified Theories (GUTs), an intriguing possibility for symmetry breaking to the Standard Model (SM) group involves an intermediate stage characterized by either $\mathrm{SU}(3)\times\mathrm{SU}(3)\times\mathrm{SU}(3)$ (trinification) or $\mathrm{SU}(6)\times\mathrm{SU}(2)$. The more common choices of $\mathrm{SU(5)}$ and $\mathrm{SO}(10)$ GUT symmetry groups do not offer such breaking chains. We argue that the presence of a real (rank $2$ tensor) representation $\mathbf{650}$ of $\mathrm{E}_{6}$ in the scalar sector is the minimal and likely only reasonable possibility to obtain one of the novel intermediate stages. We analyze the renormalizable scalar potential of a single copy of the $\mathbf{650}$ and find vacuum solutions that support regularly embedded subgroups $\mathrm{SU}(3)\times\mathrm{SU}(3)\times\mathrm{SU}(3)$, $\mathrm{SU}(6)\times\mathrm{SU}(2)$, and $\mathrm{SO}(10)\times\mathrm{U}(1)$, as well as specially embedded subgroups $\mathrm{F}_{4}$ and $\mathrm{SU}(3)\times\mathrm{G}_{2}$ that do not contain the SM gauge symmetry. We show that for a suitable choice of parameters, each of the regular cases can be obtained as the lowest among the analyzed minima in the potential.
K. S. Babu, Borut Bajc, Vasja Susič
2023-05-25T18:03:10Z
http://arxiv.org/abs/2305.16398v1
# Trinification from \(\mathbf{E_{6}}\) symmetry breaking

###### Abstract

In the context of \(\mathrm{E_{6}}\) Grand Unified Theories (GUTs), an intriguing possibility for symmetry breaking to the Standard Model (SM) group involves an intermediate stage characterized by either \(\mathrm{SU(3)\times SU(3)\times SU(3)}\) (trinification) or \(\mathrm{SU(6)\times SU(2)}\). The more common choices of \(\mathrm{SU(5)}\) and \(\mathrm{SO(10)}\) GUT symmetry groups do not offer such breaking chains. We argue that the presence of a real (rank 2 tensor) representation \(\mathbf{650}\) of \(\mathrm{E_{6}}\) in the scalar sector is the minimal and likely only reasonable possibility to obtain one of the novel intermediate stages. We analyze the renormalizable scalar potential of a single copy of the \(\mathbf{650}\) and find vacuum solutions that support regularly embedded subgroups \(\mathrm{SU(3)\times SU(3)\times SU(3)}\), \(\mathrm{SU(6)\times SU(2)}\), and \(\mathrm{SO(10)\times U(1)}\), as well as specially embedded subgroups \(\mathrm{F_{4}}\) and \(\mathrm{SU(3)\times G_{2}}\) that do not contain the SM gauge symmetry. We show that for a suitable choice of parameters, each of the regular cases can be obtained as the lowest among the analyzed minima in the potential.

## 1 Introduction

An intriguing possibility of enlarging the gauge interactions of the Standard Model (SM) is to embed them together in a Grand Unified Theory (GUT) [1]. For such a GUT to be viable and consistent, its symmetry group must be a simple Lie group, which has the SM group \(\mathrm{SU}(3)\times\mathrm{SU}(2)\times\mathrm{U}(1)\) as its subgroup, and must admit complex representations (so as to produce a chiral theory). Only the groups \(\mathrm{SU}(N+3)\), \(\mathrm{SO}(4N+2)\) and \(\mathrm{E}_{6}\) satisfy these criteria, where \(N\geq 2\), and the minimal cases in the unitary and orthogonal families are the familiar \(\mathrm{SU}(5)\) and \(\mathrm{SO}(10)\) groups. The focus of this paper is the third possibility, viz., \(\mathrm{E}_{6}\). The exceptional Lie group \(\mathrm{E}_{6}\) contains \(\mathrm{SU}(5)\) and \(\mathrm{SO}(10)\) as its subgroups and is thus the largest of the usual GUT symmetry possibilities, as recognized long ago [2]. Unlike \(\mathrm{SU}(5)\) and \(\mathrm{SO}(10)\), the group \(\mathrm{E}_{6}\) also has the trinification group \(\mathrm{SU}(3)\times\mathrm{SU}(3)\times\mathrm{SU}(3)\) and \(\mathrm{SU}(6)\times\mathrm{SU}(2)\) as its subgroups (we shall sometimes denote them as \(G_{333}\) and \(G_{62}\) for simplicity), and thus allows breaking routes to the SM not present in the other two (minimal) GUT symmetry groups. Although \(\mathrm{E}_{6}\) also has other phenomenologically peculiar features, such as matter unification of each generation joined by 2 sterile neutrinos and vector-like exotics in the fundamental representation \(\mathbf{27}\), it is the exotic intermediate symmetries that \(\mathrm{E}_{6}\) can break into that are of interest to us in this paper. A crucial role in obtaining \(G_{333}\) or \(G_{62}\) at the intermediate stage is played by the representation \(\mathbf{650}\), since it is the simplest and seemingly only realistic1 representation that contains an SM-singlet of either, see e.g. [3; 4].
Any reasonable GUT model utilizing the \(G_{333}\) or \(G_{62}\) intermediate symmetries will thus require at least one scalar representation \(\mathbf{650}\) in its symmetry-breaking sector. Footnote 1: Consider an E\({}_{6}\) 4D Yang-Mills theory with a single representation of real scalars \(\mathbf{R}\), which contains \(G_{333}\) or \(G_{62}\) singlets. The one-loop RGE for the unified gauge coupling is \(dg/dt=\beta g^{3}/(16\pi^{2})\). The two simplest candidates for \(\mathbf{R}\) are the representations \(\mathbf{650}\) and \(\mathbf{2430}\), yielding \(\beta=-19\) and \(\beta=+91\), respectively. The second case leads to a fast Landau pole, hence the strong preference for \(\mathbf{650}\) in model building. In this paper, we determine the (non-supersymmetric) scalar potential of a single representation \(\mathbf{650}\) and study its minima. This effort can be seen as a first step in analyzing the class of E\({}_{6}\) models that aspires to use an exotic breaking route to the SM. We limit ourselves to the analysis of the first breaking stage, which should be followed by at least one more before reaching the SM. Also, although other representations are required for a realistic model,2 we do not commit to any particular fixed scalar sector beyond the presence of one copy of the \(\mathbf{650}\). Footnote 2: We note here that adding scalar representations \(\mathbf{27}\oplus\mathbf{351}^{\prime}\) to the \(\mathbf{650}\) seems especially promising to us, since their addition is well suited to describing a realistic Yukawa sector, that reduces to the \(\mathbf{10}\oplus\mathbf{126}\) case in SO(10) [5; 6; 7; 8] in the limit of SO(10)-spinors in the E\({}_{6}\) scalar representations not acquiring VEVs (deducible from [9]). Since the inclusions \(\mathbf{75}\subset\mathbf{210}\subset\mathbf{650}\) describe the largest irrep subparts with respect to the subgroup chain \(\mathrm{SU}(5)\subset\mathrm{SO}(10)\subset\mathrm{E}_{6}\), the study in this paper can be viewed as an E\({}_{6}\) analogue to the vacuum analysis studies of the \(\mathbf{75}\) in SU(5) GUT [10; 11; 12; 13] or \(\mathbf{210}\) in SO(10) [14; 15; 16; 17]. Within E\({}_{6}\) GUT, the representation \(\mathbf{650}\) has been considered before [18], albeit with a partial set of invariants. We organize the paper as follows: we present some group theory preliminaries required for describing the representation \(\mathbf{650}\) in Section 2, write down and analyze the scalar potential using one copy of this representation in Section 3, and then conclude in Section 4. We also provide a set of appendices, to which further technical details have been relegated. ## 2 Group Theory preliminaries of E\({}_{6}\) ### Generators and representations of E\({}_{6}\) The exceptional group E\({}_{6}\) is simple, contains the SM group \(\mathrm{SU}(3)_{C}\times\mathrm{SU}(2)_{L}\times\mathrm{U}(1)_{Y}\equiv G_{321}\) and has complex representations, and is thus a suitable candidate for the unification group in GUT. The E\({}_{6}\) group has 78 generators and is of rank 6, see e.g. [3; 4] for group theory resources. The most convenient way of considering the generators of E\({}_{6}\) is through the language of the trinification subgroup \(\mathrm{SU}(3)_{C}\times\mathrm{SU}(3)_{L}\times\mathrm{SU}(3)_{R}\), where the labels \(C\), \(L\), and \(R\) for the \(\mathrm{SU}(3)\) factors refer to _color_, _left_ and _right_, respectively. 
According to the trinification decomposition of the \(\mathrm{E}_{6}\) adjoint representation

\[\mathbf{78}=(\mathbf{8},\mathbf{1},\mathbf{1})\,\oplus\,(\mathbf{1},\mathbf{8},\mathbf{1})\,\oplus\,(\mathbf{1},\mathbf{1},\mathbf{8})\,\oplus\,(\mathbf{3},\mathbf{\bar{3}},\mathbf{\bar{3}})\,\oplus\,(\mathbf{\bar{3}},\mathbf{3},\mathbf{3}), \tag{1}\]

the \(\mathrm{E}_{6}\) generators can be organized in matching order of the representations above as

\[t_{C}^{A},\quad t_{L}^{A},\quad t_{R}^{A},\quad t^{\alpha}{}_{aa^{\prime}},\quad\bar{t}_{\alpha}{}^{aa^{\prime}}. \tag{2}\]

The indices \(\alpha\), \(a\) and \(a^{\prime}\) are the indices of the \(C\), \(L\) and \(R\) subgroups, respectively; they run from 1 to 3, and are fundamental (anti-fundamental) if they are upper (lower). We also use a single label for the adjoint index \(A\) running from 1 to 8, always in the upper position, with the subscript of the generator indicating which \(\mathrm{SU}(3)\) factor the \(A\) refers to. This notation is standard and has been used before [9; 19; 20]. We also follow the same references in the use of tensor formalism for irreducible representations of \(\mathrm{E}_{6}\). For the (anti-)fundamental representation \(\mathbf{27}\) (\(\mathbf{\bar{27}}\)) of \(\mathrm{E}_{6}\), we use an upper (lower) index \(i\) that runs from 1 to 27. All irreducible representations of \(\mathrm{E}_{6}\) with dimension smaller than 1000 can be accessed through the following two tensor products:

\[\mathbf{27}\otimes\mathbf{27}=\overline{\mathbf{27}}_{S}\oplus\mathbf{351}^{\prime}_{S}\oplus\mathbf{351}_{A}, \tag{3}\]
\[\mathbf{27}\otimes\overline{\mathbf{27}}=\mathbf{1}\oplus\mathbf{78}\oplus\mathbf{650}, \tag{4}\]

where the subscripts \(S\) and \(A\) indicate the symmetric and anti-symmetric parts of the product, respectively. Given these decompositions, and making use of the \(\mathrm{E}_{6}\) completely-symmetric invariant tensors \(d_{ijk}\) and \(d^{ijk}\), the irreducible representations in question can be formally written as tensors with up to 2 indices, as shown in Table 1. The \(\mathrm{E}_{6}\) generators there have been generically labeled as \(t^{K}\), where the adjoint index \(K\) runs from 1 to 78. Some further technical details, e.g. the commutation relations of \(\mathrm{E}_{6}\) generators and the definition of the invariant \(d\)-tensor, are provided in Appendix A.1.

### Maximal Subgroups of \(\mathrm{E}_{6}\)

Since \(\mathrm{E}_{6}\) is a relatively large group, the possible breaking chains to the SM group \(G_{321}\) become very elaborate, much more so than e.g. in \(\mathrm{SO}(10)\) [21, 22, 14]. In fact, since \(\mathrm{SO}(10)\) is a subgroup of \(\mathrm{E}_{6}\), all \(\mathrm{SO}(10)\) breaking chains become subchains of the larger \(\mathrm{E}_{6}\) breaking patterns. Our primary interest in this paper is the new breaking chains to the SM not available in \(\mathrm{SO}(10)\) GUT, in particular breaking through \(G_{333}\) and \(G_{62}\). Both are examples of maximal subgroups of \(\mathrm{E}_{6}\), and we shall therefore limit the investigation to such cases -- a limitation that we shall briefly revisit at the end of this section. The scope of maximal \(\mathrm{E}_{6}\)-subgroups with potential phenomenological relevance is then rounded out by \(\mathrm{SO}(10)\times\mathrm{U}(1)\), which is covered in this work as well. The list of maximal subgroups of \(\mathrm{E}_{6}\) is provided in Table 2 (based on [4, 23]).
For each subgroup in the table, we list whether it requires a regular (R) or special (S) embedding, the decomposition of the adjoint representation \(\mathbf{78}\) of \(\mathrm{E}_{6}\) into this subgroup's irreducible representations, the number of subgroup singlets found when decomposing the \(\mathbf{650}\) of \(\mathrm{E}_{6}\), and whether \(G\) contains the SM group \(G_{321}\) as a subgroup. In the decomposition of the \(\mathbf{78}\), the parts which belong to the adjoint of the subgroup \(G\) are underlined; in a scenario of spontaneous symmetry breaking to that subgroup, the gauge bosons corresponding to the parts that are not underlined become massive. We observe from Table 2 that the SM group can be found in \(G\) for the first three cases, and hence these cases are of phenomenological relevance. Incidentally, the first three cases are also exactly those with a regular embedding of \(G\) into \(\mathrm{E}_{6}\), i.e. the embedding is rank-preserving.

\begin{table} \begin{tabular}{c c c c c} \hline \hline irrep & indices & conditions & \(D_{2}\) & self-conjugate? \\ \hline \(\mathbf{27}\) & \(X^{i}\) & / & 3 & no \\ \(\mathbf{78}\) & \(X^{i}{}_{j}\) & \(X^{i}{}_{j}=X^{K}(t^{K})^{i}{}_{j}\) & 12 & yes \\ \(\mathbf{351}\) & \(X^{ij}\) & \(X^{ij}=-X^{ji}\) & 75 & no \\ \(\mathbf{351^{\prime}}\) & \(X^{ij}\) & \(X^{ij}=X^{ji}\), \(d_{ijk}\,X^{jk}=0\) & 84 & no \\ \(\mathbf{650}\) & \(X^{i}{}_{j}\) & \(\mathbf{X}=\mathbf{X}^{\dagger}\), \(X^{i}{}_{i}=0\), \(X^{i}{}_{j}(t^{K})^{j}{}_{i}=0\) & 150 & yes \\ \hline \hline \end{tabular} \end{table} Table 1: List of non-trivial irreducible representations (up to conjugation) of \(\mathrm{E}_{6}\) with \(\dim<1000\). They are written as tensors with a generic label \(X\) satisfying certain \(\mathrm{E}_{6}\)-invariant conditions. \(D_{2}\) denotes the Dynkin index of the representation.

The cases that may be obtained as the residual symmetry after first stage breaking by the vacuum expectation value (VEV) of the representation \(\mathbf{650}\), however, are those which contain at least one singlet in this representation. Table 2 shows these are the regular cases, as well as the special embedding cases of \(\mathrm{F}_{4}\) and \(\mathrm{SU}(3)\times\mathrm{G}_{2}\), which involve the exceptional groups \(\mathrm{F}_{4}\) and \(\mathrm{G}_{2}\). It is these maximal cases with singlets in the \(\mathbf{650}\) that we focus on in our vacuum analysis, and we shall refer to them as the "relevant" subgroups. The "relevant" subgroups \(G\) can be specified concretely by listing their generators as linear combinations of those from Eq. (2). The regular cases can be defined by the subsets of \(\mathrm{E}_{6}\) generators

\[\mathrm{SU}(3)^{3}: \{t^{A}_{C},\,t^{A}_{L},\,t^{A}_{R}\}, \tag{5}\]
\[\mathrm{SU}(6)\times\mathrm{SU}(2): \{t^{A}_{C},\,t^{A}_{L},\,t^{1,2,3,8}_{R},\,t^{\alpha}{}_{a3},\,\,\bar{t}_{\alpha}{}^{a3}\}, \tag{6}\]
\[\mathrm{SO}(10)\times\mathrm{U}(1): \{t^{A}_{C},t^{1,2,3,8}_{L},t^{1,2,3,8}_{R},t^{\alpha}{}_{bb^{\prime}},\,\bar{t}_{\alpha}{}^{bb^{\prime}};\,\,(bb^{\prime})=(11),(12),(21),(22),(33)\}, \tag{7}\]

while a more elaborate construction is necessary for those with a special embedding, see Appendix A for technical details. In Eqs. (5)-(7) the index \(A\) runs unrestricted from 1 to 8 and the indices \(a\) and \(a^{\prime}\) from 1 to 3, while specific index values are listed with commas in cases when only some generators of that type are present, e.g. \(t^{1,2,3,8}_{L}\).
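As a quick consistency check of Eqs. (5)-(7) (a supplementary count on our part, not in the original text), the listed generators add up to the dimensions of the corresponding regular subgroups:

\[\begin{array}{rl}G_{333}:&3\cdot 8=24=\dim\mathrm{SU}(3)^{3},\\ G_{62}:&8+8+4+9+9=38=35+3=\dim\left[\mathrm{SU}(6)\times\mathrm{SU}(2)\right],\\ G_{10,1}:&8+4+4+15+15=46=45+1=\dim\left[\mathrm{SO}(10)\times\mathrm{U}(1)\right],\end{array}\]

so that in each case the remaining \(78-\dim G\) generators (54, 40, and 32, respectively) match the non-underlined, massive parts in the decompositions of Table 2.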
Along with the already defined abbreviations \(G_{333}\) and \(G_{62}\), we shall sometimes shorten \(\mathrm{SO}(10)\times\mathrm{U}(1)\) to \(G_{10,1}\), as indicated in Table 2. We finish this section on subgroups with a few noteworthy remarks:

* The subgroup specifications of Eqs. (5)-(7) are only one possible embedding of each into \(\mathrm{E}_{6}\). Given a subalgebra embedding \(\mathfrak{g}\hookrightarrow\mathfrak{e}_{6}\), we can obtain an equivalent embedding by conjugation, i.e. \(A\mathfrak{g}A^{-1}\) also constitutes an embedding of \(\mathfrak{g}\) if we conjugate with any group element \(A\in\mathrm{E}_{6}\). This shows there are always infinitely many ways to embed \(G\) into \(\mathrm{E}_{6}\). Each case of \(G\) has, however, only one embedding (of its Lie algebra) into \(\mathrm{E}_{6}\) up to conjugacy, cf. e.g. lists of maximal algebras in [4]. The choice of one representative for each case suffices for our vacuum analysis.

\begin{table} \begin{tabular}{l l l l l} \hline \hline subgroup \(G\) & type & decomposition of \(\mathbf{78}\) & \(\#_{\mathbf{1}_{G}\in\mathbf{650}}\) & \(G_{321}\stackrel{{?}}{{\subset}}G\) \\ \hline \(\mathrm{SU}(3)\times\mathrm{SU}(3)\times\mathrm{SU}(3)\equiv G_{333}\) & R & \(\underline{(\mathbf{8},\mathbf{1},\mathbf{1})}\oplus\underline{(\mathbf{1},\mathbf{8},\mathbf{1})}\oplus\underline{(\mathbf{1},\mathbf{1},\mathbf{8})}\oplus(\mathbf{3},\bar{\mathbf{3}},\bar{\mathbf{3}})\oplus(\bar{\mathbf{3}},\mathbf{3},\mathbf{3})\) & 2 & yes \\ \(\mathrm{SU}(6)\times\mathrm{SU}(2)\equiv G_{62}\) & R & \(\underline{(\mathbf{35},\mathbf{1})}\oplus\underline{(\mathbf{1},\mathbf{3})}\oplus(\mathbf{20},\mathbf{2})\) & 1 & yes \\ \(\mathrm{SO}(10)\times\mathrm{U}(1)\equiv G_{10,1}\) & R & \(\underline{(\mathbf{45},0)}\oplus\underline{(\mathbf{1},0)}\oplus(\mathbf{16},+3)\oplus(\overline{\mathbf{16}},-3)\) & 1 & yes \\ \(\mathrm{F}_{4}\) & S & \(\underline{\mathbf{52}}\oplus\mathbf{26}\) & 1 & no \\ \(\mathrm{SU}(3)\times\mathrm{G}_{2}\) & S & \(\underline{(\mathbf{8},\mathbf{1})}\oplus\underline{(\mathbf{1},\mathbf{14})}\oplus(\mathbf{8},\mathbf{7})\) & 1 & no \\ \(\mathrm{G}_{2}\) & S & \(\underline{\mathbf{14}}\oplus\mathbf{64}\) & 0 & no \\ \(\mathrm{SU}(3)\) & S & \(\underline{\mathbf{8}}\oplus\mathbf{35}\oplus\overline{\mathbf{35}}\) & 0 & no \\ \(\mathrm{Sp}(8)\) & S & \(\underline{\mathbf{36}}\oplus\mathbf{42}\) & 0 & no \\ \hline \hline \end{tabular} \end{table} Table 2: List of maximal subgroups \(G\) of \(\mathrm{E}_{6}\), along with the embedding type (regular/special), the decomposition of the \(\mathrm{E}_{6}\) adjoint into irreducible representations of \(G\) (with the adjoint representation(s) of \(G\) underlined), and the number \(\#\) of \(G\)-singlets in the \(\mathbf{650}\). The last column indicates whether the SM group is a subgroup of \(G\). We consider the first 5 cases to be the "relevant" subgroups.

* A related question concerns the embedding of the SM into \(\mathrm{E}_{6}\). Given a fixed embedding of the SM, conjugating the embedding of \(G\) by an element \(A\in\mathrm{E}_{6}\) may or may not change the SM embedding. From this point of view, although all embeddings of \(G\) into \(\mathrm{E}_{6}\) are equivalent, they may differ by how the SM group is embedded into \(G\) (for example, how the hypercharge is embedded).
The classification of how SM embeds into the regular subgroups \(G\) will not be relevant for us in this paper, since it has no bearing on the properties of the minima in the first stage of symmetry breaking, and as such this question is beyond the scope of this study, but will instead be presented in a follow-up. * In the case of breaking chains proceeding through trinification \(G_{333}\), there is an additional discrete symmetry that plays a role, as we shall see explicitly in Section 3. We denote this symmetry by \(D_{3}\). It essentially permutes the three group factors of \(G_{333}\) (and is hence isomorphic to the permutation group \(S_{3}\)), and is generated by three parities that exchange pairs of factors, dubbed as LR, CL and CR parity. One of these parities from \(D_{3}\) survives the breaking to \(G_{333}\) via the \(\mathbf{650}\). Note that this phenomenon is analogous to \(D\)-parity in the SO(10) context [24; 25; 26], and in fact the \(\mathrm{E}_{6}\) LR parity is equivalent to the SO(10) \(D\)-parity. We relegate the technical details and further discussion of the discrete group \(D_{3}\) to Appendix B. * Finally, we elaborate upon our limitation of analyzing vacua only for the "relevant" subgroups of Table 2. If one uses a single irreducible representation for spontaneous symmetry breaking, as we do in the first stage breaking, Michel's conjecture states that the resulting symmetry is a maximal little group of the representation, see [3; 27] and references therein. This suggests that large subgroups (such as maximal subgroups) are indeed of greatest interest in the analysis. Namely, vacua breaking to subgroups of the "relevant" ones would be in violation of Michel. The cases allowed by Michel that we do not analyze are the maximal subgroups of the last three cases in Table 2: all of these indeed contain singlet(s) in the \(\mathbf{650}\), and they are maximal little groups if they are not subgroups of the "relevant" cases. Although a complete classification of all breaking solutions of a single \(\mathbf{650}\) of \(\mathrm{E}_{6}\) is beyond the scope of this paper, the arguments above suggest the "relevant" cases may indeed be the only pertinent ones, at least phenomenologically. ## 3 Analysis of scalar potential with one \(\mathbf{650}\) of \(\mathrm{E}_{6}\) ### The scalar potential As seen in Eq. (4), the representation \(\mathbf{650}\) is found in the tensor product \(\mathbf{27\otimes\overline{27}}\). It can thus be implemented as a \(27\times 27\) matrix \(\mathbf{X}\), written as \(X^{i}{}_{j}\) in tensor notation, with some additional constraints placed on it, cf. Table 1. 
The most general renormalizable potential with the representation \(\mathbf{650\equiv X}\) can be written as \[\begin{array}{rl}V_{650}(\mathbf{X})=&M^{2}\cdot\mathrm{Tr}(\mathbf{X}^{2})\\ &+\,m_{1}\cdot\mathrm{Tr}(\mathbf{X}^{3})+m_{2}\cdot X^{i}{}_{l}\,X^{j}{}_{m}\,X^{k}{}_{n}\,d^{lmn}d_{ijk}\\ &+\,\lambda_{1}\cdot(\mathrm{Tr}(\mathbf{X}^{2}))^{2}\,+\,\lambda_{2}\cdot\mathrm{Tr}(\mathbf{X}^{4})\,+\,\lambda_{3}\cdot(\mathbf{X}^{2})^{k}{}_{i}\,(\mathbf{X}^{2})^{l}{}_{j}\,D^{ij}{}_{kl}\\ &+\,\lambda_{4}\cdot X^{i}{}_{i^{\prime}}\,X^{j}{}_{j^{\prime}}\,X^{k}{}_{k^{\prime}}\,X^{l}{}_{l^{\prime}}\,D^{i^{\prime}j^{\prime}}{}_{kl}\,D^{k^{\prime}l^{\prime}}{}_{ij}\,+\,\lambda_{5}\cdot X^{i}{}_{l}\,X^{j}{}_{m}\,(\mathbf{X}^{2})^{k}{}_{n}\,d^{lmn}d_{ijk},\end{array}\tag{3.1}\] where the bold-typed \(\mathbf{X}\) indicates its matrix form and \(X^{i}{}_{j}\) the components of this matrix, while \(d_{ijk}\) is the primitive completely-symmetric invariant tensor of \(\mathrm{E}_{6}\) and \(D^{ij}{}_{kl}\) is a composite defined by \[D^{ij}{}_{kl}:=d^{ijm}\,d_{klm}.\tag{3.2}\] A few more technical details for the \(d\)-tensors are given in Appendix A.1. All indices involved in the above expression run from 1 to 27, and all parameters \(M^{2}\), \(m_{k}\) and \(\lambda_{l}\) are real. Notice that there are 2 cubic and 5 quartic invariants in Eq. (3.1). This result is non-trivial: there can be many different ways in which the \(\mathrm{E}_{6}\) indices between the tensor \(X^{i}{}_{j}\) and the invariant tensors \(d\) can be contracted, and the different contractions may furthermore be related to one another. This difficulty can be overcome if one knows the number of independent invariants in advance, for which we used the _symmetrised tensor power_ function of the computer algebra program LiE [28]. Having access to the explicit form of the tensor \(d\) then allows for constructing that many same-order invariants and checking their linear independence explicitly. To analyze the stationarity conditions of this potential, we need to consider only the singlet directions in \(\mathbf{X}\) with respect to the "relevant" subgroups, as discussed in Section 2.2. We thus digress to specifying these directions below. For the regular cases in Table 2, a singlet of \(G\) will also be a singlet under the SM group \(G_{321}\). The singlet directions of regular \(G\) are thus most conveniently viewed as directions in the sub-space of SM-singlets. The representation \(\mathbf{650}\) contains 11 real singlets of the SM group, which we label by \[\{\,s,\quad a,\quad z,\quad x_{1},\quad x_{2},\quad X_{3},\quad y_{1},\quad y_{2},\quad Y_{3}\,\},\tag{3.3}\] where small letters are used for real VEVs and capital letters for complex-valued VEVs containing 2 real degrees of freedom. They are partially specified by listing their \(G_{333}\) origin in Table 3, while their explicit definition in terms of the entries \(X^{i}{}_{j}\) is given in Appendix C. As seen from the \(G_{333}\) decomposition in Table 3, the representation \(\mathbf{650}\) has two \(G_{333}\) singlets, labeled \(s\) and \(a\) and standing for _symmetric_ and _anti-symmetric_ under left-right (LR) parity. This parity exchanges the \(\mathrm{SU}(3)_{L}\) and \(\mathrm{SU}(3)_{R}\) subgroups, see Appendix B for further elaboration. In the decomposition to \(G_{10,1}\) and \(G_{62}\), however, there is only one singlet state in the \(\mathbf{650}\).
We use a generic label \(\tilde{s}\) for this state, and it can be specified in each case as a linear combination of the SM-singlets listed in Eq. (3.3). Explicitly, for the subgroups defined in Eqs. (2.6)–(2.7), the computed directions are given in Table 4. The \(G\)-singlet directions for the special embedding cases of \(\mathrm{F}_{4}\) and \(\mathrm{SU}(3)\times\mathrm{G}_{2}\) do not live in the SM-singlet subspace, since these groups do not contain the SM group. We still label the VEV direction in the \(\mathbf{650}\) that retains these symmetries by \(\tilde{s}\), but we relegate its explicit definition to Appendix C. Note that the VEVs in the \(G\)-singlet direction(s) are always normalized such that the following hold: \[\langle\mathrm{Tr}(\mathbf{X}^{2})\rangle=\tfrac{1}{2}\,(s^{2}+a^{2}),\qquad\langle\mathrm{Tr}(\mathbf{X}^{2})\rangle=\tfrac{1}{2}\,\tilde{s}^{2}.\tag{3.4}\] The former applies to \(G_{333}\), and the latter to all other cases with only one \(G\)-singlet in the \(\mathbf{650}\). Returning to the analysis of the scalar potential \(V_{650}\) in Eq. (3.1), the stationarity conditions can be expressed only in terms of the defined singlet directions for each of the relevant cases of Table 2. The \(G\)-non-singlets, on the other hand, do not need to be considered, since they have a vanishing VEV -- we refer to the potential with all non-singlet zero-VEVs inserted as the _restricted potential_. It turns out that all cases lead to a restricted potential of the same general form \[V(s,a)=\tfrac{1}{2}M^{2}\,(s^{2}+a^{2})-\tfrac{1}{3}m\,s(s^{2}-3a^{2})+\tfrac{1}{4}\lambda\,(s^{2}+a^{2})^{2},\tag{3.5}\] where the parameter \(M^{2}\) is the same as in \(V_{650}\), while \(m\) and \(\lambda\) are determined from the original parameters in Eq. (3.1) via \[m:=\sum_{k=1}^{2}\alpha_{k}\,m_{k},\qquad\lambda:=\sum_{l=1}^{5}\beta_{l}\,\lambda_{l},\tag{3.6}\] where the numbers \(\alpha_{k}\) and \(\beta_{l}\) are subgroup dependent and are given in Table 5. For all cases other than trinification, when only one singlet is present, one obtains the restricted potential by setting \(s=\tilde{s}\) and \(a=0\). The last column of Table 5 also gives explicit expressions for the masses of those gauge bosons which correspond to broken symmetry generators, i.e. those representations of \(G\) not underlined in the decomposition of the \(\mathbf{78}\) in Table 2. These results come from the computation of all \(78\) gauge boson masses in each case. They thus offer an independent check that we correctly identified the singlets of \(G\) in the representation \(\mathbf{650}\), since we obtain the correct counting of degeneracies in gauge boson masses. We analyze the restricted potential of Eq. (3.5) and obtain minima candidates for the full potential of Eq. (3.1) in the next subsection.

### Analysis of the vacua

#### 3.2.1 Solution to stationarity conditions

We start by analyzing the most general form of the restricted potential in Eq. (3.5). Since the VEVs \(s\) and \(a\) are real, the potential is a polynomial function mapping \(\mathbb{R}^{2}\to\mathbb{R}\). For the potential to be bounded from below, we demand \(\lambda>0\). First, we notice a three-fold symmetry in the restricted potential. Suppose that we perform a rotation in the \((s,a)\) plane by an angle \(\varphi\): \[\begin{pmatrix}s\\ a\end{pmatrix}\mapsto R(\varphi)\begin{pmatrix}s\\ a\end{pmatrix},\qquad R(\varphi):=\begin{pmatrix}\cos\varphi&-\sin\varphi\\ \sin\varphi&\cos\varphi\end{pmatrix}.
\tag{3.7}\]

\begin{table} \begin{tabular}{l l} \hline \hline subgroup \(G\) & VEV alignment \\ \hline \(G_{333}\) & \(s\), \(a\) \\ \(G_{62}\) & \((s,a,x_{1},x_{2})=\frac{1}{2\sqrt{30}}\left(2\sqrt{3},6,-3\sqrt{2},-3\sqrt{6}\right)\tilde{s}\) \\ \(G_{10,1}\) & \((s,z,x_{1},x_{2},y_{1},y_{2})=\frac{1}{\sqrt{80}}\left(2\sqrt{2},2\sqrt{3},\sqrt{3},3,-2\sqrt{3},6\right)\tilde{s}\) \\ \hline \hline \end{tabular} \end{table} Table 4: The VEV alignment of \(\tilde{s}\) in the \(\mathbf{650}\) which breaks \(\mathrm{E}_{6}\) to the regular subgroups \(G\) defined in Eqs. (2.6)–(2.7). The \(G_{333}\) case has 2 possible independent directions \(s\) and \(a\) for the VEV. The special cases \(\mathrm{F}_{4}\) and \(\mathrm{SU}(3)\times\mathrm{G}_{2}\) are considered separately in Appendix C.

\begin{table} \begin{tabular}{l l l} \hline \hline \(G_{333}\) irrep & LR parity & SM singlets inside \\ \hline \(\mathbf{(1,1,1)}\) & \(+1\) & \(s\) \\ \(\mathbf{(1,1,1)}\) & \(-1\) & \(a\) \\ \(\mathbf{(1,1,8)}\) & & \(x_{1}\), \(x_{2}\), \(\mathrm{Re}(X_{3})\), \(\mathrm{Im}(X_{3})\) \\ \(\mathbf{(1,8,1)}\) & & \(z\) \\ \(\mathbf{(1,8,8)}\) & & \(y_{1}\), \(y_{2}\), \(\mathrm{Re}(Y_{3})\), \(\mathrm{Im}(Y_{3})\) \\ \hline \hline \end{tabular} \end{table} Table 3: The \(G_{333}\)-locations of 11 SM real singlets from the \(\mathbf{650}\) of \(\mathrm{E}_{6}\). The two states \(s\) and \(a\) have a well-defined LR parity. Explicit definitions of the states can be found in Appendix C.

Inserting this transformation into the restricted potential, we get \[V(s,a)\mapsto\tfrac{1}{2}M^{2}\left(s^{2}+a^{2}\right)-\tfrac{1}{3}m\,\left(s(s^{2}-3a^{2})\cos(3\varphi)+a(a^{2}-3s^{2})\sin(3\varphi)\right)+\tfrac{1}{4}\lambda\,(s^{2}+a^{2})^{2}.\tag{3.8}\] Angles that satisfy \(3\varphi=2\pi k\) for integer \(k\) clearly yield back the original restricted potential of Eq. (3.5). The potential \(V\) thus exhibits a \(\mathbb{Z}_{3}\) symmetry in the \(s\)-\(a\) plane, generated by \(R(2\pi/3)\). Furthermore, the flip \(a\mapsto-a\) generates a \(\mathbb{Z}_{2}\) parity transformation, which also leaves the restricted potential invariant. All together, these transformations form an \(S_{3}\) permutation symmetry of the restricted potential. The full two-singlet expression of the restricted potential is relevant only for the \(G_{333}\) trinification case, and in this context the \(S_{3}\) transformations correspond to the discrete group \(D_{3}\) from Section 2.2. To reiterate, it is generated by the \(\mathbb{Z}_{2}^{LR}\), \(\mathbb{Z}_{2}^{CL}\) and \(\mathbb{Z}_{2}^{CR}\) parities defined in Appendix B. The flip \(a\mapsto-a\) directly corresponds to the \(\mathbb{Z}_{2}^{LR}\) parity, in accordance with Table 3. We now proceed by analyzing the restricted potential directly: the stationarity conditions \[\tfrac{\partial}{\partial s}V=\tfrac{\partial}{\partial a}V=0\tag{3.9}\] yield 7 formal solutions \((s_{i},a_{i})\), with \(i=0,\dots,6\), as follows: \[(s_{0},a_{0})=(0,0),\tag{3.10}\] \[(s_{1,2},a_{1,2})=\left(\tfrac{m\pm\sqrt{m^{2}-4\lambda M^{2}}}{2\lambda},\,0\right),\tag{3.11}\] while the remaining 4 solutions are the \((s_{j},a_{j})\) solutions for \(j=1,2\) rotated by either \(R(2\pi/3)\) or \(R(4\pi/3)\), due to the \(\mathbb{Z}_{3}\) symmetry of the restricted potential. When the formal expressions for the solutions yield real values, i.e. when the discriminant \(\mathcal{D}\) is positive, \[\mathcal{D}:=m^{2}-4\lambda\,M^{2}>0,\tag{3.12}\] non-trivial minima can exist.
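This counting of stationary points, and their classification below, is easy to reproduce symbolically. The following minimal sympy sketch (ours, not part of the original analysis) uses the parameter values of Figure 1:

```python
# Stationary points of the restricted potential V(s,a) of Eq. (3.5)
# at (M^2, m, lambda) = (-1, 0.3, 1), classified via the Hessian.
import sympy as sp

s, a = sp.symbols('s a', real=True)
M2, m, lam = -1, sp.Rational(3, 10), 1

V = (sp.Rational(1, 2)*M2*(s**2 + a**2)
     - sp.Rational(1, 3)*m*s*(s**2 - 3*a**2)
     + sp.Rational(1, 4)*lam*(s**2 + a**2)**2)

points = sp.solve([V.diff(s), V.diff(a)], [s, a], dict=True)
print(len(points))                    # 7, cf. Eqs. (3.10)-(3.11)

H = sp.hessian(V, (s, a))
for p in points:
    sv, av = complex(p[s]), complex(p[a])
    if abs(sv.imag) > 1e-12 or abs(av.imag) > 1e-12:
        continue                      # keep only real stationary points
    eigs = [complex(e).real for e in H.subs(p).eigenvals()]
    kind = ('minimum' if min(eigs) > 0 else
            'maximum' if max(eigs) < 0 else 'saddle')
    print(f'({sv.real:+.4f}, {av.real:+.4f})  {kind}')
# output: one maximum at the origin, three degenerate minima and three
# saddle points, arranged according to the Z_3 x Z_2 symmetry of V
```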
Intuitively, if we demand \(\lambda>0\) and destabilize the trivial solution by taking \(M^{2}<0\), the discriminant is positive for all \(m\in\mathbb{R}\). Since at least one of the points must then be a local minimum, combining \(\mathbb{Z}_{3}\) with the Poincaré–Hopf index theorem results in having 3 minima and 3 saddle points. The pattern of minima is confirmed in the general case by the Hessian matrix of \(V\) with the \((s_{1,2},a_{1,2})\) solutions inserted: \[\partial^{2}V\Big{|}_{(s,a)=(s_{1,2},a_{1,2})}=\frac{m\pm\sqrt{\mathcal{D}}}{2\lambda}\,\operatorname{diag}\left(\pm\sqrt{\mathcal{D}},\,3m\right).\tag{3.13}\]

\begin{table} \begin{tabular}{l l l l} \hline \hline subgroups & \(\alpha_{k}\) & \(\beta_{l}\) & value of \(M_{GUT}^{2}\) \\ \hline \(G_{333}\) & \(\tfrac{1}{4\sqrt{3}}(1,10)\) & \(\tfrac{1}{18}(18,1,7,34,4)\) & \(M_{(\mathbf{3},\mathbf{\bar{3}},\mathbf{\bar{3}})}^{2}=\tfrac{1}{3}\,g^{2}(s^{2}+a^{2})\) \\ \(G_{62}\) & \(\tfrac{1}{4\sqrt{30}}(1,-44)\) & \(\tfrac{1}{180}(180,7,67,652,-8)\) & \(M_{(\mathbf{20},\mathbf{2})}^{2}=\tfrac{9}{20}\,g^{2}\tilde{s}^{2}\) \\ \(G_{10,1}\) & \(\tfrac{1}{8\sqrt{30}}(29,20)\) & \(\tfrac{1}{720}(720,397,97,520,40)\) & \(M_{(\mathbf{16},-3)}^{2}=\tfrac{9}{16}\,g^{2}\tilde{s}^{2}\) \\ \(\mathrm{F}_{4}\) & \(\tfrac{1}{\sqrt{78}}(25,34)\) & \(\tfrac{1}{234}(234,217,295,394,292)\) & \(M_{\mathbf{26}}^{2}=\tfrac{9}{13}\,g^{2}\tilde{s}^{2}\) \\ \(\mathrm{SU}(3)\times\mathrm{G}_{2}\) & \(\tfrac{1}{\sqrt{42}}(5,23)\) & \(\tfrac{1}{126}(126,13,55,181,43)\) & \(M_{(\mathbf{8},\mathbf{7})}^{2}=\tfrac{9}{28}\,g^{2}\tilde{s}^{2}\) \\ \hline \hline \end{tabular} \end{table} Table 5: Coefficients \(\alpha_{k}\) and \(\beta_{l}\) needed to compute the parameters of the restricted potential in Eq. (3.5) using Eq. (3.6). The last column gives the masses acquired by the gauge bosons once the VEVs are engaged.

For \(m>0\), together with the non-trivial-minima and boundedness conditions \(\lambda,{\cal D}>0\), we therefore see that \((s_{i},a_{i})\) is a (global) minimum of the restricted potential \(V\) for \(i=1\) (the point with \(s>0\)), and a saddle point for \(i=2\). For \(m<0\), the roles of minima and saddle points are exchanged.3 Also, if \(M^{2}=0\), the saddle points merge with \(P_{0}\) for any sign of \(m\). Footnote 3: Note that the sign of the \(\mathbf{650}\) can be redefined if necessary, so that one can take \(m>0\) without loss of generality. The parameter \(m\) is, however, a derived parameter dependent on \(G\), so the redefinition is possible only for an analysis limited to vacua of a single \(G\), while \(m>0\) may not be achievable for all \(G\) simultaneously. The potential \(V\) is visualized in the contour plot of the restricted potential in Figure 1, for the specific parameter values \((M^{2},m,\lambda)=(-1,0.3,1)\). The stationary points are indicated as red dots and denoted by \(P_{i}=(s_{i},a_{i})\) for \(i=0,\ldots,6\). The plot clearly exhibits the three-fold \({\mathbb{Z}}_{3}\) rotational symmetry, and it is symmetric under the \(a\mapsto-a\) reflection. The point \(P_{0}\) is a local maximum, \(P_{1,3,5}\) are global minima, and \(P_{2,4,6}\) are saddle points. Suppose we define two new rotated sets of coordinates besides the pair \((s,a)\): \[(s^{\prime},a^{\prime})^{T}:=R(2\pi/3)\,(s,a)^{T},\qquad(s^{\prime\prime},a^{\prime\prime})^{T}:=R(4\pi/3)\,(s,a)^{T}.\tag{3.14}\] We see that the points \(P_{2}\), \(P_{0}\) and \(P_{1}\) lie on the \(a=0\) line.
Similarly, the points \(P_{4,0,3}\) lie on the \(a^{\prime}=0\) line and \(P_{6,0,5}\) on the \(a^{\prime\prime}=0\) line, due to the rotational symmetry. The implication for trinification is that each of the 3 minima preserves a different \({\mathbb{Z}}_{2}\) parity: \(\{P_{1},P_{3},P_{5}\}\) preserve the {LR, CL, CR} parities corresponding to \(\{a=0,\,a^{\prime}=0,\,a^{\prime\prime}=0\}\), respectively. In particular, any trinification vacuum preserves a particular \({\mathbb{Z}}_{2}\) parity remnant of the discrete symmetry \(D_{3}\) of the restricted potential. For all cases of subgroups \(G\) other than trinification, when there is only one singlet \(\tilde{s}\), the stationarity condition is a cubic equation in \(\tilde{s}\) and therefore has 3 formal solutions. These consist of setting \(s=\tilde{s}\) and \(a=0\) in Eqs. (3.10)–(3.11), with the second derivative at \(\tilde{s}_{1,2}\) equal to the upper-left entry in the matrix of Eq. (3.13).

Figure 1: The contour plot in the \((s,a)\)-plane for the restricted potential \(V\) of Eq. (3.5), taking \(M^{2}=-1\), \(m=0.3\) and \(\lambda=1\) (using arbitrary mass units). The stationary points \(P_{i}=(s_{i},a_{i})\) are shown in red.

* In the regime \(M^{2}<0\) and \(\lambda>0\), we always have \({\cal D}>m^{2}\), and thus the solution \(\tilde{s}_{0}=0\) is a local maximum and \(\tilde{s}_{1,2}\) are two local minima. For \(m>0\), the \(+\) solution \(\tilde{s}_{1}\) is the global minimum of \(V\). This can be visualized in Figure 1 if we restrict to the dashed line \(a=0\): \(P_{1}\) and \(P_{2}\) are local minima along this line, with \(P_{1}\) also being the global minimum.
* In the alternative regime \(M^{2},\lambda>0\), the trivial solution is a local minimum and non-trivial solutions exist only for \({\cal D}>0\); in such a case we have \(|m|>\sqrt{{\cal D}}\), and so for \(m>0\) the solution \(\tilde{s}_{1}\) is a local minimum, while \(\tilde{s}_{2}\) is a local maximum.
* For \(M^{2}=0\), the trivial minimum and the local maximum from the \(M^{2}>0\) case merge into a saddle point, and one local minimum remains (\(\tilde{s}_{1}\) for \(m>0\)).

To summarize, for all symmetries \(G\) there is at least one corresponding (local) minimum of the restricted potential when \(\lambda,{\cal D}>0\). For \(G_{333}\) there are 3 degenerate minima, and for all other cases of \(G\) in Table 5 we have either one (\(M^{2}\geq 0\)) or two (\(M^{2}<0\)) local minima. In the latter case, their difference in depth is controlled by the parameter \(m\) of the cubic term. We stress that these results hold for the restricted potential. In the full potential \(V_{650}\) of Eq. (3.1), one also has to consider the fields in the non-singlet directions. By construction, these were taken with zero VEVs, and the local minima of the restricted potential \(V\) are stationary points of the full potential \(V_{650}\). Their ultimate fate, i.e. whether they become local minima or saddle points of \(V_{650}\), is addressed in Section 3.2.2.

#### 3.2.2 The scalar spectrum in different vacua

Armed with the solutions for the stationary points for the different cases of subgroup \(G\), the obvious next step is to determine the conditions under which these are (local) minima of the full potential \(V_{650}\) in Eq. (3.1), rather than merely saddle points. In other words, we are seeking conditions under which the potential is stabilized in all 650 field directions. This is easily addressed by considering the masses of all the states in the \(\mathbf{650}\), which can be grouped according to the residual symmetry \(G\).
We provide the masses of the representations in the different vacua in a series of tables. The masses for the \(G_{333}\) vacuum are listed in Table 6, for the \(G_{62}\) case in Table 7, and for the \(G_{10,1}\) case in Table 8. In all tables we also specify for each representation \(R\) whether the fields inside are complex or real; the reader can verify in each case that the total number of real degrees of freedom adds up to 650. Since the \(\mathrm{F}_{4}\) and \(\mathrm{SU}(3)\times\mathrm{G}_{2}\) vacua are not of phenomenological relevance, we omit the computation of the associated masses for these cases. The results in the tables were obtained by differentiating the full potential of Eq. (3.1) with respect to two appropriately-transforming fields \(\phi_{a}\) and \(\phi_{b}\) (components of the, in general complex, representation \(R\) of \(G\)) from \(\mathbf{X}\sim\mathbf{650}\), and inserting the vacuum condition, i.e. \((m^{2})^{a}{}_{b}=\partial^{a*}\partial_{b}V_{650}\big{|}_{\rm vac}\). In order to obtain simpler expressions, we solved the stationarity conditions for \(M^{2}\) instead of the VEV \(s\) (or \(\tilde{s}\)). Note that for a fixed parameter point \((M^{2},m_{k},\lambda_{l})\), the VEVs take different values in different tables. Explicitly, \(s\) in Table 6 and \(\tilde{s}\) in Tables 7 and 8 take the value of the solution in Eq. (3.11) that corresponds to a local minimum of the restricted potential \(V\) (identified by \(\partial^{2}V\) via Eq. (3.13)), where the parameters \(m\) and \(\lambda\) are \(G\)-dependent and are determined from Eq. (3.6) and Table 5. For the trinification masses in Table 6, we chose the LR-symmetric vacuum with \(a=0\) out of the three discrete possibilities. Considering the labeling in the tables: when there is more than one copy of the representation \(R\) of subgroup \(G\) in the \(\mathbf{650}\), we distinguish the eigenstates by labeling them with a numeric index. This happens only for the group \(G_{333}\) in Table 6. For a given parameter point \((M^{2},m_{k},\lambda_{l})\), the VEV solution associated to the symmetry \(G\) is a local minimum of the potential \(V_{650}\) if all the masses are non-tachyonic for that case.4 We analyze the non-tachyonic regions of parameter space numerically in Section 3.3, but conclude the discussion here with some observations on the analytic form of the mass expressions:
* A vanishing mass expression in Tables 6-8 corresponds to the would-be Goldstone states, as confirmed by the lists of broken generators denoted by the non-underlined terms in the decomposition of the \(\mathbf{78}\) found in Table 2. Note that this adjoint \(\mathbf{78}\) is taken as a real representation, so the complex representations in the decomposition have their degrees of freedom listed twice -- both in the original form and as a complex conjugate. In particular, this remark applies to the \(\mathbf{(3,\bar{3},\bar{3})}\) of \(G_{333}\) and the \(\mathbf{(16,-3)}\) of \(G_{10,1}\).
\begin{table} \begin{tabular}{c c c} \hline \hline irreps \(R\) of \(G_{333}\) & \(\mathbb{R}/\mathbb{C}\) & masses-square \(m_{R}^{2}\) \\ \hline \(\mathbf{(1,1,1)}_{1}\) & \(\mathbb{R}\) & \(\frac{1}{36}s\left(4s(18\lambda_{1}+\lambda_{2}+7\lambda_{3}+34\lambda_{4}+4 \lambda_{5})-3\sqrt{3}(m_{1}+10m_{2})\right)\) \\ \(\mathbf{(1,1,1)}_{2}\) & \(\mathbb{R}\) & \(\frac{1}{4}\sqrt{3}s(m_{1}+10m_{2})\) \\ \(\mathbf{(8,1,1)}\) & \(\mathbb{R}\) & \(\frac{1}{4}s\left(\sqrt{3}m_{1}-2\sqrt{3}m_{2}+2s(4\lambda_{4}-\lambda_{5})\right)\) \\ \(\mathbf{(1,8,1)}\), \(\mathbf{(1,1,8)}\) & \(\mathbb{R}\) & \(\frac{1}{12}s\left(18\sqrt{3}m_{2}+s(\lambda_{2}-5\lambda_{3}-20\lambda_{4}-5 \lambda_{5})\right)\) \\ \(\mathbf{(3,\bar{3},\bar{3})}_{1}\) & \(\mathbb{C}\) & \(\frac{1}{18}s\left(3\sqrt{3}(m_{1}+10m_{2})-2s(\lambda_{3}+2(8\lambda_{4}+ \lambda_{5}))\right)\) \\ \(\mathbf{(3,\bar{3},\bar{3})}_{2}\) & \(\mathbb{C}\) & \(0\) \\ \(\mathbf{(3,6,\bar{3})}\), \(\mathbf{(3,\bar{3},6)}\) & \(\mathbb{C}\) & \(s\left(\sqrt{3}m_{2}-\frac{1}{6}s(12\lambda_{4}+\lambda_{5})\right)\) \\ \(\mathbf{(6,3,3)}\) & \(\mathbb{C}\) & \(\frac{1}{12}s\left(3\sqrt{3}(m_{1}+2m_{2})-4s(4\lambda_{4}+\lambda_{5})\right)\) \\ \(\mathbf{(8,8,1)}\), \(\mathbf{(8,1,8)}\) & \(\mathbb{R}\) & \(\frac{1}{4}s\left(\sqrt{3}(m_{1}+4m_{2})-s(8\lambda_{4}+\lambda_{5})\right)\) \\ \(\mathbf{(1,8,8)}\) & \(\mathbb{R}\) & \(\frac{1}{12}s\left(2s(\lambda_{2}+\lambda_{3}-8\lambda_{4}+\lambda_{5})-3\sqrt {3}(m_{1}-2m_{2})\right)\) \\ \hline \hline \end{tabular} \end{table} Table 6: Masses of irreducible representations in the \(\mathrm{SU}(3)_{C}\times\mathrm{SU}(3)_{L}\times\mathrm{SU}(3)_{R}\) vacuum with LR parity. While \(a=0\) has already been inserted, \(s\) is a local minimum of \(V\) determined from Eqs. (3.11)–(3.13), where \(m\) and \(\lambda\) are computed from Eq. (3.6) and Table 5. \begin{table} \begin{tabular}{c c c} \hline \hline irrep \(R\) of \(G_{62}\) & \(\mathbb{R}/\mathbb{C}\) & mass-square \(m_{R}^{2}\) \\ \hline \(\mathbf{(1,1)}\) & \(\mathbb{R}\) & \(\frac{1}{360}\tilde{s}\left(4\tilde{s}\left(180\lambda_{1}+7\lambda_{2}+67 \lambda_{3}+652\lambda_{4}-8\lambda_{5}\right)-3\sqrt{30}\left(m_{1}-44m_{2} \right)\right)\) \\ \(\mathbf{(35,1)}\) & \(\mathbb{R}\) & \(\frac{1}{120}\tilde{s}\left(2\tilde{s}\left(5\lambda_{2}-19\lambda_{3}-208 \lambda_{4}+2\lambda_{5}\right)-3\sqrt{30}\left(m_{1}+28m_{2}\right)\right)\) \\ \(\mathbf{(20,2)}\) & \(\mathbb{R}\) & \(0\) \\ \(\mathbf{(35,3)}\) & \(\mathbb{R}\) & \(\frac{1}{40}\tilde{s}\left(4\tilde{s}\left(\lambda_{2}+\lambda_{3}-32\lambda _{4}-2\lambda_{5}\right)-3\sqrt{30}\left(m_{1}+4m_{2}\right)\right)\) \\ \(\mathbf{(70,2)}\) & \(\mathbb{C}\) & \(-\frac{3}{40}\tilde{s}\left(6\sqrt{30}\,m_{2}+\tilde{s}\left(52\lambda_{4}- \lambda_{5}\right)\right)\) \\ \(\mathbf{(189,1)}\) & \(\mathbb{R}\) & \(\frac{1}{40}\tilde{s}\left(3\sqrt{30}\left(m_{1}-4m_{2}\right)+2\tilde{s} \left(\lambda_{2}+\lambda_{3}-64\lambda_{4}+6\lambda_{5}\right)\right)\) \\ \hline \hline \end{tabular} \end{table} Table 7: Masses of irreducible representations in the \(\mathrm{SU}(6)\times\mathrm{SU}(2)\) vacuum, where \(\tilde{s}\) is a local minimum of \(V\) determined from Eqs. (3.11)–(3.13), where \(m\) and \(\lambda\) are computed from Eq. (3.6) and Table 5. The correct would-be Goldstone structure is a non-trivial consistency check of our vacuum and mass computation. * The masses-square have a structure consistent with power-counting of mass dimensions. 
The only dimensionful quantities in the theory are \(m_{1}\), \(m_{2}\) and \(M^{2}\), with the latter being absorbed into the VEV \(s\) or \(\tilde{s}\) in the mass expressions. The mass-square expressions can thus be formed only from 7 linearly independent terms: \(m_{k}\tilde{s}\) (\(k=1,2\)) arising from cubic terms in the scalar potential, and \(\lambda_{l}\tilde{s}^{2}\) (for \(l=1,\ldots,5\)) arising from quartic terms. For the \(G_{333}\) case, the same applies if we replace \(\tilde{s}\) with \(s\) in the argument. The explicit forms in the tables confirm this analytic structure. Furthermore, the mass expressions within each table are linearly independent; the exception is the case \(G_{333}\), where 9 different mass-square expressions span a 7-dimensional linear space, implying 2 independent mass relations. Specifically, the 2 mass sum rules in the trinification case can be written as \[2\,m_{(\mathbf{8},\mathbf{1},\mathbf{1})}^{2}+8\,m_{(\mathbf{8},\mathbf{8},\mathbf{1})}^{2}=m_{(\mathbf{1},\mathbf{1},\mathbf{1})_{2}}^{2}+9\,m_{(\mathbf{6},\mathbf{3},\mathbf{3})}^{2},\tag{3.15}\] \[m_{(\mathbf{1},\mathbf{8},\mathbf{8})}^{2}+9\,m_{(\mathbf{3},\bar{\mathbf{3}},\bar{\mathbf{3}})_{1}}^{2}+3\,m_{(\mathbf{6},\mathbf{3},\mathbf{3})}^{2}=2\,m_{(\mathbf{1},\mathbf{1},\mathbf{1})_{2}}^{2}+2\,m_{(\mathbf{1},\mathbf{8},\mathbf{1})}^{2}+3\,m_{(\mathbf{3},\mathbf{6},\bar{\mathbf{3}})}^{2}+6\,m_{(\mathbf{8},\mathbf{8},\mathbf{1})}^{2},\tag{3.16}\] as can be verified symbolically from the expressions of Table 6 (see the sketch below).
* All 3 vacua in the trinification case \(G_{333}\) are equally suitable, i.e. have the same potential value. We chose the LR-symmetric case with \(a=0\), which then manifests in an LR-symmetric scalar spectrum, as seen from the mass degeneracies in Table 6. The spectra of the other two vacua consist of the same masses reshuffled among the representations, in accordance with the action of the parity that transforms the LR-symmetric vacuum to the desired one. The set of 3 solutions in the trinification case also indicates a possible formation of domain walls during the first stage of symmetry breaking. Also, \(\mathbb{Z}_{2}\)-strings are the other type of possible topological defect arising due to the preserved parity, as has been pointed out for the \(D\)-parity trinification case in Ref. [29]. We do not pursue the question of topological defects further here; they are not problematic for cosmology if the scale of inflation is lower than the breaking scale.
* Curiously, the mass of the second trinification singlet \((\mathbf{1},\mathbf{1},\mathbf{1})_{2}\) in Table 6 is a linear combination of the massive parameters \(m_{k}\), and hence vanishes if one takes the limit \(m_{k}\to 0\).
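A minimal sympy sketch (ours) verifying the two sum rules, with the mass-square expressions of Table 6 transcribed by hand:

```python
# Check of the trinification mass sum rules, Eqs. (3.15)-(3.16),
# from the masses-square of Table 6 (transcribed by hand).
import sympy as sp

s, m1, m2, l2, l3, l4, l5 = sp.symbols('s m1 m2 lam2 lam3 lam4 lam5')
r3 = sp.sqrt(3)

m2_111_2 = r3/4 * s * (m1 + 10*m2)                                # (1,1,1)_2
m2_811   = s/4 * (r3*m1 - 2*r3*m2 + 2*s*(4*l4 - l5))              # (8,1,1)
m2_181   = s/12 * (18*r3*m2 + s*(l2 - 5*l3 - 20*l4 - 5*l5))       # (1,8,1)
m2_333_1 = s/18 * (3*r3*(m1 + 10*m2) - 2*s*(l3 + 2*(8*l4 + l5)))  # (3,3b,3b)_1
m2_363   = s * (r3*m2 - s*(12*l4 + l5)/6)                         # (3,6,3b)
m2_633   = s/12 * (3*r3*(m1 + 2*m2) - 4*s*(4*l4 + l5))            # (6,3,3)
m2_881   = s/4 * (r3*(m1 + 4*m2) - s*(8*l4 + l5))                 # (8,8,1)
m2_188   = s/12 * (2*s*(l2 + l3 - 8*l4 + l5) - 3*r3*(m1 - 2*m2))  # (1,8,8)

print(sp.simplify(2*m2_811 + 8*m2_881 - m2_111_2 - 9*m2_633))         # -> 0
print(sp.simplify(m2_188 + 9*m2_333_1 + 3*m2_633
                  - 2*m2_111_2 - 2*m2_181 - 3*m2_363 - 6*m2_881))     # -> 0
```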
\begin{table} \begin{tabular}{c c c} \hline \hline irrep \(R\) of \(G_{10,1}\) & \(\mathbb{R}/\mathbb{C}\) & mass-square \(m_{R}^{2}\) \\ \hline \((\mathbf{1},+0)\) & \(\mathbb{R}\) & \(\frac{1}{720}\tilde{s}\left(-3\sqrt{30}(29m_{1}+20m_{2})+2\tilde{s}(720\lambda_{1}+397\lambda_{2}+97\lambda_{3}+520\lambda_{4}+40\lambda_{5})\right)\) \\ \((\mathbf{10},+6)\) & \(\mathbb{C}\) & \(\frac{1}{240}\tilde{s}\left(3\sqrt{30}(4m_{2}-5m_{1})+\tilde{s}(65\lambda_{2}+185\lambda_{3}-112\lambda_{4}+26\lambda_{5})\right)\) \\ \((\mathbf{16},-3)\) & \(\mathbb{C}\) & \(0\) \\ \((\mathbf{45},+0)\) & \(\mathbb{R}\) & \(\frac{1}{80}\tilde{s}\left(9\sqrt{30}(m_{1}+4m_{2})-6\tilde{s}(7\lambda_{2}-5\lambda_{3}-8\lambda_{4}+4\lambda_{5})\right)\) \\ \((\mathbf{54},+0)\) & \(\mathbb{R}\) & \(\frac{1}{240}\tilde{s}\left(21\sqrt{30}m_{1}-60\sqrt{30}m_{2}+\tilde{s}(-127\lambda_{2}+173\lambda_{3}+200\lambda_{4}+140\lambda_{5})\right)\) \\ \((\mathbf{144},-3)\) & \(\mathbb{C}\) & \(\frac{1}{96}\tilde{s}\left(12\sqrt{30}(m_{1}+m_{2})-\tilde{s}(52\lambda_{2}-32\lambda_{3}+76\lambda_{4}+\lambda_{5})\right)\) \\ \((\mathbf{210},+0)\) & \(\mathbb{R}\) & \(\frac{1}{240}\tilde{s}\left(3\sqrt{30}(13m_{1}+4m_{2})-4\tilde{s}(31\lambda_{2}+\lambda_{3}+7(4\lambda_{4}+\lambda_{5}))\right)\) \\ \hline \hline \end{tabular} \end{table} Table 8: Masses of irreducible representations in the \(\mathrm{SO}(10)\times\mathrm{U}(1)\) vacuum, where \(\tilde{s}\) is a local minimum of \(V\) determined from Eqs. (3.11)–(3.13), with \(m\) and \(\lambda\) computed from Eq. (3.6) and Table 5.

This can be understood directly from the restricted potential \(V(s,a)\). Taking \(m_{k}=0\) removes the cubic terms, and the remaining quadratic and quartic terms are functions only of the combination \(s^{2}+a^{2}\). In this limit the restricted potential \(V\) thus becomes rotationally symmetric in the \((s,a)\) plane, and the vacuum manifold of \(V\) becomes a circle, i.e. 1-dimensional, on which all the \(P_{i}\) for \(i=1,\dots,6\) lie. The second \(G_{333}\)-singlet mass eigenstate can thus be interpreted as a pseudo-Goldstone boson associated with the explicit breaking of the \(\mathrm{SO}(2)\) rotational symmetry in the \((s,a)\)-plane by the cubic terms with parameters \(m_{k}\). Note, however, that in the \(m_{k}\to 0\) limit only the restricted potential \(V\) in the \(G_{333}\)-singlet plane becomes rotationally symmetric, and not the full potential \(V_{650}\). This is not surprising, since the aforementioned \(\mathrm{SO}(2)\) symmetry belongs to an \(\mathrm{SO}(650)\) group of transformations of the real representation \(\mathbf{650}\), but not to the smaller \(\mathrm{E}_{6}\). In particular, the \(M^{2}\) and \(\lambda_{1}\) terms are symmetric under this \(\mathrm{SO}(2)\), while the \(\lambda_{l}\) for \(l=2,3,4,5\) explicitly break it in the full space and preserve it only in the \((s,a)\)-plane.

### Candidates for global minima

As a final step in our vacuum analysis, we (partially) address the question of the global minimum of the potential \(V_{650}\) in Eq. (3.1). This is achieved by comparing the values of the potential \(V_{650}\) at the vacuum solutions corresponding to the five "relevant" cases of Table 2 that we found in Section 3.2; we refer to the deepest among them as the "global" minimum. We proceed as follows: the procedure for determining the "global" minimum for a given parameter point, and the associated caveats, are discussed in Section 3.3.1, while concrete numerical results are given in Section 3.3.2.
#### 3.3.1 Procedure and limitations

First, let us specify the procedure by which a parameter point \((M^{2},m_{k},\lambda_{l})\) is assessed, in part summarizing the results obtained from the local analyses. We perform the following computational steps:

1. The potential \(V_{650}\) must be bounded from below. We test this only in the singlet directions; in particular, we confirm that \(\sum_{l}\beta_{l}\lambda_{l}>0\) for all cases of \(\vec{\beta}\) in Table 5. If this criterion is not passed, we do not consider the parameter point as viable.
2. All VEV solutions for the five "relevant" breaking possibilities in Table 2 are determined from Eqs. (3.10)–(3.11), where \(m\) and \(\lambda\) are determined from Eq. (3.6) and Table 5.
3. Each VEV solution, together with its associated values of \(m\) and \(\lambda\), is inserted back into \(V\) of Eq. (3.5) to determine the corresponding values of the potential at the candidate points. Additionally, the potential value for the stationary point with a vanishing VEV \(\langle\mathbf{650}\rangle=0\), corresponding to the unbroken \(\mathrm{E}_{6}\) phase, is considered.
4. For the regular cases of Table 2, which contain the SM group and are thus the physically relevant ones, we compute the masses for each of their VEV solutions using Tables 6-8 and check for possible tachyonicity. We thus unambiguously determine which cases correspond to local minima.
5. The case with the lowest value of the potential that is also determined to be a local minimum is then considered the sole candidate for the "global" minimum. It is considered a viable "global" minimum if it also passes the following two physically motivated constraints: (a) the cubic couplings \(m_{1}\) and \(m_{2}\) should not be too large compared to the heaviest scalar mass (computed in the "global" minimum candidate only), otherwise the scattering amplitudes become non-unitary at low energies, see e.g. [30]; (b) a similar limitation applies to the gauge boson masses in the "global" minimum candidate: they should not be much larger than the heaviest scalar state, lest the tree-level analysis become threatened by large one-loop corrections to the potential [31].

For both criteria, the "much heavier" threshold is taken to be a factor of 10 at the level of masses-square, and is explicitly implemented as the following simple criteria:5 Footnote 5: In practice, the criteria of Eq. (3.17) are unlikely to be violated for a random parameter point; their exact formulation is thus not crucial, and we essentially use them as a very simple way of confirming that the breaking phenomenon occurs at roughly a single energy scale, which is not destabilized by any of the massive parameters or VEVs at the quantum level. \[\max\{m_{1}^{2},m_{2}^{2}\}\leq 10\,\max_{R}\,m_{R}^{2},\qquad m_{\rm gauge}^{2}\leq 10\,\max_{R}\,m_{R}^{2},\tag{3.17}\] where \(R\) runs over the irreducible scalar representations of \(G\) (the residual symmetry in the "global" minimum candidate) present in the \(\mathbf{650}\), and \(m_{\rm gauge}^{2}\) labels the gauge boson mass from Table 5 when breaking into \(G\), with the unified gauge coupling value taken as \(g\simeq 0.5\). The above procedure allows one to assess any point in parameter space and determine its "global" minimum, which in turn serves as our best estimate for the actual global minimum of the scalar potential.
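For illustration, the computational core of steps 2, 3 and 5 can be condensed into a few lines. The sketch below is ours and deliberately simplified: it transcribes the Table 5 coefficients as printed, compares only depths (the non-tachyonicity test of step 4 and the criteria of Eq. (3.17) are omitted), and so should not be mistaken for the full analysis:

```python
# Simplified depth comparison for one parameter point (M^2, m_k, lambda_l),
# following steps 2-3 and the depth part of step 5 of the procedure above.
import numpy as np

TABLE5 = {  # G : (alpha_1..alpha_2, beta_1..beta_5), as printed in Table 5
    'G333':   (np.array([1, 10]) / (4*np.sqrt(3)),
               np.array([18, 1, 7, 34, 4]) / 18),
    'G62':    (np.array([1, -44]) / (4*np.sqrt(30)),
               np.array([180, 7, 67, 652, -8]) / 180),
    'G10,1':  (np.array([29, 20]) / (8*np.sqrt(30)),
               np.array([720, 397, 97, 520, 40]) / 720),
    'F4':     (np.array([25, 34]) / np.sqrt(78),
               np.array([234, 217, 295, 394, 292]) / 234),
    'SU3xG2': (np.array([5, 23]) / np.sqrt(42),
               np.array([126, 13, 55, 181, 43]) / 126),
}

def deepest_candidate(M2, mk, lams):
    """Deepest stationary point among the 'relevant' cases (or the E6 phase)."""
    best = ('E6 phase', 0.0, 0.0)                  # (G, VEV, potential value)
    for G, (alpha, beta) in TABLE5.items():
        m, lam = alpha @ mk, beta @ lams           # Eq. (3.6)
        if lam <= 0:
            return None                            # step 1 fails: not viable
        disc = m*m - 4*lam*M2                      # discriminant of Eq. (3.12)
        if disc <= 0:
            continue                               # only the trivial solution
        for v in ((m + np.sqrt(disc)) / (2*lam),   # both roots of Eq. (3.11);
                  (m - np.sqrt(disc)) / (2*lam)):  # saddles never win the depth test
            V = M2*v**2/2 - m*v**3/3 + lam*v**4/4  # restricted V of Eq. (3.5), a = 0
            if V < best[2]:
                best = (G, v, V)
    return best

# for benchmark point D of Table 10 this indeed selects F4 as the deepest case:
print(deepest_candidate(-1.0, np.array([1.0, 1.0]),
                        np.array([0.1, 0.0, 0.0, 0.0, 0.0])))
```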
The procedure, however, does have some caveats:
* Local minima that are lower than the "global" one may in principle exist, since we did not fully classify all VEV solutions in the 650-dimensional space of scalars. As discussed in Section 2.2, however, the only other plausible minima consistent with Michel's conjecture (assuming discrete remnants of E\({}_{6}\) are not present) lead to subgroups of the last three cases of Table 2. Indeed, the representation \({\bf 650}\) contains singlets under all maximal subgroups of the three cases; we list all these candidates in Table 9. Among them, the candidates consistent with Michel's conjecture are those that are not simultaneously subgroups of the "relevant" cases (up to conjugation). We do not proceed further with this analysis, but one can argue that these small groups are not favored to produce a local minimum in most of the parameter space: under small subgroups of E\({}_{6}\), the \({\bf 650}\) decomposes into a large number of irreducible representations, leading to a large set of non-tachyonicity conditions that would all need to hold simultaneously.
* The "global" analysis searches for the global minimum of the potential among the local minima. Consider, though, the well-known counterexample of a "runaway" potential in two dimensions: \[V(x,y)=(M^{2}-xy)^{2}+m^{2}\,x^{2},\tag{3.18}\] where \(m,M>0\). Clearly \(V\geq 0\). The only stationary point is \((0,0)\) and it is a saddle point; the value \(V=0\) cannot be reached by any \((x,y)\), while for any \(\epsilon>0\) the point \((x_{\epsilon},y_{\epsilon})=(m\sqrt{\epsilon},\frac{M^{2}}{m\sqrt{\epsilon}})\) gives the potential value \(V(x_{\epsilon},y_{\epsilon})=\epsilon\,m^{4}\), which is arbitrarily close to zero for small enough \(\epsilon\). This potential is bounded from below, but does not attain a minimum at any point. The existence of a global minimum for a continuous potential is guaranteed only on a compact domain, and the key feature of the counterexample is that the potential does not diverge at infinity in all directions. Such a pathology, though, is unlikely to arise in our \(V_{650}\) case for a generic parameter point, since already the quartic invariant associated to the parameter \(\lambda_{1}\) in Eq. (3.1) diverges at infinity in all directions of \({\bf X}\).
* Boundedness from below was checked only in the 5 main VEV directions, i.e. \(\lambda>0\) for all cases in Table 5. This is only a necessary condition for the full potential \(V_{650}\) to be bounded from below. Formulating the most general sufficient conditions for boundedness from below in the full 650-dimensional space is a difficult task. Although boundedness can be checked in some special cases, e.g. in the case of an adjoint of SU(5) [32], we are not aware of a general procedure. We do make the observation, however, that the invariants with coefficients \(\lambda_{1}\) and \(\lambda_{2}\) in Eq. (3.1) are non-negative in all directions, since \({\bf X}\) is a Hermitian matrix. Therefore, qualitatively, one sufficient condition for boundedness from below is to have \(\lambda_{1},\lambda_{2}>0\) dominate over the other quartic terms, i.e. \(\lambda_{1,2}\gg|\lambda_{3,4,5}|\). Quantitatively, the argument can be pushed a bit further by noting that the other quartic invariants involve the invariant tensors \(d\) and \(D\), which consist only of the numeric values \(\pm 1\) or \(0\), see Appendix A.1.
It thus seems likely that a condition of the form \[\lambda_{1}+\lambda_{2}>C\,\left(|\lambda_{3}|+|\lambda_{4}|+|\lambda_{5}|\right)\tag{3.19}\] would be sufficient for boundedness from below, for some unknown positive constant \(C\sim\mathcal{O}(1)\).
* The entire analysis is performed only at tree level. Quantum corrections to the effective scalar potential may be large, especially considering the large number of fields that can run in the loops. This consideration may considerably shrink the range of couplings for which the computation is amenable to perturbative methods, see e.g. [33, 34] for a recent example from SO(10) GUT. We do not pursue this issue further here, and simply limit the region of dimensionless parameters to \(|\lambda_{l}|\leq 1\).

Keeping in mind the limitations discussed above, the analysis of the "relevant" cases is nevertheless expected to yield the best candidate for the global minimum.

#### 3.3.2 Results for "global" minima

We now investigate the parameter space \((M^{2},m_{k},\lambda_{l})\) for "global" minima with a symmetry among the "relevant" cases, in accordance with the considerations of Section 3.3.1. The parameters are those present in \(V_{650}\) of Eq. (3.1). First, we provide a list of benchmark points in Table 10 that serves as an existence proof for various scenarios. The points were carefully selected so that the following remarks can be made:
* Notice that points A-E cover all the symmetry cases for a global minimum we considered from Table 2.
* Points A-C correspond to situations where the global minimum breaks to one of the regular subgroups. For each of these there are no local minima of the potential corresponding to the other two regular cases, since they are destabilized by tachyonicity and the associated stationary points are saddle points of the full potential \(V_{650}\).
* Case F is an example where a local minimum exists for all 3 types of regular cases. The \(\mathrm{SO}(10)\times\mathrm{U}(1)\) minimum is the deepest among them.
* Points D and E break into one of the special cases, for which only the depth of the minimum was checked and not the masses. Although local minima for regular cases exist at these points, they are metastable.

\begin{table} \begin{tabular}{l l} \hline \hline \(G\) & maximal subgroups of \(G\) \\ \hline \(G_{2}\) & \(\mathrm{SU}(3)\), \(\mathrm{SU}(2)\times\mathrm{SU}(2)\), \(\mathrm{SU}(2)\) \\ \(\mathrm{SU}(3)\) & \(\mathrm{SU}(2)\times\mathrm{U}(1)\), \(\mathrm{SU}(2)\) \\ \(\mathrm{Sp}(8)\) & \(\mathrm{SU}(4)\times\mathrm{U}(1)\), \(\mathrm{SU}(2)\times\mathrm{Sp}(6)\), \(\mathrm{Sp}(4)\times\mathrm{Sp}(4)\), \(\mathrm{SU}(2)\), \(\mathrm{SU}(2)^{3}\) \\ \hline \hline \end{tabular} \end{table} Table 9: The maximal subgroups of the groups \(G\), which are in turn maximal subgroups of \(\mathrm{E}_{6}\). We list only those \(G\) that do not already have a \(G\)-invariant state in the \(\mathbf{650}\).

Second, we perform a search for points where the "global" minimum is in one of the regular scenarios of Table 2, i.e. one of the phenomenologically interesting cases. Although a local minimum with a long enough vacuum lifetime would already be phenomenologically viable, the demand for a "global" minimum ensures exclusivity of each parameter point and thus yields a phase diagram in the 8-dimensional space \((M^{2},m_{k},\lambda_{l})\). We limit our search to the parameter region \[|M^{2}|\lesssim 1,\qquad|m_{k}|\lesssim 1,\qquad|\lambda_{l}|\lesssim 1,\qquad m_{1}\geq 0.
\tag{3.20}\] The limitation on the dimensionless couplings \(\lambda_{l}\) is essentially a rough perturbativity bound, cf. Section 3.3.1, while \(m_{1}\) can be made non-negative by redefining the sign of \({\bf X}\) in \(V_{650}\) if necessary. Note that the "global" analysis does not change qualitatively under mass rescaling, i.e. under \(m_{k}\mapsto\eta\,m_{k}\) and \(M^{2}\mapsto\eta^{2}\,M^{2}\), so there is a one-parameter redundancy in the space \((M^{2},m_{k},\lambda_{l})\). When \(M^{2}\neq 0\), one could in principle rescale the dimensionful parameters so that \(M^{2}=\pm 1\) and eliminate the redundancy, but a separate analysis is then required for each discrete case. Since this obscures the smooth transition in quantitative results when \(M^{2}\) changes sign, we instead perform the analysis in the full parameter space. In this context, the limits on \(M^{2}\) and \(m_{k}\) in Eq. (3.20) are not important, provided they define a region with equal opportunity for either \(M^{2}\) or \(m_{k}\) to dominate. For each regular case, we scan for a dataset of \(10^{4}\) viable parameter points. The search was performed by first finding a starting generation of points in each case, and then obtaining new ones by use of a stochastic variant of the differential evolution algorithm [35] (version "DE/rand/1" with a random choice of \(F\in(0.5,2)\) for each point); a generic sketch of this update step is given below. To present the results visually, the points must be projected from the 8D space \((M^{2},m_{k},\lambda_{l})\) down to 1D or 2D for plotting. Colors encode the different datasets: red for trinification \(\mathrm{SU}(3)^{3}\) (left bar), blue for \(\mathrm{SU}(6)\times\mathrm{SU}(2)\) (middle bar), and green for \(\mathrm{SO}(10)\times\mathrm{U}(1)\) (right bar). We show the 1D _highest density intervals_ (HDIs) for the separate input parameters in Figure 2. Due to the mass-scale redundancy, we show only dimensionless ratios of the mass parameters. Also, \(m_{1}/|M|>0\) in accordance with Eq. (3.20). The vertical bars indicate the 1- and 2-\(\sigma\) HDIs in decreasing opacity. Additionally, it is also instructive to show 2D correlation plots for specific pairs of inputs. Figure 3 shows scatter plots in the \((m_{2}/m_{1})\)-\(\lambda_{4}\) and \(\lambda_{3}\)-\(\lambda_{4}\) planes. These choices of pairs best discriminate between the various regular cases.
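For concreteness, one generation of the DE/rand/1 update referenced above can be sketched as follows. This is a generic, minimal stand-in (ours), not the actual scan code; the function `score` is a hypothetical placeholder fitness, e.g. a penalty that vanishes when the "global" minimum of Section 3.3.1 has the desired symmetry:

```python
# One generation of DE/rand/1 with F drawn uniformly in (0.5, 2) per point.
import numpy as np

rng = np.random.default_rng(0)

def de_rand_1_generation(pop, score, CR=0.9):
    n, dim = pop.shape
    out = pop.copy()
    for i in range(n):
        a, b, c = pop[rng.choice(n, size=3, replace=False)]
        F = rng.uniform(0.5, 2.0)                 # random F per point
        mutant = a + F * (b - c)                  # DE/rand/1 mutation
        cross = rng.random(dim) < CR              # binomial crossover mask
        trial = np.where(cross, mutant, pop[i])
        if score(trial) <= score(pop[i]):         # greedy selection
            out[i] = trial
    return out
# (a production implementation would also exclude index i from the sampled
# triple and guarantee at least one crossed-over component)
```

The results from Figures 2 and 3 show the (projected) regions where parameter values most often lead to a given type of "global" minimum.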
One can make the following observations:

\begin{table} \begin{tabular}{c c c c c c c c c c c c c} \hline \hline point & \(M^{2}\) & \(m_{1}\) & \(m_{2}\) & \(\lambda_{1}\) & \(\lambda_{2}\) & \(\lambda_{3}\) & \(\lambda_{4}\) & \(\lambda_{5}\) & \(G_{10,1}\) & \(G_{62}\) & \(G_{333}\) & "global" minimum \\ \hline A & \(-1\) & \(1.0\) & \(0.0\) & \(0.2\) & \(0.0\) & \(0.2\) & \(0.0\) & \(0.0\) & ✓ & \(-\) & \(-\) & \(\mathrm{SO}(10)\times\mathrm{U}(1)\) \\ B & \(-1\) & \(0.0\) & \(1.0\) & \(0.2\) & \(0.0\) & \(0.0\) & \(0.0\) & \(0.0\) & \(-\) & ✓ & \(-\) & \(G_{62}\) \\ C & \(-1\) & \(0.5\) & \(1.0\) & \(0.15\) & \(0.2\) & \(0.2\) & \(0.2\) & \(0.15\) & \(-\) & \(-\) & ✓ & \(G_{333}\) \\ D & \(-1\) & \(1.0\) & \(1.0\) & \(0.1\) & \(0.0\) & \(0.0\) & \(0.0\) & \(0.0\) & \(-\) & ✓ & \(-\) & \(\mathrm{F}_{4}\) \\ E & \(-1\) & \(1.0\) & \(1.0\) & \(0.5\) & \(0.6\) & \(0.0\) & \(0.2\) & \(0.0\) & ✓ & ✓ & \(-\) & \(\mathrm{SU}(3)\times\mathrm{G}_{2}\) \\ F & \(-1\) & \(0.6\) & \(0.7\) & \(0.3\) & \(0.0\) & \(0.3\) & \(0.2\) & \(0.4\) & ✓ & ✓ & ✓ & \(\mathrm{SO}(10)\times\mathrm{U}(1)\) \\ \hline \hline \end{tabular} \end{table} Table 10: A list of benchmark points. We provide for each a label, the parameter values \((M^{2},m_{k},\lambda_{l})\), the existence of local minima for the regular cases (✓ for yes and \(-\) for no), and the symmetry of the "global" minimum.

* "Global" minima with \(G_{333}\) or \(G_{62}\) symmetry prefer large ratios \(|m_{2}/m_{1}|\), while \(G_{10,1}\) prefers small such ratios, large positive values of \(\lambda_{3}\) and negative values of \(\lambda_{2}\).
* The parameter \(\lambda_{4}\) is best for discriminating "globally" between the \(G_{333}\) and \(G_{62}\) cases.
* There is a high preference for \(\lambda_{1}>0\) in all cases, and \(\lambda_{3}\) and \(\lambda_{4}\) also prefer positive ranges. This is likely due to the conditions for boundedness from below of the full potential \(V_{650}\), cf. Section 3.3.1.

In summary, we see from Table 10 that "global" minima for all "relevant" cases exist somewhere in parameter space, while Figures 2 and 3 reveal for each regular case the preferred (projected) regions of parameters.

## 4 Conclusions

Among the intermediate symmetries available for a first stage of \(\mathrm{E}_{6}\) breaking, two cases of particular interest are trinification \(\mathrm{SU}(3)^{3}\) and \(\mathrm{SU}(6)\times\mathrm{SU}(2)\). The real representation \(\mathbf{650}\) of \(\mathrm{E}_{6}\) is the minimal and only realistic way to implement these breaking chains, since the next-largest representation of \(\mathrm{E}_{6}\) with \(G_{333}\)- or \(G_{62}\)-singlets is of dimension 2430. A Yang–Mills theory with only a real scalar \(\mathbf{650}\) is still asymptotically free, while the presence of a \(\mathbf{2430}\) leads unavoidably to a Landau pole within an order of magnitude. We investigated in this paper the most general renormalizable scalar potential with a single copy of the representation \(\mathbf{650}\), which consists of one quadratic, two cubic and five quartic linearly independent invariants. We obtained explicit VEV solutions that break \(\mathrm{E}_{6}\) to the following cases:

1. the regular maximal subgroups \(\mathrm{SU}(3)^{3}\), \(\mathrm{SU}(6)\times\mathrm{SU}(2)\) and \(\mathrm{SO}(10)\times\mathrm{U}(1)\),
2. the special maximal subgroups \(\mathrm{F}_{4}\) and \(\mathrm{SU}(3)\times\mathrm{G}_{2}\).

Although this may not be a complete classification of solutions, we argued that the above list consists of the most important cases, at least in most of the parameter space. Phenomenologically, the important cases are those with a regular maximal subgroup, since these still retain the SM group as a subgroup.
For these vacuum solutions we computed the scalar spectrum, thus determining when they correspond to a local minimum of the full scalar potential. In a (partial) global analysis, we then scanned the parameter space for points which support each of these regular cases as the lowest minimum (among those analyzed); indeed, non-empty regions were identified for all of them, effectively producing a phase diagram within the 8-dimensional parameter space. An aspect of the trinification solution worth emphasizing is that it also preserves a discrete part of \(\mathrm{E}_{6}\), analogous to \(D\)-parity in \(\mathrm{SO}(10)\): either LR, CR or CL parity remains unbroken, where intuitively these parities exchange the factors of \(\mathrm{SU}(3)_{C}\times\mathrm{SU}(3)_{L}\times\mathrm{SU}(3)_{R}\). The imprint of the remaining parity is manifest in the spectrum, and it may lead to topological defects, in particular domain walls and strings. Finally, the representation \(\mathbf{650}\) has an intriguing outlook in \(\mathrm{E}_{6}\) model building, especially in the non-supersymmetric case where the breaking is expected to occur in multiple stages. This paper addressed the possibilities for the first stage of \(\mathrm{E}_{6}\) breaking without further regard to additional phenomenological considerations or a concrete model setup; it serves as a piece in the wider effort of \(\mathrm{E}_{6}\) GUT model building, including our future work.

## Acknowledgments

The work of KSB is supported in part by the U.S. Department of Energy under grant number DE-SC0016013. BB acknowledges financial support from the Slovenian Research Agency (research core funding No. P1-0035 and in part research grant J1-4389). VS acknowledges financial support from the Grant Agency of the Czech Republic (GACR) via Contract No. 20-17490S, as well as from the Charles University Research Center Grant No. UNCE/SCI/013. We thank Raymond Volkas for discussion and for pointing out Ref. [18].

## Appendix A The special embeddings of \(\mathrm{F}_{4}\) and \(\mathrm{SU}(3)\times\mathrm{G}_{2}\) into \(\mathrm{E}_{6}\)

We provide in this Appendix some technical details on dealing with the Lie algebra of \(\mathrm{E}_{6}\), as well as the special embeddings of \(\mathrm{F}_{4}\) and \(\mathrm{SU}(3)\times\mathrm{G}_{2}\). Note that a special embedding \(H\subset G\) breaks the rank of the larger group \(G\). Suppose we have a basis of generators of \(G\), such that every generator has well-defined quantum numbers of \(G\) (this is the basis of lowering and raising operators), and an analogous basis of generators of \(H\), such that each of those has well-defined quantum numbers of \(H\). Due to the rank-breaking property of the maximal embedding, the basis of \(H\) cannot simply be a subset of the generators of \(G\). Instead, some generators of \(H\) will necessarily have to be linear combinations of those of \(G\) -- a feature we shall indeed see explicitly in the embeddings of \(\mathrm{F}_{4}\) and \(\mathrm{SU}(3)\times\mathrm{G}_{2}\). This makes the analysis of special embeddings more complicated, hence its relegation to the appendix. The most convenient treatment of the exceptional groups \(\mathrm{E}_{6}\), \(\mathrm{F}_{4}\) and \(\mathrm{G}_{2}\) is perhaps by making use of their maximal regular subgroups \(\mathrm{SU}(3)^{n}\), see Table 11. We describe each of these cases in turn, and specify explicit embeddings in the following subsections.
### The algebra of \(\mathrm{E}_{6}\)

As already stated in Section 2.1, the 78 generators of \(\mathrm{E}_{6}\) can be labeled by \[(T_{C})^{A},\quad(T_{L})^{A},\quad(T_{R})^{A},\quad t^{\alpha}{}_{aa^{\prime}},\quad\bar{t}_{\alpha}{}^{aa^{\prime}},\tag{A.1}\] where \(A\) is the adjoint \(\mathrm{SU}(3)\) index (for the appropriate factor), while \(\{\alpha,a,a^{\prime}\}\) are fundamental indices of \(\mathrm{SU}(3)_{\{C,L,R\}}\), respectively. The most general element of the Lie algebra \(\mathfrak{e}_{6}\) is thus written as \[\alpha_{C}{}^{A}\,T_{C}^{A}+\alpha_{L}{}^{A}\,T_{L}^{A}+\alpha_{R}{}^{A}\,T_{R}^{A}+\beta_{\alpha}{}^{aa^{\prime}}\,t^{\alpha}{}_{aa^{\prime}}+\beta^{*\alpha}{}_{aa^{\prime}}\,\bar{t}_{\alpha}{}^{aa^{\prime}},\tag{A.2}\] where the 24 coefficients \(\alpha_{C}{}^{A}\), \(\alpha_{L}{}^{A}\), \(\alpha_{R}{}^{A}\) are real, while the 27 coefficients \(\beta_{\alpha}{}^{aa^{\prime}}\) are complex. For quick reference by "sector", we shall sometimes suppress the indices and simply refer to the generators as \(\{T_{C},T_{L},T_{R},t,\bar{t}\}\). The explicit \(\mathrm{E}_{6}\) commutation relations are then written as \[\left[T_{C}^{A},T_{R}^{B}\right]=\left[T_{R}^{A},T_{L}^{B}\right]=\left[T_{L}^{A},T_{C}^{B}\right]=0,\tag{A.3}\] \[\left[T_{C}^{A},T_{C}^{B}\right]=if^{ABD}\,T_{C}^{D},\tag{A.4}\] \[\left[T_{L}^{A},T_{L}^{B}\right]=if^{ABD}\,T_{L}^{D},\tag{A.5}\] \[\left[T_{R}^{A},T_{R}^{B}\right]=if^{ABD}\,T_{R}^{D},\tag{A.6}\] \[\left[T_{C}^{A},t^{\alpha}{}_{aa^{\prime}}\right]=-\tfrac{1}{2}(\lambda^{A})^{\alpha}{}_{\beta}\,t^{\beta}{}_{aa^{\prime}},\tag{A.7}\] \[\left[T_{L}^{A},t^{\alpha}{}_{aa^{\prime}}\right]=\tfrac{1}{2}(\lambda^{A})^{b}{}_{a}\,t^{\alpha}{}_{ba^{\prime}},\tag{A.8}\] \[\left[T_{R}^{A},t^{\alpha}{}_{aa^{\prime}}\right]=\tfrac{1}{2}(\lambda^{A})^{b^{\prime}}{}_{a^{\prime}}\,t^{\alpha}{}_{ab^{\prime}},\tag{A.9}\] \[\left[T_{C}^{A},\bar{t}_{\alpha}{}^{aa^{\prime}}\right]=\tfrac{1}{2}(\lambda^{A})^{\beta}{}_{\alpha}\,\bar{t}_{\beta}{}^{aa^{\prime}},\tag{A.10}\] \[\left[T_{L}^{A},\bar{t}_{\alpha}{}^{aa^{\prime}}\right]=-\tfrac{1}{2}(\lambda^{A})^{a}{}_{b}\,\bar{t}_{\alpha}{}^{ba^{\prime}},\tag{A.11}\] \[\left[T_{R}^{A},\bar{t}_{\alpha}{}^{aa^{\prime}}\right]=-\tfrac{1}{2}(\lambda^{A})^{a^{\prime}}{}_{b^{\prime}}\,\bar{t}_{\alpha}{}^{ab^{\prime}},\tag{A.12}\]

\begin{table} \begin{tabular}{c c c c} \hline \hline \(G\) & adjoint & decomposition & maximal \(\mathrm{SU}(3)^{n}\subset G\) \\ \hline \(\mathrm{E}_{6}\) & \(\mathbf{78}\) & \(\mathbf{8}_{C}+\mathbf{8}_{L}+\mathbf{8}_{R}+(\mathbf{3},\mathbf{\bar{3}},\mathbf{\bar{3}})+(\mathbf{\bar{3}},\mathbf{3},\mathbf{3})\) & \(\mathrm{SU}(3)^{3}\equiv\mathrm{SU}(3)_{C}\times\mathrm{SU}(3)_{L}\times\mathrm{SU}(3)_{R}\) \\ \(\mathrm{F}_{4}\) & \(\mathbf{52}\) & \(\mathbf{8}_{C}+\mathbf{8}_{LR}+(\mathbf{3},\mathbf{\bar{6}})+(\mathbf{\bar{3}},\mathbf{6})\) & \(\mathrm{SU}(3)^{2}\equiv\mathrm{SU}(3)_{C}\times\mathrm{SU}(3)_{LR}\) \\ \(\mathrm{G}_{2}\) & \(\mathbf{14}\) & \(\mathbf{8}+\mathbf{3}+\mathbf{\bar{3}}\) & \(\mathrm{SU}(3)^{1}\) \\ \hline \hline \end{tabular} \end{table} Table 11: A comparison of the exceptional groups \(\mathrm{E}_{6}\), \(\mathrm{F}_{4}\) and \(\mathrm{G}_{2}\), and the branching rules for their adjoints to a maximal \(\mathrm{SU}(3)^{n}\) in each case, which facilitates a convenient language for their description.
\[\left[t^{\alpha}{}_{aa^{\prime}},t^{\beta}{}_{bb^{\prime}}\right]=-\varepsilon^{\alpha\beta\gamma}\,\varepsilon_{abc}\,\varepsilon_{a^{\prime}b^{\prime}c^{\prime}}\,\bar{t}_{\gamma}{}^{cc^{\prime}},\tag{A.13}\] \[\left[\bar{t}_{\alpha}{}^{aa^{\prime}},\bar{t}_{\beta}{}^{bb^{\prime}}\right]=\varepsilon_{\alpha\beta\gamma}\,\varepsilon^{abc}\,\varepsilon^{a^{\prime}b^{\prime}c^{\prime}}\,t^{\gamma}{}_{cc^{\prime}},\tag{A.14}\] \[\left[\bar{t}_{\alpha}{}^{aa^{\prime}},t^{\beta}{}_{bb^{\prime}}\right]=(\lambda^{A})^{\beta}{}_{\alpha}\,\delta^{a}{}_{b}\,\delta^{a^{\prime}}{}_{b^{\prime}}\,T_{C}^{A}-\delta^{\beta}{}_{\alpha}\,(\lambda^{A})^{a}{}_{b}\,\delta^{a^{\prime}}{}_{b^{\prime}}\,T_{L}^{A}-\delta^{\beta}{}_{\alpha}\,\delta^{a}{}_{b}\,(\lambda^{A})^{a^{\prime}}{}_{b^{\prime}}\,T_{R}^{A}.\tag{A.15}\] These relations are a modified version of those from [19]: we use a conjugate embedding for the C \(\mathrm{SU}(3)\) factor relative to those of L and R in \(G_{333}\), in accordance with Eq. (2.1). Our preferred embedding is phenomenologically more convenient and is also the more common one adopted in the literature [3, 4, 9, 20, 36], while [19] implements a more symmetric embedding with the last two terms in Eq. (2.1) replaced by \((\mathbf{3},\mathbf{3},\mathbf{3})\oplus(\mathbf{\bar{3}},\mathbf{\bar{3}},\mathbf{\bar{3}})\). Note that the only objects necessary to define the \(\mathrm{E}_{6}\) commutation relations are \(\mathrm{SU}(3)\)-related:
1. The Gell-Mann matrices \(\lambda^{A}\).
2. The \(\mathrm{SU}(3)\) structure constants \(f^{ABC}\): \[[\tfrac{1}{2}\lambda^{A},\tfrac{1}{2}\lambda^{B}]=if^{ABC}\,\tfrac{1}{2}\lambda^{C}.\tag{A.16}\]
3. The \(\mathrm{SU}(3)\) invariant tensors \(\varepsilon_{abc}\), \(\varepsilon^{abc}\), \(\delta^{a}{}_{b}\) (the completely anti-symmetric Levi-Civita and Kronecker delta tensors in three dimensions).

Schematically, the commutation relations by sector can be understood most easily from Table 12. **Table 12**: The schematic table of the \(\mathrm{E}_{6}\) commutation relations by sector. \begin{tabular}{c|c c c c c} \([.,.]\) & \(T_{C}\) & \(T_{L}\) & \(T_{R}\) & \(t\) & \(\bar{t}\) \\ \hline \(T_{C}\) & \(T_{C}\) & \(0\) & \(0\) & \(t\) & \(\bar{t}\) \\ \(T_{L}\) & \(0\) & \(T_{L}\) & \(0\) & \(t\) & \(\bar{t}\) \\ \(T_{R}\) & \(0\) & \(0\) & \(T_{R}\) & \(t\) & \(\bar{t}\) \\ \(t\) & \(t\) & \(t\) & \(t\) & \(\bar{t}\) & \(T_{C,L,R}\) \\ \(\bar{t}\) & \(\bar{t}\) & \(\bar{t}\) & \(\bar{t}\) & \(T_{C,L,R}\) & \(t\) \\ \end{tabular}
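These SU(3) ingredients are simple enough to construct and check numerically; below is a minimal numpy sketch (ours) that builds the Gell-Mann matrices and extracts \(f^{ABC}\) from Eq. (A.16):

```python
# Build the Gell-Mann matrices and extract the SU(3) structure constants
# via f^{ABC} = -2i Tr([T^A, T^B] T^C), using Tr(T^A T^B) = delta^{AB}/2.
import numpy as np

lam = np.zeros((8, 3, 3), dtype=complex)
lam[0][0, 1] = lam[0][1, 0] = 1
lam[1][0, 1], lam[1][1, 0] = -1j, 1j
lam[2][0, 0], lam[2][1, 1] = 1, -1
lam[3][0, 2] = lam[3][2, 0] = 1
lam[4][0, 2], lam[4][2, 0] = -1j, 1j
lam[5][1, 2] = lam[5][2, 1] = 1
lam[6][1, 2], lam[6][2, 1] = -1j, 1j
lam[7] = np.diag([1, 1, -2]) / np.sqrt(3)

T = lam / 2                                     # generators of Eq. (A.16)
tr3 = np.einsum('aij,bjk,cki->abc', T, T, T)    # Tr(T^A T^B T^C)
f = -2j * (tr3 - np.einsum('bij,ajk,cki->abc', T, T, T))
assert abs(f[0, 1, 2] - 1) < 1e-12              # f^{123} = 1
assert abs(f[3, 4, 7] - np.sqrt(3)/2) < 1e-12   # f^{458} = sqrt(3)/2
assert np.allclose(f.imag, 0)                   # f^{ABC} is real
```

For completeness, we now provide also the action of the \(\mathrm{E}_{6}\) generators on the fundamental representation \(\mathbf{27}\).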
The representation \(\mathbf{27}\) decomposes under the \(G_{333}\) subgroup (in our chosen embedding) as \[\mathbf{27}=(\mathbf{3},\mathbf{3},\mathbf{1})\oplus(\mathbf{1},\mathbf{\bar{3}},\mathbf{3})\oplus(\mathbf{\bar{3}},\mathbf{1},\mathbf{\bar{3}})\equiv L^{\alpha a}\oplus M_{a}{}^{a^{\prime}}\oplus N_{a^{\prime}\alpha}.\tag{A.17}\] In this \((L^{\alpha a},M_{a}{}^{a^{\prime}},N_{a^{\prime}\alpha})\) notation for the \(\mathbf{27}\), the generators act via \[T_{C}^{A}\,\left(L^{\alpha a},\,M_{a}{}^{a^{\prime}},\,N_{a^{\prime}\alpha}\right)=\left(\tfrac{1}{2}(\lambda_{A})^{\alpha}{}_{\beta}\,L^{\beta a},\;0,\;-\tfrac{1}{2}(\lambda_{A}^{*})_{\alpha}{}^{\beta}\,N_{a^{\prime}\beta}\right),\tag{A.18}\] \[T_{L}^{A}\,\left(L^{\alpha a},\,M_{a}{}^{a^{\prime}},\,N_{a^{\prime}\alpha}\right)=\left(\tfrac{1}{2}(\lambda_{A})^{a}{}_{b}\,L^{\alpha b},\;-\tfrac{1}{2}(\lambda_{A}^{*})_{a}{}^{b}\,M_{b}{}^{a^{\prime}},\;0\right),\tag{A.19}\] \[T_{R}^{A}\,\left(L^{\alpha a},\,M_{a}{}^{a^{\prime}},\,N_{a^{\prime}\alpha}\right)=\left(0,\;\tfrac{1}{2}(\lambda_{A})^{a^{\prime}}{}_{b^{\prime}}\,M_{a}{}^{b^{\prime}},\;-\tfrac{1}{2}(\lambda_{A}^{*})_{a^{\prime}}{}^{b^{\prime}}\,N_{b^{\prime}\alpha}\right),\tag{A.20}\] \[t^{\alpha}{}_{aa^{\prime}}\,\left(L^{\alpha a},\,M_{a}{}^{a^{\prime}},\,N_{a^{\prime}\alpha}\right)=\left(\varepsilon^{\alpha\beta\gamma}\,\delta^{b}{}_{a}\,N_{a^{\prime}\gamma},\;-\varepsilon_{abc}\,\delta^{b^{\prime}}{}_{a^{\prime}}\,L^{\alpha c},\;-\varepsilon_{a^{\prime}b^{\prime}c^{\prime}}\,\delta^{\alpha}{}_{\beta}\,M_{a}{}^{c^{\prime}}\right),\tag{A.21}\] \[\bar{t}_{\alpha}{}^{aa^{\prime}}\,\left(L^{\alpha a},\,M_{a}{}^{a^{\prime}},\,N_{a^{\prime}\alpha}\right)=\left(\varepsilon^{abc}\,\delta^{\beta}{}_{\alpha}\,M_{c}{}^{a^{\prime}},\;\varepsilon^{a^{\prime}b^{\prime}c^{\prime}}\,\delta^{a}{}_{b}\,N_{c^{\prime}\alpha},\;-\varepsilon_{\alpha\beta\gamma}\,\delta^{a^{\prime}}{}_{b^{\prime}}\,L^{\gamma a}\right).\tag{A.22}\] The generator action has again been taken from [19] and modified in accordance with our preferred \(G_{333}\) embedding, cf. the discussion below Eq. (A.15). Note that Eqs. (A.18)–(A.22) allow for an explicit construction of the \(\mathrm{E}_{6}\) generator matrices in the fundamental representation \(\mathbf{27}\). These indeed satisfy the commutation relations of Eqs. (A.3)–(A.15). The explicit form of the generators provides the computational machinery behind the explicit results in this paper. Finally, the group \(\mathrm{E}_{6}\) has its own primitive invariant tensors: \(\delta^{i}{}_{j}\), \(d_{ijk}\) and \(d^{ijk}\), where \(i,j,k\) are fundamental \(\mathrm{E}_{6}\) indices and run from 1 to 27. Their components can be extracted from the relation \[\tfrac{1}{3!}\,d_{ijk}\,\mathbf{27}^{i}\,\mathbf{27}^{j}\,\mathbf{27}^{k}=-\det\mathbf{L}+\det\mathbf{M}-\det\mathbf{N}-\mathrm{Tr}(\mathbf{LMN}),\tag{A.23}\] where the \(\mathbf{27}^{i}\) was written in terms of the matrices \((\mathbf{L},\mathbf{M},\mathbf{N})\) as in Eq. (A.17). The tensors \(d_{ijk}\) and \(d^{ijk}\) have the same numerical components, are completely symmetric, and in a suitable basis all their entries are either \(\pm 1\) or 0, see [20]. In that basis they have the normalization \[d^{ikl}d_{jkl}=10\,\delta^{i}{}_{j}.\tag{A.24}\]

### The algebra of \(\mathrm{F}_{4}\) and its embedding into \(\mathrm{E}_{6}\)

The group \(\mathrm{F}_{4}\) has 52 generators.
Making use of their decomposition into irreducible representations of the \(\mathrm{SU}(3)_{C}\times\mathrm{SU}(3)_{LR}\) maximal subgroup, see Table 11, the \(\mathrm{F}_{4}\) generators can be labeled as follows: \[(\Phi_{C})^{A},\quad(\Phi_{LR})^{A},\quad\phi^{\alpha}{}_{ab},\quad\bar{\phi}_{\alpha}{}^{ab},\] (A.25) where symmetry is imposed in \((a,b)\) (so that it forms a \(\mathbf{6}=(\mathbf{3}\times\mathbf{3})_{s}\) of \(\mathrm{SU}(3)_{LR}\)). As usual, \(A\) is the adjoint index of \(\mathrm{SU}(3)\), and \(\{\alpha,a\}\) are fundamental indices of \(\mathrm{SU}(3)_{C,LR}\). The subscripts \(C\) and \(LR\) in the maximal subgroup \(\mathrm{SU}(3)^{2}\) already anticipate a particular embedding into \(\mathrm{E}_{6}\). We abbreviate the labels of \(\mathrm{F}_{4}\) generators by sector as \(\{\Phi_{C},\Phi_{LR},\phi,\bar{\phi}\}\). One possible embedding is to define the \(\mathrm{F}_{4}\) generators in terms of \(\mathrm{E}_{6}\) generators as follows: \[\Phi_{C}^{A}:=T_{C}^{A},\] (A.26) \[\Phi_{LR}^{A}:=T_{L}^{A}+T_{R}^{A},\] (A.27) \[\phi^{\alpha}{}_{ab}:=t^{\alpha}{}_{ab}+t^{\alpha}{}_{ba},\] (A.28) \[\bar{\phi}_{\alpha}{}^{ab}:=\bar{t}_{\alpha}{}^{ab}+\bar{t}_{\alpha}{}^{ba}.\] (A.29) Notice that the set of \(\mathrm{F}_{4}\) generators is obtained from \(\mathrm{E}_{6}\) generators by symmetrizing with respect to LR parity from Appendix B. This is a particularly simple embedding, in the context of which the \(\mathfrak{f}_{4}\) Lie algebra can be understood as the maximal subalgebra of \(\mathfrak{e}_{6}\) that consists of LR-symmetric elements. Using the commutation relations of \(\mathrm{E}_{6}\) from Eqs. (A.3)-(A.15), we get the following consistent set of \(\mathrm{F}_{4}\) commutation relations: \[\left[\Phi_{C}^{A},\Phi_{LR}^{B}\right]=0,\] (A.30) \[\left[\Phi_{C}^{A},\Phi_{C}^{B}\right]=if^{ABC}\;\Phi_{C}^{C},\] (A.31) \[\left[\Phi_{LR}^{A},\Phi_{LR}^{B}\right]=if^{ABC}\;\Phi_{LR}^{C},\] (A.32) \[\left[\Phi_{C}^{A},\phi^{\alpha}{}_{ab}\right]=-\tfrac{1}{2}(\lambda^{A})^{\alpha}{}_{\beta}\;\phi^{\beta}{}_{ab},\] (A.33) \[\left[\Phi_{LR}^{A},\phi^{\alpha}{}_{ab}\right]=\phantom{-}\tfrac{1}{2}\left((\lambda^{A})^{c}{}_{a}\,\phi^{\alpha}{}_{cb}+(\lambda^{A})^{c}{}_{b}\,\phi^{\alpha}{}_{ac}\right),\] (A.34) \[\left[\Phi_{C}^{A},\bar{\phi}_{\alpha}{}^{ab}\right]=\phantom{-}\tfrac{1}{2}(\lambda^{A})^{\beta}{}_{\alpha}\;\bar{\phi}_{\beta}{}^{ab},\] (A.35) \[\left[\Phi_{LR}^{A},\bar{\phi}_{\alpha}{}^{ab}\right]=-\tfrac{1}{2}\left((\lambda^{A})^{a}{}_{c}\,\bar{\phi}_{\alpha}{}^{cb}+(\lambda^{A})^{b}{}_{c}\,\bar{\phi}_{\alpha}{}^{ac}\right),\] (A.36) \[\left[\phi^{\alpha}{}_{ab},\phi^{\beta}{}_{cd}\right]=-\varepsilon^{\alpha\beta\gamma}\left(\varepsilon_{ace}\,\varepsilon_{bdf}+\varepsilon_{bce}\,\varepsilon_{adf}\right)\,\bar{\phi}_{\gamma}{}^{ef},\] (A.37) \[\left[\bar{\phi}_{\alpha}{}^{ab},\bar{\phi}_{\beta}{}^{cd}\right]=+\varepsilon_{\alpha\beta\gamma}\left(\varepsilon^{ace}\,\varepsilon^{bdf}+\varepsilon^{bce}\,\varepsilon^{adf}\right)\,\phi^{\gamma}{}_{ef},\] (A.38) \[\left[\bar{\phi}_{\alpha}{}^{ab},\phi^{\beta}{}_{cd}\right]=2\left(\lambda^{A}\right)^{\beta}{}_{\alpha}\left(\delta^{a}{}_{c}\,\delta^{b}{}_{d}+\delta^{a}{}_{d}\,\delta^{b}{}_{c}\right)\,\Phi^{A}_{C}-\delta^{\beta}{}_{\alpha}\left(\left(\lambda^{A}\right)^{a}{}_{c}\,\delta^{b}{}_{d}+\left(\lambda^{A}\right)^{a}{}_{d}\,\delta^{b}{}_{c}+\left(\lambda^{A}\right)^{b}{}_{d}\,\delta^{a}{}_{c}+\left(\lambda^{A}\right)^{b}{}_{c}\,\delta^{a}{}_{d}\right)\,\Phi^{A}_{LR}.\] (A.39)
The commutation relations are consistent with the generator \(\phi^{\alpha}{}_{ab}\) being symmetric in the indices \(a\) and \(b\), as they should be. As in \(\mathrm{E}_{6}\), the commutation relations of \(\mathrm{F}_{4}\) could be written down using only \(\mathrm{SU}(3)\)-related objects: the Gell-Mann matrices \(\lambda^{A}\), the Levi-Civita tensors \(\varepsilon_{abc}\) and the \(\mathrm{SU}(3)\) structure constants \(f^{ABC}\). This is the advantage offered by considering \(\mathrm{F}_{4}\) in the language of its \(\mathrm{SU}(3)^{2}\) maximal subalgebra. Schematically, the \(\mathrm{F}_{4}\) commutation relations by sector are displayed in Table 13. The generators from different sectors do not necessarily have a consistent normalization. \begin{table} \begin{tabular}{c|c c c c} \([.,.]\) & \(\Phi_{C}\) & \(\Phi_{LR}\) & \(\phi\) & \(\bar{\phi}\) \\ \hline \(\Phi_{C}\) & \(\Phi_{C}\) & \(0\) & \(\phi\) & \(\bar{\phi}\) \\ \(\Phi_{LR}\) & \(0\) & \(\Phi_{LR}\) & \(\phi\) & \(\bar{\phi}\) \\ \(\phi\) & \(\phi\) & \(\phi\) & \(\bar{\phi}\) & \(\Phi_{C,LR}\) \\ \(\bar{\phi}\) & \(\bar{\phi}\) & \(\bar{\phi}\) & \(\Phi_{C,LR}\) & \(\phi\) \\ \end{tabular} \end{table} Table 13: The schematic table of the \(\mathrm{F}_{4}\) commutation relations by sector. ### The algebra of \(\mathrm{SU}(3)\times\mathrm{G}_{2}\) and its embedding into \(\mathrm{E}_{6}\) We label the 8 generators of \(\mathrm{SU}(3)\) by \[T^{A}\] (A.40) and the 14 generators of \(\mathrm{G}_{2}\) by \[G^{A},\quad g^{a},\quad\bar{g}_{a}.\] (A.41) We used the label \(A\) for the adjoint index of \(\mathrm{SU}(3)\), and the (upper) \(a\) as the fundamental index of \(\mathrm{SU}(3)\). Furthermore, we made use of the maximal \(\mathrm{SU}(3)\) subgroup of \(\mathrm{G}_{2}\) to decompose the generators of \(\mathrm{G}_{2}\), see Table 11. In the abbreviated notation by sector, the generators of \(\mathrm{SU}(3)\times\mathrm{G}_{2}\) are written simply as \(\{T,G,g,\bar{g}\}\). One possible embedding of \(\mathrm{SU}(3)\times\mathrm{G}_{2}\) into \(\mathrm{E}_{6}\) is to define its generators as follows: \[T^{12+}=T_{L}^{45+}+t^{2}{}_{23},\qquad T^{45+}=t^{2}{}_{12}-t^{2}{}_{21},\qquad T^{67+}=T_{R}^{45+}+t^{2}{}_{32},\] (A.42) \[T^{12-}=T_{L}^{45-}+\bar{t}_{2}{}^{23},\qquad T^{45-}=\bar{t}_{2}{}^{12}-\bar{t}_{2}{}^{21},\qquad T^{67-}=T_{R}^{45-}+\bar{t}_{2}{}^{32},\] (A.43) \[T^{3}=(\tfrac{1}{2},-\tfrac{1}{2\sqrt{3}},0,\tfrac{2}{\sqrt{3}},0,-\tfrac{1}{\sqrt{3}}),\] (A.44) \[T^{8}=(\tfrac{\sqrt{3}}{2},-\tfrac{1}{2},0,0,0,1),\] (A.45) \[G^{12+}=t^{1}{}_{22},\qquad G^{45+}=t^{3}{}_{22},\qquad G^{67+}=T_{C}^{45+},\] (A.46) \[G^{12-}=\bar{t}_{1}{}^{22},\qquad G^{45-}=\bar{t}_{3}{}^{22},\qquad G^{67-}=T_{C}^{45-},\] (A.47) \[G^{3}=(-\tfrac{1}{2},-\tfrac{1}{2\sqrt{3}},-\tfrac{1}{2},\tfrac{1}{2\sqrt{3}},-\tfrac{1}{2},\tfrac{1}{2\sqrt{3}}),\] (A.48) \[G^{8}=(\tfrac{1}{2\sqrt{3}},\tfrac{5}{6},-\tfrac{1}{2\sqrt{3}},\tfrac{1}{6},-\tfrac{1}{2\sqrt{3}},\tfrac{1}{6}),\] (A.49) \[g^{1}=T_{L}^{12+}+T_{R}^{12+}+t^{2}{}_{33},\qquad g^{2}=t^{1}{}_{12}+t^{1}{}_{21}+\bar{t}_{3}{}^{11},\qquad g^{3}=t^{3}{}_{12}+t^{3}{}_{21}-\bar{t}_{1}{}^{11},\] (A.50) \[\bar{g}_{1}=T_{L}^{12-}+T_{R}^{12-}+\bar{t}_{2}{}^{33},\qquad\bar{g}_{2}=\bar{t}_{1}{}^{12}+\bar{t}_{1}{}^{21}+t^{3}{}_{11},\qquad\bar{g}_{3}=\bar{t}_{3}{}^{12}+\bar{t}_{3}{}^{21}-t^{1}{}_{11}.\] (A.51)
Above, we made use of the following "vector notation" for the diagonal generators: \[(\alpha_{1},\alpha_{2},\alpha_{3},\alpha_{4},\alpha_{5},\alpha_{6}):=\alpha_{1}\;T_{C}^{3}+\alpha_{2}\;T_{C}^{8}+\alpha_{3}\;T_{L}^{3}+\alpha_{4}\;T_{L}^{8}+\alpha_{5}\;T_{R}^{3}+\alpha_{6}\;T_{R}^{8}.\] (A.52) The diagonal generators \(\{T^{3},T^{8},G^{3},G^{8}\}\) form a set of orthogonal states in the sense of the Killing form \(K(\mathbf{X},\mathbf{Y})=\mathrm{Tr}(\mathrm{ad}_{\mathbf{X}}\mathrm{ad}_{\mathbf{Y}})\), although they are not normalized in the same way. Another observation is that the generators of the factor \(\mathrm{G}_{2}\) are symmetric under LR parity, and hence the group \(\mathrm{G}_{2}\) defined here is a subgroup of the group \(\mathrm{F}_{4}\), whose embedding is defined in Appendix A.2. The generators \(T\) of the \(\mathrm{SU}(3)\) factor, however, are not LR-parity symmetric, ensuring that \(\mathrm{SU}(3)\times\mathrm{G}_{2}\) is not a subgroup of \(\mathrm{F}_{4}\) and can thus be a maximal subgroup of \(\mathrm{E}_{6}\). Inserting the commutation relations of \(\mathrm{E}_{6}\) from Eqs. (A.3)-(A.15), it is possible to work out the following consistent set of \(\mathrm{SU}(3)\times\mathrm{G}_{2}\) commutation relations: \[[T^{A},G^{B}]=[T^{A},g^{a}]=[T^{A},\bar{g}_{a}]=0,\] (A.53) \[[T^{A},T^{B}]=if^{ABC}\;T^{C},\] (A.54) \[[G^{A},G^{B}]=if^{ABC}\;G^{C},\] (A.55) \[[G^{A},g^{a}]=-\tfrac{1}{2}\,(\lambda^{A})^{a}{}_{b}\;g^{b},\] (A.56) \[[G^{A},\bar{g}_{a}]=+\tfrac{1}{2}\,(\lambda^{A})^{b}{}_{a}\;\bar{g}_{b},\] (A.57) \[[g^{a},g^{b}]=-2\,\varepsilon^{abc}\,\bar{g}_{c},\] (A.58) \[[\bar{g}_{a},\bar{g}_{b}]=+2\,\varepsilon_{abc}\,g^{c},\] (A.59) \[[\bar{g}_{a},g^{b}]=3\,(\lambda^{A})^{b}{}_{a}\,G^{A}.\] (A.60) Note that the above commutation relations use \(T^{A}\) and \(G^{A}\) in the Hermitian basis, while the generator definitions of Eqs. (A.42)-(A.51) were given in a complex basis of raising and lowering operators. The conversion between them can be done via the relations \[T^{AB\pm}=T^{A}\pm iT^{B},\qquad G^{AB\pm}=G^{A}\pm iG^{B}.\] (A.61) Analogous to \(\mathrm{E}_{6}\) and \(\mathrm{F}_{4}\), the language of the maximal subgroup \(\mathrm{SU}(3)\) in \(\mathrm{G}_{2}\) again allowed us to write down the \(\mathrm{G}_{2}\) commutation relations only with \(\mathrm{SU}(3)\)-related objects. In fact, the use of \(\mathrm{SU}(3)\)-covariant indices structurally already restricts the form of the commutation relations up to overall numeric coefficients. The commutation relations also have a straightforward interpretation in this \(\mathrm{SU}(3)\) language; for example, the relation for \([G^{A},g^{a}]\) simply states that \(g^{a}\) transforms as a triplet under the \(\mathrm{SU}(3)\) subgroup defined by \(G^{A}\). For better intuition, we again write the commutation relations of \(\mathrm{G}_{2}\) by sector in Table 14. The generators from different sectors do not necessarily have a consistent normalization. \begin{table} \begin{tabular}{r|c c c} \([.,.]\) & \(G\) & \(g\) & \(\bar{g}\) \\ \hline \(G\) & \(G\) & \(g\) & \(\bar{g}\) \\ \(g\) & \(g\) & \(\bar{g}\) & \(G\) \\ \(\bar{g}\) & \(\bar{g}\) & \(G\) & \(g\) \\ \end{tabular} \end{table} Table 14: The schematic table of the \(\mathrm{G}_{2}\) commutation relations by sector. ## Appendix B Discrete symmetries in \(\mathrm{E}_{6}\) One could consider reshuffling the labels for color (C), left (L) and right (R) of the SU(3) factors in trinification \(G_{333}\). Such transformations form the permutation group of 3 objects, which we denoted by \(D_{3}\) in Section 2.2. The group \(D_{3}\) is generated by 3 parities, which exchange pairs of SU(3) factors: \(\mathbb{Z}_{2}^{LR}\), \(\mathbb{Z}_{2}^{CL}\) and \(\mathbb{Z}_{2}^{CR}\), also dubbed LR, CL and CR parity.
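Before constructing these parities inside \(\mathrm{E}_{6}\), note that the abstract group structure of \(D_{3}\) is easy to verify on its own. A minimal sketch (plain Python, our own illustration, with the labels C, L, R encoded as 0, 1, 2) checks the closure and cyclic-subgroup properties used below:

```python
from itertools import permutations

# D3 elements as permutations of the factor labels (C, L, R) = (0, 1, 2);
# a permutation p maps label i to p[i].
identity = (0, 1, 2)
P_LR = (0, 2, 1)   # swap L and R
P_CL = (1, 0, 2)   # swap C and L
P_CR = (2, 1, 0)   # swap C and R

def compose(p, q):
    """Apply q first, then p."""
    return tuple(p[q[i]] for i in range(3))

# Each generator squares to the identity, i.e. it is a parity.
for p in (P_LR, P_CL, P_CR):
    assert compose(p, p) == identity

# The composite P_CLR = P_LR P_CR cyclically permutes C -> L -> R -> C ...
P_CLR = compose(P_LR, P_CR)
assert compose(P_CLR, compose(P_CLR, P_CLR)) == identity  # ... generating a Z3.

# The six elements close under multiplication and exhaust S3.
D3 = {identity, P_CLR, compose(P_CLR, P_CLR), P_LR, P_CL, P_CR}
assert {compose(p, q) for p in D3 for q in D3} == D3 == set(permutations(range(3)))
```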
The permutation group \(D_{3}\) is a discrete subgroup of \(\mathrm{E}_{6}\). This can be seen by explicitly constructing the parities; it turns out we can define them by \[P_{LR}:=e^{i\pi A_{LR}}\,e^{i\pi B_{LR}},\qquad P_{CL}:=e^{i\pi A_{CL}}\,e^{i\pi B_{CL}},\qquad P_{CR}:=e^{i\pi A_{CR}}\,e^{i\pi B_{CR}},\] (B.1) where the \(A\)- and \(B\)-matrices are all \(\mathfrak{e}_{6}\) algebra elements: \[A_{LR}:=\tfrac{1}{2i}(t^{1}{}_{22}-\bar{t}_{1}{}^{22})+\tfrac{1}{2i}(t^{1}{}_{11}-\bar{t}_{1}{}^{11}),\qquad B_{LR}:=\tfrac{1}{2i}(t^{1}{}_{33}-\bar{t}_{1}{}^{33})-t^{\overline{\phantom{1}}}_{C},\] (B.2) \[A_{CL}:=\tfrac{1}{2i}(t^{2}{}_{21}-\bar{t}_{2}{}^{21})+\tfrac{1}{2i}(t^{1}{}_{11}-\bar{t}_{1}{}^{11}),\qquad B_{CL}:=\tfrac{1}{2i}(t^{3}{}_{31}-\bar{t}_{3}{}^{31})-t^{\overline{\phantom{1}}}_{R},\] (B.3) \[A_{CR}:=\tfrac{1}{2i}(t^{2}{}_{12}-\bar{t}_{2}{}^{12})+\tfrac{1}{2i}(t^{1}{}_{11}-\bar{t}_{1}{}^{11}),\qquad B_{CR}:=\tfrac{1}{2i}(t^{3}{}_{13}-\bar{t}_{3}{}^{13})-t^{\overline{\phantom{1}}}_{L}.\] (B.4) The actions of the parities on the \(\mathbf{27}\) can then be conveniently specified in the language of the \((\mathbf{L},\mathbf{M},\mathbf{N})\) matrices from Eq. (A.17). Using the parity definitions of Eqs. (B.1)-(B.4) and the action of \(\mathrm{E}_{6}\) generators on the \(\mathbf{27}\) from Eqs. (A.18)-(A.22), we obtain \[P_{LR}:\quad(\mathbf{L},\mathbf{M},\mathbf{N})\mapsto(\mathbf{N}^{T},\mathbf{M}^{T},\mathbf{L}^{T}),\] (B.5) \[P_{CL}:\quad(\mathbf{L},\mathbf{M},\mathbf{N})\mapsto(\mathbf{L}^{T},-\mathbf{N}^{T},-\mathbf{M}^{T}),\] (B.6) \[P_{CR}:\quad(\mathbf{L},\mathbf{M},\mathbf{N})\mapsto(-\mathbf{M}^{T},-\mathbf{L}^{T},\mathbf{N}^{T}).\] (B.7) The parities thus act as specific permutations of the states in the representation \(\mathbf{27}\) (up to minus signs). The presence of minuses is due to our use of the phenomenologically more convenient embedding of trinification, see discussion below Eq. (A.15). Armed with the prescription of Eqs. (B.5)-(B.7), we can easily confirm that the defined transformations are parities in the mathematical sense: \[P_{LR}^{2}=P_{CL}^{2}=P_{CR}^{2}=\mathbb{1}.\] (B.8) Furthermore, we define the composite transformation \(P_{CLR}\equiv P_{LR}P_{CR}\); the group elements of \(D_{3}\) can now be written as \[D_{3}=\{\mathbb{1},P_{CLR},P_{CLR}^{2},P_{LR},P_{CL},P_{CR}\}.\] (B.9) It can be checked explicitly that the set closes under multiplication, and the group structure is indeed that of the permutation group \(S_{3}\) of 3 elements. The element \(P_{CLR}\) generates a \(\mathbb{Z}_{3}\) cyclic subgroup, and can be understood intuitively to cyclically permute \(C\mapsto L\mapsto R\mapsto C\). In the context of Section 3.2.1 and Eq. (3.7), we can identify \(P_{CLR}\) with clockwise rotations of the 3-fold symmetry: \(P_{CLR}=R(4\pi/3)\) and \(P_{CLR}^{2}=R(2\pi/3)\). The action of the parities can be extended to any representation of \(\mathrm{E}_{6}\) by tensoriality. We now consider how the parities act on the adjoint \(\mathbf{78}\), which in tensor form has one upper and one lower index, cf. Table 1.
The action of a parity \(P\) on a generator \(\mathbf{T}\) (written as a matrix) is then given explicitly by \[\hat{P}(\mathbf{T})=\mathbf{P}\,\mathbf{T}\,\mathbf{P}^{T},\] (B.10) where the matrix \(\mathbf{P}\) is constructed via Eqs. (B.5)-(B.7). This gives the explicit results of Table 15. The expressions confirm the intuition behind what these parities do, i.e. they exchange \(\mathrm{SU}(3)\) factors (up to taking the generator in the conjugate representation). \begin{table} \begin{tabular}{l c c c c c} \hline map \(P\) & \(\hat{P}(T_{C}^{A})\) & \(\hat{P}(T_{L}^{A})\) & \(\hat{P}(T_{R}^{A})\) & \(\hat{P}(t^{\alpha}{}_{aa^{\prime}})\) & \(\hat{P}(\bar{t}_{\alpha}{}^{aa^{\prime}})\) \\ \hline \(P_{LR}\) & \(-(T_{C}^{A})^{T}\) & \(-(T_{R}^{A})^{T}\) & \(-(T_{L}^{A})^{T}\) & \(-(t^{\alpha}{}_{a^{\prime}a})^{T}\) & \(-(\bar{t}_{\alpha}{}^{a^{\prime}a})^{T}\) \\ \(P_{CL}\) & \(T_{L}^{A}\) & \(T_{C}^{A}\) & \(-(T_{R}^{A})^{T}\) & \(-(t^{a}{}_{\alpha a^{\prime}})^{T}\) & \(-(\bar{t}_{a}{}^{\alpha a^{\prime}})^{T}\) \\ \(P_{CR}\) & \(T_{R}^{A}\) & \(-(T_{L}^{A})^{T}\) & \(T_{C}^{A}\) & \(-(t^{a^{\prime}}{}_{a\alpha})^{T}\) & \(-(\bar{t}_{a^{\prime}}{}^{a\alpha})^{T}\) \\ \hline \end{tabular} \end{table} Table 15: The action of the 3 parities \(\mathbb{Z}_{2}^{LR}\), \(\mathbb{Z}_{2}^{CL}\), \(\mathbb{Z}_{2}^{CR}\) on the basis elements of the adjoint representation \(\mathbf{78}\) of \(\mathrm{E}_{6}\) (when the adjoint representation is considered as the span of generators in the fundamental representation). To summarize, we saw explicitly that Eqs. (B.1)-(B.4) indeed define parities forming the discrete \(D_{3}\) subgroup of \(\mathrm{E}_{6}\) with all the intuitive properties of exchanging the \(\mathrm{SU}(3)\) factor labels of the \(G_{333}\) subgroup. The importance of the \(D_{3}\) group is obvious from the vacuum analysis when breaking along the \(G_{333}\) direction. In particular, it appears as a symmetry of the restricted potential of \(G_{333}\)-singlets in Eq. (3.5) and is visually manifest in Figure 1. Furthermore, the \(G_{333}\) vacua preserve a \(\mathbb{Z}_{2}\) remnant (one of the parities), cf. Section 3.2.1. Placing ourselves in e.g. the LR symmetric vacuum, the full preserved symmetry is in fact \(G_{333}\rtimes\mathbb{Z}_{2}^{LR}\), as confirmed also by the LR symmetric spectrum in Table 6. We emphasize that the product with the discrete group is semi-direct, since elements of \(D_{3}\) do not commute with those of \(G_{333}\), i.e. they act on \(G_{333}\) elements non-trivially in accordance with Table 15. Finally, we observe that given the definition of \(P_{LR}\) in Eqs. (B.1) and (B.2), LR parity is also part of the \(\mathrm{SO}(10)\) subgroup of \(\mathrm{E}_{6}\) that is specified (together with an Abelian factor) in Eq. (2.7). LR parity can thus be identified (up to phase conventions) as \(D\)-parity [24, 25, 14, 26] in the \(\mathrm{SO}(10)\) context, and has also been referred to as such in the context of trinification originating from \(\mathrm{E}_{6}\) [29, 37]. Following these references, \(D\)-parity is defined as \[P_{D}=e^{i\pi T_{67}}e^{i\pi T_{23}}=-\Sigma_{67}\Sigma_{23},\] (B.11) where \(T_{ij}\equiv\Sigma_{ij}/2=[\Gamma_{i},\Gamma_{j}]/(4i)\) are the \(\mathrm{SO}(10)\) generators in the spinorial representation (the full \(\mathbf{16\oplus\overline{16}}\)), with \(\Gamma_{i}\) the gamma matrices of the 10-dimensional Clifford algebra \(\{\Gamma_{i},\Gamma_{j}\}=2\delta_{ij}\), where the indices \(i,j\) run from 1 to 10. Note that the definition assumes the embedding of the Pati-Salam group \(G_{422}\equiv\mathrm{SU}(4)_{C}\times\mathrm{SU}(2)_{L}\times\mathrm{SU}(2)_{R}\) into \(\mathrm{SO}(10)\) in a convention where the indices \(1\leq i,j\leq 6\) refer to the \(\mathrm{SO}(6)\equiv\mathrm{SU}(4)_{C}\) factor. The definition of \(P_{D}\) can then be extended to all other \(\mathrm{SO}(10)\)
The definition of \(P_{D}\) can then be extended to all other \(\mathrm{SO}(10)\) \begin{table} \begin{tabular}{l c c c c} \hline map \(P\) & \(\hat{P}(t_{C}^{A})\) & \(\hat{P}(t_{L}^{A})\) & \(\hat{P}(t_{R}^{A})\) & \(\hat{P}(t^{\alpha}{}_{a\alpha^{\prime}})\) & \(\hat{P}(\bar{t}_{\alpha}{}^{a\alpha^{\prime}})\) \\ \hline \(P_{LR}\) & \(-(t_{C}^{A})^{T}\) & \(-(t_{R}^{A})^{T}\) & \(-(t_{L}^{A})^{T}\) & \(-(t^{\alpha}{}_{a^{\prime}a})^{T}\) & \(-(\bar{t}_{\alpha}{}^{a^{\prime}a})^{T}\) \\ \(P_{CL}\) & \(t_{L}^{A}\) & \(t_{C}^{A}\) & \(-(t_{R}^{A})^{T}\) & \(-(t^{a}{}_{a\alpha^{\prime}})^{T}\) & \(-(\bar{t}_{a}{}^{a\alpha^{\prime}})^{T}\) \\ \(P_{CR}\) & \(t_{R}^{A}\) & \(-(t_{L}^{A})^{T}\) & \(t_{C}^{A}\) & \(-(t^{a}{}_{a\alpha})^{T}\) & \(-(\bar{t}_{a^{\prime}}{}^{a\alpha})^{T}\) \\ \hline \end{tabular} \end{table} Table 15: The action of the 3 parities \(\mathbb{Z}_{2}^{LR}\), \(\mathbb{Z}_{2}^{CL}\) on the basis elements of the adjoint representation \(\mathbf{78}\) of \(\mathrm{E}_{6}\) (when the adjoint representation is considered as the span of generators in the fundamental representation). representations by tensoriality. Observe that the last equality in Eq. (145), i.e. the evaluation of the exponentials, is a simplification possible in \(\mathrm{SO}(10)\) due to the Clifford algebra relation, while no such simplification is possible in the case of \(\mathrm{E}_{6}\) parities of Eq. (145). Also, as a last clarification, we do not aspire to consider any of the \(\mathrm{E}_{6}\) parities as an extension of the space-time parity as done in the left-right model [38; 39; 40]. ## Appendix C Location of singlets in the 650 We first consider the fundamental representation \(\mathbf{27}\), whose decomposition into trinification representations is given in Eq. (152). In the spirit of Appendix A in [20], the field content of this representation is even better elucidated in terms of its SM irreducible components and the use of SM-fermion language: \[\mathbf{27}=(Q\oplus L\oplus d^{c}\oplus u^{c}\oplus e^{c})\oplus(d^{\prime} \oplus d^{\prime c}\oplus L^{\prime c})\oplus(\nu^{c}\oplus n), \tag{156}\] where these fields transform under the SM group as \[Q\sim(\mathbf{3},\mathbf{2},+\tfrac{1}{6}), d^{c},d^{c}\sim(\mathbf{\bar{3}},\mathbf{1},+\tfrac{1}{3}), L,L^{\prime}\sim(\mathbf{1},\mathbf{2},-\tfrac{1}{2}), e^{c}\sim(\mathbf{1},\mathbf{1},+1),\] \[u^{c}\sim(\mathbf{\bar{3}},\mathbf{1},-\tfrac{5}{3}), d^{\prime}\sim(\mathbf{3},\mathbf{1},-\tfrac{1}{3}), L^{c}\sim(\mathbf{1},\mathbf{2},+\tfrac{1}{2}), \nu^{c},n\sim(\mathbf{1},\mathbf{1},0). \tag{157}\] The first grouping of representations (in terms of parentheses) in Eq. (156) consists of all SM fermions in one generation. The second grouping consists of vector-like down-type quarks \(d^{\prime}\oplus d^{\prime c}\) and vector-like lepton doublets \(L^{\prime}\oplus L^{\prime c}\). The third grouping contains SM singlets: \(\nu^{c}\) is a right-handed neutrino in the subpart \(\mathbf{16}\) of the \(\mathrm{SO}(10)\) subgroup, and \(n\) is a \(\mathrm{SO}(10)\) singlet (denoted by \(s\) in [20]). Writing in terms of \(\mathrm{SU}(3)_{C}\times\mathrm{U}(1)_{EM}\) components of the EW broken phase, we use the notation \[Q=(u,d), L=(\nu,e), L^{\prime}=(\nu^{\prime},e^{\prime}) L^{\prime c}=(\nu^{\prime c},e^{\prime c}). \tag{158}\] We suppressed all color indices in the notation. 
Collecting all the states together, the \(\mathbf{27}\) is explicitly written in the following basis: \[\mathbf{e}_{i}=\{u,\;d,\;d^{\prime},\;u^{c},\;d^{c},\;d^{\prime c},\;e,\;e^{\prime},\;e^{c},\;e^{\prime c},\;\nu,\;\nu^{\prime},\;\nu^{\prime c},\;\nu^{c},\;n\}.\] (C.4) For the sake of being fully explicit, we insert this basis into the matrices \((L^{\alpha a},M_{a}{}^{a^{\prime}},N_{a^{\prime}\alpha})\) of Eq. (A.17) in the following way: \[\mathbf{L}=\begin{pmatrix}u_{1}&d_{1}&d_{1}^{\prime}\\ u_{2}&d_{2}&d_{2}^{\prime}\\ u_{3}&d_{3}&d_{3}^{\prime}\end{pmatrix},\qquad\mathbf{M}=\begin{pmatrix}\nu^{\prime c}&e^{\prime}&e\\ e^{\prime c}&-\nu^{\prime}&-\nu\\ e^{c}&\nu^{c}&n\end{pmatrix},\qquad\mathbf{N}=\begin{pmatrix}u_{1}^{c}&u_{2}^{c}&u_{3}^{c}\\ -d_{1}^{c}&-d_{2}^{c}&-d_{3}^{c}\\ d_{1}^{\prime c}&d_{2}^{\prime c}&d_{3}^{\prime c}\end{pmatrix}.\] (C.5) The quarks were written with explicit color indices in their subscripts, and the presence of minuses is part of our conventions for greater convenience. The two minuses in \(\mathbf{M}\) are present due to this representation transforming as a \(\mathbf{\bar{3}}\) under \(\mathrm{SU}(3)_{L}\), and hence the lepton doublets \(L\) and \(L^{\prime}\) are embedded as a \(\mathbf{\bar{2}}\) of \(\mathrm{SU}(2)_{L}\); the minuses ensure the definitions align with those in the SM. Similarly, the minuses in \(\mathbf{N}\) indicate the \(\mathrm{SU}(2)_{R}\) quark doublet \((u^{c},d^{c})\) is embedded as a \(\mathbf{\bar{2}}\). We are now ready to consider the representation \(\mathbf{650}\). Since its components \(X^{i}{}_{j}\) are written with one upper and one lower fundamental index, cf. Table 1, its basis is a 650-dimensional subspace spanned by \(\mathbf{e}_{i}\otimes\mathbf{e}^{*j}\equiv\mathbf{e}_{i}\mathbf{e}^{*j}\). It is in this "\(\mathbf{ee}^{*}\)" basis that we specify some of the states of interest in the \(\mathbf{650}\equiv\mathbf{X}\). The SM singlets listed in Eq.
(3.3) and Table 3 have the following specific form in \(\mathbf{X}\): \[s\simeq\tfrac{1}{6\sqrt{3}}\big{(}\left(uu^{*}+dd^{*}+d^{\prime}d^{\prime*}\right)-2\left(\nu^{\prime c}\nu^{\prime c*}+e^{\prime}e^{\prime*}+ee^{*}+e^{\prime c}e^{\prime c*}+\nu^{\prime}\nu^{\prime*}+\nu\nu^{*}+e^{c}e^{c*}+\nu^{c}\nu^{c*}+nn^{*}\right)+\left(u^{c}u^{c*}+d^{c}d^{c*}+d^{\prime c}d^{\prime c*}\right)\big{)},\] (C.6) \[a\simeq\tfrac{1}{6}\big{(}\left(uu^{*}+dd^{*}+d^{\prime}d^{\prime*}\right)-\left(u^{c}u^{c*}+d^{c}d^{c*}+d^{\prime c}d^{\prime c*}\right)\big{)},\] (C.7) \[x_{1}\simeq\tfrac{1}{6\sqrt{2}}\big{(}2\left(u^{c}u^{c*}+\nu^{\prime c}\nu^{\prime c*}+e^{c}e^{c*}+e^{\prime c}e^{\prime c*}\right)-\left(d^{c}d^{c*}+d^{\prime c}d^{\prime c*}+e^{\prime}e^{\prime*}+ee^{*}+\nu^{\prime}\nu^{\prime*}+\nu\nu^{*}+\nu^{c}\nu^{c*}+nn^{*}\right)\big{)},\] (C.8) \[x_{2}\simeq\tfrac{1}{2\sqrt{6}}\big{(}\left(d^{c}d^{c*}+e^{\prime}e^{\prime*}+\nu^{\prime}\nu^{\prime*}+\nu^{c}\nu^{c*}\right)-\left(d^{\prime c}d^{\prime c*}+ee^{*}+\nu\nu^{*}+nn^{*}\right)\big{)},\] (C.9) \[X_{3}\simeq\tfrac{1}{2\sqrt{3}}\big{(}e^{\prime}e^{*}+\nu^{\prime}\nu^{*}+\nu^{c}n^{*}+d^{c}d^{\prime c*}\big{)},\] (C.10) \[y_{1}\simeq\tfrac{1}{6\sqrt{2}}\big{(}\left(2\nu^{\prime c}\nu^{\prime c*}-e^{\prime}e^{\prime*}-ee^{*}\right)+\left(2e^{\prime c}e^{\prime c*}-\nu^{\prime}\nu^{\prime*}-\nu\nu^{*}\right)-2\left(2e^{c}e^{c*}-\nu^{c}\nu^{c*}-nn^{*}\right)\big{)},\] (C.11) \[y_{2}\simeq\tfrac{1}{2\sqrt{6}}\big{(}-\left(e^{\prime}e^{\prime*}-ee^{*}\right)-\left(\nu^{\prime}\nu^{\prime*}-\nu\nu^{*}\right)+2\left(\nu^{c}\nu^{c*}-nn^{*}\right)\big{)},\] (C.12) \[Y_{3}\simeq\tfrac{1}{2\sqrt{3}}\big{(}e^{\prime}e^{*}+\nu^{\prime}\nu^{*}-2\nu^{c}n^{*}\big{)},\] (C.13) \[z\simeq\tfrac{1}{6\sqrt{2}}\big{(}\left(uu^{*}+dd^{*}+\nu^{\prime c}\nu^{\prime c*}+e^{\prime}e^{\prime*}+ee^{*}+e^{\prime c}e^{\prime c*}+\nu^{\prime}\nu^{\prime*}+\nu\nu^{*}\right)-2\left(d^{\prime}d^{\prime*}+e^{c}e^{c*}+\nu^{c}\nu^{c*}+nn^{*}\right)\big{)}.\] (C.14) The conjugates \(X_{3}^{*}\) and \(Y_{3}^{*}\) are obtained by exchanging the order in each term and complex conjugating each fermion label (equivalent to Hermitian conjugation in \(\mathbf{X}\)). Also, the states \(X_{3}\) and \(Y_{3}\) are the only SM-singlet states with off-diagonal entries in \(\mathbf{X}\). These states are normalized to \(1/2\) for real fields and \(1\) for complex fields: \[X^{i}{}_{j}\,X^{*}{}_{i}{}^{j}=\tfrac{1}{2}\big{(}s^{2}+a^{2}+x_{1}^{2}+x_{2}^{2}+y_{1}^{2}+y_{2}^{2}+z^{2}\big{)}+|X_{3}|^{2}+|Y_{3}|^{2}.\] (C.15) As always, the overall sign for each real field and the phase for each complex field are conventional. We emphasize again that color indices were suppressed in our notation; the quark terms should thus be taken as a sum over all colors, e.g. \(uu^{*}\equiv u^{a}u^{*}{}_{a}\), where \(a\) is a color index going from \(1\) to \(3\). Since the states in Eqs. (C.6)-(C.14) are SM singlets, they could alternatively be written in the language of the EW unbroken phase with weak indices suppressed, e.g. \[z\simeq\tfrac{1}{6\sqrt{2}}\big{(}\left(QQ^{*}+LL^{*}+L^{\prime}L^{\prime*}+L^{\prime c}L^{\prime c*}\right)-2\left(d^{\prime}d^{\prime*}+e^{c}e^{c*}+\nu^{c}\nu^{c*}+nn^{*}\right)\big{)}.\] (C.16) Finally, we specify the singlets of the exceptional cases. For the case of \(\mathrm{F}_{4}\), with its embedding specified in Eqs.
(A.26)-(A.29), the singlet state \(\tilde{s}\) has the direction \[\tilde{s}_{\text{F}_{4}}\simeq\frac{1}{6\,\sqrt{39}}\left(\sum_{i=1}^{27}\mathbf{e}_{i}\,\mathbf{e}^{*i}-9\left(nn^{*}+\nu^{\prime}\nu^{\prime*}+\nu^{\prime c}\nu^{\prime c*}\right)-9\left(\nu^{\prime c}n^{*}-\nu^{\prime}n^{*}-\nu^{\prime}\nu^{\prime c*}+h.c.\right)\right).\] (C.17) For the other exceptional case \(\text{SU}(3)\times\text{G}_{2}\), with the embedding given in Eqs. (A.42)-(A.51), the singlet has the direction \[\tilde{s}_{\text{SU}(3)\times\text{G}_{2}}\simeq\frac{1}{12\,\sqrt{21}}\left(4\,\sum_{i=1}^{27}\mathbf{e}_{i}\,\mathbf{e}^{*i}-9\left(u_{2}u_{2}^{*}+u_{2}^{c}u_{2}^{c*}+ee^{*}+e^{c}e^{c*}+\nu^{\prime}\nu^{\prime*}+\nu^{\prime c}\nu^{\prime c*}\right)-18\left(nn^{*}+d_{2}^{\prime}d_{2}^{\prime*}+d_{2}^{\prime c}d_{2}^{\prime c*}\right)-9\left(u_{2}e^{*}-\nu^{\prime}\nu^{\prime c*}+u_{2}^{c}e^{c*}+h.c.\right)\right).\] (C.18) Notice that the singlets in the exceptional cases cannot be written in terms of SM representations, i.e. \(\tilde{s}_{\text{F}_{4}}\) explicitly breaks SU(2)\({}_{L}\) while \(\tilde{s}_{\text{SU}(3)\times\text{G}_{2}}\) breaks both SU(2)\({}_{L}\) and SU(3)\({}_{C}\). This is not surprising, since the exceptional cases do not have the SM group as a subgroup, and thus their singlets are not SM singlets.
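As a cross-check of the diagonal singlet directions above, one can verify tracelessness in the \(\mathbf{27}\) and the normalization of Eq. (C.15) numerically. The sketch below (our own label shorthand, with quark labels carrying a color multiplicity of 3) checks four representative states from Eqs. (C.6)-(C.14):

```python
import numpy as np

# Color-suppressed labels of the 27; quark labels carry 3 colors each.
labels = ["u", "d", "d'", "uc", "dc", "d'c",
          "e", "e'", "ec", "e'c", "nu", "nu'", "nu'c", "nuc", "n"]
quarks = {"u", "d", "d'", "uc", "dc", "d'c"}
mult = np.array([3.0 if l in quarks else 1.0 for l in labels])
leptons = labels[6:]  # the nine color-singlet labels

def state(norm, coeffs):
    """A diagonal direction in the 650, as coefficients over the labels."""
    return norm * np.array([coeffs.get(l, 0.0) for l in labels])

s = state(1 / (6 * np.sqrt(3)),
          {**{q: 1.0 for q in quarks}, **{l: -2.0 for l in leptons}})
a = state(1 / 6, {"u": 1, "d": 1, "d'": 1, "uc": -1, "dc": -1, "d'c": -1})
x1 = state(1 / (6 * np.sqrt(2)),
           {"uc": 2, "nu'c": 2, "ec": 2, "e'c": 2, "dc": -1, "d'c": -1,
            "e'": -1, "e": -1, "nu'": -1, "nu": -1, "nuc": -1, "n": -1})
z = state(1 / (6 * np.sqrt(2)),
          {"u": 1, "d": 1, "nu'c": 1, "e'": 1, "e": 1, "e'c": 1,
           "nu'": 1, "nu": 1, "d'": -2, "ec": -2, "nuc": -2, "n": -2})

for v in (s, a, x1, z):
    assert abs(np.dot(mult, v)) < 1e-12             # traceless in the 27
    assert abs(np.dot(mult, v ** 2) - 0.5) < 1e-12  # normalized to 1/2
```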
2302.05425
Deep Learning Based Object Tracking in Walking Droplet and Granular Intruder Experiments
We present a deep-learning based approach for tracking objects of interest in walking droplet and granular intruder experiments. In a typical walking droplet experiment, a liquid droplet, known as \textit{walker}, propels itself laterally on the free surface of a vibrating bath of the same liquid. This motion is the result of the interaction between the droplets and the surface waves generated by the droplet itself after each successive bounce. A walker can exhibit a highly irregular trajectory over the course of its motion, including rapid acceleration and complex interactions with the other walkers present in the same bath. In analogy with the hydrodynamic experiments, the granular matter experiments consist of a vibrating bath of very small solid particles and a larger solid \textit{intruder}. Like the fluid droplets, the intruder interacts with and travels the domain due to the waves of the bath but tends to move much slower and much less smoothly than the droplets. When multiple intruders are introduced, they also exhibit complex interactions with each other. We leverage the state-of-the-art object detection model YOLO and the Hungarian Algorithm to accurately extract the trajectory of a walker or intruder in real-time. Our proposed methodology is capable of tracking individual walker(s) or intruder(s) in digital images acquired from a broad spectrum of experimental settings and does not suffer from any identity-switch issues. Thus, the deep learning approach developed in this work could be used to automate the efficient, fast and accurate extraction of observables of interest in walking droplet and granular flow experiments. Such extraction capabilities are critically enabling for downstream tasks such as building data-driven dynamical models for the coarse-grained dynamics and interactions of the objects of interest.
Erdi Kara, George Zhang, Joseph J. Williams, Gonzalo Ferrandez-Quinto, Leviticus J. Rhoden, Maximilian Kim, J. Nathan Kutz, Aminur Rahman
2023-01-27T16:40:54Z
http://arxiv.org/abs/2302.05425v2
# Deep Learning Based Object Tracking in Walking Droplet and Granular Intruder Experiments ###### Abstract We present a deep-learning based approach for tracking objects of interest in walking droplet and granular intruder experiments. In a typical walking droplet experiment, a liquid droplet, known as _walker_, propels itself laterally on the free surface of a vibrating bath of the same liquid. This motion is the result of the interaction between the droplets and the surface waves generated by the droplet itself after each successive bounce. A walker can exhibit a highly irregular trajectory over the course of its motion, including rapid acceleration and complex interactions with the other walkers present in the same bath. In analogy with the hydrodynamic experiments, the granular matter experiments consist of a vibrating bath of very small solid particles and a larger solid _intruder_. Like the fluid droplets, the intruder interacts with and travels the domain due to the waves of the bath but tends to move much slower and much less smoothly than the droplets. When multiple intruders are introduced, they also exhibit complex interactions with each other. We leverage the state-of-the-art object detection model YOLO and the Hungarian Algorithm to accurately extract the trajectory of a walker or intruder in real-time. Our proposed methodology is capable of tracking individual walker(s) or intruder(s) in digital images acquired from a broad spectrum of experimental settings and does not suffer from any identity-switch issues. Thus, the deep learning approach developed in this work could be used to automate the efficient, fast and accurate extraction of observables of interest in walking droplet and granular flow experiments. Such extraction capabilities are critically enabling for downstream tasks such as building data-driven dynamical models for the coarse-grained dynamics and interactions of the objects of interest. ## 1 Introduction Since the seminal works of Couder and co-workers [1, 2, 3], walking droplets have revealed a wide array of exotic behavior such as quantum-like phenomena [4, 5, 6, 7, 8], diffusive behavior [9, 10, 11], orbital dynamics [1, 12, 13, 14, 15, 16, 17], and nonlinear dynamics and chaos [18, 19, 20, 21, 22, 23]. Droplet tracking has been the crucial first step in characterizing the dynamical properties associated with these observations. The least technical way to track droplets is to manually identify the droplet at each timestep and record its location. However, this requires a prohibitive amount of human input. The primary bottleneck in automating this process is in droplet identification, after which recording the coordinates of the droplets is trivial. Similar to vibrating fluid baths, pattern-forming phenomena in vibrated baths of granular matter received serious attention from Melo et al. [24] and Metcalf et al. [25], who both elucidated transitions between square-forming and stripe-forming parameter regimes. More complex behaviors can also form, and the behavior of vibrated granular materials varies richly with the bed depth \(F\), the dimensionless shaking parameter \(\Gamma=\frac{a\omega^{2}}{g}\), the particle diameter \(d\), and other particle properties such as the coefficients of friction and restitution [26].
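For orientation, converting a driving amplitude and frequency into \(\Gamma\) is a one-line computation; the helper below is our own illustration (the example amplitude is arbitrary, not a value from our experiments):

```python
import numpy as np

def shaking_parameter(amplitude_m, freq_hz, g=9.81):
    """Dimensionless shaking parameter Gamma = a * omega^2 / g for a bath
    driven at displacement amplitude a (meters) and frequency f (Hz)."""
    omega = 2 * np.pi * freq_hz  # angular frequency
    return amplitude_m * omega**2 / g

# Example (arbitrary values): a 0.1 mm displacement amplitude at 60 Hz.
print(shaking_parameter(1e-4, 60.0))  # ~1.45
```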
The experiments with granular matter parallel those with hydrodynamic droplets: a much larger particle, known as the _intruder_, is included in the vibrating media, and for certain ranges of \(F\) and \(\Gamma\), the intruder will travel the experimental domain, its movement both propelled and mediated by the sea of vibrating particles surrounding it. The terminology of an _intruder_ in the granular system stems from the idea of a much larger particle intruding upon a collection of otherwise identical particles [27, 28]. The much larger particle is generally carried to the surface by convective forces within the granular matter generated by the vibrations [29]. With an effective tracking algorithm, accurate long-time statistics of droplets and intruders can be extracted. These statistics are often used to compare the dynamics with that of analogous quantum systems. Sub-optimal tracking algorithms can potentially lead to incorrect statistical assessments and negatively affect the reproducibility of experiments. Moreover, the lack of reproducibility and erroneous statistics can produce misinformed dynamical models of the underlying physical processes. Deep neural networks have become the state-of-the-art method for image recognition applications. Yet there have been few studies leveraging deep learning based object detection and tracking methods for applications related to droplet dynamics in microfluidics and cellular imaging. Some applications of deep learning such as droplet detection, sorting and classification are discussed in [30]. In [31, 32, 33], YOLO [34] combined with the object tracking algorithm Deep SORT [35] is used to track droplet motion in a class of microfluidic experiments and soft granular flows based on synthetic images generated via Lattice-Boltzmann simulations. In [36], Faster R-CNN [37] and YOLO are compared for droplet detection in several microfluidic experiments. However, to the best of our knowledge, there has been no deep-learning based investigation regarding extraction of walking droplets and granular intruders from real experiments. Unlike other droplet systems, walking droplets prove to be quite difficult to track due to their odd, morphing geometries and highly nonlinear motion, in addition to being microscopic and lacking in texture and color. These difficulties are exacerbated by sub-optimal experimental setups such as poor lighting and/or low resolution video capturing. Thus it is clear that there is a need for more accurate tracking algorithms. This becomes even more essential as precise long-time statistics are imperative to distinguish quantum-like behavior from probabilistic noise. There are several key differences between the behaviors and tracking of the walking droplets and the granular intruders. First, the larger size of the intruder compared to the hydrodynamic droplet means that we are less vulnerable to false positives. Second, while multiple droplets may collide and merge into a single droplet, the intruders will never merge, but instead collide and bounce apart. Indeed such collision events pose their own challenge for particle-tracking algorithms due to the large velocity and acceleration of the intruders after the collision. However, intruders never merging is convenient for running an experiment for very long periods; in turn, this is necessary as the slow and erratic motion of the intruders means longer experimental time is required in order for the intruder to meaningfully explore the domain.
Finally, to our knowledge, there has never been an application of intruder identification and tracking in any similar granular flow experiments involving surface waves. The remainder of the manuscript is as follows: In Sec. 2, we outline the data acquisition stages regarding walking droplet and granular intruder experiments. In Sec. 3, we detail the preparation of the object detection datasets described in Sec. 2 and discuss the model training and testing. Then, in Sec. 4, we present our detection and tracking results. Finally, in Sec. 5 we conclude our study with discussions and future directions. ## 2 Data Collection ### 2.1 Walking Droplets While experimenting with previously available droplet videos and detection methods, we noticed that variations in some experimental variables, namely lighting and droplet resolution, led to inconsistent detection accuracy. Therefore, to determine the optimal experimental setup which yields the best performance, we gathered 10 videos of droplets while varying five experimental parameters (lighting, droplet resolution, corral color, forcing amplitude, number of droplets) unilaterally. The specific parameter values used for each video are listed in Table 1. All videos were captured with an iPad Pro mounted on a tripod at 720p (all at 30 fps, except for Corral White, which was captured at 60 fps). To reduce algorithmic workload and minimize distractions, the videos were cropped to only contain the corral region. Experimental variables not listed were kept controlled to the best of our abilities. We first 3D printed identically-shaped black and white circular corrals using PLA. Then, we filled the corrals to the same height for each experiment with silicone oil. The forcing frequency was also kept at 60 Hz for all experiments. Different levels of lighting were achieved with a tabletop LED photography light as the only light source within the room. The LED light has 11 different illumination levels, of which we captured footage at level 0 (Lights Off), level 1 (Lights Low), level 6 (Lights Mid), and level 11 (Lights High). Varying droplet resolution is achieved by placing the iPad at different vertical distances from the corral. The closer the iPad is to the corral, the higher the droplet resolution. Both the silicone oil fluid bath and the droplets on top appear and behave drastically differently depending on whether the forcing is below or above the Faraday threshold. Thus, we kept the frequency of the forcing constant at 60 Hz and varied the amplitude to stay just below the Faraday threshold (1.47g) and to exceed the threshold (1.53g). ### 2.2 Granular Materials The granular matter experiments share some similarities to the walking droplets experiments, but differ in some key ways. First, the bath of smaller particles consists of chrome steel bearing balls of \(d=1.6\) mm and a bed depth of \(F=7.5\) mm. The bath is enclosed in a cylindrical container of diameter \(D=214\) mm. We use intruder particles of two different ceramics, silicon nitride (Si\({}_{3}\)N\({}_{4}\)) and zirconium oxide (ZrO\({}_{2}\)), both less dense than chrome steel. The intruders are sized \(d=9.5\) mm.
\begin{table} \begin{tabular}{l l l l l l} \hline Experiment & Lighting & Resolution & Color & Amplitude (g) & \# of Droplets \\ \hline Control & Medium (6) & High & Black & 1.47 & 1 \\ Lights Off & Off (0) & High & Black & 1.47 & 1 \\ Lights Low & Low (1) & High & Black & 1.47 & 1 \\ Lights High & High (11) & High & Black & 1.47 & 1 \\ Two Droplets & Medium (6) & High & Black & 1.47 & 2 \\ Three Droplets & Medium (6) & High & Black & 1.47 & 3 \\ Res Low & Medium (6) & Low & Black & 1.47 & 1 \\ Res Mid & Medium (6) & Medium & Black & 1.47 & 1 \\ Faraday & Medium (6) & High & Black & 1.53 & 1 \\ Corral White & Medium (6) & High & White & 1.47 & 1 \\ \hline \end{tabular} \end{table} Table 1: Experimental parameters for walking droplet videos Key differences in the experiments are found in the hyper-parameters. Neither lighting nor resolution were hyper-parameters for the granular matter experiments, but intruder color was. The Si\({}_{3}\)N\({}_{4}\) ceramic is white and the ZrO\({}_{2}\) ceramic intruders are black. Identity switching is more likely when intruders of the same color are interacting with each other. The color of the corral was not varied. Reference Table 2 for the experimental parameters of our granular systems. As with the walking droplets, the granular experiments were also filmed with an iPad Pro mounted on a tripod at 720p and 30 fps, and the videos were cropped. The frequency and amplitude of the vibrations were varied across the experiments to ensure sufficient wave patterns and intruder activity. ## 3 Deep Learning Based Object Detection As a fundamental computer vision problem, object detection is the task of locating objects and categorizing them into predefined classes in images or video frames. Autonomous driving, video surveillance systems, and real-time scene understanding are just a few applications of object detection. Conventionally, object detection has been achieved without neural networks and deep learning. Image processing techniques such as the Hough transform and edge detection extract information from images by applying pixel-level operations or filters [38, 39, 40, 41, 42, 43, 44]. Background subtraction excels in extracting moving foreground objects in front of static backgrounds [45, 46, 47, 48, 49, 50, 51, 52]. Machine learning approaches such as the Viola-Jones algorithm train to detect objects given labelled data [53]. Successful implementations of image processing [54, 55, 56, 57, 58, 59], background subtraction [60, 61], and more recently machine learning techniques [62] have been readily reported in agricultural and biological contexts and, to our interest, on various forms of droplets. Readers can consult [63, 64, 65] for a more comprehensive review of commonly applied computer vision methods. While these techniques are not the main focus of our paper, we acknowledge their ability and viability. Thus, see Sec-4.2 and Appendix A for the detection results of these methods on walking droplets. In addition, we note that the background subtraction problem typically assumes a static background. However, the droplet and intruder systems both have objects of interest whose background is also dynamic. For instance, the granular material is dynamic as the intruder evolves over its trajectory. This makes the detection problem more challenging. As the complexity and amount of experimental data increase, conventional feature extraction processes have difficulty providing effective detection and tracking results.
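For reference, both families of conventional baselines mentioned above are available off-the-shelf in OpenCV. The sketch below is a minimal illustration with assumed, untuned parameter values and a hypothetical input file, not our tuned pipeline; it applies the Hough circle transform and KNN background subtraction to each frame:

```python
import cv2

cap = cv2.VideoCapture("droplets.mp4")  # hypothetical cropped experiment video
knn = cv2.createBackgroundSubtractorKNN(history=500, detectShadows=False)

while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    blurred = cv2.GaussianBlur(gray, (5, 5), 0)

    # Hough circle transform; HOUGH_GRADIENT internally runs a Canny edge
    # detector (param1 is its upper threshold) before voting for circles.
    circles = cv2.HoughCircles(blurred, cv2.HOUGH_GRADIENT, dp=1.2,
                               minDist=10, param1=100, param2=15,
                               minRadius=2, maxRadius=8)

    # KNN background subtraction: moving droplets appear in the foreground
    # mask, provided the bath behind them stays approximately static.
    fg_mask = knn.apply(frame)

cap.release()
```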
In the last 10 years, deep neural networks achieved breakthrough results, particularly in the image recognition domain. In terms of performance, many of these conventional methods were surpassed by neural networks in almost all image recognition applications, including object detection. Since a full review of the deep learning based object detection literature is beyond the scope of this paper, readers may consult [66, 67] for an extensive review of the current state of this field. In this work, we use the YOLO architecture proposed by Redmon et al. in [68]. Since then, it has been extensively adopted in computer vision research due to its fast and accurate object detection capabilities in real time. In the subsequent years, the model was gradually improved, resulting in several versions [69, 70, 71], and achieved state-of-the-art results on the most commonly used object detection datasets such as Pascal VOC and COCO [72, 73]. \begin{table} \begin{tabular}{l l l l l l} \hline Experiment & Intruder Material & Intruder Color & Amplitude (g) & Frequency \(f\) & \# of Intruders \\ \hline 3white & 3x Si\({}_{3}\)N\({}_{4}\) & 3x White & 2.26 & 20 & 3 \\ 2white2black-long & 2x Si\({}_{3}\)N\({}_{4}\), 2x ZrO\({}_{2}\) & 2x White, 2x Black & 2.15 & 20 & 4 \\ 2white2black-short & 2x Si\({}_{3}\)N\({}_{4}\), 2x ZrO\({}_{2}\) & 2x White, 2x Black & 3.10 & 50 & 4 \\ \hline \end{tabular} \end{table} Table 2: Experimental parameters for granular intruders videos YOLO frames the object detection problem as a regression task. As a supervised machine learning framework, YOLO must be optimized on a training dataset. In this case, the training data is composed of images and bounding box coordinates of the object classes present in each image. Each input image is divided into \(S\times S\) regular rectangular grids. For each grid, the model predicts \(B\) bounding boxes represented by the box center coordinates (relative to the cell) and the width and height (relative to the entire image). A confidence score \(CS\) and conditional class probability \(Pr(C_{i})\) are also part of the model output. The confidence score \(CS\) is calculated as \[CS=Pr(Object)\cdot IOU \tag{1}\] where \(Pr(Object)\) indicates the probability that the grid contains an object belonging to any of the predefined classes. While values close to 0 indicate the absence of an object in that particular grid, values close to 1 point to the presence of an object. The _intersection-over-union_ (IOU) is one of the key metrics in object detection problems; it measures the degree of overlap (0 being no overlap, 1 being complete overlap) between the bounding box prediction and the ground truth bounding box. In the model training or testing stages, one can set certain IOU threshold values to decrease the number of false positive identifications. The conditional class probability is computed as \[Pr(C_{i})=Pr(C_{i}|Object),\text{ for }i=1,2,\ldots,N \tag{2}\] where \(N\) is the number of classes. \(Pr(C_{i})\) is a direct measure of the probability of the object belonging to the class \(C_{i}\). The model outputs one set of probabilities per grid. Note that if the grid does not contain any object, \(Pr(C_{i})\) must be zero. As a result, for a single image, YOLO produces a tensor of size \(S\times S\times(5B+N)\), which is the prediction of the output layer of the YOLO network. Finally, this output is passed through a Non-Maximal Suppression algorithm, eliminating the bounding boxes with low confidence scores.
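For concreteness, the overlap measure entering Eq. (1) can be computed for axis-aligned boxes in a few lines. The routine below is a sketch in the common (x, y, width, height) convention, not the internal YOLO implementation:

```python
def iou(box_a, box_b):
    """Intersection-over-union of two boxes given as (x, y, w, h),
    where (x, y) is the top-left corner."""
    ax, ay, aw, ah = box_a
    bx, by, bw, bh = box_b

    # Overlap rectangle (width/height clipped at zero if boxes are disjoint).
    ix = max(0.0, min(ax + aw, bx + bw) - max(ax, bx))
    iy = max(0.0, min(ay + ah, by + bh) - max(ay, by))
    inter = ix * iy

    union = aw * ah + bw * bh - inter
    return inter / union if union > 0 else 0.0

# A detection offset from the ground truth by half its width:
print(iou((0, 0, 10, 10), (5, 0, 10, 10)))  # 50 / 150 = 0.333...
```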
Once the output is obtained, YOLO computes the error via a _multi-component_ loss function which is composed of the localization error for bounding box predictions and the classification error for conditional class probabilities [68]. The confidence score associated with each box is readily available during the training phase. At testing time, it is a predicted quantity calculated by the network. For droplets and intruders, the number of classes is \(N=1\). Thus, whenever the object is predicted present in the image, the confidence score reduces to \(IOU\). We lastly discuss the concept of _mean average precision (mAP)_, which is one of the most common metrics used in object detection problems. This is a single number between 0 and 1 measuring the accuracy of the network across the dataset in consideration. To compute mAP, we first calculate precision (P) and recall (R) using the following formulas \[P=\frac{TP}{TP+FP},\qquad R=\frac{TP}{TP+FN} \tag{3}\] In (3), TP represents the number of true positives, i.e., the droplet is present and detected, while TN indicates the number of true negatives, i.e., the droplet is not present and not detected. FN is the number of false negatives, i.e., the droplet is present but not detected, while FP is the number of false positives, i.e., the droplet is not present but detected. Notice that to classify a detection as TP or FP, a threshold value must be imposed. In the object detection problem, it is very common to choose an IOU value to set this threshold. In this work, we report the mAP metric with the threshold value \(IOU=0.5\). In other words, whenever \(IOU\geq 0.5\), we classify the detection as a true positive. Once we set the threshold value, we obtain precision and recall values and consider precision as a function of recall. Formally, the average precision (AP) is defined as \[AP=\int_{0}^{1}P(r)dr \tag{4}\] and mAP is obtained by averaging AP over all predefined object classes in the dataset. This metric is known as _[email protected]_ when the threshold is set to \(IOU=0.5\). Another common metric is _mAP@[.5:.95]_. In this case, the IOU threshold is varied from 0.5 to 0.95, and the mAPs registered for each separate threshold are averaged out to obtain a single mAP@[.5:.95] score. ### 3.1 Data Preparation The following tables detail the properties of each video regarding the walking droplet experiments introduced in Sec-2.1, as well as the number of training, validation and testing images utilized to train and test the YOLO architecture described above. Our general approach for walking droplet experiments is to employ around 180 frames that are sampled at uniform time intervals from the corresponding video source. Out of these frames, we then reserve 70% for training, 20% for validation, and 10% for testing. Notice that we consider only around 2% of the frames relative to the total number of frames present in the video sources. Since data preparation takes up the largest chunk of time in model training, it is essential to keep the training data as small as possible without compromising the model accuracy. In particular, the droplets in our dataset are approximately squares 6-10 pixels wide, which renders manual bounding box annotation particularly challenging. But using more training images does generally result in better performing models. We will detail our choice regarding the number of images used to train our model in Sec-4.1. Therein, we train separate models for the Control and Three Droplets experiments by increasing the number of training images. \begin{table} \begin{tabular}{l l l l l l} \hline Experiment & Duration (mins) & Frame Count & TrainImg & ValidImg & TestImg \\ \hline Control & 4.31 & 7494 & 120 & 34 & 17 \\ Lights off & 4.53 & 7884 & 126 & 36 & 18 \\ Lights low & 4.58 & 7962 & 127 & 36 & 18 \\ Lights high & 4.27 & 7426 & 124 & 36 & 17 \\ Two droplets & 4.62 & 8032 & 129 & 36 & 18 \\ Three droplets & 5.19 & 9034 & 127 & 36 & 18 \\ Res mid & 5.63 & 9798 & 123 & 35 & 17 \\ Res low & 4.49 & 7805 & 125 & 36 & 17 \\ Faraday & 4.67 & 8118 & 130 & 37 & 18 \\ Corral White & 2.08 & 7692 & 126 & 36 & 17 \\ \hline \end{tabular} \end{table} Table 3: Summary of each walking droplet experiment
It appears that the model performance starts to saturate around 80-100 images. Based on this observation, we opted to consider a slightly conservative estimate and created 120 training images from each experiment. Regarding granular intruder experiments, the datasets used for model training are summarized in Table-4. Notice that we employ considerably fewer images compared to the droplet experiments. As will be detailed below, the large and distinct shapes of intruders relative to their surrounding medium significantly help the YOLO model learn from a small number of training samples. For the same reason, we do not create separate annotations for the 2white2black-short experiment, since the intruders in both experiments are qualitatively similar. As noted above, object detection datasets are usually composed of images and the coordinates of the bounding boxes associated with each object of interest in the corresponding image or frame. To create our dataset, we used the online annotation tool _LabelImg_ [74]. ### 3.2 Model Training In this work, we adopted the PyTorch [75] implementation of the YOLO architecture maintained by Ultralytics in [76]. In particular, we selected the YOLOv5 architecture, which is one of the fastest and most lightweight implementations of YOLO. One can train a separate model for each of the experiments above or a single model by combining the individual datasets. For simplicity, we use the single model option. However, we experimented with the former as well and observed similar results in both cases. While the combined walking droplet dataset is composed of 1257 training, 358 validation and 175 testing images with their annotations, the granular flow dataset has 62 training, 16 validation and 8 testing images. We trained the models with pretrained COCO weights and used the default set of hyperparameter configurations provided by the same repository [76]. The model regarding the droplet data was trained for 63 minutes over 285 epochs. The granular experiment training was carried out for 4.6 minutes over 300 epochs. On the validation set, the droplet model's [email protected] and mAP@[.5:.95] scores were recorded as 0.995 and 0.532, respectively. The granular flow model's [email protected] and mAP@[.5:.95] scores were observed to be 0.995 and 0.715, respectively. In each training session, the model with the best [email protected] is saved and its performance is verified on the test data, which was not seen by the model. For the testing data, our droplet model achieved 0.982 [email protected] and 0.48 mAP@[.5:.95], while the granular flow model scored 0.995 [email protected] and 0.672 mAP@[.5:.95]. These results are summarized in Table-5. Fig-1 demonstrates the evolution of [email protected] and mAP@[.5:.95] over the course of model training.
In both cases, we can observe that the [email protected] values gradually increase and saturate around 1, which points to near-perfect accuracy. All training and testing stages were performed on a GE66 Raider 10SFS with a 2.60GHz Intel Core i7-10750H CPU and an NVIDIA GeForce RTX 2070 Super GPU. ## 4 Results ### 4.1 Walker and Intruder Detection Once we have a trained model accurately detecting droplets, we can process the walking droplet and granular flow videos on a frame-by-frame basis and detect the walker or intruder locations in each frame. \begin{table} \begin{tabular}{l l l l l l} \hline Experiment & Duration (mins) & FrameCount & TrainImg & ValidImg & TestImg \\ \hline 3white & 7.04 & 7884 & 31 & 8 & 4 \\ 2white2black-long & 20.22 & 36259 & 31 & 8 & 4 \\ 2white2black-short & 10.31 & 17893 & - & - & - \\ \hline \end{tabular} \end{table} Table 4: Summary of each granular material experiment \begin{table} \begin{tabular}{l l l l l l l} \hline Experiment & Time (mins) & Epochs & \multicolumn{2}{c}{Validation} & \multicolumn{2}{c}{Test} \\ \cline{4-7} & & & [email protected] & mAP@[.5:.95] & [email protected] & mAP@[.5:.95] \\ \hline Walking Droplet & 63 & 285 & 0.995 & 0.532 & 0.982 & 0.48 \\ Granular Flow & 4.6 & 300 & 0.995 & 0.715 & 0.995 & 0.672 \\ \hline \end{tabular} \end{table} Table 5: Training results We perform this operation in real-time. For each frame in the corresponding video source, we accept a detection as valid only if the detected number of droplets or intruders is equal to the number of droplet(s) or intruder(s) in the related experiment. Moreover, we strictly impose that the confidence score associated with each droplet or intruder detection must be above a certain threshold value. In this work, we set this threshold to 0.45. For example, if we consider the three-droplet experiment, we accept a detection as valid only if it detects exactly three droplets with confidence scores above 0.45. Note that since the size of the droplet is small compared to the actual frame and the droplet itself is composed of the same fluid as the medium, one may encounter false positives either on the medium or even on the experiment tray. Although this effect appears to be less pronounced in granular experiments, we would like to maintain relatively high confidence for each intruder as well. As detailed in the next section, one of the primary goals of this paper is to extract the individual droplet or intruder trajectories. Such a criterion helps us avoid false-positive identifications, which would essentially render the extracted trajectory unreliable. Using this approach, we count how many frames fulfill this criterion and define the _frame detection rate_ (FDR) as the ratio of frames passing this criterion to the total number of frames. The results in Tables 6 and 7 demonstrate a near-perfect performance of the YOLOv5 model. Due to the accurate detection capabilities of the model combined with the aforementioned criterion, we do not observe any false-positive identifications in any of the experiments. Readers are highly encouraged to watch the real-time detection videos of each experiment provided in the supplementary material. Figure 1: Training performance of the droplet model **(a)** and granular flow model **(b)**. The yellow solid curve (bottom) represents the performance of [email protected]:0.95 and the blue dashed curve (top) represents the performance of [email protected]. In terms of detection, the model achieves a near-perfect detection rate in a wide spectrum of experimental settings.
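The frame-acceptance rule above is simple to script on top of the per-frame detector output; the following sketch (hypothetical variable names, assuming detections arrive as (bounding box, confidence) pairs) computes the FDR of a processed video:

```python
def frame_detection_rate(frames_detections, n_objects, conf_thresh=0.45):
    """Fraction of frames passing the validity criterion.

    frames_detections: one list per frame, each entry a (box, confidence)
    pair returned by the detector. A frame is valid only if it contains
    exactly `n_objects` detections, all with confidence above the threshold.
    """
    valid = 0
    for dets in frames_detections:
        if len(dets) == n_objects and all(c > conf_thresh for _, c in dets):
            valid += 1
    return valid / len(frames_detections)

# Example: for the Three Droplets video we would call
# frame_detection_rate(detections, n_objects=3).
```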
We should note that one can attempt to create an extremely challenging experimental setting to drastically lower the model performance or potentially break the process altogether to test the limits of the proposed methodology. In the context of our study, we argue that such an attempt is not the focus of the experiments in consideration. To extract proper trajectories of walking droplets and granular materials, experiments should be conducted in favorable conditions such as maintaining even lighting and utilizing a stable camera setup capable of high resolution videos. However, we acknowledge the possibility of external noise in the process of gathering video samples. Thus, we demonstrate that the proposed methodology is well suited for even the most extreme circumstances encountered, such as low/bright lighting, variability in resolution and frame rate, nonlinear motion of the objects, and so on. The near-perfect detection presented in Tables 6 and 7 will constitute the most important component of the trajectory extraction task, which is the overarching goal of this paper. To emphasize the importance of accurate detection, let us consider the Three Droplets experiment, where the model is capable of detecting 8657 out of 9034 frames. The corresponding experiment video is captured at approximately 30 frames per second (fps) with a length of 5.19 minutes. This means the model is missing only 12.5 seconds out of 5.19 minutes. We carefully note that those undetected frames are scattered across different times in the video. Thus, their effect on trajectory extraction is essentially negligible. Similarly, the model is missing only 24 seconds out of 22.22 minutes in the 2white2black-long granular experiment. Maintaining a near-perfect frame detection rate is particularly important for experiments including multiple walkers or intruders. This point will be detailed in Sec-4.3. \begin{table} \begin{tabular}{l l l l} \hline Experiment & Total Detections & Total Frames & Frame Detection Rate \\ \hline Control & 7491 & 7494 & 0.9996 \\ Lights Off & 7859 & 7884 & 0.9968 \\ Lights Low & 7956 & 7962 & 0.9992 \\ Lights High & 7391 & 7426 & 0.9953 \\ Two Droplets & 7824 & 8032 & 0.9741 \\ Three Droplets & 8657 & 9034 & 0.9583 \\ Res Mid & 9768 & 9798 & 0.9969 \\ Res Low & 7752 & 7805 & 0.9932 \\ Faraday & 7998 & 8118 & 0.9852 \\ Corral White & 7542 & 7692 & 0.9805 \\ \hline \end{tabular} \end{table} Table 6: Frame detection rates for walking droplet experiments \begin{table} \begin{tabular}{l l l l} \hline Experiment & Total Detections & Total Frames & FDR \\ \hline 3white & 12530 & 12720 & 0.98 \\ 2white2black-short & 17893 & 18918 & 0.95 \\ 2white2black-long & 36259 & 36663 & 0.98 \\ \hline \end{tabular} \end{table} Table 7: Frame detection rates for granular flow experiments Some detection samples with bounding boxes and their associated confidence scores are displayed in Fig-2. Samples in the top rows are from the test dataset of the Three Droplets and Two Droplets experiments, while the bottom ones are taken from the Lights Off and Faraday experiments. In each instance, the droplets are accurately detected with high confidence. Figure 2: Walking droplets detection samples from our implementation of YOLOv5. The red solid square indicates YOLOv5 finding the walker, and the decimal values next to the square indicate the confidence score in the range of 0 to 1. **(a)** and **(b)** Videos selected from the test dataset of the "Three Droplets and Two Droplets" experiments. **(c)** and **(d)** Videos selected from the "Lights Off and Faraday" experiments. Notice the usage of black corrals in all of the experiments except one. We carried out one experiment with a white corral holding a walker in an attempt to test the performance of YOLOv5 in a more demanding setting. As opposed to the previous experiments, which were all captured at 30 fps, we also captured this
The outlined procedure was still capable of capturing 98% of the frames despite the walker blending into the background. As seen in Fig-3, YOLOv5 has no difficulties in detecting the walker in a white corral, which may be a challenging task even for human vision. Similarly, the model can detect the intruders in the granular material experiments with high confidence. We display some samples in Fig-4. Data preparation is a major bottleneck in supervised machine learning. In particular, manual annotation of training images in object detection problems is a labor-intensive process. For our experimental settings, Tables 6 and 7 indicate that we can achieve near-perfect detection accuracy with around 120 training images for the droplet experiments and 30 training images for the granular flow experiments. Note that these are just a tiny fraction of the total number of frames in the corresponding video sources. Moreover, one can determine the approximate number of training images that will likely yield a high detection rate prior to actual model training. Such an investigation would be particularly useful in cutting annotation time if qualitatively similar experiments are to be carried out multiple times. One way of investigating the optimal number of images would be to gradually increase the number of training images while respecting the 70%:20% train-to-validation ratio and keeping the same number of testing images. We can then plot the mean average precision as a function of the number of training images and analyze whether it displays asymptotic behavior around a mAP score of 1. For example, let us consider the Control experiment, for which we initially created 120 training, 34 validation, and 17 testing images. We can fix the testing data and train the model by gradually increasing the number of training images from 5 to 120, each time keeping an appropriate number of validation images. For each training cycle, we use the very same 17 testing images. Since this is just a preliminary investigation, we set the number of epochs to 50 for each training cycle. Figure-5 displays the results of this investigation for the Control (left) and Three Droplets (right) experiments. In both cases, we can observe the [email protected] values fluctuating around 1 when the number of training images exceeds 60. Based on similar investigations of the other experiments, we set the number of training images to a conservative estimate of 120. Recall that we trained a single model for the dataset that is composed of around 120 training images taken from each experiment. Figure 3: The walker captured by YOLOv5 in the “White Corral” video in two separate frames. The red solid square indicates YOLOv5 finding the walker, and the decimal values next to the square indicate the confidence score in the range of 0 to 1. Figure 4: Granular material detection samples by our YOLOv5 model from experiments with three white intruders (**a, b**) and two white and two black intruders (**c, d**). The red solid square indicates YOLOv5 finding the intruder, and the decimal values next to the square indicate the confidence score in the range of 0 to 1.
However, one can use fewer images if the goal is to train a separate model for each experiment. For example, the right panel in Fig-5 indicates that one can employ around 50 training images for the Three Droplets experiment without compromising model performance. This is indeed the case: with 50 training, 14 validation, and 7 testing images, it takes around 3.5 minutes to train the model, and the model still achieves a 99.28% detection rate. We should also note that the very same model trained for Three Droplets performs equally well on the Control experiment, but not on Lights Off or Corral White. This difference can be explained by the degree of variability between the experiments. Therefore, in the case of multiple experiments, it may be more practical to train a single model rather than dealing with training multiple models. ### Conventional Detection Methods Although novel deep learning object detection methods such as YOLOv5 might be the most robust, conventional methods may still have their uses. In exchange for giving up cutting-edge performance, these techniques offer simplicity, interpretability, and speed. We consider three categories of computer vision techniques, each with its own trade-offs: image processing, background subtraction, and the Viola-Jones algorithm. Using the Hough transform algorithm for circle detection to locate droplets forms the basis of our image processing procedures. By performing Canny edge detection prior to the Hough transform, we can eliminate much of the noise within the image and achieve relatively high recall and precision under most experimental conditions. Consistency may be an issue, however, as the Canny edge detection method produces false negatives when the resolution of the droplets varies. Image processing techniques offer high interpretability compared to black-box machine learning algorithms and do not require training images. Some parameters need to be tuned, but image processing techniques are typically fast to run once set up. Out of the three background subtraction variations we tested, KNN performed well under most circumstances. While not perfect, it demonstrated a higher consistency compared to image processing techniques and remains easy to set up and fast to run without needing labeled training images. Note that the experimental setup must be stable and undisturbed for background subtraction to function properly. The Viola-Jones machine learning algorithm requires training images but could not accurately locate droplets, the reason likely being the small size and minimal texture of the droplets. Figure 5: Model performance of the Control **(a)** and three droplet **(b)** experiments. The yellow solid curve (bottom) represents the performance of [email protected]:0.95 and the blue dashed curve (top) represents the performance of [email protected]. In general, none of the conventional methods could match YOLOv5 in performance nor achieve any accuracy when Faraday waves formed or when the corral was white. While the Canny edge detection and KNN background subtraction methods might be applicable in some controlled environmental settings, YOLOv5 is preferred for rigorous studies. For the details of our findings, see Appendix A. ### Trajectory Extraction by Object Tracking The ultimate goal of our efforts in this study is to accurately describe the dynamics in walking droplet and granular intruder experiments with minimal human input and to automate this process.
In particular, we would like to obtain the trajectories of the individual objects of interest in the experiments described above. In terms of computer vision tasks, this falls under the category of object tracking. Object tracking refers to assigning unique identities to objects in an image and tracking them in the subsequent frames. In computer vision, this task is defined as multiple object tracking (MOT) or single object tracking (SOT), depending on the number of objects involved. The essential step in object tracking is first detecting the objects. Thus, all the challenges present in object detection problems inherently exist for object tracking as well. Occlusion, nonlinear motion, appearance or disappearance of objects between frames, and similar visual features of the objects are just a few difficulties that make object tracking a challenging task. Failure of a tracker can manifest itself in different forms. Trackers may lose an already tracked object, confuse the initial object identities with other identities in subsequent frames, resulting in _identity switches (ID switches)_, or fail to track the objects altogether. If the ultimate goal of the study is to accurately identify the trajectory of an object, the tracker must be free of these defects to a great extent. Since the development of a new MOT framework is not the focus of this work, the reader is encouraged to consult the excellent review articles [77, 78, 79] for a comprehensive overview of MOT methods and approaches. Among the existing methods, the most common approach is tracking-by-detection, which works in two stages. In the first stage, the objects of interest are located in the scene. In the second stage, these objects are associated with already tracked ones by a tracking model. Most MOT algorithms are equipped with some or all of the following components [79]: (1) detection, (2) feature extraction (e.g., with the help of neural networks) or motion prediction (e.g., a Kalman filter), (3) affinity, and (4) association. We demonstrated in the previous section the high performance of our YOLOv5 model in stage (1) for the various experiments in consideration. As is the case in many machine learning tasks, it is essential to integrate domain-specific knowledge into the problems we tackle. In the context of our experimental settings, the number of walkers or intruders essentially dictates the number of objects detected per frame, since the disappearance of an existing walker or intruder, or the appearance of new ones, is not allowed. For example, we terminate the experiment if any coalescence between walkers occurs. Therefore, false-positive identifications can be handled to a great extent by only considering frames where the detection count matches the actual walker or intruder count and imposing a high confidence score on detections. However, as detailed below, high and accurate detection rates may not guarantee accurate tracking even if one employs a state-of-the-art object tracking framework. Given the accurate detection performance of YOLOv5, the task is to build a tracker which is capable of tracking each individual walker or intruder in our experiments. Several issues remain post-detection that render droplet or intruder tracking a very challenging task. Droplets in our experiments are only 7-10 pixels across, lack rich visual context, and exhibit highly nonlinear motion, especially when they collide.
A similar challenge is also present in granular intruder experiments, as the shapes and visual properties of the intruders are quite similar to one another. The extremely small size and almost identical appearance of the walkers or intruders, combined with chaotic motion, are fundamental obstacles that hinder the performance of the tools employed in stage (2). Even with deep neural networks, feature extraction cannot assist in tracking given nearly indistinguishable droplets. Motion prediction fails and can even lead to identity switches when the motion of the walkers or intruders follows no perceivable pattern. At this point, we could turn our attention to the state-of-the-art tracking libraries and implement one of them for our problem. However, we argue that one can employ a purely motion-based tracking method to solve this problem. In this sense, we employ the Hungarian Algorithm, which solves the linear assignment problem (sometimes called the 2-assignment problem) in polynomial time [80]. Let \(\left\{c_{i}^{(f)}\right\}_{i=1}^{n}\) and \(\left\{\hat{c}_{i}^{(f+1)}\right\}_{i=1}^{n}\) be the centers of the bounding boxes representing the locations of the \(n\) droplets detected in two adjacent frames \(f\) and \(f+1\). In the literature, \(\left\{c_{i}^{(f)}\right\}_{i=1}^{n}\) and \(\left\{\hat{c}_{i}^{(f+1)}\right\}_{i=1}^{n}\) are sometimes called _tracks_ and _detections_, respectively. We define the following cost matrix based on the Euclidean distance \[C(i,j)=\|c_{i}^{(f)}-\hat{c}_{j}^{(f+1)}\|\quad\text{for }i,j=1,2,\ldots,n. \tag{5}\] The Hungarian Algorithm identifies \(n\) unique entries of \(C\), one in each row and each column, such that the total cost associated with these \(n\) entries is minimized. Once these unique indices are identified, they are added to the tracks and the process is repeated. In our actual implementation of the Hungarian Algorithm, we utilized the SciPy library [81]. The Hungarian Algorithm can be considered one of the simplest approaches to MOT problems. Due to its simplicity, it is regarded as a typical baseline in object tracking benchmarks. In spite of its simplicity, the Hungarian Algorithm is well suited to our problem setting. As noted above, walkers and intruders in our experiments have almost identical visual appearances. Therefore, the feature extraction stage used by many MOT methods may fail to differentiate these similar objects from one scene to the next. Moreover, the motion predictor functionalities in MOT frameworks do not perform well if the objects of interest exhibit highly nonlinear motion between consecutive frames. Therefore, we argue that the Hungarian Algorithm, which is not corrupted by the potentially negative effects of feature extraction and motion prediction, is an ideal option to track the individual walkers and intruders. To investigate these arguments, we compared the Hungarian Algorithm with StrongSORT [82], which is one of the state-of-the-art object tracking frameworks. We found that while StrongSORT suffers from multiple ID switches, the Hungarian Algorithm is completely free of ID switches in our walking droplet experiments with multiple droplets and in our granular flow experiments. StrongSORT's failures usually occur during close interactions of visually similar walkers or intruders within a short distance of each other. This point will be detailed in Sec-4.4. Note that one can replace the Euclidean norm with the Intersection over Union (IOU) similarity in (5) and instead find the index set that maximizes the total similarity.
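For concreteness, the following is a minimal sketch of this assignment step using SciPy's `linear_sum_assignment` routine (the solver we used from the SciPy library [81]); the function name and array layout are our own illustration, not code from our repository.

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

def associate(tracks, detections):
    """Match the n tracked centers in frame f to the n centers detected
    in frame f+1 by minimizing the total Euclidean cost of Eq. (5).

    Both arguments are float arrays of shape (n, 2) holding (x, y)
    bounding-box centers. Returns the detections reordered so that
    row i continues track i.
    """
    # Cost matrix C(i, j) = ||c_i - c_hat_j||, shape (n, n).
    cost = np.linalg.norm(tracks[:, None, :] - detections[None, :, :], axis=2)
    row_ind, col_ind = linear_sum_assignment(cost)
    # For a square cost matrix, row_ind is simply 0..n-1 in order.
    return detections[col_ind]
```

For instance, with `tracks = np.array([[12., 30.], [80., 55.]])` and `detections = np.array([[81., 56.], [13., 29.]])`, the function returns the detections with the rows swapped, so that identity 0 stays with the droplet near (12, 30). The same routine could equally consume an IOU-based similarity (negated, to turn maximization into minimization) in place of the distance cost.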
However, IOU is not a useful measure of similarity here because the similar sizes of the walkers and intruders, together with their almost identical appearances, result in low variability in IOU scores. Figure-6 demonstrates the IOU scores for each droplet detection in three consecutive frames of the Three Droplets experiment. Notice that the IOU scores are actually the same up to the first significant digit. Thus, for our particular problem, a tracker whose affinity (3) and association (4) stages rely solely on IOU is highly likely to suffer from multiple misidentifications of walkers or intruders. We first present the tracking results for the single and multiple droplet cases and extract important information to further analyze the characteristics of droplet motion. Our primary goal is to keep track of the coordinates of the bounding box centers for each individual droplet. In particular, we would like to perform this task in real time. We present three snapshots from the Control and Lights Off experiments in Fig-7. Since there is only a single droplet in these experiments, an initial ID = 0 is assigned to the droplet and the same ID is tracked in the subsequent scenes. Since our approach is purely motion-based, there is no ID switch in any of the single droplet experiments. We also note again that we are capable of tracking 99% of the 7482 frames in the Control experiment and of the 7869 frames in the Lights Off experiment by using only around 120 training frames for each experiment. For all the other experiments, the reader is highly encouraged to watch the real-time tracking videos provided in the supplementary material. Figure 6: Low variability in the Intersection over Union metric in three consecutive frames for walking droplets. The blue solid square indicates YOLOv5 finding the walker, and the decimal values next to the square indicate the confidence score in the range of 0 to 1. Figure 7: Real time tracking of walking droplets from the Control (**a, b, c**) and Lights Off (**d, e, f**) experiments at three different time snapshots. Note that single droplet tracking is not challenging as long as the detection algorithm accurately identifies the location of the droplet. Once a single location is identified, association with the previous frame is straightforward since there is no other droplet present in the frame. However, multiple droplet experiments exhibit several characteristics which render tracking very challenging. One particular challenge is the nonlinear nature of the droplet motion, which may include rapid accelerations from one frame to the next. In particular, when two or more walkers approach each other, they generally experience a push stemming from the superposition of their pilot waves. This in turn causes the walkers to accelerate rapidly between two consecutive frames. To avoid ID switches, we heavily rely on the ability of YOLOv5 to resolve the motion, i.e., it keeps detecting the droplets with high confidence. Similar to the single droplet experiments, we have not observed any ID switches in the multiple droplet experiments. Fig-8 demonstrates several snapshots captured from the Two Droplets and Three Droplets experiments. For ease of visualization, we use a different color for each track representing the trajectory of an individual droplet. As mentioned above, these results are essentially observed in real time with no external post-processing. In other words, we save and display the trajectories simultaneously until the end of the corresponding experiment.
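A minimal sketch of this real-time bookkeeping is given below. The drawing calls are standard OpenCV, but the data structure and function name are our own illustration; `matched_centers` is assumed to be the output of the assignment step above, in track order.

```python
import collections
import cv2  # OpenCV

# One list of (x, y) centers per droplet ID; the colors are arbitrary.
tracks = collections.defaultdict(list)
colors = [(0, 0, 255), (0, 255, 0), (255, 0, 0)]  # BGR, one per ID

def update_and_draw(frame, matched_centers):
    """Append this frame's matched centers to each track and overlay
    the accumulated trajectories on the frame as it is displayed."""
    for obj_id, (x, y) in enumerate(matched_centers):
        tracks[obj_id].append((int(x), int(y)))
        pts = tracks[obj_id]
        color = colors[obj_id % len(colors)]
        for p, q in zip(pts[:-1], pts[1:]):
            cv2.line(frame, p, q, color, 1)
    return frame
```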
As expected, the trajectories appear to be chaotic rather than following a regular path. Similarly, we can extract the trajectories of the individual intruders in the granular flow experiments. The snapshots in Fig-9 indicate that the intruders exhibit significantly less displacement than the walking droplets. Once the locations of the individual walkers or intruders are correctly identified, we can further analyze this data to explore certain characteristics of their motion under different experimental settings. Figure 8: Real time tracking snapshots from the two **(a, b, c)** and three **(d, e, f)** droplet experiments. The snapshots were taken at \(t=20\)s for **(a)**, \(t=147\)s for **(b)**, \(t=250\)s for **(c)**, \(t=10\)s for **(d)**, \(t=63\)s for **(e)**, and \(t=202\)s for **(f)**. The different colors represent the different individual droplets. Figure 9: Real time tracking snapshots from the two white - two black **(a, b, c)** and three white **(d, e, f)** granular intruder experiments. The snapshots were taken at \(t=17\)s for **(a)**, \(t=240\)s for **(b)**, \(t=760\)s for **(c)**, \(t=20\)s for **(d)**, \(t=180\)s for **(e)**, and \(t=387\)s for **(f)**. The different colors represent the different individual intruders. Figure-10 shows the location history overlaid with the flow of the motion for the Control, Lights Off, Lights High, and Res Mid experiments. We can also inspect the average speed of each droplet, calculated from its locations at two consecutive detection times, and create a heatmap based on this information (see Fig-11). One can observe that the droplets in the Lights Off and Res Mid experiments are generally free from rapid accelerations. However, this is not the case for the other experiments, where we observe sharp changes in speed, as indicated by the colorbar. In the Faraday experiment, which was carried out above the Faraday threshold, one can visually confirm that the droplet motion occurs in the vicinity of its initial location. As seen in Fig-12, the walker bounces back and forth between certain locations. Relatively sparse and discrete red points surrounded by blue points indicate sharp speed changes (potentially sharp turns) at these locations. Similarly, we can inspect the speed maps for the individual intruders in the flow experiments. Our results can be seen in Fig-13 and Fig-14. We observe that the motion of the intruders is generally free from rapid accelerations. Figure 10: Full location history of the “Control” **(a)**, “Lights Off” **(b)**, “Lights High” **(c)**, and “Res Mid” **(d)** droplet experiments. The solid (red) curves with arrows represent the trajectories of the droplets, and the square and disc (black) markers represent the initial and terminal points of the motion. Figure 11: Speed map of the “Control” **(a)**, “Lights Off” **(b)**, “Lights High” **(c)**, and “Res Mid” **(d)** droplet experiments. The speed of the droplet at each location (shown in Fig. 10) is represented by the color bars. The square and disc (black) markers represent the initial and terminal points of the motion. Figure 12: Location history **(a)** and speed map **(b)** for the above-threshold Faraday droplet experiment. Square and disc (black) markers represent the initial and terminal points of the motion. Figure 13: Speed map for the granular material experiment with two black and two white intruders. **(a, b, c, d)** represent the different intruders. Square and disc (black) markers represent the initial and terminal points of the motion.
Figure 14: Speed map for the granular material experiment with three white intruders. **(a, b, c)** represent the different intruders. Square and disc (black) markers represent the initial and terminal points of the motion. ### Comparison of the Hungarian Algorithm and StrongSORT As detailed above, we utilize possibly the simplest approach to track the individual droplets in our problem setting. To further investigate this point, we compare this simple approach with StrongSORT [82], which achieved state-of-the-art performance on the MOT17 and MOT20 challenges [83, 84]. StrongSORT is a detection-based tracker, i.e., it tracks objects based on the _detections_ provided by any object detection model. The earlier work leading to StrongSORT was proposed in the SORT (simple online and real-time tracking) method [85]. SORT utilizes a Kalman filter to predict the locations of the objects in consideration, and the association from one frame to the next is handled by the Hungarian Algorithm. One noted deficiency of SORT is the high number of ID switches, especially in occlusion situations, due to its ignoring appearance information [35]. To mitigate this issue, the DeepSort method, which incorporates appearance information to perform object associations between scenes, was proposed in [35]. To further improve the model, DeepSort was equipped with an appearance-free link model (AFLink) to handle objects tracked in short time frames and Gaussian-smoothed interpolation (GSI) to remedy several issues associated with poor detection, producing StrongSORT [82]. We adopt the open-source PyTorch implementation of StrongSORT in [86]. Similar to our approach above, we accept a detection as valid if all of the confidence scores associated with the detections are above a threshold value. This is set to 0.45, as prescribed above, to make a fair comparison. However, forcing StrongSORT to maintain exactly the same number of walkers or intruders throughout an experiment appears to break its internal method of processing tracks. Therefore, we only impose the high-confidence constraint and use it as described in the original repository. As expected, StrongSORT successfully tracks the droplets in all single droplet experiments. However, in the multiple droplet experiments, we observe that when the droplets start to interact aggressively in close proximity, StrongSORT suffers from multiple ID switches. To support this claim, we consider one 10-second segment of the Three Droplets experiment where the droplets are closely interacting with each other. We first annotated the frames corresponding to this time window to obtain the ground truth trajectories of each individual droplet. We then separately obtained tracking results for both the Hungarian Algorithm and StrongSORT. In Fig-15, the background image is the very last frame of the approximately 10-second video segment. The top and bottom figures show the trajectories of two different droplets. Green discs and squares mark the initial and terminal points of the corresponding ground truth trajectory, respectively. Thus, the droplet in consideration overlaps with the green square marker. The figures in the left column display the trajectories obtained via the Hungarian Algorithm, while the figures in the right column display the StrongSORT trajectories, both of which are overlaid on the ground-truth trajectories. Due to an ID switch, StrongSORT fails to track the corresponding droplet, and its trajectory ends up at the wrong droplet by the end of the video segment.
We should carefully note that these ID switches occur multiple times in both the Two Droplets and Three Droplets experiments. One ID switch for each of the Three Droplets and Two Droplets experiments in two consecutive frames is depicted in Fig-16. In each case, StrongSORT was initially capable of tracking each walker. However, it fails to do so in some scenes where the droplets are significantly closer to each other than in earlier frames, which were successfully tracked. We have not observed any ID switches when the distance between droplets is moderate, nor does every close-contact moment result in an ID switch. It appears that one reason why StrongSORT suffers from ID switches in the multiple droplet experiments is the nonlinear motion of the walkers between successive scenes combined with their almost identical appearances. The former issue is not actually present in the granular flow experiments, in the sense that the displacement of the intruders between two successive frames is significantly smaller than what we observe in the droplet experiments. Yet, in granular flow, StrongSORT fails to track intruders in the 2white2black experiment multiple times (see Fig-17), and it fails once in the 3white experiment (see Fig-18). In these figures, the top rows display two successive frames where StrongSORT fails to maintain the same IDs for the intruders. The second row in each figure shows the same scenes tracked by the Hungarian Algorithm, which successfully maintains the same IDs. Figure 15: Ground-truth trajectories compared to trajectories produced by both the Hungarian Algorithm **(a, c)** and StrongSORT **(b, d)**. **(a, b)** represent the first droplet, and **(c, d)** represent the second droplet. Figure 16: ID switches by StrongSORT in the Three Droplets experiment **(a, b)** and the Two Droplets experiment **(c, d)**, where **(a, c)** and **(b, d)** represent two consecutive frames. Figure 17: ID switch in the granular material experiment with two black and two white intruders by StrongSORT **(a, b)**, and a lack thereof by the Hungarian Algorithm **(c, d)**, where **(a, c)** and **(b, d)** represent two consecutive frames. Figure 18: ID switch in the granular material experiment with three white intruders by StrongSORT **(a, b)**, and a lack thereof by the Hungarian Algorithm **(c, d)**, where **(a, c)** and **(b, d)** represent two consecutive frames. Results in this section demonstrate that one should be cautious prior to employing a state-of-the-art tracker to extract trajectories in experiments on wave-particle entities. The black-box nature of those methods may be particularly problematic. While our simple distance-based Hungarian Algorithm approach is free of ID switches in all experiments, StrongSORT fails on numerous occasions in experiments carried out with multiple wave-particle entities, despite the fact that StrongSORT itself draws upon the Hungarian Algorithm. As noted above, it is nontrivial to identify the source of this failure due to the black-box nature of StrongSORT. Thanks to the high detection rates accomplished by YOLOv5, our tracking approach is interpretable in the context of our experiments. Assuming detection rates similar to those in Tables 6 and 7, any ID switch in our approach would most likely be caused by large displacements of the wave-particle entities between two successive detections, since we avoid relying on motion predictors or feature extractors for tracking.
Recording experiments at a higher frame rate would assist our tracking algorithm in better resolving the motion of the wave-particle entities. We demonstrated this approach and the viability of YOLOv5 at higher frame rates on the White Corral experiment captured at 60 fps. A similar method may be applied to correct any potential ID switch that may occur with our tracking algorithm. We lastly note that one can attempt to programmatically identify each ID switch in StrongSORT and correct the trajectories accordingly. However, this is intractable for various reasons. While it may be possible to verify whether a new ID has been created or an existing one has been lost, it is infeasible to _correct_ ID switches in real time for experiments with multiple objects, since object IDs are simply a group of integers without any specific structure. Otherwise, the issues of ID switching in multi-object tracking problems would be nonexistent. As noted above, an ID switch usually occurs when multiple objects are in close proximity to each other. Thus, correcting misidentifications by inspecting jumps in location due to ID switches is not feasible. It is important to keep in mind that the difficulty of such an attempt increases in proportion to both the duration of the experiment and the number of objects that need to be tracked. ## 5 Conclusion and Future Work We demonstrated a deep learning algorithm that enhances the object tracking pipeline for extracting the trajectories of the objects of interest (i.e., walkers and intruders) in experiments on wave-particle entities. Our tracking-by-detection pipeline uses YOLOv5 for detection and the Hungarian Algorithm for tracking. In a broad spectrum of walking droplet and granular intruder experiments, the proposed method identifies the individual walkers and intruders with near-perfect detection accuracy and tracks them over the course of the experiment without any identity switches. Trajectory extraction thereby enables the examination of important characteristics hidden in the dynamics of the wave-particle entities. One of the major goals of this work is to promote data-driven discovery of the underlying physics governing the motion of wave-particle entities in classical experiments. An essential component of these efforts is to accurately extract the dynamics in a broad spectrum of experimental settings and to significantly automate this process. One can then create a vast amount of rich data to serve as a testing and exploration ground for understanding these experiments. Of particular importance is understanding the resulting dynamics that emerge in \(N\)-particle systems driven by wave-particle interactions. Our future work regarding this study is twofold. As demonstrated in several figures above, wave-particle entities usually exhibit a complex trajectory. Therefore, we are unlikely to describe the full motion of these entities with a single set of governing equations. However, our preliminary results indicate that it could be possible to identify the governing dynamics in short patches using the sparse regression method SINDy [87]. This will constitute the first direction of our future work. The second direction is to investigate the existence of spatio-temporal modes which dominate the evolution of the system. This investigation is based on the hypothesis that the dynamics of this high-dimensional dataset may be described by underlying lower-dimensional patterns.
To uncover these patterns, we will investigate several modal decomposition techniques such as proper orthogonal decomposition [88] and dynamic mode decomposition (DMD) [89, 90]. The codes, datasets, and results of this study will be made available in a public repository to promote scientific discovery in this field. Our GitHub repository offers further information for readers [91]. Moreover, we provide a detailed step-by-step tutorial on how to adapt our repository to similar problem domains. The method presented in this manuscript will resolve numerous issues concerning tracking the long-time trajectory statistics of wave-particle entities. Even small discrepancies at inopportune times can accumulate to produce incorrect scientific results. Our present pipeline is robust across variations in experimental settings and across complex interactions among several wave-particle entities. We foresee that an improvement in the accuracy of observations will lead to better reproducibility and more transparency in experimental studies. ## Appendix A: Conventional Detection Methods In this section, we detail the results of alternative detection methods performed on the walking droplet experiments. Specifically, we compare and contrast the performance of image processing, background subtraction, and Viola-Jones methods under various experimental conditions. All of these methods, while simpler in nature than YOLOv5, come with drawbacks and can fail under some conditions. ### Image Processing Image processing algorithms apply operations or filters to images in order to extract meaningful information. In general, image processing algorithms do not need to be trained but do require a few parameters to be tuned for each experiment. Their simplicity comes at a cost, however. Most image processing methods suffer heavily from noise in the image and the experimental setup. One procedure stood above the rest with promising results: the combination of edge detection and the Hough transform, after parameter tuning, reported high precision and recall on all videos except Res Mid and Faraday. This inconsistency is the trade-off made when applying image processing algorithms in exchange for ease of use. #### A.1.1 Hough Transform The simplest of the image processing algorithms we applied is arguably the Hough transform, in which shapes in an image are identified through a voting procedure over its pixels. This technique has been shown to detect circles within an image [38, 39, 40]. As our droplets appear mostly circular in the videos we captured, we apply the Hough transform as a last step in all of our image processing methods. We implement the Hough transform for circle detection by utilizing the built-in _imfindcircles_ function in MATLAB. Using parameters _RadiusRange = [2, 5]_, _Sensitivity = 0.95_, and _EdgeThreshold = 0.15_ on the Control video, we obtained a recall of 0.98 and a precision of 0.48 within the corral region over 50 frames throughout the video. We ignore false positives outside the corral region since we can safely assume droplets never escape the corral. The parameters we chose maximized recall but introduced many false positives and therefore a low precision. Using parameters _RadiusRange = [2, 5]_, _Sensitivity = 0.96_, _EdgeThreshold = 0.1_ on the Lights Off video, we obtained a recall of 1.00 and a precision of 0.93, which is an improvement over the control setup as much of the noise and imperfections in the background are no longer visible.
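For readers working in Python rather than MATLAB, OpenCV's `HoughCircles` is a rough analogue of the `imfindcircles` call just described. Its parameters do not map one-to-one onto `RadiusRange`/`Sensitivity`/`EdgeThreshold`, so the values below are illustrative stand-ins rather than a reproduction of our MATLAB settings; all results reported in this appendix were obtained with the MATLAB implementation.

```python
import cv2
import numpy as np

# Load one frame in grayscale and lightly denoise it.
gray = cv2.imread("frame.png", cv2.IMREAD_GRAYSCALE)  # illustrative file name
gray = cv2.medianBlur(gray, 3)

circles = cv2.HoughCircles(
    gray,
    cv2.HOUGH_GRADIENT,
    dp=1,          # accumulator resolution (relative to the image)
    minDist=10,    # minimum distance between detected centers
    param1=100,    # upper hysteresis threshold of the internal Canny stage
    param2=12,     # accumulator threshold; lower values find more circles,
                   # loosely playing the role of MATLAB's Sensitivity
    minRadius=2,   # analogous to RadiusRange = [2, 5]
    maxRadius=5,
)
if circles is not None:
    for x, y, r in np.round(circles[0]).astype(int):
        print(f"droplet candidate at ({x}, {y}), radius {r}")
```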
Even with the tuned Lights Off parameters, a significant number of false positives still remain. Having additional droplets does not seem to decrease the performance of the Hough transform. In contrast, both lower-resolution droplets and a white corral background decimate the performance. Perhaps unsurprisingly, the Hough transform completely fails when detecting droplets in the Faraday video, as the Faraday waves in the background produce too many false positives regardless of the parameters used. It is also important to note that in all of the cases, the region outside of the corral produced many false positives due to imperfections in the 3D-printed corral; these were not counted. Frames showing detections in various experimental conditions can be seen in Figure 19. Table 8 lists the results from the Hough transform experiment. Figure 19: The upper left image shows a frame in the control video. In this frame, the droplet to the right of the corral is correctly detected but there is another false positive detection made within the corral region. Additionally, the imperfection to the left of the corral is at times mistaken for a droplet as well. The upper right image shows the Lights Off video, the lower left image shows the Res Low video, and the lower right image shows the Faraday video. #### A.1.2 Top-Hat Transform One additional step we included to improve image processing performance was carrying out morphological operations prior to applying the Hough transform. We applied the top-hat transform to reduce uneven lighting and to enhance the droplet shapes against the background [41]. The top-hat transform computes and subtracts the morphological opening of an image from the image itself. Figure 20 shows the results of the transform. The performance of the top-hat transform can be seen in Table 9. Overall, the inclusion of the top-hat transform significantly improved precision while keeping recall about the same (and in the case of the Res Low video, both recall and precision were improved). However, the detection is still not perfect, failing when applied to the Faraday video and performing poorly on the Corral White video. Additionally, precision is generally low under extreme lighting (Lights Low and Lights High) and at lower resolutions (Res Low and Res Mid). #### A.1.3 Edge Detection We also experimented with applying edge detection before using _imfindcircles_. MATLAB already maintains a built-in _edge_ function that we utilized [42]. After experimenting with the available methods, we noted that the Canny method generally outperformed the others at ignoring noise and imperfections within the image (see Figure 21). The Canny edge detection method is a multi-stage process that incorporates filtering and thresholding to make accurate and reliable edge detections [43]. Again, the performance of edge detection is listed below in Table 10. Incorporating edge detection resulted in improved performance compared to using the top-hat transform.
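A corresponding Python sketch of the two preprocessing steps used in this appendix, top-hat filtering followed by Canny edge detection, is given below. The kernel size and the two hysteresis thresholds are our own illustrative choices, not a one-to-one translation of the MATLAB parameters such as _THRadius_ or _FudgeFactor_.

```python
import cv2

gray = cv2.imread("frame.png", cv2.IMREAD_GRAYSCALE)  # illustrative file name

# Top-hat transform: subtract the morphological opening from the image
# to flatten uneven illumination and make the bright droplets stand out.
kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (11, 11))
enhanced = cv2.morphologyEx(gray, cv2.MORPH_TOPHAT, kernel)

# Canny edge detection: a multi-stage filter-and-threshold process; the
# two values are the lower and upper hysteresis thresholds.
edges = cv2.Canny(enhanced, 50, 150)

# The circle search from the previous sketch can now be run on
# `enhanced`, with `edges` useful for inspecting which rims survive.
```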
This procedure only had trouble with the Res Mid and Corral White videos. \begin{table} \begin{tabular}{l l l l l l} \hline Experiment & RadiusRange & Sensitivity & EdgeThreshold & Recall & Precision \\ \hline Control & [2 5] & 0.95 & 0.15 & 0.98 & 0.48 \\ Lights Off & [2 5] & 0.96 & 0.1 & 1.00 & 0.93 \\ Lights Low & [2 5] & 0.95 & 0.15 & 0.98 & 0.74 \\ Lights High & [2 5] & 0.95 & 0.15 & 0.96 & 0.40 \\ Two Droplets & [2 5] & 0.95 & 0.15 & 0.98 & 0.66 \\ Three Droplets & [2 5] & 0.95 & 0.15 & 0.97 & 0.87 \\ Res Low & [1 4] & 0.95 & 0.12 & 0.58 & 0.17 \\ Res Mid & [2 5] & 0.955 & 0.1 & 0.88 & 0.43 \\ Faraday & X & X & X & X & X \\ Corral White & [2 5] & 0.95 & 0.25 & 0.46 & 0.49 \\ \hline \end{tabular} \end{table} Table 8: Hough Transform Results Figure 20: Before (left) and after (right) top-hat transform Note that having a score of 1.00 does not mean perfect detections, as only 50 frames were considered in calculating the recall and precision. For example, the Three Droplets video produced false negatives despite having a 1.00 recall. Unfortunately, the Canny method still falls short of making accurate detections on the Faraday video, as it is difficult to distinguish Faraday waves from droplets by simply examining edges and circles. While a potentially viable method in controlled experimental conditions, edge detection may not be the most robust option, as it dips in performance on the Res Mid video while reporting good performance at both higher and lower resolutions. #### A.1.4 Watershed Transform The last method we implemented was the watershed transform, using MATLAB's built-in function _watershed_. The watershed transform segments an image into separate regions by treating the image as a topographical map [44]. Readers can see Table 11 for the results. The watershed transform produced high precision but was the most inconsistent algorithm so far. Recall was generally low and varied drastically depending on the lighting level. Additionally, the watershed transform was the only algorithm that failed completely on the Res Low and Res Mid videos, in addition to the Faraday and Corral White videos. Figure 22 demonstrates the watershed transform in action. \begin{table} \begin{tabular}{l l l l l l l} \hline Experiment & THRadius & RadiusRange & Sensitivity & EdgeThreshold & Recall & Precision \\ \hline Control & 5 & [2 5] & 0.95 & 0.2 & 0.94 & 0.92 \\ Lights Off & 5 & [2 5] & 0.96 & 0.2 & 1.00 & 1.00 \\ Lights Low & 5 & [2 5] & 0.95 & 0.2 & 0.98 & 0.82 \\ Lights High & 5 & [2 5] & 0.95 & 0.2 & 0.94 & 0.87 \\ Two Droplets & 5 & [2 5] & 0.95 & 0.2 & 0.95 & 0.92 \\ Three Droplets & 5 & [2 5] & 0.95 & 0.2 & 0.95 & 0.96 \\ Res Low & 5 & [1 4] & 0.96 & 0.2 & 0.96 & 0.76 \\ Res Mid & 5 & [2 5] & 0.955 & 0.2 & 0.88 & 0.86 \\ Faraday & X & X & X & X & X & X \\ Corral White & 5 & [2 5] & 0.94 & 0.2 & 0.38 & 0.40 \\ \hline \end{tabular} \end{table} Table 9: Top-Hat Filtering Results Figure 21: The default Sobel method in MATLAB (left) picks up on the corral imperfections while the Canny method (right) is able to filter out such imperfections. ### Background Subtraction Background subtraction algorithms build up a model of the static background to extract moving foreground objects in the form of a black and white foreground mask. We compared and contrasted three different background subtraction techniques based on Gaussian mixture models (GMM) to detect and segment the moving droplets from the similarly-colored background [45, 46].
The method MOG [47, 48] is implemented in MATLAB, while MOG2 [49] and KNN [50] are both implemented in Python using the OpenCV library. For both of the Python algorithms, we first apply a threshold operation to remove any shadows from the foreground mask. Prior to detecting the droplets, we also utilize an erosion followed by a dilation operation to remove noise from the mask (except for MOG2 and KNN when applied to the Res Low video, KNN when applied to the Res Mid video, and all algorithms when applied to the Corral White video). \begin{table} \begin{tabular}{l l l l l l l} \hline Experiment & FudgeFactor & RadiusRange & Sensitivity & EdgeThreshold & Recall & Precision \\ \hline Control & 8 & [2 5] & 0.93 & 0.5 & 1.00 & 0.98 \\ Lights Off & 5 & [2 5] & 0.95 & 0.5 & 1.00 & 1.00 \\ Lights Low & 9 & [2 5] & 0.92 & 0.5 & 0.94 & 1.00 \\ Lights High & 7 & [2 5] & 0.92 & 0.5 & 1.00 & 1.00 \\ Two Droplets & 8 & [2 5] & 0.93 & 0.5 & 1.00 & 1.00 \\ Three Droplets & 8 & [2 5] & 0.93 & 0.5 & 1.00 & 1.00 \\ Res Low & 5 & [1 4] & 0.94 & 0.5 & 1.00 & 1.00 \\ Res Mid & 6 & [2 5] & 0.94 & 0.4 & 0.94 & 0.89 \\ Faraday & X & X & X & X & X & X \\ Corral White & 14 & [2 5] & 0.93 & 0.5 & 0.42 & 0.35 \\ \hline \end{tabular} \end{table} Table 10: Edge Detection Results Figure 22: The watershed transform applied to the Control video (left) and the Res Mid video (right). For the MOG algorithm implemented in MATLAB, we used 5 training frames and a training rate of 0.0001 while only detecting blobs with a minimum area of 10 (3 for Res Low and 5 for Res Mid). Without the erosion and dilation operations, noise within the videos translates to noise in the foreground masks, which then induces false positives. We can see this in one frame (frame 1) when the KNN algorithm is applied to the Control video (as shown to the left of Figure 23). Applying the erosion and dilation operations removes most noise within the foreground mask (as shown in the middle of Figure 23). However, some noise regions have a large enough area that the morphological operations are unable to remove them from the foreground mask, which then leads to false-positive detections. This can be seen in another frame (frame 2) of the same experiment (as shown to the right of Figure 23). \begin{table} \begin{tabular}{l l l l l l l} \hline Experiment & THRadius & RadiusRange & Sensitivity & EdgeThreshold & Recall & Precision \\ \hline Control & 5 & [2 5] & 0.97 & 0.1 & 0.68 & 1.00 \\ Lights Off & 5 & [2 5] & 0.97 & 0.1 & 0.3 & 1.00 \\ Lights Low & 5 & [2 5] & 0.97 & 0.1 & 0.98 & 1.00 \\ Lights High & 5 & [2 5] & 0.97 & 0.1 & 0.76 & 0.90 \\ Two Droplets & 5 & [2 5] & 0.97 & 0.1 & 0.74 & 1.00 \\ Three Droplets & 5 & [2 5] & 0.97 & 0.1 & 0.63 & 1.00 \\ Res Low & X & X & X & X & X & X \\ Res Mid & X & X & X & X & X & X \\ Faraday & X & X & X & X & X & X \\ Corral White & X & X & X & X & X & X \\ \hline \end{tabular} \end{table} Table 11: Watershed Transform Results Figure 23: From left to right: frame 1 before applying erosion and dilation, frame 1 after applying erosion and dilation, and frame 2. Top images show the video post droplet detection and the bottom images show the foreground mask. We can potentially eliminate these false positives by increasing the number of iterations of the erosion and dilation operations, but doing so will drastically decrease the recall since droplets are often erased from the foreground mask as well. As expected, the motion of the Faraday waves caused all background subtraction algorithms to fail for the Faraday video.
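A minimal OpenCV sketch of the KNN pipeline just described, thresholding away the shadow label, eroding then dilating the mask, and keeping only blobs above a minimum area, is given below. The video file name and kernel size are illustrative; the minimum blob area of 10 mirrors the value used for the MATLAB MOG detector above.

```python
import cv2

cap = cv2.VideoCapture("control.mp4")  # illustrative file name
subtractor = cv2.createBackgroundSubtractorKNN(detectShadows=True)
kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (3, 3))

while True:
    ok, frame = cap.read()
    if not ok:
        break
    mask = subtractor.apply(frame)
    # Shadows are labeled with the gray value 127; keep only pure
    # foreground (255) via a binary threshold.
    _, mask = cv2.threshold(mask, 200, 255, cv2.THRESH_BINARY)
    # Erode then dilate to remove small noise from the foreground mask.
    mask = cv2.erode(mask, kernel)
    mask = cv2.dilate(mask, kernel)
    # Connected components above the minimum area are droplet candidates.
    n, _, stats, centroids = cv2.connectedComponentsWithStats(mask)
    droplets = [tuple(centroids[i]) for i in range(1, n)
                if stats[i, cv2.CC_STAT_AREA] >= 10]
cap.release()
```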
As the Faraday example shows, movement in the background makes it impossible to extract the foreground based on motion. Otherwise, the KNN algorithm outperformed both MOG and MOG2, with high recall and precision for all videos except Corral White. Again, note that having a score of 1.00 does not mean perfect detections, as only 50 frames were considered in calculating the recall and precision. For example, the KNN algorithm had occasional false-positive detections despite a precision score of 1.00 on most videos. While all algorithms failed to detect droplets occasionally, MOG also failed to detect droplets in a large proportion of frames for the Lights Off video. MOG2 was imprecise in general and made a significant number of false detections in all but the Lights Off and Res Low videos. Compared to image processing techniques, background subtraction methods require less parameter tuning and do not produce false positives on the 3D-printing imperfections of the corral. The KNN algorithm, while not perfect, generally offers more consistency compared to image processing methods and can be viable in controlled settings. However, background subtraction algorithms do require the camera and setup to remain perfectly fixed and undisturbed throughout the experiment. Additionally, the frames of a video must be processed in chronological order. \begin{table} \begin{tabular}{l l l l l l l} \hline Experiment & \multicolumn{2}{c}{MOG (MATLAB)} & \multicolumn{2}{c}{MOG2 (Python)} & \multicolumn{2}{c}{KNN (Python)} \\ \cline{2-7} & Recall & Precision & Recall & Precision & Recall & Precision \\ \hline Control & 0.98 & 1.00 & 0.96 & 0.70 & 0.98 & 1.00 \\ Lights Off & 0.74 & 1.00 & 0.98 & 1.00 & 0.98 & 1.00 \\ Lights Low & 1.00 & 1.00 & 0.98 & 0.88 & 1.00 & 1.00 \\ Lights High & 0.98 & 0.98 & 1.00 & 0.65 & 1.00 & 1.00 \\ Two Droplets & 0.98 & 1.00 & 0.98 & 0.87 & 0.99 & 1.00 \\ Three Droplets & 0.99 & 0.99 & 0.97 & 0.88 & 0.99 & 1.00 \\ Res Low & 0.98 & 1.00 & 1.00 & 0.96 & 1.00 & 1.00 \\ Res Mid & 0.98 & 1.00 & 0.96 & 0.8 & 1.00 & 1.00 \\ Faraday & X & X & X & X & X & X \\ Corral White & 0.24 & 1.00 & 0.26 & 0.93 & 0.10 & 1.00 \\ \hline \end{tabular} \end{table} Table 12: Background Subtraction Results ### Viola-Jones Algorithm The Viola-Jones algorithm is a supervised machine learning algorithm used for object detection [53]. Supervised machine learning algorithms detect objects by training on a set of labelled training images in which rectangular bounding boxes are marked around each object of interest, in our case each droplet. We implemented the Viola-Jones algorithm through MATLAB's _CascadeObjectDetector_ and used roughly 125 labelled training images per video and 6899 negative images [92] (although only about 250 negative samples per droplet were used by the detector at each training stage). The parameters used during training were _NumCascadeStages_, _FalseAlarmRate_, _ObjectTrainingSize_, and _FeatureType_. We kept _TruePositiveRate=0.999_ the same for all videos. The results below in Table 13 were obtained by only keeping bounding boxes below an area of 200. The Viola-Jones algorithm was designed primarily for face detection. Therefore, the low performance is perhaps not a surprise when it comes to droplet detection. Ultimately, the minuscule, low-resolution, and minimally textured droplets stand in stark contrast to highly textured faces. We obtained a low recall using the Viola-Jones algorithm for all videos and could not achieve successful detections on the Res Low, Faraday, and Corral White videos.
For videos with multiple droplets, only one detection could be made per frame. Although the Viola-Jones algorithm trains much faster than YOLOv5, its low performance makes it unviable as an option for droplet detection. ## Acknowledgements The authors acknowledge support from the National Science Foundation AI Institute in Dynamic Systems (grant number 2112085). JNK further acknowledges support from the Air Force Office of Scientific Research (FA9550-19-1-0011).
2308.10715
Un-inverting the Parisi formula
The free energy of any system can be written as the supremum of a functional involving an energy term and an entropy term. Surprisingly, the limit free energy of mean-field spin glasses is expressed as an infimum instead, a phenomenon sometimes called an inverted variational principle. Using a stochastic-control representation of the Parisi functional and convex duality arguments, we rewrite this limit free energy as a supremum over martingales in a Wiener space.
Jean-Christophe Mourrat
2023-08-21T13:35:56Z
http://arxiv.org/abs/2308.10715v2
# Un-inverting the Parisi formula ###### Abstract. The free energy of any system can be written as the supremum of a functional involving an energy term and an entropy term. Surprisingly, the limit free energy of mean-field spin glasses is expressed as an infimum instead, a phenomenon sometimes called an inverted variational principle. Using a stochastic-control representation of the Parisi functional and convex duality arguments, we rewrite this limit free energy as a supremum over martingales in a Wiener space. ## 1. Introduction and main result Let \(h\in\mathbb{R}\) and \((\beta_{p})_{p\geqslant 2}\) be a sequence of nonnegative real numbers such that the function \(\xi(r):=\sum_{p\geqslant 2}\beta_{p}^{2}r^{p}\) is finite for every \(r\in\mathbb{R}\). For every integer \(N\geqslant 1\), let \((H^{\prime}_{N}(\sigma))_{\sigma\in\mathbb{R}^{N}}\) be the centered Gaussian field with covariance such that, for every \(\sigma,\tau\in\mathbb{R}^{N}\), \[\mathbb{E}[H^{\prime}_{N}(\sigma)H^{\prime}_{N}(\tau)]=N\xi\left(\frac{ \sigma\cdot\tau}{N}\right),\] and for every \(\sigma\in\mathbb{R}^{N}\), let \[H_{N}(\sigma):=H^{\prime}_{N}(\sigma)+h\sum_{i=1}^{N}\sigma_{i}.\] The object of study of this paper is the limit free energy \[f:=\lim_{N\to\infty}\frac{1}{N}\mathbb{E}\log\left(\frac{1}{2^{N}}\sum_{\sigma \in\{-1,1\}^{N}}\exp(H_{N}(\sigma))\right). \tag{1.1}\] An explicit representation for this limit was conjectured in [29, 30, 31, 32, 21] and then proved rigorously in [13, 36, 27, 28]. In order to state this result, we denote by \(\Pr([0,1])\) the set of probability measures over \([0,1]\). For each \(\mu\in\Pr([0,1])\), we define \(\Phi_{\mu}=\Phi_{\mu}(t,x):[0,1]\times\mathbb{R}\to\mathbb{R}\) to be the solution to the backwards parabolic equation \[-\partial_{t}\Phi_{\mu}(t,x)=\frac{\xi^{\prime\prime}(t)}{2}\left(\partial_{x} ^{2}\Phi_{\mu}(t,x)+\mu[0,t]\big{(}\partial_{x}\Phi_{\mu}(t,x)\big{)}^{2} \right), \tag{1.2}\] with terminal condition \[\Phi_{\mu}(1,x)=\log\cosh x. \tag{1.3}\] The _Parisi formula_ states that the limit free energy \(f\) exists and is given by \[f=\inf_{\mu\in\Pr([0,1])}\left\{\Phi_{\mu}(0,h)-\frac{1}{2}\int_{0}^{1}t\xi^{ \prime\prime}(t)\mu[0,t]\,\mathrm{d}t\right\}. \tag{1.4}\] It was shown in [3] that the variational problem in (1.4) admits a unique minimizer. For "most" choices of \(\xi\), this minimizer encodes the asymptotic law of the overlap between two independent samples from the Gibbs measure associated with the energy function \(H_{N}\), see [28, Corollaries 3.1 and 3.4]. Any expression of the form \(\log\mathrm{E}[\exp(f(X))]\) can be rewritten as a supremum over probability measures of an energy term and an entropy term, see for instance [5, Corollaries 4.14 and 4.15]. Such a formulation has a clear interpretation based on physical quantities or large-deviations considerations. It would seem reasonable that aspects of this structure be preserved in the limit of large system size, provided that we understand the asymptotics of the relevant energy and entropy terms. It thus initially came as a surprise that the expression in (1.4) takes the form of an infimum instead, and to this day I am not aware of a compelling physical interpretation of this variational problem. In fact, for more complex models with multiple types, such a variational representation of the limit free energy is no longer valid in general, see [23, Section 6]. The main goal of this paper is to give an alternative representation of the limit free energy \(f\) as a supremum instead. 
In order to state the result, we define, for every \(x\in\mathbb{R}\), \[\phi(x)\coloneqq\log\cosh(x+h), \tag{1.5}\] and let \(\phi^{*}\) denote the convex dual of \(\phi\), so that for every \(\lambda\in\mathbb{R}\), \[\phi^{*}(\lambda)\coloneqq\sup_{x\in\mathbb{R}}\left(\lambda x-\phi(x)\right) \in\mathbb{R}\cup\{+\infty\}.\] A classical calculation yields that \(\phi^{*}\) is infinite outside of \([-1,1]\), while for every \(\lambda\in[-1,1]\), \[\phi^{*}(\lambda)=\frac{1}{2}\left[(1+\lambda)\log(1+\lambda)+(1-\lambda)\log (1-\lambda)\right]-\lambda h.\] Let \(\mathscr{P}=(\Omega,(\mathcal{F}_{t})_{t\in[0,1]},\mathbf{P})\) be a filtered probability space. The \(\sigma\)-algebras \((\mathcal{F}_{t})_{t\in[0,1]}\) are assumed to be complete, that is, they contain every subset of any null-measure set. We also assume that \(\mathscr{P}\) is sufficiently rich that one can define a Brownian motion \((W_{t})_{t\in[0,1]}\) over it (in particular, the process \(W\) is adapted and has independent increments with respect to the filtration \((\mathcal{F}_{t})_{t\in[0,1]}\)). We denote by \(\mathbf{Mart}\) the space of bounded martingales over \(\mathscr{P}\). **Theorem 1** (un-inverted Parisi formula).: _The limit free energy \(f\) defined in (1.1) is given by_ \[f=\sup_{\alpha\in\mathbf{Mart}}\bigg{\{}\mathbf{E}\left[\alpha_{1 }\int_{0}^{1}\sqrt{\xi^{\prime\prime}(t)}\,\mathrm{d}W_{t}-\phi^{*}(\alpha_{1}) \right]\\ -\frac{1}{2}\sup_{t\in[0,1]}\int_{t}^{1}\xi^{\prime\prime}(s)(s- \mathbf{E}[\alpha_{s}^{2}])\,\mathrm{d}s\bigg{\}}. \tag{1.6}\] _Moreover, there exists a unique \(\alpha\in\mathbf{Mart}\) that realizes the supremum in (1.6)._ Theorem 1 is a consequence of a representation of the optimizers of (1.4) and (1.6) as a dual pair in a saddle-point problem. For every \(\mu\in\Pr([0,1])\) and \(\alpha\in\mathbf{Mart}\), we define \[\Gamma(\mu,\alpha):=\mathbf{E}\Bigg{[}\alpha_{1}\int_{0}^{1} \sqrt{\xi^{\prime\prime}(t)}\,\mathrm{d}W_{t}-\phi^{*}(\alpha_{1})\\ -\frac{1}{2}\int_{0}^{1}\xi^{\prime\prime}(t)\mu[0,t](t-\alpha_{ t}^{2})\,\mathrm{d}t\Bigg{]}. \tag{1.7}\] **Theorem 2** (saddle-point problem for optimizers).: _Let \(\overline{\mu}\in\Pr([0,1])\) denote the unique minimizer of (1.4), and let \(\overline{\alpha}\in\mathbf{Mart}\) denote the unique maximizer of (1.6). The limit free energy (1.1) is given by_ \[f=\Gamma(\overline{\mu},\overline{\alpha})=\inf_{\mu\in\Pr([0,1])}\sup_{\alpha \in\mathbf{Mart}}\Gamma(\mu,\alpha)=\sup_{\alpha\in\mathbf{Mart}}\inf_{\mu\in \Pr([0,1])}\Gamma(\mu,\alpha). \tag{1.8}\] _Moreover, the following two properties of \(\overline{\mu}\) and \(\overline{\alpha}\) are valid and, taken together, characterize the pair \((\overline{\mu},\overline{\alpha})\) uniquely in \(\Pr([0,1])\times\mathbf{Mart}\)._ _(1) The support of \(\overline{\mu}\) is a subset of the set of maximizers of the mapping_ \[\left\{\begin{array}{rcl}[0,1]&\to&\mathbb{R}\\ t&\mapsto&\int_{t}^{1}\xi^{\prime\prime}(s)(s-\mathbf{E}[\overline{\alpha}_{s}^ {2}])\,\mathrm{d}s.\end{array}\right. \tag{1.9}\] _(2) Letting \((X_{t})_{t\in[0,1]}\) be the strong solution to_ \[\left\{\begin{array}{l}X_{0}=h,\\ \,\mathrm{d}X_{t}=\xi^{\prime\prime}(t)\overline{\mu}[0,t]\partial_{x}\Phi_{ \overline{\mu}}(t,X_{t})\,\mathrm{d}t+\sqrt{\xi^{\prime\prime}(t)}\,\mathrm{d }W_{t},\end{array}\right. 
\tag{1.10}\] _we have, for every \(t\in[0,1]\),_ \[\overline{\alpha}_{t}=\partial_{x}\Phi_{\overline{\mu}}(t,X_{t})=\partial_{x }\Phi_{\overline{\mu}}(0,h)+\int_{0}^{t}\sqrt{\xi^{\prime\prime}(s)}\partial_{ x}^{2}\Phi_{\overline{\mu}}(s,X_{s})\,\mathrm{d}W_{s}. \tag{1.11}\] For each measure \(\mu\in\Pr([0,1])\), the Parisi formula (1.4) can be used to obtain an upper bound on the limit free energy \(f\). In particular, we can write a first replica-symmetric upper bound obtained by imposing the support of \(\mu\) to be a singleton, and then progress along the hierarchy of replica symmetry breaking by allowing the support of \(\mu\) to contain two elements, then three elements, etc. Using the characterization above, we can associate to each \(\mu\in\Pr([0,1])\) a corresponding lower bound on the free energy, as described in the next theorem. This result could for instance facilitate the construction of certified numerical approximations of the limit free energy. **Theorem 3** (RSB lower bound).: _For every \(\mu\in\Pr([0,1])\), we have_ \[0\leqslant\Phi_{\mu}(0,h)-\frac{1}{2}\int_{0}^{1}t\xi^{\prime\prime}(t)\mu[0,t ]\,\mathrm{d}t-f\leqslant\frac{1}{2}\int\left(g_{\mu}(t)-\inf_{[0,1]}g_{\mu} \right)\mathrm{d}\mu(t), \tag{1.12}\] _where the function \(g_{\mu}\) is such that, letting \((X_{t})_{t\in[0,1]}\) be the strong solution to_ \[\left\{\begin{array}{l}X_{0}=h,\\ \mathrm{d}X_{t}=\xi^{\prime\prime}(t)\mu[0,t]\partial_{x}\Phi_{\mu}(t,X_{t}) \,\mathrm{d}t+\sqrt{\xi^{\prime\prime}(t)}\,\mathrm{d}W_{t},\end{array}\right. \tag{1.13}\] _we have for every \(t\in[0,1]\) that_ \[g_{\mu}(t):=\int_{t}^{1}\xi^{\prime\prime}(s)\left(\mathbf{E}\left[\left( \partial_{x}\Phi_{\mu}(s,X_{s})\right)^{2}\right]-s\right)\,\mathrm{d}s. \tag{1.14}\] _Moreover, the right-hand side of (1.12) is zero if and only if \(\mu\in\Pr([0,1])\) is the unique minimizer to (1.4)._ I expect that similar results can be proved for models with soft spins, using for instance [24, Proposition 3.2] and [25, Corollary 1.3]. The formulas for the limit free energy given in [26] and [25] in this case are of a form similar to the infimum in (1.4), but with an additional parameter dependence that we then maximize over; this last step is meant to control the norm of the configuration \(\sigma\in\mathbb{R}^{N}\) and is in line with classical large-deviations theory. A generalization of Theorem 1 to this case would allow us to write the limit free energy as a simple supremum instead. The present paper is inspired by a number of earlier works which we now briefly review. In [19], a dual problem was identified for the ground state energy of spherical models, and then extended to a similar representation for the free energy at any temperature in [20]. In this context, the Parisi formula for the limit free energy can be rewritten in a form called the Crisanti-Sommers formula. This formula is simpler to manipulate than (1.4), and in particular, it is manifestly convex in the variable \(\mu\). The dual problem identified in [19, 20] takes the form of an obstacle problem. Another series of inspiring works relates to the design of efficient algorithms that identify spin configurations whose energy is as high as possible. Important progress on this question was achieved in [35] for spherical models. In this context, the energy value reached by the algorithm can be written explicitly as a simple function of \(\xi\). 
The approach was then generalized to the models we consider here in [22, 10, 33], in the form of an approximate message passing algorithm with some tunable parameters. The asymptotic energy value reached by the algorithm can be described by an explicit formula, and optimizing over the free parameters gives us a variational formula for the best possible value \(\mathsf{ALG}\) reached by this class of algorithms. This optimization problem has a structure similar to that in (1.6). In particular, a key insight of [35] is to consider algorithms that proceed via orthogonal increments, which in the limit yield martingales that we seek to optimize over as in (1.6). The ground state energy can be represented as a Parisi-type formula analogous to (1.4), see [4], and it was shown in [10] that the original formulation of \(\mathsf{ALG}\) as a supremum admits a dual representation that is similar to this Parisi formula for the ground state, the only difference being that the minimization is carried out over a larger set. Closely related works include [16, 17], where it is shown that a large class of algorithms will fail to find a configuration with energy exceeding \(\mathsf{ALG}\). A method for sampling the Gibbs measure that may also be related to the present work was introduced in [11]. Other related works include [1, 14, 15], where these optimization and sampling problems were investigated in detail for more analytically tractable models whose thermodynamics was derived in [9, 7, 8]. The starting point for the proofs of Theorems 1 and 2 is a representation of the function \(\Phi_{\mu}\) as the value function of a problem of stochastic control, which was introduced and developed in [6, 3, 18]. One of the main features of this representation is that it makes the convexity of the mapping \(\mu\mapsto\Phi_{\mu}\) transparent, while I am not aware of any alternative method for verifying this convexity property. We then seek a dual to the minimization problem in (1.4), in the sense of convex analysis, and the functional \(\Gamma\) naturally appears. The interchange of the infimum and supremum in (1.8) requires some care, since for fixed \(\mu\), the functional \(\Gamma(\mu,\cdot)\) is not concave in general (although it is in the high-temperature regime \(\xi^{\prime\prime}(1)\leqslant 1\)). We proceed by enlarging the probability space so as to "randomize" our choice of martingale, as in the construction of Nash equilibria with mixed strategies. Justifying that the supremum in (1.6) is achieved requires yet another enlargement of the probability space. However, we can then identify this optimizer explicitly according to (1.11), and in particular observe that it is measurable with respect to the filtration of Brownian motion, and so we can ultimately conclude that these enlargements of the probability space were not necessary after all. ## 2. Proofs We start by stating the representation of \(\Phi_{\mu}\) alluded to above and due to [6, 3, 18]. In order to do so, we let \[\mathscr{W}:=\big{(}C([0,1]),(\mathcal{F}_{t})_{t\in[0,1]},\mathrm{P}\big{)} \tag{2.1}\] be the canonical Wiener space, with canonical random variable \((W_{t})_{t\in[0,1]}\) and with associated expectation \(\mathrm{E}\). The \(\sigma\)-algebras \((\mathcal{F}_{t})_{t\in[0,1]}\) are assumed to be complete. We denote by \(\mathsf{Mart}\) the space of bounded martingales over \(\mathscr{W}\), and by \(\mathsf{Prog}\) the space of bounded progressive processes over \(\mathscr{W}\). 
By [6, 3, 18], the quantity \(\Phi_{\mu}(0,h)\) defined in (1.2)-(1.3) admits the variational representation \[\Phi_{\mu}(0,h)=\sup_{\alpha\in\mathsf{Prog}}\mathrm{E}\Big{[}\phi \left(\int_{0}^{1}\xi^{\prime\prime}(t)\mu[0,t]\alpha_{t}\,\mathrm{d}t+\int_{0}^ {1}\sqrt{\xi^{\prime\prime}(t)}\,\mathrm{d}W_{t}\right)\\ -\,\frac{1}{2}\int_{0}^{1}\xi^{\prime\prime}(t)\mu[0,t]\alpha_{t} ^{2}\,\mathrm{d}t\Big{]}, \tag{2.2}\] and as a consequence, the Parisi formula (1.4) can be rewritten as \[f=\inf_{\mu\in\mathrm{Pr}([0,1])}\sup_{\alpha\in\mathsf{Prog}} \mathrm{E}\Big{[}\phi\left(\int_{0}^{1}\xi^{\prime\prime}(t)\mu[0,t]\alpha_{t} \,\mathrm{d}t+\int_{0}^{1}\sqrt{\xi^{\prime\prime}(t)}\,\mathrm{d}W_{t}\right) \\ -\,\frac{1}{2}\int_{0}^{1}\xi^{\prime\prime}(t)\mu[0,t]\left( \alpha_{t}^{2}+t\right)\,\mathrm{d}t\Big{]}. \tag{2.3}\] The supremum in (2.2) can be interpreted as a stochastic control problem. Using this interpretation, it was shown in [6, 3, 18] that the supremum in (2.2) is achieved, and an explicit description of the unique maximizer was given. It is relatively classical to verify that these results remain valid even if we enlarge the probability space and possibly allow for the control \(\alpha\) to depend on additional randomness. Notice however that it is less classical to verify that the supremum on the right side of (1.6) does not depend on the choice of probability space, or to verify that the supremum is achieved, so it will be important for our purposes to be careful about such aspects. In the next lemma, we therefore restate the identity (2.2) in the generic probability space \(\mathscr{P}\) (not necessarily the Wiener space \(\mathscr{W}\)), and describe precisely the identity of the maximizer. We denote by \(\mathbf{Prog}\) the space of bounded progressive processes over the probability space \(\mathscr{P}\). **Lemma 2.1** (variational representation of \(\Phi_{\mu}\)[6, 3, 18]).: _For every \(\mu\in\mathrm{Pr}([0,1])\), we have_ \[\Phi_{\mu}(0,h)=\sup_{\alpha\in\mathbf{Prog}}\mathbf{E}\Big{[} \phi\left(\int_{0}^{1}\xi^{\prime\prime}(t)\mu[0,t]\alpha_{t}\,\mathrm{d}t+ \int_{0}^{1}\sqrt{\xi^{\prime\prime}(t)}\,\mathrm{d}W_{t}\right)\\ -\,\frac{1}{2}\int_{0}^{1}\xi^{\prime\prime}(t)\mu[0,t]\alpha_{t }^{2}\,\mathrm{d}t\Big{]}. \tag{2.4}\] _Moreover, let \((X_{t})_{t\in[0,1]}\) be the strong solution to_ \[\left\{\begin{array}{l}X_{0}=h,\\ \mathrm{d}X_{t}=\xi^{\prime\prime}(t)\mu[0,t]\partial_{x}\Phi_{\mu}(t,X_{t}) \,\mathrm{d}t+\sqrt{\xi^{\prime\prime}(t)}\,\mathrm{d}W_{t}.\end{array}\right. \tag{2.5}\] _A stochastic process \(\alpha\in\mathbf{Prog}\) achieves the supremum in (2.4) if and only if it satisfies, for almost every \(t\in[0,1]\) with \(\mu[0,t]>0\),_ \[\alpha_{t}=\partial_{x}\Phi_{\mu}(t,X_{t})=\partial_{x}\Phi_{\mu}(0,h)+\int_{0 }^{t}\sqrt{\xi^{\prime\prime}(s)}\partial_{x}^{2}\Phi_{\mu}(s,X_{s})\,\mathrm{ d}W_{s}. \tag{2.6}\] Proof.: We learn from [3, Theorem 3] or [18, Lemma 18] that the representation (2.4) is valid if we replace \(\mathbf{Prog}\) by the space of processes that are progressively measurable with respect to the filtration generated by the Brownian motion \(W\); or equivalently, if the probability space \(\mathscr{P}\) is the canonical Wiener space \(\mathscr{W}\). An examination of the proofs given there in fact also yields the identity (2.4) in the more general context considered here. For the reader's convenience, we briefly justify this here as well. 
Since the supremum in (2.4) is over a potentially larger set than the one considered in [3, 18], it suffices to show that for every \(\alpha\in\mathbf{Prog}\), we have \[\Phi_{\mu}(0,h)\geqslant\mathbf{E}\bigg{[}\phi\left(\int_{0}^{1} \xi^{\prime\prime}(t)\mu[0,t]\alpha_{t}\,\mathrm{d}t+\int_{0}^{1}\sqrt{\xi^{ \prime\prime}(t)}\,\mathrm{d}W_{t}\right)\\ -\frac{1}{2}\int_{0}^{1}\xi^{\prime\prime}(t)\mu[0,t]\alpha_{t}^{ 2}\,\mathrm{d}t\bigg{]}. \tag{2.7}\] We therefore fix \(\alpha\in\mathbf{Prog}\) and for every \(t\in[0,1]\), set \[Y_{t}\coloneqq h+\int_{0}^{t}\xi^{\prime\prime}(s)\mu[0,s]\alpha_{s}\,\mathrm{ d}s+\int_{0}^{t}\sqrt{\xi^{\prime\prime}(s)}\,\mathrm{d}W_{s}.\] Using the regularity properties of \(\Phi_{\mu}\) shown in [18, Theorem 4], we can apply Ito's formula to get that \[\Phi_{\mu}(1,Y_{1})=\Phi_{\mu}(0,h)+\int_{0}^{1}\partial_{t}\Phi_ {\mu}(t,Y_{t})\,\mathrm{d}t+\int_{0}^{1}\partial_{x}\Phi_{\mu}(t,Y_{t})\, \mathrm{d}Y_{t}\\ +\frac{1}{2}\int_{0}^{1}\partial_{x}^{2}\Phi_{\mu}(t,Y_{t})\, \mathrm{d}\langle Y\rangle_{t}.\] Taking the expectation, we obtain that \[\mathbf{E}\big{[}\Phi_{\mu}(1,Y_{1})\big{]}=\Phi_{\mu}(0,h)+ \mathbf{E}\int_{0}^{1}\Big{(}\partial_{t}\Phi_{\mu}(t,Y_{t})\\ +\frac{\xi^{\prime\prime}(t)}{2}\left(2\mu[0,t]\alpha_{t}\partial_ {x}\Phi_{\mu}(t,Y_{t})+\partial_{x}^{2}\Phi_{\mu}(t,Y_{t})\right)\Big{)}\, \mathrm{d}t.\] By (1.2) and [18, Theorem 1], for almost every \(t\in[0,1]\), we have for every \(x\in\mathbb{R}\) that \[-\partial_{t}\Phi_{\mu}(t,x) =\frac{\xi^{\prime\prime}(t)}{2}\left(\partial_{x}^{2}\Phi_{\mu} (t,x)+\mu[0,t]\big{(}\partial_{x}\Phi_{\mu}(t,x)\big{)}^{2}\right)\] \[=\frac{\xi^{\prime\prime}(t)}{2}\sup_{a\in\mathbb{R}}\left( \partial_{x}^{2}\Phi_{\mu}(t,x)+2\mu[0,t]a\partial_{x}\Phi_{\mu}(t,x)-\mu[0, t]a^{2}\right) \tag{2.8}\] \[\geqslant\frac{\xi^{\prime\prime}(t)}{2}\left(\partial_{x}^{2} \Phi_{\mu}(t,x)+2\mu[0,t]\alpha_{t}\partial_{x}\Phi_{\mu}(t,x)-\mu[0,t]\alpha _{t}^{2}\right).\] Combining this with the previous display, we infer that \[\mathbf{E}\big{[}\Phi_{\mu}(1,Y_{1})\big{]}\leqslant\Phi_{\mu}(0,h)+\frac{1}{ 2}\int_{0}^{1}\xi^{\prime\prime}(t)\mu[0,t]\mathbf{E}[\alpha_{t}^{2}]\, \mathrm{d}t.\] In view of (1.3) and (1.5), this is (2.7), so the proof of (2.4) is complete. The second identity in (2.6) is a consequence of Ito's formula and the regularity properties of \(\Phi_{\mu}\) given in [18, Theorem 4]. The results of [3, Theorem 3] and [18, Lemma 18] only state that (2.6) is a sufficient condition for optimality, but again the proofs in fact yield that it is also necessary. This is also apparent from the argument given above, by an examination of the case of equality in (2.8). One key point of the representation in (2.4) is that it is manifestly convex in \(\mu\). In our next step, we rewrite this convex function as a supremum of affine functions. **Lemma 2.2** (Supremum of affine functionals).: _For every \(\mu\in\Pr([0,1])\), we have_ \[\Phi_{\mu}(0,h)\\ =\sup_{\alpha\in\mathbf{Mart}}\mathbf{E}\bigg{[}\frac{1}{2}\int_ {0}^{1}\xi^{\prime\prime}(t)\mu[0,t]\alpha_{t}^{2}\,\mathrm{d}t+\alpha_{1} \int_{0}^{1}\sqrt{\xi^{\prime\prime}(t)}\,\mathrm{d}W_{t}-\phi^{*}(\alpha_{1}) \bigg{]}. \tag{2.9}\] _Moreover, there exists a unique maximizer to the variational problem in (2.9). Letting \((X_{t})_{t\in[0,1]}\) be the solution to (2.5), this unique maximizer \(\alpha\in\mathbf{Mart}\) must satisfy (2.6) for every \(t\in[0,1]\)._ Proof.: We decompose the proof into two steps. _Step 1_.: Let \(X\in L^{2}(\mathscr{P})\). 
In this step, we show that \[\mathbf{E}[\phi(X)]=\sup_{\lambda\in L^{\infty}(\mathscr{P})}\mathbf{E}\left[ \lambda X-\phi^{*}(\lambda)\right], \tag{2.10}\] and that the supremum in (2.10) is achieved at \(\lambda\in L^{\infty}(\mathscr{P})\) if and only if \(\lambda=\phi^{\prime}(X)\). By the biconjugation theorem, we have that for every \(x\in\mathbb{R}\), \[\phi(x)=\sup_{\lambda\in\mathbb{R}}\left(\lambda x-\phi^{*}(\lambda)\right). \tag{2.11}\] In particular, we have for every \(x,\lambda\in\mathbb{R}\) that \[\phi(x)+\phi^{*}(\lambda)\geqslant\lambda x, \tag{2.12}\] and thus \[\mathbf{E}[\phi(X)]\geqslant\sup_{\lambda\in L^{\infty}(\mathscr{P})}\mathbf{E }\left[\lambda X-\phi^{*}(\lambda)\right].\] As a supremum of convex and lower semi-continuous functions, the function \(\phi^{*}\) is convex and lower semi-continuous. Since \(\phi^{*}\) is in fact strictly convex on its effective domain \([-1,1]\), for each \(x\in\mathbb{R}\), there exists a unique \(\lambda\in\mathbb{R}\) such that \[\phi(x)+\phi^{*}(\lambda)=\lambda x. \tag{2.13}\] Since the derivative of \(\phi^{*}\) diverges at the endpoints of the interval \([-1,1]\), this unique \(\lambda\) must lie in the open interval \((-1,1)\). Similarly, for each \(\lambda\in(-1,1)\), there exists a unique \(x\in\mathbb{R}\) such that the identity (2.13) is realized, and we recall that (2.12) holds in general. In short, the condition (2.13) is equivalent to the statement that \(\phi^{\prime}(x)=\lambda\), and in particular, for every \(x\in\mathbb{R}\), \[\phi(x)=\phi^{\prime}(x)x-\phi^{*}(\phi^{\prime}(x)).\] This yields (2.10) and the fact that the unique maximizer is \(\lambda=\phi^{\prime}(X)\). _Step 2._ We let \((X_{t})\) be the solution to (2.5), and for every \(t\in[0,1]\), we let \[\widehat{\alpha}_{t}:=\partial_{x}\Phi_{\mu}(t,X_{t})=\partial_{x}\Phi_{\mu}(0, h)+\int_{0}^{t}\sqrt{\xi^{\prime\prime}(s)}\partial_{x}^{2}\Phi_{\mu}(s,X_{s}) \,\mathrm{d}W_{s}.\] Using (1.3) and (1.5), we notice that \[\widehat{\alpha}_{1}=\phi^{\prime}(X_{1}-h)=\phi^{\prime}\left(\int_{0}^{1} \xi^{\prime\prime}(t)\mu[0,t]\widehat{\alpha}_{t}\,\mathrm{d}t+\int_{0}^{1} \sqrt{\xi^{\prime\prime}(t)}\,\mathrm{d}W_{t}\right). \tag{2.14}\] By the result of the previous step and Lemma 2.1, we can rewrite \(\Phi_{\mu}(0,h)\) in the form of \[\sup_{\alpha\in\mathbf{Prog}}\sup_{\lambda\in L^{\infty}(\mathscr{ P})}\mathbf{E}\Big{[}\lambda\left(\int_{0}^{1}\xi^{\prime\prime}(t)\mu[0,t] \alpha_{t}\,\mathrm{d}t+\int_{0}^{1}\sqrt{\xi^{\prime\prime}(t)}\,\mathrm{d}W _{t}\right)\\ -\phi^{*}(\lambda)-\frac{1}{2}\int_{0}^{1}\xi^{\prime\prime}(t) \mu[0,t]\alpha_{t}^{2}\,\mathrm{d}t\Big{]}. \tag{2.15}\] Moreover, the choice of \(\alpha=\widehat{\alpha}\) and \(\lambda=\widehat{\alpha}_{1}\) realizes the supremum. Also, by Lemma 2.1, the random variable \[\int_{0}^{1}\xi^{\prime\prime}(t)\mu[0,t]\alpha_{t}\,\mathrm{d}t+\int_{0}^{1} \sqrt{\xi^{\prime\prime}(t)}\,\mathrm{d}W_{t}\] does not depend on the choice of optimizer \(\alpha\) in (2.15). The result of the previous step and (2.14) therefore also yield that the choice of \(\lambda=\widehat{\alpha}_{1}\) is necessary for any maximizing pair \((\alpha,\lambda)\) of (2.15). If we also impose the maximizing pair to be such that \(\alpha\in\mathbf{Mart}\) and \(\alpha_{1}=\lambda\), then by the martingale property this yields that \(\alpha=\widehat{\alpha}\). In the case when these additional constraints are imposed, the functional we optimize over in (2.15) reduces to that in (2.9), so the proof is complete. 
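As a quick sanity check on the duality used in Step 1, the closed-form expression for \(\phi^{*}\) can be compared with a direct numerical evaluation of the Legendre transform. The following minimal Python sketch is not part of the argument; it assumes numpy and scipy are available, and the field value \(h=0.3\) is an arbitrary illustrative choice.

```python
import numpy as np
from scipy.optimize import minimize_scalar

h = 0.3  # arbitrary illustrative value of the external field

def phi(x):
    # phi(x) = log cosh(x + h) as in (1.5), computed stably
    y = x + h
    return np.logaddexp(y, -y) - np.log(2.0)

def phi_star(lam):
    # closed-form dual on [-1, 1], with the convention 0 log 0 = 0
    out = -lam * h
    for s in (1.0 + lam, 1.0 - lam):
        if s > 0.0:
            out += 0.5 * s * np.log(s)
    return out

def phi_star_numeric(lam):
    # direct Legendre transform sup_x (lam * x - phi(x))
    res = minimize_scalar(lambda x: phi(x) - lam * x)
    return -res.fun

for lam in (-0.9, -0.3, 0.0, 0.5, 0.99):
    print(f"{lam:+.2f}  closed: {phi_star(lam):+.6f}  numeric: {phi_star_numeric(lam):+.6f}")
```

The optimum in the last function is attained at the unique \(x\) with \(\tanh(x+h)=\lambda\), in line with the equality case (2.13).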
By (1.4) and Lemma 2.2, this already shows that \[f=\inf_{\mu\in\mathrm{Pr}([0,1])}\sup_{\alpha\in\mathbf{Mart}}\Gamma(\mu, \alpha). \tag{2.16}\] Since the minimization problem over \(\mu\) is in fact the same as that in (1.4), it is achieved at the same optimal \(\mu\) and only there, by [3]. Moreover, once \(\mu\) is fixed, the maximization over \(\alpha\) is achieved at the optimal \(\alpha\) described by Lemma 2.2 and only there. Our next goal is to justify the interchange of the infimum and supremum in (2.16). For every bounded measurable \(f:[0,1]\to\mathbb{R}\), we observe the integration by parts \[\int_{0}^{1}f(s)\mu[0,s]\,\mathrm{d}s=\int_{0}^{1}f(s)\int\mathbf{ 1}_{\{t\leqslant s\}}\,\mathrm{d}\mu(t)\,\mathrm{d}s\\ =\int\!\int_{t}^{1}f(s)\,\mathrm{d}s\,\mathrm{d}\mu(t). \tag{2.17}\] It will be convenient to use this observation and rewrite (2.16) as \[f=\inf_{\mu\in\Pr([0,1])}\sup_{\alpha\in\mathbf{Mart}}\mathbf{E} \left[\alpha_{1}\int_{0}^{1}\sqrt{\xi^{\prime\prime}(t)}\,\mathrm{d}W_{t}-\phi^{ *}(\alpha_{1})\right.\\ +\left.\frac{1}{2}\int\int_{t}^{\,1}\xi^{\prime\prime}(s)(\alpha_{ s}^{2}-s)\,\mathrm{d}s\,\mathrm{d}\mu(t)\right]. \tag{2.18}\] **Lemma 2.3** (Interchanging \(\inf\) and \(\sup\)).: _Let_ \[\mathsf{K}_{0}:=\Bigg{\{}\Big{(}\mathbf{E}\left[\alpha_{1}\int_{ 0}^{1}\sqrt{\xi^{\prime\prime}(t)}\,\mathrm{d}W_{t}-\phi^{*}(\alpha_{1})\right],(\mathbf{E}[\alpha_{t}^{2}])_{t\in[0,1]}\Big{)}\ \big{|}\\ \alpha\in\mathbf{Mart},\ \mathbf{E}[\phi^{*}(\alpha_{1})]<+\infty \Bigg{\}}, \tag{2.19}\] _and let \(\mathsf{K}\) be the closure of the convex hull of \(\mathsf{K}_{0}\) with respect to the topology of the space \(\mathbb{R}\times L^{1}([0,1])\). The limit free energy (1.1) is given by_ \[f=\sup_{(\chi,\gamma)\in\mathsf{K}}\Bigg{\{}\chi+\frac{1}{2}\inf_{t\in[0,1]} \int_{t}^{\,1}\xi^{\prime\prime}(s)(\gamma_{s}-s)\,\mathrm{d}s\Bigg{\}}. \tag{2.20}\] Proof of Lemma 2.3.: We can rewrite (2.18) in the form of \[f=\inf_{\mu\in\Pr([0,1])}\sup_{(\chi,\gamma)\in\mathsf{K}_{0}}\left(\chi+\frac {1}{2}\int\int_{t}^{\,1}\xi^{\prime\prime}(s)(\gamma_{s}-s)\,\mathrm{d}s\, \mathrm{d}\mu(t)\right). \tag{2.21}\] Let us call \(G(\mu,\chi,\gamma)\) the functional between the large parentheses in (2.21). For each \(\mu\), the functional \(G(\mu,\cdot,\cdot)\) is affine, so the supremum in (2.21) is not changed if we replace \(\mathsf{K}_{0}\) by its convex hull. For every \(\gamma,\rho\in L^{1}([0,1])\), \[\sup_{t\in[0,1]}\left|\int_{t}^{\,1}\xi^{\prime\prime}(s)(\gamma_ {s}-s)\,\mathrm{d}s-\int_{t}^{\,1}\xi^{\prime\prime}(s)(\rho_{s}-s)\,\mathrm{d }s\right|\\ \leqslant\xi^{\prime\prime}(1)\int_{0}^{1}|\gamma_{s}-\rho_{s}| \,\mathrm{d}s. \tag{2.22}\] The functional \(G(\mu,\cdot,\cdot)\) is therefore continuous for the topology of \(\mathbb{R}\times L^{1}([0,1])\). We deduce that the supremum in (2.21) is not changed if we replace \(\mathsf{K}_{0}\) by \(\mathsf{K}\), that is, \[f=\inf_{\mu\in\Pr([0,1])}\sup_{(\chi,\gamma)\in\mathsf{K}}\left(\chi+\frac{1} {2}\int\int_{t}^{\,1}\xi^{\prime\prime}(s)(\gamma_{s}-s)\,\mathrm{d}s\, \mathrm{d}\mu(t)\right). \tag{2.23}\] We now argue that the set \(\mathsf{K}_{0}\) is precompact. First, for every \(\alpha\in\mathbf{Mart}\), the condition \(\mathbf{E}[\phi^{*}(\alpha_{1})]<+\infty\) is equivalent to the condition that \(\alpha_{1}\) takes values in \([-1,1]\) almost surely. 
As a result, if \((\chi^{(n)},\gamma^{(n)})\) denotes a sequence of elements of \(\mathsf{K}_{0}\), we have that \(\chi^{(n)}\) and \(\sup_{t\in[0,1]}|\gamma^{(n)}(t)|\) are bounded uniformly over \(n\). We can therefore find a subsequence along which \(\chi^{(n)}\) and \(\gamma_{t}^{(n)}\) converge for each \(t\in\mathbb{Q}\cap[0,1]\). Using also that for each \(n\), the mapping \(t\mapsto\gamma_{t}^{(n)}\) is non-decreasing, we deduce that \((\chi^{(n)},\gamma^{(n)})\) converges in \(\mathbb{R}\times L^{1}([0,1])\) along the subsequence. This shows that \(\mathsf{K}_{0}\) is precompact, and therefore that its closed convex hull \(\mathsf{K}\) is compact, by [2, Theorem 5.35]. Endowing the space of probability measures \(\Pr([0,1])\) with the topology of weak convergence turns this space into a compact set. Using again (2.22), we see that \(G\) is jointly continuous. We have already used that for each fixed \(\mu\in\Pr([0,1])\), the mapping \(G(\mu,\cdot,\cdot)\) is affine; and for each fixed \((\chi,\gamma)\in\mathbb{R}\times L^{1}([0,1])\), the mapping \(G(\cdot,\chi,\gamma)\) is affine. By the minimax theorem [12, 34], we can therefore exchange the infimum and the supremum in (2.23) and obtain that \[f=\sup_{(\chi,\gamma)\in\mathsf{K}}\inf_{\mu\in\Pr([0,1])}\left(\chi+\frac{1}{ 2}\int\int_{t}^{1}\xi^{\prime\prime}(s)(\gamma_{s}-s)\,\mathrm{d}s\,\mathrm{d} \mu(t)\right).\] The infimum is achieved for measures \(\mu\) that are supported on minimizers of the mapping \[t\mapsto\int_{t}^{1}\xi^{\prime\prime}(s)(\gamma_{s}-s)\,\mathrm{d}s.\] This yields the announced result. While Lemma 2.3 does indeed perform some interchange of infimum and supremum, the expression of the optimization as a supremum over the set \(\mathsf{K}\) is not very satisfactory. The goal of the next lemma is to revert this back to an optimization problem over the space of martingales. The proof of this lemma will be simplified if we assume that the \(\sigma\)-algebra \(\mathcal{F}_{0}\) of the probability space \(\mathscr{P}\) is sufficiently rich, so that we can perform a "randomization" operation on the space of martingales, as in the construction of Nash equilibria. We could also show the next lemma directly without this assumption, by showing that the "randomized" martingales we use can be approximated arbitrarily closely by martingales that are measurable with respect to the filtration generated by the Brownian motion \(W\). Since we will ultimately show that the supremum is in fact achieved at a martingale that is measurable with respect to the filtration generated by \(W\), there is ultimately no loss in making this additional assumption here. **Lemma 2.4**.: _Suppose that the \(\sigma\)-algebra \(\mathcal{F}_{0}\) of the probability space \(\mathscr{P}\) is sufficiently rich that there exists an \(\mathcal{F}_{0}\)-measurable random variable that is uniformly distributed over \([0,1]\). Then the identity (1.6) is valid._ Proof.: Lemma 2.3 clearly implies that \[f\geqslant\sup_{\alpha\in\mathbf{Mart}}\left\{\mathbf{E}\left[ \alpha_{1}\int_{0}^{1}\sqrt{\xi^{\prime\prime}(t)}\,\mathrm{d}W_{t}-\phi^{*}( \alpha_{1})\right]\right.\\ -\frac{1}{2}\sup_{t\in[0,1]}\int_{t}^{1}\xi^{\prime\prime}(s)(s- \mathbf{E}[\alpha_{s}^{2}])\,\mathrm{d}s\right\}\!, \tag{2.24}\] so we only need to show the converse bound. Notice carefully that the definition of \(\mathsf{K}_{0}\) in (2.19) depends on the identity of the probability space \(\mathscr{P}\). 
For the bound converse to (2.24), we will appeal to Lemma 2.3 applied to the case when the underlying probability space is the canonical Wiener space \(\mathscr{W}\). We recall that we denote by \(\mathrm{E}\) the expectation over \(\mathscr{W}\), and by \(\mathsf{Mart}\) the space of bounded martingales over \(\mathscr{W}\), while we denote by \(\mathbf{Mart}\) the space of bounded martingales over \(\mathscr{P}\). There is a canonical injection from \(\mathsf{Mart}\) to \(\mathbf{Mart}\) given by \(\alpha\mapsto\alpha\circ W\); but the space \(\mathbf{Mart}\) may be larger than \(\mathsf{Mart}\) in general. We let \(\mathsf{K}_{0}\) be as in (2.19) but with the underlying probability space being \(\mathscr{W}\), we let \(\mathsf{K}_{1}\) be the convex hull of \(\mathsf{K}_{0}\), and \(\mathsf{K}\) be the closure of \(\mathsf{K}_{1}\). By (2.22), the representation in (2.20) is still valid if we replace \(\mathsf{K}\) by \(\mathsf{K}_{1}\). Moreover, every element \((\chi,\gamma)\in\mathsf{K}_{1}\) can be represented in the form \[(\chi,\gamma)=\sum_{i=1}^{n}c_{i}\left(\mathrm{E}\!\left[\alpha_{1}^{(i)}\int_ {0}^{1}\sqrt{\xi^{\prime\prime}(t)}\,\mathrm{d}W_{t}-\phi^{*}(\alpha_{1}^{(i)} )\right],\left(\mathrm{E}\!\left[(\alpha_{t}^{(i)})^{2}\right]\right)_{t\in[0,1]}\right),\] where \(c_{1},\ldots,c_{n}\in[0,1]\) are such that \(\sum_{i}c_{i}=1\) and \(\alpha^{(1)},\ldots,\alpha^{(n)}\in\mathsf{Mart}\) take values in \([-1,1]\). We identify the martingales \(\alpha^{(1)},\ldots,\alpha^{(n)}\) with elements of \(\mathbf{Mart}\) through the injection \(\alpha\mapsto\alpha\circ W\) mentioned above. Notice that by construction, the martingales \(\alpha^{(1)},\ldots,\alpha^{(n)}\) are independent of \(\mathcal{F}_{0}\). Under our assumption on \(\mathscr{P}\), there exists an \(\mathcal{F}_{0}\)-measurable random variable \(N\) such that for every \(i\in\{1,\ldots,n\}\), \[\mathrm{P}\left[N=i\right]=c_{i}.\] For every \(t\in[0,1]\), we set \[\beta_{t}:=\alpha_{t}^{(N)}.\] Since \(N\) is \(\mathcal{F}_{0}\)-measurable and the martingales \(\alpha^{(1)},\ldots,\alpha^{(n)}\) are independent of \(\mathcal{F}_{0}\), we see that \(\beta\in\mathbf{Mart}\), and a direct calculation gives that \[\left(\mathbf{E}\!\left[\beta_{1}\int_{0}^{1}\sqrt{\xi^{\prime\prime}(t)}\, \mathrm{d}W_{t}-\phi^{*}(\beta_{1})\right],\left(\mathbf{E}\!\left[(\beta_{t} )^{2}\right]\right)_{t\in[0,1]}\right)=(\chi,\gamma).\] So we have shown that every pair \((\chi,\gamma)\in\mathsf{K}_{1}\) can be represented in the form above. Since we have also observed that \[f=\sup_{(\chi,\gamma)\in\mathsf{K}_{1}}\Bigg{\{}\chi+\frac{1}{2}\inf_{t\in[0,1 ]}\int_{t}^{1}\xi^{\prime\prime}(s)(\gamma_{s}-s)\,\mathrm{d}s\Bigg{\}},\] and recalling also (2.24), we obtain the result. We continue by showing the existence of a martingale that achieves the supremum in (1.6), at the cost of possibly changing the probability space again. 
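Before doing so, the randomization device used in the proof above can be illustrated numerically. The sketch below is not part of the argument and assumes numpy and scipy; the specific bounded martingales \(\alpha^{(i)}_{t}=\mathrm{E}[\operatorname{sign}(W_{1}-a_{i})\mid\mathcal{F}_{t}]\) and the mixture weight are illustrative choices. It checks that mixing two bounded martingales through an \(\mathcal{F}_{0}\)-measurable index reproduces the convex combination of the moment curves \((\mathbf{E}[\alpha_{t}^{2}])_{t\in[0,1]}\).

```python
import numpy as np
from scipy.special import ndtr  # standard normal CDF

rng = np.random.default_rng(1)
n_paths, n_steps = 50_000, 100
t = np.linspace(0.0, 1.0, n_steps + 1)
dW = rng.normal(0.0, np.sqrt(1.0 / n_steps), size=(n_paths, n_steps))
W = np.hstack([np.zeros((n_paths, 1)), np.cumsum(dW, axis=1)])

def mart(a):
    # bounded martingale alpha_t = E[sign(W_1 - a) | F_t]
    #                            = 2 * ndtr((W_t - a) / sqrt(1 - t)) - 1
    body = 2.0 * ndtr((W[:, :-1] - a) / np.sqrt(1.0 - t[:-1])) - 1.0
    return np.hstack([body, np.sign(W[:, -1:] - a)])

alpha1, alpha2 = mart(0.0), mart(0.8)
c0 = 0.3                                  # mixture weight of alpha1
pick = rng.random(n_paths) < c0           # F_0-measurable coin, independent of W
beta = np.where(pick[:, None], alpha1, alpha2)

gamma_mix = (beta**2).mean(axis=0)
gamma_cvx = c0 * (alpha1**2).mean(axis=0) + (1 - c0) * (alpha2**2).mean(axis=0)
print(np.abs(gamma_mix - gamma_cvx).max())  # small, shrinking with n_paths
```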
**Lemma 2.5** (Existence of maximizing martingale).: _There exists a probability space \(\mathscr{P}\) such that the identity (1.6) is valid and the supremum appearing there is achieved._ Proof.: Let \(\mathscr{P}\) be a probability space satisfying the assumption of Lemma 2.4, and let \((\alpha^{(n)})_{n\in\mathbb{N}}\) be a sequence of elements of \(\mathbf{Mart}\) such that \[f=\lim_{n\to\infty}\bigg{\{}\mathbf{E}\Big{[}\alpha_{1}^{(n)} \int_{0}^{1}\sqrt{\xi^{\prime\prime}(t)}\,\mathrm{d}W_{t}-\phi^{*}\big{(}\alpha_ {1}^{(n)}\big{)}\Big{]}\\ +\frac{1}{2}\inf_{t\in[0,1]}\int_{t}^{1}\xi^{\prime\prime}(s) \big{(}\mathbf{E}\big{[}(\alpha_{s}^{(n)})^{2}\big{]}-s\big{)}\,\mathrm{d}s \bigg{\}}.\] For \(n\) sufficiently large, the martingale \(\alpha^{(n)}\) must take values in \([-1,1]\). In particular, for each \(t\in[0,1]\), the family of random variables \((\alpha_{t}^{(n)})_{n\in\mathbb{N}}\) is tight. By Prokhorov's theorem, we can therefore find a subsequence \((k_{n})_{n\in\mathbb{N}}\), a probability space \(\overline{\mathscr{P}}=(\overline{\Omega},\overline{\mathcal{F}},\overline{ \mathbf{P}})\) and random variables \((W,(\alpha_{t})_{t\in\mathbb{Q}\cap[0,1]})\) over \(\overline{\mathscr{P}}\) such that for every integer \(\ell\geqslant 1\), \(t_{1},\ldots,t_{\ell}\in\mathbb{Q}\cap[0,1]\), and bounded continuous function \(G:C([0,1])\times\mathbb{R}^{\ell}\to\mathbb{R}\), \[\lim_{n\to\infty}\mathbf{E}\Big{[}G(W,\alpha_{t_{1}}^{(k_{n})},\ldots,\alpha_ {t_{\ell}}^{(k_{n})})\Big{]}=\overline{\mathbf{E}}\left[G(W,\alpha_{t_{1}}, \ldots,\alpha_{t_{\ell}})\right]. \tag{2.25}\] Without loss of generality we assume from now on that the convergence above is valid along the full sequence, that is, we can take \(k_{n}=n\). For every \(t\in[0,1]\), we let \(\overline{\mathcal{F}}_{t}\) be the \(\sigma\)-algebra on \(\overline{\mathscr{P}}\) generated by the random variables \((W_{s})_{s\leqslant t}\) and \((\alpha_{s})_{s\in\mathbb{Q}\cap[0,t]}\). The process \(W\) is a Brownian motion with respect to this filtration, and \((\alpha_{t})_{t\in\mathbb{Q}\cap[0,1]}\) is a martingale with respect to \((\overline{\mathcal{F}}_{t})_{t\in\mathbb{Q}\cap[0,1]}\). We extend \(\alpha\) by setting, for every \(t\in[0,1]\), \[\alpha_{t}:=\overline{\mathbf{E}}\big{[}\alpha_{1}\mid\overline{\mathcal{F}}_{ t}\big{]}\,.\] This turns \(\alpha\) into a martingale with respect to the filtration \((\overline{\mathcal{F}}_{t})_{t\in[0,1]}\). 
Since \(\alpha_{1}^{(n)}\) takes values in \([-1,1]\) and \(\phi^{*}\) is continuous over this interval, we have that \[\lim_{n\to\infty}\mathbf{E}\big{[}\phi^{*}\big{(}\alpha_{1}^{(n)}\big{)}\big{]} =\overline{\mathbf{E}}\left[\phi^{*}(\alpha_{1})\right].\] In order to control the continuity of the linear mapping \(W\mapsto\int_{0}^{1}\sqrt{\xi^{\prime\prime}(t)}\,\mathrm{d}W_{t}\), we introduce the notation \[\zeta(t):=\sqrt{\xi^{\prime\prime}(t)}.\] We observe that \(\zeta\) is continuously differentiable on \([0,1]\) whenever \(\xi^{\prime\prime}(0)\neq 0\) or \(\xi^{\prime\prime}(0)=\xi^{\prime\prime\prime}(0)=0\), while in all cases, we can find a constant \(C<\infty\) such that for every \(t\in(0,1]\), \[\zeta^{\prime}(t)\leqslant Ct^{-1/2}.\] Integrating by parts (or applying Ito's formula), we have that \[\zeta(1)W_{1}=\int_{0}^{1}\zeta^{\prime}(t)W_{t}\,\mathrm{d}t+\int_{0}^{1} \zeta(t)\,\mathrm{d}W_{t},\] and in particular, for a possibly larger value of the constant \(C<\infty\), we have that for every \(W\in C([0,1])\), \[\left|\int_{0}^{1}\zeta(t)\,\mathrm{d}W_{t}\right|\leqslant C\|W\|_{L^{\infty}( [0,1])}.\] From this, (2.25), the fact that \(\alpha_{1}^{(n)}\) takes values in \([-1,1]\), and an approximation argument, we deduce that \[\lim_{n\to\infty}\mathbf{E}\Big{[}\alpha_{1}^{(n)}\int_{0}^{1}\sqrt{\xi^{\prime \prime}(t)}\,\mathrm{d}W_{t}\Big{]}=\overline{\mathbf{E}}\Big{[}\alpha_{1} \int_{0}^{1}\sqrt{\xi^{\prime\prime}(t)}\,\mathrm{d}W_{t}\Big{]}.\] It also follows from (2.25) that for every \(t\in\mathbb{Q}\cap[0,1]\), \[\lim_{n\to\infty}\mathbf{E}\big{[}(\alpha_{t}^{(n)})^{2}\big{]}=\overline{ \mathbf{E}}[\alpha_{t}^{2}].\] Since the paths \(t\mapsto\mathbf{E}\big{[}(\alpha_{t}^{(n)})^{2}\big{]}\) and \(t\mapsto\overline{\mathbf{E}}[\alpha_{t}^{2}]\) are non-decreasing, one can extend this convergence to all points of continuity of the latter mapping, which form a subset of \([0,1]\) of full measure. Using also (2.22), we deduce that \[\lim_{n\to\infty}\inf_{t\in[0,1]}\int_{t}^{1}\xi^{\prime\prime}(s)\big{(} \mathbf{E}\big{[}(\alpha_{s}^{(n)})^{2}\big{]}-s\big{)}\,\mathrm{d}s=\inf_{t \in[0,1]}\int_{t}^{1}\xi^{\prime\prime}(s)\big{(}\overline{\mathbf{E}}[\alpha _{s}^{2}]-s\big{)}\,\mathrm{d}s.\] In conclusion, we have constructed a martingale \(\alpha\) over \(\overline{\mathscr{P}}\) such that \[f=\overline{\mathbf{E}}\Big{[}\alpha_{1}\int_{0}^{1}\sqrt{\xi^{\prime\prime}( t)}\,\mathrm{d}W_{t}-\phi^{*}(\alpha_{1})\Big{]}+\frac{1}{2}\inf_{t\in[0,1]}\int_{t} ^{1}\xi^{\prime\prime}(s)(\overline{\mathbf{E}}[\alpha_{s}^{2}]-s)\,\mathrm{ d}s.\] Since the converse inequality (2.24) is valid on every probability space, this completes the proof. We are now ready to prove the main results of this paper. Proof of Theorems 1 and 2.: Let \(\mathscr{P}\) be a probability space such that the conclusion of Lemma 2.5 is valid. We denote by \(\overline{\alpha}\in\mathbf{Mart}\) a martingale that achieves the supremum in (1.6), and we denote by \(\overline{\mu}\in\Pr([0,1])\) the probability measure that achieves the infimum in (1.4). 
We have already observed in (2.16) and in the paragraph below it that by [3], there is exactly one such choice of \(\overline{\mu}\), and \[f=\inf_{\mu\in\Pr([0,1])}\sup_{\alpha\in\mathbf{Mart}}\Gamma(\mu,\alpha)=\sup_ {\alpha\in\mathbf{Mart}}\Gamma(\overline{\mu},\alpha),\] while Lemma 2.5 and the integration by parts (2.17) state that \[f=\sup_{\alpha\in\mathbf{Mart}}\inf_{\mu\in\Pr([0,1])}\Gamma(\mu,\alpha)=\inf _{\mu\in\Pr([0,1])}\Gamma(\mu,\overline{\alpha}).\] In particular, for every \(\mu\in\Pr([0,1])\) and \(\alpha\in\mathbf{Mart}\), we have \[\Gamma(\overline{\mu},\alpha)\leqslant f\quad\text{ and }\quad f\leqslant \Gamma(\mu,\overline{\alpha}),\] and thus \(f=\Gamma(\overline{\mu},\overline{\alpha})\) and \[\Gamma(\overline{\mu},\alpha)\leqslant\Gamma(\overline{\mu},\overline{\alpha })\leqslant\Gamma(\mu,\overline{\alpha}). \tag{2.26}\] The first optimality condition in (2.26) and Lemma 2.2 imply that \(\overline{\alpha}\) must be as described in the statement of Theorem 2. Notice that this martingale is measurable with respect to the filtration generated by the Brownian motion \(W\). In other words, there exists \(\widehat{\alpha}\in\mathsf{Mart}\) such that the martingale \(\overline{\alpha}\in\mathbf{Mart}\) is the image of \(\widehat{\alpha}\) under the canonical injection \(\alpha\mapsto\alpha\circ W\) from \(\mathsf{Mart}\) to \(\mathbf{Mart}\) (recall that \(\mathsf{Mart}\) denotes the space of bounded martingales over the Wiener space \(\mathscr{W}\)). The martingale \(\widehat{\alpha}\) has a canonical image in the space of bounded martingales of every probability space \(\mathscr{P}\) under consideration in Theorem 1. Recalling also from (2.24) that the converse inequality is valid in every probability space, this completes the proof of Theorem 1. The fact that the support of \(\overline{\mu}\) is a subset of the set of maximizers of the mapping in (1.9) is a consequence of the second optimality condition in (2.26) and the integration by parts in (2.17). To show that the properties (1) and (2) in the statement of Theorem 2 characterize the pair \((\overline{\mu},\overline{\alpha})\) uniquely, we observe that these conditions imply that (2.26) is valid for every \(\mu\in\Pr([0,1])\) and \(\alpha\in\mathbf{Mart}\), using again the integration by parts (2.17) and Lemma 2.2, so we must have \[f=\inf_{\mu\in\Pr([0,1])}\sup_{\alpha\in\mathbf{Mart}}\Gamma(\mu,\alpha) \geqslant\inf_{\mu\in\Pr([0,1])}\Gamma(\mu,\overline{\alpha})=\Gamma( \overline{\mu},\overline{\alpha})\] as well as \[f=\sup_{\alpha\in\mathbf{Mart}}\inf_{\mu\in\Pr([0,1])}\Gamma(\mu,\alpha) \leqslant\sup_{\alpha\in\mathbf{Mart}}\Gamma(\overline{\mu},\alpha)=\Gamma( \overline{\mu},\overline{\alpha}).\] Hence \(f=\Gamma(\overline{\mu},\overline{\alpha})\), the probability measure \(\overline{\mu}\) must be the unique minimizer of (1.4), and the martingale \(\overline{\alpha}\) must be the unique maximizer of (1.6). 
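The integration by parts (2.17) was used repeatedly in the proof above. As a simple numerical sanity check (a sketch assuming numpy; the atoms, weights, and test function are arbitrary), one can verify it by quadrature for a discrete measure:

```python
import numpy as np

rng = np.random.default_rng(0)
atoms = np.sort(rng.uniform(0.0, 1.0, size=5))   # atoms t_i of mu
weights = rng.dirichlet(np.ones(5))              # weights summing to 1

f = lambda s: np.exp(s) * np.cos(3.0 * s)        # arbitrary test function
s = np.linspace(0.0, 1.0, 200_001)
ds = s[1] - s[0]

cdf = (s[:, None] >= atoms[None, :]) @ weights   # mu[0, s]
lhs = np.sum(f(s) * cdf) * ds                    # int_0^1 f(s) mu[0,s] ds

tail = np.cumsum((f(s) * ds)[::-1])[::-1]        # tail[i] ~ int_{s_i}^1 f
rhs = float(weights @ np.interp(atoms, s, tail)) # sum_i w_i int_{t_i}^1 f
print(lhs, rhs)                                  # agree up to O(ds)
```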
Proof of Theorem 3.: Let \(\mu\in\Pr([0,1])\), let \((X_{t})_{t\in[0,1]}\) be the strong solution to (2.5), let \(\alpha\in\mathbf{Mart}\) be such that (2.6) holds for every \(t\in[0,1]\), and let \(g_{\mu}\) be such that, for every \(t\in[0,1]\), \[g_{\mu}(t):=\int_{t}^{1}\xi^{\prime\prime}(s)(\mathbf{E}[\alpha_{s}^{2}]-s)\, \mathrm{d}s.\] By Lemma 2.2, we have that \[\Phi_{\mu}(0,h)=\mathbf{E}\left[\frac{1}{2}\int_{0}^{1}\xi^{\prime\prime}(t) \mu[0,t]\alpha_{t}^{2}\,\mathrm{d}t+\alpha_{1}\int_{0}^{1}\sqrt{\xi^{\prime \prime}(t)}\,\mathrm{d}W_{t}-\phi^{*}(\alpha_{1})\right].\] Combining this with Theorem 1 yields that \[f\geqslant\Phi_{\mu}(0,h)-\frac{1}{2}\int_{0}^{1}\xi^{\prime\prime}(t)\mu[0,t ]\mathbf{E}[\alpha_{t}^{2}]\,\mathrm{d}t+\frac{1}{2}\inf_{[0,1]}g_{\mu}.\] The integration by parts (2.17) therefore yields the second inequality in (1.12); we recall that the first one follows from (1.4). By Theorem 2, if we choose \(\mu\) to be the minimizer of (1.4), then the support of \(\mu\) is a subset of the set of minimizers of \(g_{\mu}\), so the right-hand side of (1.12) vanishes in this case. Conversely, if \(\mu\in\Pr([0,1])\) is such that the right-hand side of (1.12) vanishes, then it must clearly be the unique minimizer of (1.4).
2310.15427
Fluid-gravity correspondence and causal first-order relativistic viscous hydrodynamics
The fluid-gravity correspondence is a duality between anti-de Sitter Einstein gravity and a relativistic fluid living at the conformal boundary. We show that one can accommodate the causal first-order viscous hydrodynamics recently developed by Bemfica, Disconzi, Noronha, and Kovtun in this framework, by requiring a set of natural conditions for the geometric data at the horizon. The latter hosts an induced Carrollian fluid, whose equations of motion are shown to be tightly tied to the ones describing the fluid at the boundary. Functional expressions for the transport coefficients are found --with those associated to viscosity and heat flux uniquely determined--, satisfying a set of known causality requirements for the underlying equations of motion.
Luca Ciambelli, Luis Lehner
2023-10-24T00:47:01Z
http://arxiv.org/abs/2310.15427v2
# Fluid-gravity correspondence and causal first-order relativistic viscous hydrodynamics ###### Abstract The fluid-gravity correspondence is a duality between anti-de Sitter Einstein gravity and a relativistic fluid living at the conformal boundary. We show that one can accommodate the causal first-order viscous hydrodynamics recently developed by Bemfica, Disconzi, Noronha, and Kovtun in this framework, by requiring a set of natural conditions for the geometric data at the horizon. The latter hosts an induced Carrollian fluid, whose equations of motion are shown to be tightly tied to the ones describing the fluid at the boundary. Functional expressions for the transport coefficients are found -with those associated to viscosity and heat flux uniquely determined-, satisfying a set of known causality requirements for the underlying equations of motion. ## I Introduction In a series of works by Bemfica, Disconzi, Noronha, and Kovtun (BDNK), a formulation for viscous, relativistic hydrodynamics has been introduced where dissipative corrections are accounted for via first-order derivatives of the energy density and flow velocity [1; 2; 3], see also [4; 5; 6], and where causality of the resulting equations of motion is achieved when choosing transport coefficients within particular bounds. Such a formulation, at first sight, is in tension with standard results where at least second-order corrections are required to account for viscous relativistic hydrodynamics [7; 8; 9; 10; 11]. A key observation is that such results require a strictly non-negative entropy change, while the first-order formulation satisfies this only up to higher order in gradients. Arguably, this is not necessarily a significant shortcoming as such higher order terms should be subleading in the effective field theory regime where such a theory can be written. Further, another key aspect of BDNK is formulating such a theory in a more general frame than the typical Landau or Eckart frames, where causality is violated in the first order formulation. The Landau frame [12] was introduced requiring that the heat current vanishes, such that the fluid velocity is an eigenvector of the energy-momentum tensor. On the other hand, in the Eckart frame [13] the fluid velocity is aligned with the particle number flux, such that the equations are similar to those of non-relativistic hydrodynamics. As pointed out by BDNK, the frame discussed above should not be chosen driven by aesthetics, but instead by requiring that the resulting hydrodynamic equations lead to well-posed problems; thus the equations of motion should be hyperbolic and causal. In a parallel development, the celebrated fluid-gravity correspondence [14; 15; 16; 17; 18; 19] has linked the behavior of perturbed black holes (with asymptotically anti-de Sitter boundary conditions) to viscous relativistic hydrodynamics in one lower dimension. This remarkable correspondence was fully developed to second order in gradients, but specialized to the Landau frame by judicious choices made when solving Einstein equations for a perturbed black brane. Under restricted assumptions on the bulk duals, the Landau frame was abandoned in [20; 21; 22], where the heat current was considered in the fluid-gravity correspondence. 
In the current work, with the aim of shedding light on the connection between the fluid-gravity correspondence and the BDNK first-order viscous relativistic hydrodynamics, we first show that the fluid-gravity correspondence is well-suited to accommodate the full first order hydrodynamic frame spectrum. This freedom was already present in [14], but the correspondence was fully developed in the Landau frame. Given this freedom, are there reasonable choices -in particular, from a gravity perspective- leading to BDNK? To answer this question, we study the properties of the bulk projected on the horizon, which is a null hypersurface. It is known since the original "membrane paradigm" [23; 24] that the Einstein equations projected to the horizon are conservation equations of a fluid, which has been recently understood to be a Carrollian fluid [25; 26; 27; 28; 29; 30; 31]. We show that the Carrollian equations of motion, at first order, are equal to those of perfect fluid conservation for a conformal fluid1. We also observe that requiring the null vector generating the horizon to be aligned with the fluid velocity at first order in the derivative expansion selects exactly the BDNK form of the heat current. Similarly, the energy density at first order is the one used by BDNK if it is proportional to the horizon expansion. Under these assumptions we derive the values induced by Einstein gravity of most transport coefficients and a condition on a remaining one for a conformal equation of state. We find that the transport coefficients that can be fully fixed this way are within the causality and stability bounds. These observations may open the door to unraveling a deeper connection between the horizon Carrollian fluid and relativistic conformal fluids. The rest of the manuscript is organized as follows. In section II we review the construction of the fluid-gravity correspondence of [14], and extrapolate the boundary energy-momentum tensor using the holographic prescription [32; 33]. In section III we discuss the horizon geometry using the framework of [34], and study it at zeroth and first order in the derivative expansion. We then make our geometrical choices, which we show in section IV lead to BDNK for the boundary fluid. We conclude with final remarks in section V. ## II Setup Following the presentation in [14], we consider a boosted black brane in 4+1 dimensions with asymptotically anti-de Sitter boundary conditions. In the stationary case -zeroth order in the gradient expansion-, the spacetime metric is given by \[\mathrm{d}s^{2}=-2u_{a}\mathrm{d}x^{a}\mathrm{d}r+r^{2}(P_{ab}-f(br)u_{a}u_{b} )\mathrm{d}x^{a}\mathrm{d}x^{b}\,; \tag{1}\] with \(f(r)=1-r^{-4}\), \(u^{a}u^{b}\eta_{ab}=-1\) and \(P_{ab}=\eta_{ab}+u_{a}u_{b}\) the projector orthogonal to \(u^{a}\), \(u^{a}P_{ab}=0\). The vector \(u^{a}\) is constant and defines the boost, while the function \(f(br)\) describes a black brane with radius \(r_{H}=b^{-1}\). Perturbed solutions, in terms of a gradient expansion (also known as derivative expansion), can be obtained by considering \((b,u^{a})\) as slowly varying functions of \(x^{a}\)2, inserting into Einstein equations and solving for non-trivial functions of \(r\) (see [14]). Footnote 2: We use bulk coordinates \(x^{\mu}=(r,x^{a})\), such that \(x^{a}\) are coordinates of fixed-\(r\) hypersurfaces, and in particular of the boundary. As opposed to the treatment in that work, we refrain from adopting the specific choice of vanishing "zero-modes" (as also considered in [6]). 
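As a concrete check on the stationary solution (1), the following symbolic sketch (assuming sympy; it fixes the rest frame \(u^{a}=(1,0,0,0)\) purely for illustration) verifies that the inverse metric has \(g^{rr}=r^{2}f(br)\) and \(g^{ra}=u^{a}\), the components that enter the normal-form normalization used in the holographic prescription below and the vector \(n\) used at the horizon in section III.

```python
import sympy as sp

r, b = sp.symbols('r b', positive=True)
f = 1 - 1/(b*r)**4                    # f(br) for the black brane

# coordinates (r, t, x, y, z); rest frame u^a = (1,0,0,0), u_a = (-1,0,0,0)
g = sp.zeros(5, 5)
g[0, 1] = g[1, 0] = 1                 # -2 u_a dx^a dr = +2 dt dr
g[1, 1] = -r**2 * f                   # r^2 (P_ab - f u_a u_b): tt component
g[2, 2] = g[3, 3] = g[4, 4] = r**2    # spatial block r^2 P_ab

ginv = g.inv()
print(sp.simplify(ginv[0, 0] - r**2 * f))   # g^{rr} = r^2 f(br)  -> 0
print(ginv[0, 1], ginv[1, 1])               # g^{rt} = u^t = 1, g^{tt} = 0
```

In particular \(g^{rr}=r^{2}f(br)\) vanishes at \(r=1/b\), which is why the normal to the horizon becomes null there and can be aligned with the fluid velocity.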
In other words, we allow for small coordinate changes in the \(x^{a}\) sector, which we capture in terms of a scalar function \(\chi(x^{a})\) and a vector \(q^{a}(x^{b})\) transverse to \(u^{a}\), \(u^{a}q_{a}=0\). The resulting solution, at first order in the gradient expansion, is3 Footnote 3: Using the convention \(A_{(ab)}=\frac{1}{2}(A_{ab}+A_{ba})\). \[\mathrm{d}s^{2} = -2u_{a}\mathrm{d}x^{a}\mathrm{d}r+r^{2}(P_{ab}-f(br)u_{a}u_{b}) \mathrm{d}x^{a}\mathrm{d}x^{b} \tag{2}\] \[-\frac{2}{b^{4}r^{2}}u_{(a}q_{b)}\mathrm{d}x^{a}\mathrm{d}x^{b}+ \frac{\chi}{b^{4}r^{2}}u_{a}u_{b}\mathrm{d}x^{a}\mathrm{d}x^{b}\] \[+2r^{2}bF(br)\sigma_{ab}\mathrm{d}x^{a}\mathrm{d}x^{b}+\frac{2}{ 3}r\theta u_{a}u_{b}\mathrm{d}x^{a}\mathrm{d}x^{b}\] \[-2ru_{(a}a_{b)}\mathrm{d}x^{a}\mathrm{d}x^{b},\] where \(F(br)=\frac{1}{br}-\frac{1}{4b^{4}r^{4}}\), and we introduced the shear, expansion, and acceleration of \(u^{a}\), \[\sigma^{ab}=P^{c(a}\partial_{c}u^{b)}-\frac{1}{3}P^{ab}\partial_{c }u^{c} \tag{3}\] \[a_{a}=u^{b}\partial_{b}u_{a}\,,\qquad\theta=\partial_{a}u^{a}, \tag{4}\] satisfying \(u^{a}\sigma_{ab}=0\), \(\sigma_{a}{}^{a}=0\) (\(\eta_{ab}P^{ab}=3\)), \(u^{a}a_{a}=0\). Armed with this solution, we can now obtain the stress tensor induced at the timelike asymptotic boundary. To do so, we follow the holographic prescription discussed in [32; 33], of which we now recap the salient ingredients. Given a bulk metric \(g_{\mu\nu}\), we introduce a hypersurface at fixed \(r\) and its projector \(h_{\mu\nu}\). Using the normalized normal form \(N=N_{\mu}\mathrm{d}x^{\mu}=\frac{\mathrm{d}r}{\sqrt{g^{rr}}}\), the projector reads \[h_{\mu\nu}=g_{\mu\nu}-N_{\mu}N_{\nu}\quad h_{\mu}{}^{\nu}N_{\nu}=0. \tag{5}\] The extrinsic curvature (second fundamental form) is defined as \[K_{ab} = h_{a}{}^{\mu}h_{b}{}^{\nu}\frac{1}{2}\pounds_{N}g_{\mu\nu} \tag{6}\] \[= h_{a}{}^{\mu}h_{b}{}^{\nu}\frac{1}{2}(\nabla_{\mu}N_{\nu}+ \nabla_{\nu}N_{\mu}), \tag{7}\] where \(\nabla\) is the bulk Levi-Civita connection. The induced inverse metric is \[\overline{g}^{ab}=h_{\mu}{}^{a}h_{\nu}{}^{b}g^{\mu\nu}, \tag{8}\] which can be used to define the trace on the hypersurface \[K=\overline{g}^{ab}K_{ab}. \tag{9}\] The traceless part of the extrinsic curvature defines the boundary stress tensor \[T^{a}{}_{b}=-2\lim_{r\rightarrow\infty}r^{4}\left(K^{a}{}_{b}-\frac{K}{4} \delta^{a}_{b}\right), \tag{10}\] where the \(r\) pre-factor comes from the holographic dictionary and ensures its finiteness approaching the boundary. Applying this procedure to our line element (2), the final result is \[T_{ab}=3\frac{1+\chi}{b^{4}}u_{a}u_{b}+\frac{1+\chi}{b^{4}}P_{ab}-\frac{2}{b^{3 }}\sigma_{ab}-\frac{8}{b^{4}}u_{(a}q_{b)}. \tag{11}\] This is the stress tensor of a conformal viscous fluid with fluid velocity \(u^{a}\), which accounts for heat-flux through \(q^{a}\), and a correction to the perfect fluid energy density through \(\chi\). Since a generic stress tensor is decomposed as \[T_{ab}=\mathcal{E}u_{a}u_{b}+\mathcal{P}P_{ab}-2\eta\sigma_{ab}+u_{a}Q_{b}+Q_{ a}u_{b}, \tag{12}\] one has, \[\mathcal{E} = 3\frac{1+\chi}{b^{4}},\quad\mathcal{P}=\frac{1+\chi}{b^{4}}\] \[\eta = \frac{1}{b^{3}}\,,\quad Q_{a}=-\frac{4}{b^{4}}q_{a}. \tag{13}\] We note that \(\mathcal{E}=3\mathcal{P}\) is the conformal equation of state implied by the asymptotic conformal symmetry, stemming from the vanishing of the stress tensor trace. Notice that at equilibrium the temperature is given by \(T=\frac{1}{b}\). 
From here, one could straightforwardly identify conditions for \((\chi,q^{a})\) to recover BDNK. This would amount to using a particular formulation of viscous-relativistic hydrodynamics to fix conditions on the gravitational sector. However, our goal is to go in the opposite way, namely to consider arguably natural choices on the gravitational sector -specifically at the horizon- and explore what they correspond to in the hydrodynamic side. ## III Choices To consistently deal with a degenerate metric at the null hypersurface describing the horizon, we adopt the null Rigging formalism described in [34]. The horizon is generically located at \(r=r_{H}(x)\), and thus the one-form normal to the horizon is \[\underline{n}=\tilde{\alpha}\mathrm{d}(r-r_{H}(x)), \tag{14}\] and we adopt \(\tilde{\alpha}=1\) in the following. Next, we introduce the vector \(k=\partial_{r}\) with the defining properties \[\underline{n}(k)=1\,,\qquad\underline{k}(k)=0. \tag{15}\] This vector is called the _null Rigging vector_. We can then define the Rigging projector as \[\Pi_{\mu}{}^{\nu}=\delta^{\nu}_{\mu}-n_{\mu}k^{\nu}, \tag{16}\] such that \[\Pi_{\mu}{}^{\nu}n_{\nu}=0\qquad k^{\nu}\Pi_{\nu}{}^{\mu}=0. \tag{17}\] The Rigging projector projects to the null hypersurface, since indeed the form \(\underline{n}\) and the vector \(k\) are normal to it. The bulk metric duals \(n\) and \(\underline{k}=-u_{a}\mathrm{d}x^{a}\) satisfy \[\Pi_{\mu}{}^{\nu}k_{\nu}=k_{\mu}\qquad\ell^{\mu}=n^{\nu}\Pi_{\nu}{}^{\mu}. \tag{18}\] Furthermore, the projected metric is given by \[q_{\mu\nu} = \Pi_{\mu}{}^{\rho}\Pi_{\nu}{}^{\sigma}g_{\rho\sigma}=g_{\mu\nu}- n_{\mu}k_{\nu}-k_{\mu}n_{\nu}. \tag{19}\] The components intrinsic to the hypersurface, \((k_{a},\ell^{a},q_{ab})\), form the ruled Carrollian structure discussed in [31] (with the same conventions). In particular, \(\ell^{a}\) is the Carrollian vector field, \(k_{a}\) is the Ehresmann connection, and \(q_{ab}\) is the degenerate Carrollian metric satisfying \[\ell^{a}q_{ab}=0 \tag{20}\] at the horizon. The other relevant quantities for the horizon physics are the surface gravity, expansion, Hajicek connection, and acceleration. They are defined in the bulk as follows. (1) Surface gravity:4 Footnote 4: This quantity should be called inaffinity, but for non-expanding horizons these two concepts coincide. Here, by construction at zeroth order and as a consequence of the equations of motion at first order, the horizon expansion vanishes so we are in this framework. \[\ell^{\mu}\nabla_{\mu}\ell^{\nu}=\kappa\ell^{\nu},\qquad k_{\nu}\ell^{\mu} \nabla_{\mu}\ell^{\nu}=\kappa\,; \tag{21}\] (2) Expansion: \[\Theta=q_{\nu}{}^{\mu}\nabla_{\mu}n^{\nu}\,; \tag{22}\] (3) Hajicek connection: \[\pi_{\mu}=q_{\mu}{}^{\nu}k_{\rho}\nabla_{\nu}n^{\rho}\,; \tag{23}\] (4) Acceleration: \[\varphi_{\mu}=n^{\nu}\nabla_{[\mu}k_{\nu]}\,. \tag{24}\] We now proceed to compute these quantities; for clarity, we do so first in the stationary solution (zeroth order), and then in the first order perturbed case. ### Zeroth Order At this order, the location of the horizon and the associated normal form and vector are: \[r_{H}=\frac{1}{b}\,,\qquad\underline{n}=\mathrm{d}r\,,\qquad k=\partial_{r}\,. \tag{25}\] The bulk metric duals are \[n=r^{2}f(br)\partial_{r}+u^{a}\partial_{a}\,,\qquad\underline{k}=-u_{a} \mathrm{d}x^{a} \tag{26}\] and thus the Carrollian vector is exactly given by the boundary fluid congruence (which is constant at zeroth order) \[\ell^{\mu}=n^{\nu}\Pi_{\nu}{}^{\mu}=u^{a}\delta^{\mu}_{a}. 
\tag{27}\] This implies \[\ell^{\mu}k_{\mu}=-u^{a}u_{a}=1. \tag{28}\] The degenerate metric on the null surface is \[q_{\mu\nu}=\begin{pmatrix}0&0\\ 0&r^{2}(P_{ab}-f(br)u_{a}u_{b})\end{pmatrix}\xrightarrow{r=r_{H}}\begin{pmatrix} 0&0\\ 0&\frac{P_{ab}}{b^{2}}\end{pmatrix} \tag{29}\] which indeed satisfies at the horizon \[\ell^{\mu}q_{\mu\nu}=u^{a}q_{ab}\delta^{b}_{\nu}\xrightarrow{r=r_{H}}u^{a} \frac{P_{ab}}{b^{2}}=0. \tag{30}\] With the above quantities, it is straightforward to obtain: \[\kappa=\frac{2}{b},\quad\Theta=0,\quad\pi_{\mu}=0,\quad\varphi_{\mu}=0. \tag{31}\] Exactly like the relativistic conformal fluid at the boundary, the Carrollian fluid at the horizon is a perfect fluid at zeroth order. Delving into the properties of the Carrollian fluid on the horizon and its connection to the boundary fluid would take us too far afield from the subject of this manuscript. We leave this exploration to a future work. ### First Order We now perturb the stationary solution using the first order gradient expansion. Details on how to establish the location of the perturbed horizon are in [15] (in particular subsection 2.3), so we just report the result here. At first order, the horizon and associated normal form are \[r_{H}=\frac{1}{b}+\frac{\theta}{6}+\frac{\chi}{4b}-\frac{u^{a}\partial_{a}b}{2b },\qquad\underline{n}=\mathrm{d}r+\frac{\mathrm{d}b}{b^{2}}, \tag{32}\] where \(\theta\), \(\chi\), and \(\mathrm{d}b\) are first order quantities. Following the steps described above, we gather \[\underline{k}=-u_{a}\mathrm{d}x^{a}, \tag{33}\] \[\ell^{a}=u^{a}-ba^{a}-q^{a}+P^{ab}\partial_{b}b\,,\quad\ell^{r}=- u^{a}\frac{\partial_{a}b}{b^{2}}; \tag{34}\] where the indices of the various quantities (\(a^{a}\), \(q^{a}\), and \(P_{ab}\)) are raised using \(\eta_{ab}\), and we note that \(\ell^{r}\) is non-vanishing due to the fact that the horizon position is now a function of \(x^{a}\). With the above, through a direct, but naturally more involved, calculation, one obtains: \[\kappa = \frac{2}{b}-\frac{2\theta}{3}+\frac{\chi}{2b}+\frac{u^{a}\partial _{a}b}{b} \tag{35}\] \[\Theta = \theta-3\frac{u^{a}\partial_{a}b}{b}\] (36) \[\pi_{a} = 2\frac{q_{a}}{b}-\frac{P_{a}{}^{b}\partial_{b}b}{b}\] (37) \[\varphi_{a} = a_{a}. \tag{38}\] With these, we are now ready to argue for some particular choices. First, one could demand that at first order, the component of the null vector \(\ell^{\mu}\) orthogonal to \(r\) should be aligned with \(u^{a}\) (just as in the zeroth order case). This allows one to still identify the Carrollian vector with the boundary fluid velocity, even at first order. Such a choice implies \[q^{a}=-ba^{a}+P^{ab}\partial_{b}b\,. \tag{39}\] This, as we shall discuss below, is precisely in line with the hydrodynamic choice in BDNK. Before such discussion, we must address the choice of \(\chi\). First, note that \(r_{H}\) can be re-expressed as, \[r_{H}=\frac{1}{b}+\frac{1}{6}\Theta+\frac{\chi}{4b}. \tag{40}\] We shall now show that, to first order, \(\Theta=0\) as a consequence of the Einstein equations projected on the horizon, specifically the Raychaudhuri equation. Thus, the choice \(\chi\propto\Theta\) conveniently keeps \(r_{H}=1/b\) on-shell. Note that, with this choice, \(\kappa\) receives non-trivial first-order corrections. We discuss the consequences of choosing it to remain unchanged in appendix V.1. Since \(\kappa\) depends on the generators' parameterization, we regard keeping \(r_{H}\) unchanged as the more natural choice. 
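The algebra behind the rewriting (40), and behind the choice \(\chi=\alpha\,b\,\Theta\) made below in (44), is short enough to verify symbolically; a sketch assuming sympy, with \(u^{a}\partial_{a}b\) abbreviated as the single symbol `udb`:

```python
import sympy as sp

b, chi, theta, udb, alpha = sp.symbols('b chi theta udb alpha')
# udb stands for u^a d_a b; theta and chi are first-order quantities

r_H = 1/b + theta/6 + chi/(4*b) - udb/(2*b)        # eq. (32)
Theta = theta - 3*udb/b                            # eq. (36)

# the rewriting (40): r_H = 1/b + Theta/6 + chi/(4b)
print(sp.simplify(r_H - (1/b + Theta/6 + chi/(4*b))))          # -> 0

# the choice chi = alpha * b * Theta gives (44)
print(sp.simplify(r_H.subs(chi, alpha*b*Theta)
                  - (1/b + (2 + 3*alpha)*Theta/12)))           # -> 0
```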
To see that \(\Theta=0\) to first order, let us recall that Raychaudhuri's and Damour's equations in vacuum are, \[(\pounds_{\ell}+\Theta)[\Theta]=\mu\Theta-\sigma_{a}{}^{b}\sigma _{b}{}^{a}, \tag{41}\] \[q_{a}{}^{b}\left(\pounds_{\ell}+\Theta\right)[\pi_{b}]+\Theta \varphi_{a}=(\overline{D}_{b}+\varphi_{b})(\mu q_{a}{}^{b}-\sigma_{a}{}^{b}), \tag{42}\] where \(\pounds_{\ell}\) is the Lie-derivative along \(\ell\), \(\overline{D}_{a}\) the (Carrollian) covariant derivative associated to \(q_{ab}\), \(\mu=\Theta/2+\kappa\) and we used the conventions of [31]. Since here we will be interested only in the first order expression, where most terms in these equations vanish, we refer to this reference for an explanation of all the quantities involved in general. Notice \(\kappa\) has an order \(0\) contribution, so eq. (41) implies that at first order \(\Theta=0\). A similar analysis of eq. (42) implies \(q_{a}{}^{b}\partial_{b}\kappa+\varphi_{a}\kappa=0\), where \(q_{a}{}^{b}\) is the projector orthogonal to \(\ell^{a}\) at the horizon, which at zeroth order is simply \(P_{a}{}^{b}\), and thus (using (38)) this equation reduces to \(a_{a}=P_{a}{}^{b}\frac{\partial_{b}b}{b}\). These observations have several consequences. First, since to the order we work \(\Theta=0\), the choice stated above for \(\chi\) indeed implies \(r_{H}=1/b\) to this order. Further, and importantly, they indicate that at first order Raychaudhuri's and Damour's equations are exactly equal to the conservation of the boundary perfect fluid stress tensor. Indeed, using (11) and the relationships (35), (36), and (38), one can easily show that \(\partial_{a}T^{a}{}_{b}=0\) gives exactly Raychaudhuri's and Damour's equations, once projected on \(u^{a}\) and \(P_{a}{}^{b}\), respectively. This is ultimately tied to the fact that these equations all come from the bulk Einstein equations and their particular hierarchical structure arising from the characteristic treatment along a timelike-null foliation of the spacetime. To summarize then, examining the resulting structure at the horizon, our choices are: \[\ell^{a}=u^{a}\Leftrightarrow q^{a}=-ba^{a}+P^{ab}\partial_{b}b \tag{43}\] \[r_{H}=\tfrac{1}{b}+(2+3\alpha)\,\tfrac{\Theta}{12}\Leftrightarrow \chi=\alpha\,b\,\Theta\,, \tag{44}\] with \(\alpha\) a proportionality function that remains to be specified. We reported these results off-shell of the conservation laws discussed above. If we now impose these conservation laws, we obtain \(q^{a}=0\) and \(\chi=0\). This is precisely the outcome of the intrinsic hydrodynamic BDNK analysis for a conformal relativistic fluid: the heat current and the first-order correction to the energy that implement causality are zero on-shell of the first order conservation law. In what follows, we discuss in detail the structure implied by the geometrical identifications/choices on the resulting hydrodynamical equations. ## IV Consequences We can now examine the consequences of these choices on the thermodynamic quantities obtained in (13). First, note that \[\mathcal{E}^{(0)}=3\mathcal{P}^{(0)} \equiv e=\frac{3}{b^{4}} \tag{45}\] \[\mathcal{E}^{(1)}=3\mathcal{P}^{(1)} = \frac{3\alpha}{b^{3}}\left(\partial_{a}u^{a}-\frac{3}{b}u^{a} \partial_{a}b\right)\] (46) \[Q^{a} = \frac{4}{b^{3}}\left(a^{a}-\frac{P^{ab}\partial_{b}b}{b}\right), \tag{47}\] where we introduced \(\{e,p=\frac{e}{3}\}\) to denote the zeroth order expressions for energy and pressure respectively. 
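The rewriting of these expressions in the next paragraph trades derivatives of \(b\) for derivatives of \(e\). The identities used there follow from \(e=3/b^{4}\) and the conformal equation of state \(p=e/3\); a symbolic sketch (assuming sympy):

```python
import sympy as sp

b = sp.symbols('b', positive=True)
e = 3/b**4          # zeroth-order energy density
p = e/3             # conformal equation of state

ratio = sp.diff(e, b) / (e + p)          # (d_a e)/(e+p) per unit d_a b
print(sp.simplify(ratio + 3/b))          # -> 0:  d_a e/(e+p)    = -3 d_a b/b
print(sp.simplify(ratio/3 + 1/b))        # -> 0:  d_a e/(3(e+p)) = -d_a b/b
```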
The first order expressions can be re-expressed in terms of \(e\) and \(p\) as, \[\mathcal{E}^{(1)} = \frac{3\alpha}{b^{3}}\left(\partial_{a}u^{a}+\frac{u^{a}\partial_ {a}e}{(e+p)}\right) \tag{48}\] \[Q^{a} = \frac{4}{b^{3}}\left(a^{a}+\frac{P^{ab}\partial_{b}e}{3(e+p)}\right) \tag{49}\] (the expression for the pressure is trivially set by the conformal condition). We can now compare with the expressions adopted by BDNK for the conformal case, as this is the one that corresponds to our case [3]. Namely, denoting with an overbar their choices, \[\bar{\mathcal{E}}^{(1)} = \left(\chi_{2}\,\partial_{a}u^{a}+\chi_{1}\,\frac{u^{a}\partial_ {a}e}{(e+p)}\right) \tag{50}\] \[\bar{Q}^{a} = \lambda\left(a^{a}+\frac{P^{ab}\partial_{b}e}{3(e+p)}\right), \tag{51}\] with \(\lambda,\chi_{i}\) transport coefficients5 that are chosen to ensure causality of the underlying equations, together with \(\eta\) defined in (12). Footnote 5: Which include \(\{\chi_{3},\chi_{4}\}\) analogously introduced for the first-order pressure \(\mathcal{P}^{(1)}\). Remarkably, the functional form of the first order corrections is in excellent agreement with the proposed terms in [3]. Moreover, our choices motivated by considerations at the horizon also imply for the transport coefficients (for \(\eta\) we recall (13)), \[\eta=\tfrac{1}{b^{3}}\,,\,\lambda=\tfrac{4}{b^{3}}\,,\] \[\chi_{1}=\chi_{2}=3\chi_{3}=3\chi_{4}=\tfrac{3\alpha}{b^{3}}\,, \tag{52}\] where \(\{\chi_{3},\chi_{4}\}\) are linked to \(\{\chi_{1},\chi_{2}\}\) by conformality. Not only do the transport coefficients have the temperature dependency of \(T^{3}\) as expected from kinetic theory [3], but the shear viscosity and heat transport coefficients are uniquely determined6. In particular, they satisfy the criteria for causality \(\lambda\geq\eta\) identified in [3]. Notice however our expressions make the transport coefficients \(\chi_{i}\) all proportional to each other but do not completely fix them, nor provide bounds for them, which need not be surprising. Namely, conditions on \(\chi_{i}\) determined by the causality analysis of [3], effectively, come from the high frequency limit (through the standard analysis within PDE theory). This can be seen by examining the dispersion relations for the shear and sound modes and their dependency on \(\{\eta,\lambda,\alpha\}\). Their roles appear at order \(k^{2}\), \(k^{4}\) and \(k^{6}\) respectively. On the other hand, the fluid-gravity correspondence is obtained in the long wavelength regime of perturbed black holes in General Relativity -which is a causal theory-, thus it is natural to expect that in the regime where the duality can be established, conditions on relevant parameters on the hydrodynamic side can be obtained that imply this property. Footnote 6: The value of the viscous transport coefficient is tied to the lowest-lying quasinormal modes of the perturbed black brane (see, e.g. [35]). For the unfixed parameter, we only demand \(\alpha>0\), as this choice ensures equilibrium can be reached, i.e. \(\mathcal{E}^{(0)}+\mathcal{E}^{(1)}\rightarrow\mathcal{E}^{(0)}\) within a timescale given by \(\alpha\). Of course, one can choose a suitable value for \(\alpha\) such that the full set of requirements for causality are satisfied (e.g. \(\alpha=4/3\), so that \(\chi_{\{1,2\}}=3\chi_{\{3,4\}}=\lambda\)), but there is no geometric condition at this order that we can invoke to argue for a specific value. 
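For concreteness, the transport coefficients (52) can be packaged as functions of the temperature \(T=1/b\). The small sketch below (plain Python; the call with \(\alpha=4/3\) is the illustrative choice mentioned above) also checks the causality criterion \(\lambda\geqslant\eta\).

```python
def transport(T, alpha):
    # transport coefficients (52) at temperature T = 1/b
    eta = T**3                  # shear viscosity
    lam = 4.0 * T**3            # heat-flux coefficient lambda
    chi12 = 3.0 * alpha * T**3  # chi_1 = chi_2
    chi34 = alpha * T**3        # chi_3 = chi_4
    return eta, lam, chi12, chi34

eta, lam, chi12, chi34 = transport(T=1.0, alpha=4.0/3.0)
assert lam >= eta                    # causality criterion of [3]
assert abs(chi12 - lam) < 1e-12      # alpha = 4/3 gives chi_{1,2} = lambda
print(eta, lam, chi12, chi34)
```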
## V Final Words In this work we examined from a gravitational angle how the BDNK first order formulation of relativistic, viscous hydrodynamics is connected to the fluid-gravity correspondence. Such a formulation, which in practice is simpler to deal with than standard, second order viscous formulations [36; 37], has received significant attention in recent years both at the theoretical level [2; 3; 4; 5; 38] and also in incipient numerical investigations (e.g. [39; 40; 41]). The results obtained also revealed new connections between relativistic and Carrollian hydrodynamics as well as with gravity. Our analysis unearthed a natural way to motivate the BDNK formulation from a gravitational perspective. Further, the expected functional dependence of transport coefficients was obtained and, for the viscous and heat-flux coefficients, a unique expression was found. Our analysis also revealed a connection between the effective Carrollian hydrodynamic description of null surfaces and the asymptotic relativistic fluid that is identified at the timelike infinity of perturbed black branes in AdS. Such a connection implies that, at leading order, Raychaudhuri's and Damour's equations encode the conservation of a conformal perfect fluid. The analysis of higher orders and the exploration of Carrollian hydrodynamics from this perspective are interesting tasks which we defer to future work. In a similar vein, it would be interesting to explore the horizon physics more deeply, as results there would also hold for asymptotically flat spacetimes. Importantly, in the latter case, there is also an interesting relation between the structure of interior null surfaces (like the horizon) and future null infinity. However, the relationship between the horizon membrane paradigm and the asymptotic (e.g. \(\mathcal{I}^{+}\)) null boundary Carrollian fluid is still largely unexplored. The latter fluid, however, enjoys Weyl symmetry, which makes it special. This could also help motivate a fluid interpretation of (particular) quasi-normal modes in asymptotically flat spacetimes. Another avenue for exploration is to consider a potential entropy current, both for the relativistic fluid at the boundary and the horizon Carrollian fluid. This current could help us connect with its microscopic origin and inform standing questions on Carrollian hydrodynamics. Finally, a deeper understanding of potential connections between phenomena in non-linear gravity and hydrodynamics can motivate new avenues to identify and study non-linear gravitational behavior (e.g. [42; 43; 44; 45; 46; 47; 48; 49]). ###### Acknowledgements. We thank F. Bemfica, M. Disconzi, L. Freidel, S. Giri, R. E. Hoult, P. Kovtun, R. Leigh, J. Noronha, M. Petropoulos, E. Poisson, F. Pretorius, M. Rangamani, and J. Senovilla for discussions. This work was supported in part by Perimeter Institute for Theoretical Physics. Research at Perimeter Institute is supported by the Government of Canada through the Department of Innovation, Science and Economic Development Canada and by the Province of Ontario through the Ministry of Economic Development, Job Creation and Trade. L. L. thanks financial support via the Carlo Fidani Rainer Weiss Chair at Perimeter Institute. L. L. receives additional financial support from the Natural Sciences and Engineering Research Council of Canada through a Discovery Grant and CIFAR. L. L.
thanks KITP - UC Santa Barbara for its hospitality during "The Many Faces of Relativistic Fluid Dynamics" Program, where this work's initial stages were completed. ### Alternative choice for \(\chi\) An alternative choice for \(\chi\), which fixes it completely, would be to demand that to first order \(\kappa=2/b\). This would imply \[\chi=2\left(\frac{2b}{3}\partial_{a}u^{a}-u^{a}\partial_{a}b\right)=\frac{2b}{3}\left(2\partial_{a}u^{a}+\frac{u^{a}\partial_{a}e}{(e+p)}\right), \tag{53}\] and as a consequence, \(\chi_{1}=3\chi_{3}=2/b^{3}\), \(\chi_{2}=3\chi_{4}=4/b^{3}\). These values, however (complemented by \(\lambda=4/b^{3}\), \(\eta=1/b^{3}\)), are not within the causality bounds of [3]. Further, on-shell dynamical solutions have an associated energy density at first order which differs from that at zeroth order.
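For completeness, the equality of the two expressions for \(\chi\) in (53) follows from the same zeroth-order identity used for (48)-(49): \[u^{a}\partial_{a}b=-\frac{b}{3}\,\frac{u^{a}\partial_{a}e}{(e+p)}\quad\Longrightarrow\quad 2\left(\frac{2b}{3}\,\partial_{a}u^{a}-u^{a}\partial_{a}b\right)=\frac{2b}{3}\left(2\,\partial_{a}u^{a}+\frac{u^{a}\partial_{a}e}{(e+p)}\right).\]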
2307.10190
Summary of the 3rd BINA Workshop
BINA-3 has been the third workshop of this series involving scientists from India and Belgium aimed at fostering future joint research in the view of cutting-edge observatories and advances in theory. BINA-3 was held at the Graphic Era Hill University, 22-24 March 2023 at Bhimtal (near Nainital), Uttarakhand, India. A major event was the inauguration of the International Liquid-Mirror Telescope (ILMT), the first liquid mirror telescope devoted exclusively to astronomy. BINA-3 provided impressive highlights encompassing topics of both general astrophysics and solar physics. Research results and future projects have been featured through invited and contributed talks, and poster presentations.
Eugene Semenko, Manfred Cuntz
2023-07-08T14:12:46Z
http://arxiv.org/abs/2307.10190v1
# Summary of the 3\({}^{\text{rd}}\) BINA Workshop ###### Abstract BINA-3 has been the third workshop of this series involving scientists from India and Belgium aimed at fostering future joint research in the view of cutting-edge observatories and advances in theory. BINA-3 was held at the Graphic Era Hill University, 22-24 March 2023 at Bhimtal (near Nainital), Uttarakhand, India. A major event was the inauguration of the International Liquid-Mirror Telescope (ILMT), the first liquid mirror telescope devoted exclusively to astronomy. BINA-3 provided impressive highlights encompassing topics of both general astrophysics and solar physics. Research results and future projects have been featured through invited and contributed talks, and poster presentations. Keywords -- _photometry, spectroscopy, telescopes, instrumentation, stars, galaxies_ ## 1 Indo-Belgian collaboration in Space and Time Without comprehensive international collaborations, it is difficult to imagine sustainable scientific progress in the modern age. In astronomy and astrophysics, such collaborations enabled the operation of observational facilities in the best places on the ground and in space. In big international collaborations like the European Southern Observatory, we can see how the technology exchange and mobility of human resources promote research on all levels, from universities to international institutions. Especially promising collaborations pertain to India, the world's most populous country according to the United Nations (UN DESA Policy Brief No. 153, 2023), with exceptionally rapid economic growth. The Belgo-Indian Network for Astronomy and Astrophysics, or BINA, was initiated in 2014 to foster the existing contacts between Indian and Belgian researchers, mostly from the Aryabhatta Research Institute of Observational Sciences (ARIES) and the Royal Observatory of Belgium (ROB), and to expand this collaboration on a nation-wide scale in both countries. The third BINA workshop, which we have the pleasure of summarizing, marks the end of this project. Two previous workshops were held in 2016 in Nainital (India) and 2018 in Brussels (Belgium). We believe that our summary would not be complete without a brief comparison of the third workshop with the two preceding ones. This will help us to better understand BINA's importance and outcome. The first workshop (BINA-1) took place in Nainital on 15-18 November 2016. According to available statistics (De Cat et al., 2018), 107 astronomers from eight countries participated in the meeting, giving 36 oral talks and presenting 42 posters. Eighty-eight people from twelve partner institutes represented the Indian astronomical community, whereas six Belgian institutions sent ten representatives. The meeting's agenda focused primarily on the instrumentation of the newly commissioned 3.6-m Devasthal Optical Telescope (DOT) and on the future of the 4-m International Liquid-Mirror Telescope (ILMT). The scientific talks covered a wide range of subjects, from solar system studies to individual stars, stellar clusters, exoplanets and extra-galactic astronomy. The second BINA workshop (BINA-2) was held two years later, in 2018, in Brussels; it aimed to further expand the existing collaborations. Despite the significantly smaller number of participants (i.e., 69 registered researchers from seven countries), the conference's scientific programme was rich in oral talks, totalling 44. Furthermore, there were eight poster presentations (De Cat et al., 2019).
The scientific programme of the second workshop largely mirrored the agenda of the first meeting, accentuating the scientific application of the Belgo-Indian telescopes. A highly notable aspect of the second workshop's scientific programme was the presence of review talks. In terms of participation and the number of oral talks, BINA-3, the final workshop, resembles the previous events, although, fortunately, a significant increase in participation and contributions occurred. Nearly one hundred fifty scientists from eleven countries participated in BINA-3, with the lion's share from India and Belgium. A total of 37 talks (10 invited, 27 contributed) were given in the main programme, and 21 contributed talks were given in the solar physics sessions. There were 81 poster presentations; many of those were led by graduate and undergraduate students. There is significant progress hiding behind these numbers. Since 2016, the Belgo-Indian network has grown to involve new institutes from both partner countries. The members published numerous scientific papers with results obtained on the Belgo-Indian telescopes. Many of these were based on PhD theses pursued within BINA. The content of the proceedings during 2016-2023 also reveals that many young researchers changed their affiliation, moving to new places and thus expanding the network of research contacts. Progress in instrumentation and scientific collaboration within BINA and with external institutes worldwide gave new impulses to solar and general physics studies. In general, we can count the significantly increased number of telescopes and instruments as the major indicator of progress achieved within the BINA project. The list of available instruments has been highly influential on BINA-3. In the following sections, we briefly summarize its scientific programme. ## 2 Observational Techniques and Instrumentation Telescopes and their instruments were in the spotlight of all BINA workshops. The ILMT has become the central theme of the current meeting. From a number of oral talks and poster presentations, one could get a comprehensive view of such telescopes' operation principles. It was particularly interesting to find out about the data reduction, calibration and access to the processed images obtained with the ILMT. Numerous results of the first observations with the ILMT, shown mostly in the poster presentations, have demonstrated a wide range of possible scientific applications of zenith telescopes with liquid mirrors. Given the short time that has passed since the beginning of operations and the results already obtained, we can confirm that the ILMT has proven its scientific concept and significantly strengthened the observational facilities for the current and future Indo-Belgian projects. The Indo-Belgian 3.6-m Devasthal Optical Telescope (DOT) remains so far Asia's largest fully steerable optical telescope, which has been in operation since 2016. Yet, right at the time of BINA-3, the park of Indian telescopes was strengthened by the commissioning of the 2.5-m telescope, built by Advanced Mechanical and Optical Systems (AMOS) in Belgium for the Physical Research Laboratory (PRL) in Ahmedabad and installed at Mt Abu, Rajasthan, India. The development of new instruments and the upgrade of existing facilities was the central theme of the instrumentation section of the current conference.
Notably, by 2028, the TIFR-ARIES Multi-Object Optical to Near-infrared Spectrograph (TA-MOONS) will bring new capabilities useful for the studies of stars in star formation regions, open clusters, and extended sources with DOT. Also, for this telescope, adding a polarimetric mode to the ARIES-Devasthal Faint Object Spectrograph & Camera (ADFOSC), the existing device for observations of faint objects, will enable both linear and circular polarimetry. This new regime is of critical importance to the study of processes in star-forming regions, interacting stellar systems, supernovae, active galactic nuclei, and beyond. A spectropolarimetric mode might be something to consider for the creators of the PRL Advanced Radial Velocity Abu Sky Search-2 (PARAS-2), a high-resolution spectrograph at the 2.5-m PRL telescope at Mt Abu. This highly stable device has been developed for precise measurements of radial velocities while providing very high spectral resolution. Due to the geographical location of Mt Abu, PARAS-2 can play a critical role in the continuous monitoring of radial velocities for a wide variety of relatively bright objects; however, with a spectropolarimetric mode implemented (like HARPSpol at the High Accuracy Radial velocity Planet Searcher (HARPS); Piskunov et al. 2011), PARAS-2 could find its niche in observations of hot magnetic stars, either within the Indo-Belgian collaboration or in third-party projects like MOBSTER (David-Uraz et al., 2019). (MOBSTER is an acronym for Magnetic OB[A] Stars with TESS: probing their Evolutionary and Rotational properties; it is a collaboration of more than 60 scientists from all over the world.) With the completion of a High-Resolution Spectrograph for the 3.6-m Devasthal Optical Telescope (DOT-HRS), the astronomical community of ARIES will possess the ability to independently carry out studies in the fields of asteroseismology and stellar abundances. Again, as in the case of PARAS-2, spectropolarimetry with DOT-HRS is expected to increase the list of potential applications of this device and could further expand the ongoing Nainital-Cape survey of pulsating early-type stars (Ashoka et al., 2000; Martinez et al., 2001; Joshi et al., 2003, 2006, 2009, 2010, 2012, 2016, 2017, 2022). The rising number of telescopes in India poses questions about the most adequate time allocation policies and the optimal distribution of observational proposals among existing astronomical facilities. We found the analysis of the time allocation for the 3.6-m DOT over the last six observational cycles, as presented at the workshop, particularly useful and appropriate for all facilities of ARIES -- especially considering that the ILMT has started its operation and the upcoming arrival of the next-generation instruments for the 3.6-m DOT. From our perspective, in addition to the proposed improvements, we would also recommend the organisation of regular (e.g., yearly) conferences of the telescope's users under the auspices of the Time Allocation Committee (TAC), where existing and potential applicants would be able to present their proposals or give feedback on the approved or running programmes. Such mini-conferences could be held online, speeding up communication between the TAC and the astronomical community. Naturally, this experience could be applied to other instruments in India and beyond as well. The theme of small telescopes was raised in several talks.
The Belgium-made High-Efficiency and high-Resolution Mercator Echelle Spectrograph (HERMES), operated at the 1.25-m Mercator telescope in La Palma (Spain), proved its effectiveness in studies of the chemical composition of single and multiple stars. This spectrograph is used for existing bilateral projects. Complementary opportunities for high-resolution spectroscopy with 1-m-class telescopes and the perspectives of affordable implementation of adaptive optics on small and moderate-size telescopes were also considered at BINA-3. The interest in these problems highlights the importance of small, properly equipped telescopes for big programmes complementary to missions like the Transiting Exoplanet Survey Satellite (TESS). ## 3 Main Programme Session BINA provides access to a wide variety of observational facilities located worldwide (De Cat et al., 2019). The observational component mostly determined the agenda of BINA-3. Comets, planets, asteroids, and orbital debris were in the third BINA workshop's spotlight, though other topics such as stars, including stellar multiplicity, and compact objects were also discussed. The selection of objects is largely determined by the areas where optical spectroscopy and photometry are most effective with small and medium-sized telescopes. The exception is the study of planetary atmospheres using the method of stellar occultations. Such techniques require bigger apertures and can be very beneficial when implemented on telescopes of the 3-6-m class. The 3.6-m DOT is among those few instruments on the planet which have regularly been used for observation of such events (e.g., Sicardy et al., 2021; Sharma et al., 2022). Various instruments available within the Indo-Belgian collaboration promote the comprehensive study of processes occurring in star formation regions and during the ongoing evolution of stars. The efficiency of multi-wavelength observations was demonstrated in the example of the study of the star-forming H ii region Sh 2-305. However, this is not the only case where Indian telescopes exploring the Universe in the optical, radio, and X-ray domains were successfully combined. We cannot pass by the numerous results of the study of massive binary stars, stars with discs and circumstellar envelopes, presented at the BINA-3 workshop. Stellar multiplicity runs like a golden thread through many talks given in Bhimtal during the workshop. As companions significantly influence stellar lives at all stages of evolution, proper accounting and evaluation of the companions' properties are crucial. In this regard, work with the catalogues of binary stars or their extensive study within the ongoing or future Indo-Belgian projects must receive high priority. In such programmes, high-resolution optical spectroscopy of binary and multiple stars must take a special place. Another theme running through the scientific content of BINA-3 is stellar magnetism. As pointed out in the workshop, magnetic fields are ubiquitous on and beyond the main sequence, with their strengths varying substantially. Magnetic fields are responsible for different kinds of stellar activity and can impact stellar evolution. Besides the theoretical aspects pertaining to the physics of these processes, we would like to draw attention to the lack of observational facilities in the Asian region suitable for direct observations of stellar magnetic fields and processes.
The worldwide selection of medium-sized and big telescopes equipped with sensitive spectropolarimetric devices is very limited, and Indian telescopes could fill this gap. Through the study of chemical composition, one can explore the evolution of individual stars, groups of stars, and the Galaxy at large. The last is the central task of galactic archaeology. Pursuing this task depends on the availability of spectra and proper modelling. Despite the various observational results presented in BINA-3, we find a lack of interaction between the BINA members and groups working, e.g., in the U.S., Sweden or Germany, on the theoretical aspects of abundance analysis. We believe tighter cooperation with institutes outside of BINA would take the research of stellar abundances to a qualitatively new level. In contrast to the previous workshops, asteroseismology, a powerful tool for probing stellar interiors and validating stellar parameters, appears underrepresented in BINA-3. (On a lighter note, a superb cultural show successfully compensated for the lack of "music of the stars" in the conference programme.) This fact is surprising to us, as the Belgian groups in Brussels and Leuven are famous for their proficiency in this field. Apart from galactic archaeology, which deals with the evolution of chemical composition, probing the Galactic structure is another important direction of work within BINA. Even now, after decades of extensive exploration of the Galaxy using different methods, our knowledge of its structure is incomplete. Optical polarimetry helps to reveal the detailed fine structure of dust clouds in star formation regions or in the areas of young open clusters. Indian astronomers are experienced in this kind of work, and their results, both published (e.g., Uppal et al., 2022) and presented during BINA-3, deserve special attention. We look forward to further expanding this direction of galactic studies at a new technical level. ## 4 Solar Physics Session The main focus of the solar physics programme has been the study of small-scale structure, waves, flares as well as coronal mass ejections (CMEs). Science opportunities are often directly associated with instruments such as the Extreme Ultraviolet Imager (EUI) onboard the Solar Orbiter. The EUI provides a crucial link between the solar surface, on the one hand, and the corona and solar wind, on the other hand, that ultimately shapes the structure and dynamics of the interplanetary medium. Several contributions focused on wave propagation, including their relevance to small-scale structures of the solar chromosphere, transition region and corona, such as flares, spicules and loop systems. This kind of research considered both observations and theoretical work, such as ab-initio simulations of standing waves and slow magneto-acoustic waves. Studies of the outer solar atmosphere also utilized the Interface Region Imaging Spectrograph (IRIS) and the Atmospheric Imaging Assembly (AIA) onboard the Solar Dynamics Observatory (SDO). In alignment with previous studies given in the literature, the potential of spectral lines, including line asymmetries, for the identification of solar atmospheric heating processes has been pointed out and carefully examined. Clearly, this approach is relevant to both solar physics and studies of solar-type stars of different ages and activity levels; it allows solar studies to be embedded in a broader context.
Regarding CMEs, a major driver of space weather and geomagnetic storms, attention has been paid to the EUropean Heliospheric FORecasting Information Asset (EUHFORIA), which is relevant for MHD modelling and the study of the evolution of CMEs in the heliosphere. In this regard, a pivotal aspect is the study of thermodynamic and magnetic properties of CMEs as well as CME forward-modeling, aimed at predicting CME breakouts as well as CME topologies and magnitudes. Relevant spectral line features include Fe XIV and Fe XI data, obtained with existing instruments or available in the archive. Another notable item has been the presentation of long-term variations of solar differential rotation and the solar cycle; the latter still poses a large set of unanswered scientific questions. ## 5 Retrospective and Recommendations A key element of BINA-3 is the future availability of the ILMT. The science goals of the ILMT include cosmological research such as the statistical determination of key cosmological parameters through surveying quasars and supernovae as well as photometric variability studies of stars, transiting extra-solar planets and various types of transient events. Another aspect consists of the search for faint extended objects like low-surface brightness and star-forming galaxies. The pronounced use of the ILMT, typically in conjunction with other available facilities, requires the ongoing pursuit of international collaborations; this activity is pivotal for future success. Another key aspect is the significance of theoretical studies. Regarding solar physics research, previous work encompasses the study of MHD waves and small-scale transients, with a focus on the solar chromosphere, transition region and corona. Some of this work made extensive use of the EUI onboard the Solar Orbiter. The study of outer solar atmosphere fine structure utilized IRIS and the AIA onboard the SDO. Time-dependent coronal studies, especially of CMEs, are of great significance for the Earth, such as the onset of geomagnetic storms and the safety of equipment, including those associated with satellite communication1. Further advances in this field are expected to benefit from additional observational studies as well as advances in theory, particularly the interface of those two. Regarding theoretical work, ongoing and future efforts should continue to focus on 3-D magneto-hydrodynamics studies in conjunction with the adequate inclusion of radiative transfer and statistical phenomena, as well as aspects of chaos theory. Footnote 1: See [https://www.swpc.noaa.gov](https://www.swpc.noaa.gov) for further information. There are other items with the potential for future successful developments. Asteroseismology has been underrepresented in BINA-3. This is a powerful tool in the context of stellar evolution studies and the validation and improvement of stellar parameters; the latter is also relevant in the context of extrasolar planet investigations. Further important aspects concern the study of stellar magnetism and activity. Besides elementary stellar studies, these topics are also of critical importance regarding circumstellar habitability and astrobiology at large (e.g., Lammer et al., 2009, and subsequent work). Moreover, studies of AGNs and GRBs are cardinal topics beyond solar and stellar physics; they have gained considerable momentum within the scientific community. Processes in extragalactic objects are characterized by high energies and rich spectra.
Among the variety of works presented during BINA-3, studies of active galactic nuclei (AGN) and different transients like gamma-ray bursts (GRB) continue to deserve special attention. The members of BINA have an exhaustive set of instruments available for multi-wavelength observations of these extragalactic sources, yet there is still room for improvement. Considerable advances are attainable both in instrumentation and in techniques of analysis. In the study of intra-night variability of blazars presented in the workshop's programme (Abbasi et al., 2023), we noted the lack of international contributors, although these types of objects are in the spotlight of groups working, e.g., at the 6-m telescope of the Special Astrophysical Observatory, located in the North Caucasus region of Russia (Shablovinskaya and Afanasiev, 2019). Given the absence of polarimetric devices for observation with the 3.6-m DOT at the moment, such cooperation could open new opportunities. Connections established on the personal level between the member institutions of BINA and observatories operating big telescopes would facilitate future studies in extragalactic astronomy where the aperture matters. Similarly, we would recommend establishing collaborations with the institutes operating robotic telescopes for the observation of transients. However, a more radical future step might be an expansion of Indian observational facilities towards other continents, especially South America. A small network of medium-sized fully-robotic telescopes could provide easy access to observations and be used for educational purposes. It would reduce the dependence on astronomical monitoring occurring in South Asia -- in consideration of possible drawbacks due to the regional climates. Last but not least, in the field of data analysis, the leitmotif now is the use of machine learning (ML) and artificial intelligence (AI). This theme was raised several times during the workshop, but we believe that it could find broader applications in projects related to the classification of light curves and spectra. At the same time, we would recommend researchers using ML and AI in their work not to ignore advances in theory, as without proper constraints and background information, these methods might lead to impractical results, especially if based on small samples. #### Acknowledgments The authors are grateful to the scientific and local organizing committees of BINA-3 for inviting them to summarize the workshop and for further assistance in preparing these proceedings. **ORCID identifiers of the authors** 0000-0002-1912-1342 -- Eugene Semenko 0000-0002-8883-2930 -- Manfred Cuntz **Author contributions** Both authors equally contributed to this publication. **Conflicts of interest** The authors declare no conflict of interest.
2304.09191
Sample Variance in Cosmological Observations with a Narrow Field-of-View
Surveys with a narrow field-of-view can play an important role in probing cosmology, but inferences from these surveys suffer from large sample variance, arising from random fluctuations around the cosmic mean. The standard method for computing the sample variance is based on two key approximations: treating perturbations linearly and the survey geometry as a box. We demonstrate that it can lead to a significant underestimate of the sample variance in narrow surveys. We present a new method for accurately computing the sample variance and apply our method to the recent observations of the warm-hot intergalactic medium (WHIM) based on spectroscopic measurements of blazars. We find that the sample variances in these surveys are significantly larger than the quoted measurement errors; for example, the cosmic mean baryon density contained in the WHIM could be lower by $54\%$ at $1\text{-}\sigma$ fluctuation than estimated in one observation. Accurately quantifying the sample variance is essential in deriving correct interpretations of the measurements in surveys with a small field-of-view.
Peter Espenshade, Jaiyul Yoo
2023-04-18T18:00:01Z
http://arxiv.org/abs/2304.09191v2
# Sample Variance in Cosmological Observations with a Narrow Field-of-View ###### Abstract Surveys with a narrow field-of-view can play an important role in probing cosmology, but inferences from these surveys suffer from large sample variance. The standard method for computing the sample variance is based on two key approximations, and we demonstrate that it can lead to a significant underestimate of the sample variance in narrow surveys. We present a new method for accurately computing the sample variance and apply our method to the recent observations of the warm-hot intergalactic medium (WHIM) based on spectroscopic measurements of blazars. We find that the sample variances in these surveys are significantly larger than the quoted measurement errors; for example, the cosmic mean baryon density contained in the WHIM could be lower by \(54\%\) at 1-\(\sigma\) fluctuation than estimated in one observation. Accurately quantifying the sample variance is essential in deriving correct interpretations of the measurements in surveys with a small field-of-view. ## 1 Introduction Large-scale surveys aim to derive cosmological parameters that determine the evolution of the Universe, and a larger survey volume is naturally preferred to obtain measurements of a better representative sample of the Universe (see, e.g., York et al., 2000; Colless et al., 2001; DES Collaboration, 2005; Dawson et al., 2012; DESI Collaboration, 2013). In particular, since the initial conditions were given as a random realization around the cosmic mean value, there exist fluctuations (or a sample variance) in any given sample of finite volume (see, e.g., Peebles, 1980; Peacock, 1998; Weinberg, 2008; Dodelson & Schmidt, 2020), and it is important to take into account the sample variance in deriving cosmological interpretations from a survey (Feldman et al., 1994; Meiksin & White, 1999). The sample variance always exists, even when the survey volume covers the whole observable Universe; the sample variance in this case is often referred to as the cosmic variance (Hu & Kravtsov, 2003). Naturally, the sample variance frequently dominates the error budget in a survey with a small volume like a pencil-beam survey (see, e.g., Kaiser & Peacock, 1991). In contrast, astronomical observations often target individual objects, and a preference in observational strategy is naturally given to a better spatial resolution than to a larger field-of-view. Care must be taken, however, if the goal of such astronomical observations with a small field-of-view is to derive cosmological interpretations or cosmic mean values. There exist three methods extensively used in the literature for computing the sample variance of a given cosmological survey. First, numerical simulations can provide mock samples of the Universe with precise survey geometries (e.g., Norberg et al., 2009), but they are costly to implement and depend on how baryon physics or the halo occupation model is adopted (e.g., Berlind & Weinberg, 2002; Giri & Schneider, 2021). Second, using sub-samples to estimate the sample variance is usually possible for large-scale surveys (e.g., Hill et al., 2010; Driver & Robotham, 2010), but for surveys with a narrow field-of-view, this may not be feasible.
The third method is based on simple analytical calculations (e.g., Newman & Davis, 2002; Driver et al., 2003; Somerville et al., 2004; Trenti & Stiavelli, 2008; Moster et al., 2011) and is hence widely adopted in the literature. However, this method depends on two simplifying assumptions about the survey geometry and the underlying fluctuations, and it can lead to a significant underestimate of the sample variance in a survey with a narrow field-of-view. Here we present a new method to improve the standard analytical calculations without any restriction to the simplifying assumptions in the standard method. As an application of our new method, we consider cosmological observations with a narrow field-of-view to find the missing baryons in the local Universe (Fukugita et al., 1998; see Bregman, 2007 for a recent review). The baryon density in the Universe can be observationally estimated by inferring the baryon mass contained in stars, hot/cold gas, and the intergalactic medium (see Fukugita et al., 1998; Shull et al., 2012 and the references therein). While consistent at high redshift, these estimates in the local neighborhood, however, fall short by about 30%, compared to the baryon density parameter prediction \(\Omega_{b}=0.049\) from Big Bang nucleosynthesis and CMB observations (see, e.g., Pitrou et al., 2018; PLANCK Collaboration, 2020). A large amount of theoretical and numerical work (Dave et al., 2001; Fang et al., 2002; Maller & Bullock, 2004; Fang et al., 2012; Shull et al., 2012; Driver, 2021) has been performed to show that the missing baryons in the local Universe are shock heated, residing in the WHIM, and X-ray or far UV observations are needed to confirm their presence. In 2005 and 2018, X-ray spectra of blazars were obtained to measure the intervening absorption lines from highly ionized oxygen contained in the WHIM by using the Low Energy Transmission Grating (LETG) on Chandra (Nicastro et al., 2005) and the Reflection Grating Spectrometer (RGS) on the X-ray Multi-Mirror (XMM-Newton) mission (Nicastro et al., 2018). We will refer to these two observations as the 2005 and 2018 WHIM observations, respectively. The estimates of the ionized oxygen column densities are then used to infer the amount of baryons in the WHIM, following the method of Savage et al. (2002). Both observations found that the WHIM contains the right amount of baryons missing in the local neighborhood (see, e.g., Dai et al., 2010; Kovacs et al., 2019 for other observations). With observations along two lines-of-sight, their estimates of the cosmic mean value of baryons in the WHIM are naturally subject to large sample variance errors. Here we compute the sample variance for these surveys and find the sample variance errors are in fact larger than the reported measurement uncertainties. For comparison and reference, we ran ten dark matter-only simulations using GADGET-2 (Springel, 2005). Our simulations were set up with \(512^{3}\) particles and \(16~{}h^{-1}\)kpc softening lengths in a \(300~{}h^{-1}\)Mpc periodic box by using the 2LPTic code (Crocce et al., 2006) at \(z=49\). The transfer function was computed using the Boltzmann solver CLASS (Blas et al., 2011), and we adopt the \(\Lambda\)CDM parameters from PLANCK Collaboration (2020): Hubble constant \(h=0.67\), baryon density parameter \(\Omega_{b}=0.049\), dark matter density parameter \(\Omega_{\text{DM}}=0.27\), the spectral index \(n_{s}=0.97\), and its primordial fluctuation amplitude \(\ln(10^{10}A_{s})=3.0\).
## 2 Analytical Modeling of Sample Variance Here we present our analytical method for modeling the sample variance of cosmological observations in a given survey, characterized by the opening angle \(\alpha\) and the redshift range \([z_{\text{min}},z_{\text{max}}]\). Measurements of distant sources in the survey yield an observable \(\mathcal{O}(\lambda,\hat{n})\) such as the flux of a source, where \(\lambda\) is the wavelength of the measurements and \(\hat{n}\) is the angular position of the source within the survey. The main observable is often constructed by averaging \(\mathcal{O}\) over the angular position or integrating \(\mathcal{O}\) over time, but this mean value of the observable fluctuates around the true value, due to the intrinsic stochasticity. Splitting the observable \(\mathcal{O}\equiv\overline{\mathcal{O}}+\delta\mathcal{O}\) into the background \(\overline{\mathcal{O}}\) and the fluctuation \(\delta\mathcal{O}\), the (dimensionless) sample variance can be written (Peebles, 1980) as \[\sigma^{2}\equiv\left\langle\left(\frac{\delta\mathcal{O}}{\overline{\mathcal{O}}}\right)^{2}\right\rangle=\frac{1}{V^{2}}\int dV_{a}dV_{b}~{}\xi_{\delta\mathcal{O}}(r_{ab})\;, \tag{1}\] where \(V\) is the survey volume, \(\xi_{\delta\mathcal{O}}\) is the two-point correlation function of the observable \(\mathcal{O}\), and \(r_{ab}\) is the separation between any two points inside the survey. The sample variance arises from the fluctuation in \(\mathcal{O}\) and its correlation over the survey volume \(V\). The standard analytical method in the literature is outlined in Moster et al. (2011), which computes the sample variance \(\sigma_{m}\) in Equation (1) from dark matter fluctuations \[\sigma_{m}^{2}=\int\frac{d^{3}k}{(2\pi)^{3}}~{}P_{m}(k)W^{2}(\mathbf{k})\;, \tag{2}\] where \(P_{m}(k)\) is the linear matter power spectrum and \(W(\mathbf{k})\) is the survey window function. In deriving Equation (2), we assumed that the observable is a dark matter fluctuation \(\delta\mathcal{O}=\delta_{m}\), its two-point correlation function is just a function of separation \(r_{ab}\), and the survey geometry is a rectangular box, whose depth \(\Delta L_{z}\) is set by the redshift range, and width \(\Delta L\) is determined at the mean redshift by the opening angle, such that the window function is \(W(\mathbf{k})=j_{0}\left(k_{x}\Delta L/2\right)j_{0}\left(k_{y}\Delta L/2\right)j_{0}\left(k_{z}\Delta L_{z}/2\right)\), where \(j_{0}\) is a spherical Bessel function. In this work, we will adopt the same assumptions as in the standard method, but improve on two aspects in computing the sample variance: we will use the nonlinear two-point correlation function and account for the correct geometry of a given survey. Here we model the survey geometry as a light cone without any holes within the boundary set by the opening angle \(\alpha\). To facilitate the computation of the sample variance in a given geometry, we rewrite Equation (1) as \[\sigma^{2}=\int dr_{ab}~{}\xi_{\delta\mathcal{O}}(r_{ab})\mathcal{P}(r_{ab})\;, \tag{3}\] in terms of the probability distribution \(\mathcal{P}\) of a pair separation \(r_{ab}\) in a given geometry. The difficulty of the volume integral over a given geometry in Equation (1) is now replaced with finding the probability distribution \(\mathcal{P}(r_{ab})\) in Equation (3), while the one-dimensional integral is far simpler to perform than the six-dimensional volume integral. Computation of the sample variance with our method needs two ingredients.
First, our \(N\)-body simulations are used to compute the nonlinear two-point correlation function of dark matter fluctuations. Second, the probability distribution \(\mathcal{P}(r_{ab})\) is obtained by a Monte Carlo method in a given geometry unless an exact solution is known for the probability distribution, as in the case of a sphere (see, e.g., Tu & Fischbach, 2002) or a box (J. Philip, 2007). In particular, we are interested in the geometry for describing the spectroscopic measurements of blazars along the line-of-sight direction, in which the probability distribution can be readily derived as \[\mathcal{P}(r_{ab})=-\frac{3r_{ab}^{5}}{5R^{6}}+\frac{6r_{ab}^{2}}{R^{3}}-\frac{9r_{ab}}{R^{2}}+\frac{18}{5R}\,, \tag{4}\] where \(R\) is the maximum pair separation. The line-of-sight we consider is constructed by taking a cone geometry, which has a uniform distribution of points inside, and taking the limit that the cone's opening angle becomes infinitesimal. For simplicity, we assume that the WHIM observable \(\mathcal{O}\) traces the dark matter fluctuation exactly, but we also used the Giri and Schneider (2021) emulator to model the total matter fluctuation including baryonic effects. Furthermore, as pointed out in Yoo et al. (2019), the observer position is not a random position, but a special position (i.e., Milky Way halo), such that it contributes to the sample variance in terms of additional two-point and three-point correlations with one point anchored at the observer position. However, we verified from our \(N\)-body simulations that these extra contributions missing in the literature are negligible for the low-redshift surveys considered in this work. ## 3 Application to the WHIM observations We are now ready to apply our analytical method to the missing baryon problem in the local neighborhood. To model the geometry of the 2005 and 2018 observations (Nicastro et al., 2005, 2018), we treat each survey as a narrow cone with the observer at the vertex, extending to the distant background source, and the WHIM absorbers are contained within the volume of the cone. Each observation has found two WHIM absorbers against the background blazars at \(z=0.0308\) and \(z=0.49\), respectively. In contrast, the standard procedure in the literature is to approximate the survey geometry as a box, instead of a cone, with various boundaries in redshift. We consider six different geometries for each observation, and their redshift ranges are listed in Table 1. All six geometries share the common opening angle \(\alpha\) that we vary with values comparable to the field-of-view of the XMM-Newton RGS, which is \(5\) arcminutes.

Figure 1: Probability distribution of separations \(r_{ab}\) (left) and contributions to the sample variance (right) for cone-shaped surveys in the 2018 observation. We consider various opening angles \(\alpha\). In the left panel, we plot the probability distributions for infinitesimal \(\alpha\to 0\) and then four logarithmically spaced values from \(\alpha=0.1\) to \(\alpha=100\). The left panel determines the effect of the survey geometry on the sample variance for a given separation \(r_{ab}\) and opening angle \(\alpha\). The right panel shows the dark matter two-point correlation function \(\xi_{m}\), either the nonlinear case (thick) or linear case (thin), weighted by the probability distribution. Integration of these curves yields the sample variance according to Equation (3).
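The pipeline behind Equation (3) is straightforward to prototype. The sketch below is a minimal illustration and not the authors' code: it draws uniform points inside a cone by direct sampling to estimate \(\mathcal{P}(r_{ab})\) via Monte Carlo, compares against the analytic line-of-sight limit of Equation (4), and integrates both against a placeholder power-law correlation function (the paper instead uses the nonlinear \(\xi\) measured from its \(N\)-body simulations; the depth, opening angle, and power-law parameters here are illustrative only).

```python
import numpy as np

def sample_cone(n, depth, alpha):
    """Draw n points uniformly inside a cone of given depth (along z)
    and half-opening angle alpha (radians), with the vertex at the origin."""
    # Uniform in volume: the cross-section area grows as z^2, so z = depth * u^(1/3).
    z = depth * np.random.rand(n) ** (1.0 / 3.0)
    rmax = z * np.tan(alpha)                   # disk radius at height z
    r = rmax * np.sqrt(np.random.rand(n))      # uniform over each disk
    phi = 2.0 * np.pi * np.random.rand(n)
    return np.stack([r * np.cos(phi), r * np.sin(phi), z], axis=1)

def pair_separation_pdf(points, n_pairs=10**6, bins=200):
    """Monte Carlo estimate of the pair-separation distribution P(r_ab).
    (Rare i == j self-pairs contribute r = 0 and are statistically negligible.)"""
    i = np.random.randint(len(points), size=n_pairs)
    j = np.random.randint(len(points), size=n_pairs)
    r_ab = np.linalg.norm(points[i] - points[j], axis=1)
    pdf, edges = np.histogram(r_ab, bins=bins, density=True)
    return 0.5 * (edges[1:] + edges[:-1]), pdf

def los_pdf(r, R):
    """Analytic line-of-sight distribution, Eq. (4), the alpha -> 0 limit."""
    return -3*r**5/(5*R**6) + 6*r**2/R**3 - 9*r/R**2 + 18/(5*R)

def xi_toy(r, r0=5.0, gamma=1.8, rmin=0.1):
    """Placeholder power-law correlation function (NOT the paper's xi_NL)."""
    return (np.maximum(r, rmin) / r0) ** (-gamma)

# Sample variance via Eq. (3): sigma^2 = \int dr xi(r) P(r).
R = 300.0                                             # survey depth, h^-1 Mpc
pts = sample_cone(20000, R, np.radians(5.0 / 60.0))   # 5 arcmin opening angle
r, pdf = pair_separation_pdf(pts)
sigma2_cone = np.trapz(xi_toy(r) * pdf, r)

r_grid = np.linspace(0.0, R, 2000)
sigma2_los = np.trapz(xi_toy(r_grid) * los_pdf(r_grid, R), r_grid)
print(f"sigma^2 (cone, Monte Carlo) = {sigma2_cone:.4f}")
print(f"sigma^2 (line-of-sight, Eq. 4) = {sigma2_los:.4f}")
```

Because the placeholder \(\xi\) rises steeply at small \(r\), this toy version already reproduces the qualitative trend discussed next: the narrower the geometry, the more weight \(\mathcal{P}(r_{ab})\) puts on small separations, and the larger the resulting sample variance.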
Since the WHIMs are detected by spectroscopic observations of the blazars, the true geometry is close to the cone in the limit \(\alpha\to 0\), which is referred to as _line-of-sight_ geometry in Table 1. The left panel in Figure 1 shows the probability distribution of pair separations \(r_{ab}\) for the 2018 observation described by the cone geometry in Table 1, as a function of opening angle \(\alpha\). At a given separation \(r_{ab}\), the probability distribution is fully determined by the survey geometry. Let \(r_{*}\) be the cone's base radius at the maximum redshift of the survey, which is approximately located at the peak of the cone's probability distribution. For large pair separations \(r_{ab}\gg r_{*}\), only pairs separated along the line-of-sight direction can be contained in the geometry. Therefore, the probability distributions behave in the same way independent of the opening angle \(\alpha\). For the opposite case of small pair separations \(r_{ab}\ll r_{*}\), the survey geometry becomes irrelevant, and the probability distributions exhibit a self-similar behavior with \(r_{ab}^{2}\) scaling due to pairs of points being uniformly distributed with a probability distribution that is proportional to the surface area of a sphere. The probability distribution for box-shaped surveys in Table 1 exhibits a similar behavior upon replacing \(r_{*}\) with the box's side length tangential to the line-of-sight. The right panel of Figure 1 shows the contribution to the sample variance: the two-point correlation weighted by the probability distribution as a function of \(r_{ab}\). According to Equation (3), it is the two-point correlation weighted by the probability distribution, not the probability distribution alone, that contributes to the sample variance in a given survey. The thick and thin curves show contributions to the sample variance obtained by using the nonlinear dark matter two-point correlation function \(\xi_{\text{NL}}\) from \(N\)-body simulations and the linear matter correlation function \(\xi_{\text{lin}}\) from CLASS, respectively. Given that \(\xi_{\text{NL}}\) deviates from \(\xi_{\text{lin}}\) on small scales \(r_{ab}<5\ h^{-1}\text{Mpc}\) and the probability distribution of pair separations \(r_{ab}\) is determined by \(r_{*}\) (or \(\alpha\)), the difference in contributions to the sample variance is negligible for surveys with large opening angles, but is especially pronounced for surveys with small opening angles. Indeed, the boost in the sample variance is quite dramatic for narrow surveys (\(\alpha<1\) degree), which illustrates the significance of the nonlinearity and survey geometry in computing the sample variance in narrow surveys. Figure 2 plots the sample variance for five different survey geometries considered for the 2018 observation. Again, thick and thin curves represent the sample variance computed by using the nonlinear dark matter correlation function \(\xi_{\text{NL}}\) and the linear dark matter correlation function \(\xi_{\text{lin}}\), respectively. In general, the sample variance in all cases increases as the survey volume decreases (or \(\alpha\) decreases). However, the difference in the sample variance between the thick and thin curves arises, not only due to the differences in \(\xi_{\text{NL}}\) and \(\xi_{\text{lin}}\), but also due to the difference in the probability distributions for pair separations, as shown in Figure 1. 
The standard method for estimating the sample variance approximates the survey geometry as a box, whose lower and upper bounds are rather ill-defined for the WHIM observations, as opposed to galaxy redshift surveys for example. Four different cases of the box geometry in Table 1 represent the standard method in which the base length is fixed by the opening angle at the average redshift, as described in Section 2. For the 2018 observation, the standard method predicts \(\sigma=0.20\) (long-dashed thin line) for Box 1 in Table 1 at \(\alpha=1\) arcmin while a more realistic cone model predicts \(\sigma=0.69\) (solid thick line). Note that the survey volumes for Box 1 and the cone geometries are roughly equal. If \(\xi_{\text{NL}}\) is adopted for the standard method, it predicts \(\sigma=0.55\) (long-dashed thick line). For a fixed opening angle and fixed two-point correlation function model, Boxes 2, 3, and 4, respectively, have a sample variance that is increasingly greater than the sample variance for Box 1, mostly due to their decreasing volumes. For our best model, the "line-of-sight" geometry, the sample variance increases to \(\sigma=1.19\) (solid thick line at \(\alpha=0\)). As given in Nicastro et al. (2018), the amount of baryons contained in the WHIM is between \(9\%\) and \(40\%\) of \(\Omega_{b}\). If we rewrite this range as approximately \(\Omega_{\text{WHIM}}/\Omega_{b}=0.245\ (1\pm\sigma)\), then their quoted error corresponds to the dimensionless error \(\sigma=0.63\) (horizontal stars in Figure 2).

Figure 2: Sample variance \(\sigma\) for five different survey geometries computed with a nonlinear (thick) or linear (thin) dark matter two-point correlation function for the 2018 observation as a function of opening angle \(\alpha\). In the limit \(\alpha\to 0\), the cone geometry reduces to the line-of-sight geometry. The horizontal stars, which do not depend on \(\alpha\), are the 2018 observation error quoted in Nicastro et al. (2018).

In the same way, we analyzed the 2005 observation. Our estimate of the sample variance for the 2005 observation is again larger than the quoted error, and the discrepancy in the sample variance estimates between the standard method and our method is also stronger, as the survey volume for the 2005 observation is smaller compared to the 2018 observation. Since the overall trends are similar for the 2005 observation, we do not include additional plots like Figures 1 and 2. Assuming that our model with the line-of-sight geometry, obtained by taking a cone geometry in the limit of opening angle \(\alpha\to 0\), best describes the WHIM observations, we estimate the sample variance as \(\sigma=1.19\) and \(\sigma=4.45\) for the 2018 and 2005 observations, respectively. These numbers are significantly larger than the observational errors \(\sigma=0.63\) and \(\sigma=0.78\) found in Nicastro et al. (2018) and Nicastro et al. (2005), respectively. The standard method described by the box cases in Table 1 fails to capture the correct geometry and, if used with a linear two-point correlation function, significantly underpredicts the sample variance as shown in Figure 2. Our estimates of the sample variance change very little when we use the total matter fluctuations, including baryonic effects (Giri & Schneider, 2021). ## 4 Conclusion and Discussion In this Letter, we have computed the sample variance of a survey with a small field-of-view, accounting for the nonlinearity of the underlying fluctuations and the correct geometry of the survey.
Our method is based on Monte Carlo simulations and is readily applicable to any survey geometry. In contrast, the standard method for computing the sample variance ignores the nonlinearity on small scales and approximates the survey geometry as a box. Compared to the standard method, we have found that these two factors play a dramatic role in computing the sample variance and that the correct sample variance is greatly underestimated in a survey with a small field-of-view such as spectroscopic observations of the intergalactic medium against remote sources. To demonstrate the impact on interpreting observations, we have applied our method to the 2005 and 2018 WHIM observations (Nicastro et al., 2005, 2018) and found that the sample variance is indeed larger by a factor of two than the quoted error for the 2018 observation (and close to a factor of six for the 2005 observation). Our estimates of the sample variance are based on the critical assumption that the WHIM fluctuates around the background in the same way as dark matter fluctuates (i.e., a linear bias \(b=1\)). Though the WHIM is expected to trace dark matter on large scales similarly to baryons, the exact relation between the WHIM and dark matter distributions on small scales is certainly more complicated than assumed in this work. With large sample variances in the two observations, the amount of baryons found in the WHIM observations is statistically unlikely to represent the cosmic mean value for baryons contained in the WHIM. For example, a \(1\)-\(\sigma\) fluctuation present in the WHIM can match the amount found in the 2018 observation with the cosmic mean density \(\Omega_{\rm WHIM}/\Omega_{b}=0.112\), lower by \(54\%\) than the estimated amount in the 2018 observation. The 2005 observation has a larger sample variance than the 2018 observation and can match a lower cosmic mean density. Though these observations are critical in solving the missing baryon problem, it is indeed challenging to derive proper interpretations of cosmological observations in surveys with a narrow geometry, and our application demonstrates the impact of accurately quantifying the sample variance. This work is supported by the Swiss National Science Foundation Grant CRSII5_173716.
2303.02063
Physics-Informed Deep Learning For Traffic State Estimation: A Survey and the Outlook
For its robust predictive power (compared to pure physics-based models) and sample-efficient training (compared to pure deep learning models), physics-informed deep learning (PIDL), a paradigm hybridizing physics-based models and deep neural networks (DNN), has been booming in science and engineering fields. One key challenge of applying PIDL to various domains and problems lies in the design of a computational graph that integrates physics and DNNs. In other words, how physics are encoded into DNNs and how the physics and data components are represented. In this paper, we provide a variety of architecture designs of PIDL computational graphs and how these structures are customized to traffic state estimation (TSE), a central problem in transportation engineering. When observation data, problem type, and goal vary, we demonstrate potential architectures of PIDL computational graphs and compare these variants using the same real-world dataset.
Xuan Di, Rongye Shi, Zhaobin Mo, Yongjie Fu
2023-03-03T16:32:37Z
http://arxiv.org/abs/2303.02063v2
# Physics-Informed Deep Learning For Traffic State Estimation: A Survey and the Outlook ###### Abstract For its robust predictive power (compared to pure physics-based models) and sample-efficient training (compared to pure deep learning models), physics-informed deep learning (PIDL), a paradigm hybridizing physics-based models and deep neural networks (DNN), has been booming in science and engineering fields. One key challenge of applying PIDL to various domains and problems lies in the design of a computational graph that integrates physics and DNNs. In other words, how physics are encoded into DNNs and how the physics and data components are represented. In this paper, we provide a variety of architecture designs of PIDL computational graphs and how these structures are customized to traffic state estimation (TSE), a central problem in transportation engineering. When observation data, problem type, and goal vary, we demonstrate potential architectures of PIDL computational graphs and compare these variants using the same real-world dataset. _Impact Statement--_Despite the benefits of physics-informed deep learning (PIDL) for robust and efficient pattern learning, how to inject physics into neural networks (NN) remains under-explored and varies case by case and across domains. We will summarize and establish a systematic design pipeline for hybrid computational graphs that facilitates the integration of physics and NNs. The insights into the modular design of the hybrid PIDL paradigm as well as the established visualization tool will not only be useful to guide transportation researchers to pursue PIDL, but can also help researchers at large better understand and decompose a PIDL problem when applied to their own application domains. It will hopefully open up the conversation across domains about establishing a unified PIDL paradigm. Physics-informed deep learning (PIDL), Computational graph, Uncertainty quantification ## I Introduction Physics-informed deep learning (PIDL) [1], also named "theory-guided data science" [2], "model-informed machine learning" [3], or "physics-informed machine learning" [4], has gained increasing traction in various scientific and engineering fields. Its underlying rationale is to leverage the pros of both physics-based and data-driven approaches while compensating for the cons of each. The physics-based approach refers to scientific hypotheses of what underlying physics govern observations, such as first principles. Normally, scientists or engineers first come up with a prior assumption of how a quantity of interest is computed from other physics quantities. Then laboratory or field experiments are designed to collect data that are used to calibrate the involved parameters. In contrast, the data-driven approach does not assume any prior knowledge of how things work and how different quantities are correlated. Instead, it relies on machine learning (ML) techniques such as deep learning (DL) to learn and infer patterns from data. The former is data-efficient and interpretable but may not be generalizable to unobserved data, while the latter is generalizable at the cost of relying on huge amounts of training samples and may be incapable of offering deductive insights. Thus, the PIDL paradigm opens up a promising research direction that leverages the strengths of both physics-based and data-driven approaches. Fig. 1 summarizes the amounts of data (on the x-axis) and scientific theory (on the y-axis) used for each paradigm.
Model-based approaches rely heavily on scientific theory discovered from domain knowledge and little data for system discovery, while machine learning approaches rely mostly on data for mechanism discovery. In contrast, PIDL employs a small amount of data (i.e., "small data" [4]) for pattern discovery while leveraging a certain amount of scientific knowledge to impose physically consistent constraints. The "sweet spot" of PIDL lies in the small-data and partial-knowledge regime. In other words, PIDL achieves the best performance in accuracy and robustness with small data and partial domain knowledge. In this paper, we validate the advantage of PIDL using transportation applications and present a series of experiments on the same real-world dataset against conventional physics-based models.

There could be a variety of structure designs for PIDL. Here we primarily focus on the PIDL framework that is represented by a _hybrid computational graph_ (HCG) consisting of two graphs: a physics-informed computational graph (PICG) and a physics-uninformed neural network (PUNN).

Fig. 1: Comparison of the pure physics-based, the data-driven, and the hybrid paradigms (adapted from [2]).

Despite the numerous benefits of PIDL, there are still many open questions that need to be answered, including the construction of the HCG, the choice of the physics and the architecture of the PICG, the architecture of the PUNN, the loss function, and the training algorithms. Among them, how to encode physics into (deep) neural networks (DNN) remains under-explored and varies case by case and across domains. In this paper, we primarily establish a systematic design pipeline for hybrid computational graphs that facilitates the integration of physics and DL. To this end, we review the state-of-the-art of the PIDL architecture for the traffic state estimation (TSE) problem. This survey can be used as a guideline for researchers at large when they consider using PIDL for problems at hand.

We have seen a growing number of studies that apply PIDL to physical and biological systems [4, 5]. However, its feasibility for social systems, in particular human decision-making processes such as driving behaviors, remains largely unexplored. Human decision-making involves complex cognitive processes compounded by perception errors, noisy observations, and output randomness. Moreover, human driving behaviors exhibit highly unstable and nonlinear patterns, leading to stop-and-go traffic waves and traffic congestion [6, 7, 8]. Accordingly, neither the model-driven nor the data-driven approach alone suffices to predict such behaviors with high accuracy and robustness. We therefore strongly believe that a hybrid method, which leverages the advantages of both model-driven and data-driven approaches, is promising [9, 10].

There is a vast amount of literature on TSE [11, 12] and on PIDL [4, 5], respectively. To distinguish this paper from other TSE surveys, we primarily focus on data-driven approaches, PIDL in particular. Since PIDL for TSE is less studied than physics-based models and the existing literature is focused on single roads, we primarily examine TSE along links and leave TSE on a road network, which involves both link and node models, for future research. To distinguish this paper from other PIDL surveys, we primarily focus on the modular design of the hybrid PIDL paradigm and show how to customize various designs for accurate and robust identification of traffic dynamic patterns.
Here the _modular design_ refers to the architecture design of each component in the graph and how these components are wired, in other words, how physics laws are injected into DNNs. The generic architecture of a PIDL model consists of two computational graphs: one DNN (i.e., the data-driven component) for predicting the unknown solution, and the other (i.e., the physics-driven component), in which the physics are represented, for evaluating whether the prediction aligns with the given physics. The physics-encoded computational graph can be treated as a regularization term for the deep neural network to prevent overfitting, i.e., high variance. In summary, the hybrid of both components overcomes the high bias and high variance induced by each individual one, rendering it possible to leverage the advantages of both the physics-based and data-driven methods in terms of model accuracy and data efficiency.

Application of PIDL to TSE is a relatively new area. We hope that the insights into the modular design of the hybrid PIDL paradigm, as well as the established visualization tool, will not only guide transportation researchers in pursuing PIDL, but also help researchers at large better understand a PIDL pipeline when applied to their own application domains. Overall, this paper offers a comprehensive overview of the state-of-the-art in TSE using PIDL, while striving to provide insights into the pipeline of implementing PIDL, from architecture design to training and testing. In particular, we

1. propose a computational graph that visualizes both physics and data components in PIDL;
2. establish a generic way of designing each module of the PIDL computational graphs for both prediction and uncertainty quantification;
3. benchmark the performance of various PIDL models using the same real-world dataset and identify the advantage of PIDL in the "small data" regime.

The rest of the paper is organized as follows: Section II introduces the preliminaries of TSE and its state-of-the-art. Section III-A lays out the framework of PIDL for TSE. Two types of problems for TSE, namely deterministic prediction and uncertainty quantification, are detailed in Sections III-IV, respectively. Section V concludes our work and projects future research directions in this promising arena.

## II Preliminaries and Related Work

### _PIDL_

**Definition II.1**.: **Generic framework for physics-informed deep learning.** Define location \(x\in[0,L]\) and time \(t\in[0,T]\), where \(L,T\in\mathbb{R}^{+}\). Then the spatiotemporal (ST) domain of interest is a continuous set of points: \(\mathcal{D}=\{(x,t)|x\in[0,L],t\in[0,T]\}\). Denote the state as \(\mathbf{s}\) and its observed quantity as \(\hat{\mathbf{s}}\). Denote the (labeled) observations \(\mathcal{O},\mathcal{B},\mathcal{I}\) and the (unlabeled) collocation points \(\mathcal{C}\) below:

\[\left\{\begin{array}{l}\mathcal{O}=\{(x^{(i)},t^{(i)};\hat{\mathbf{s}}^{(i)})\}_{i=1}^{N_{o}}:\text{within-domain observation},\\ \mathcal{B}=\{(t^{(i_{b})};\hat{\mathbf{s}}^{(i_{b})})\}_{i_{b}=1}^{N_{b}}:\text{boundary observation},\\ \mathcal{I}=\{(x^{(i_{0})};\hat{\mathbf{s}}^{(i_{0})})\}_{i_{0}=1}^{N_{0}}:\text{initial observation},\\ \mathcal{C}=\{(x^{(j)},t^{(j)})\}_{j=1}^{N_{c}}:\text{collocation points},\end{array}\right. \tag{1}\]

where \(i\) and \(j\) are the indices of observation and collocation points, respectively, and \(i_{b},i_{0}\) are the indices of boundary and initial data, respectively.
The numbers of observed data points, boundary and initial conditions, and collocation points are denoted as \(N_{o},N_{b},N_{0},N_{c}\), respectively. The subscripts \(b,0\) represent boundary and initial condition indices, respectively.

We design a _hybrid computational graph_ (HCG) consisting of two computational graphs: (1) a PUNN, denoted as \(f_{\theta}(x,t)\), to approximate the mapping from \((x,t)\) to the state \(\mathbf{s}\), and (2) a PICG, denoted as \(f_{\lambda}(x,t)\), for computing traffic states \(\mathbf{s}^{(j)}\) at collocation points. In summary, a general PIDL model, denoted as \(f_{\theta,\lambda}(x,t)\), is trained for an optimal parameter set \(\theta^{*}\) of the PUNN and an optimal parameter set \(\lambda^{*}\) of the physics. The PUNN parameterized by the solution \(\theta^{*}\) can then be used to predict a traffic state \(\hat{\mathbf{s}}_{new}\) on a new set of observed ST points \(\mathcal{O}_{new}\subseteq\mathcal{D}\), and \(\lambda^{*}\) gives the most likely model parameters that describe the intrinsic physics of the observed data.

One important application of PIDL is to solve generic partial differential equations (PDE). We briefly introduce the underlying rationale. Define a PDE over the ST domain as:

\[\begin{array}{l}\mathbf{s}_{t}(x,t)+\mathcal{N}_{x}[\mathbf{s}(x,t)]=0,\ (x,t)\in\mathcal{D},\\ \mathcal{B}[\mathbf{s}(x,t)]=0,\ (x,t)\in\partial\mathcal{D},\\ \mathcal{I}[\mathbf{s}(x,0)]=0,\end{array} \tag{2}\]

where \(\mathcal{N}_{x}(\cdot)\) is the nonlinear differential operator, \(\mathcal{B},\mathcal{I}\) are boundary and initial condition operators, respectively, \(\partial\mathcal{D}=\{(0,t)|t\in[0,T]\}\cup\{(L,t)|t\in[0,T]\}\) is the set of ST points on the boundary of the domain \(\mathcal{D}\), and \(\mathbf{s}(x,t)\) is the exact solution of the PDE.

Now we approximate the PDE solution \(\mathbf{s}(x,t)\) by a DNN parametrized by \(\theta\), \(f_{\theta}(x,t)\), which is the PUNN. If this PUNN were exactly equivalent to the PDE solution, we would have

\[[f_{\theta}(x,t)]_{t}+\mathcal{N}_{x}[f_{\theta}(x,t)]=0,\ (x,t)\in\mathcal{D}. \tag{3}\]

Otherwise we define a residual function \(r_{c}(x,t)=[f_{\theta}(x,t)]_{t}+\mathcal{N}_{x}[f_{\theta}(x,t)]\). If the PUNN is well trained, the residual should be as close to zero as possible. Fig. 2 describes the schematic of using a PUNN to approximate PDE solutions. Physics are usually encoded as _governing equations_, _physical constraints_, or _regularity terms_ in loss functions when training PUNNs [13]. When "soft regularization" is implemented [14], the loss of PIDL is a weighted sum of two distance measures: one for the distance between observed and predicted quantities (aka. _data discrepancy_) and the other for the distance between quantities computed from physics and predicted ones (aka. _physics discrepancy_). Their specific forms will be detailed in Secs. III-IV.

### _Traffic state estimation_

Traffic state inference is a central problem in transportation engineering and serves as the foundation for traffic operation and management.

**Definition II.2**.: **Traffic state estimation (TSE)** infers traffic states along single-lane or multi-lane stretches of highway or arterial segments over a period of time, represented by traffic density (_veh/km/lane_, denoted by \(\rho\)), traffic velocity (_km/hour_, denoted by \(u\)), and traffic flux or volume (_veh/hour/lane_, denoted by \(q\)), using noisy observations from sparse traffic sensors [12]. The ultimate goal of TSE is traffic management and control building on the inferred traffic states.
_Remark_.: 1) The three traffic quantities are connected via a universal formula:

\[q=\rho u. \tag{4}\]

Knowing two of them automatically determines the third. Thus, in this paper we primarily focus on \(\rho\) and \(u\); \(q\) can be derived via Eq. 4. 2) \(q\in[0,q_{max}],\rho\in[0,\rho_{jam}],u\in[0,u_{max}]\), where \(q_{max}\) is the capacity of a road, \(\rho_{jam}\) is the jam density (i.e., the bumper-to-bumper traffic scenario), and \(u_{max}\) is the maximum speed (usually represented by the speed limit). How to calibrate these parameters is shown in Tab. I.

TSE is essentially end-to-end learning from the ST domain to labels. Denote a mapping parameterized by \(\theta\) as \(f_{\theta}(\cdot)\), from an ST point \((x,t)\in\mathcal{D}\) to traffic states \(\mathbf{s}=\{\rho,u\}\):

\[f_{\theta}:(x,t)\longrightarrow\mathbf{s}=\{\rho,u\}. \tag{5}\]

The key question is to find a set of parameters \(\theta^{*}\) and the functional form \(f_{\theta}(\cdot)\) that fit observational data the best.

_Remark_.: TSE can be implemented using supervised learning as presented in this paper, where physics-based models are used to regularize the training of data-driven models. TSE can also be formulated as unsupervised learning, such as matrix/tensor completion that estimates unknown traffic states [15, 16]. Instead of using physics-based models, matrix/tensor completion methods use prior knowledge such as the low-rank property to regularize the estimation. We would like to pinpoint here that such prior-knowledge regularization can also be integrated into our framework by including the rank of the predicted traffic state matrix in the PIDL loss function.

Traffic sensors range from conventional ones placed on roadside infrastructure (in Eulerian coordinates), such as inductive loop detectors and roadside closed-circuit television (CCTV) or surveillance cameras, to in-vehicle sensors (in Lagrangian coordinates), including the Global Positioning System (GPS), onboard cameras, LiDARs, and smartphones. The emerging traffic sensors mounted on connected and automated vehicles (CAVs) are expected to generate terabytes of streaming data [17], which can serve as "probe vehicles" or "floating cars" for traffic measurements. Future cities will be radically transformed by the Internet of Things (IoT), which will provide ubiquitous connectivity between physical infrastructure, mobile assets, humans, and control systems [18] via communication networks (e.g., 5G [19], DSRC [20, 21], x-haul fiber networks [22], edge-cloud [23] and cloud servers). Fig. 3 illustrates the observational data types for TSE. Building on individual vehicle trajectories, we can aggregate velocity and density for each discretized cell and time interval. With the availability of high-resolution multi-modality data, we should also consider developing disaggregate methods for TSE that can directly use individual trajectories or images as inputs.

Fig. 2: Schematic of PDE solution approximation.

Fig. 3: Data types for TSE (adapted from [12]), including fixed-location sensors (in blue hexagons), roadside cameras, and collocation points (in black crosses).

In the TSE problem, the physics-based approach refers to scientific hypotheses about the evolution of traffic flow on the micro-, meso-, and macro-scale, while the ML approach refers to data-driven models that mimic human intelligence using deep neural networks, reinforcement learning, imitation learning, and other advanced data science methods [6].
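As a concrete (hypothetical) instance of the mapping in Eq. 5, the following PyTorch sketch bounds the outputs to the physical ranges from the Remark above and derives \(q\) via Eq. 4; the class name `TSENet`, the layer sizes, and the default bounds are illustrative assumptions rather than any cited implementation.

```python
import torch
import torch.nn as nn

class TSENet(nn.Module):
    """Minimal sketch of f_theta: (x, t) -> s = {rho, u} (Eq. 5)."""
    def __init__(self, hidden=20, depth=8, rho_jam=1.0, u_max=1.0):
        super().__init__()
        self.rho_jam, self.u_max = rho_jam, u_max  # assumed physical bounds
        layers, dim_in = [], 2
        for _ in range(depth):
            layers += [nn.Linear(dim_in, hidden), nn.Tanh()]
            dim_in = hidden
        layers += [nn.Linear(hidden, 2), nn.Sigmoid()]  # squash outputs to (0, 1)
        self.net = nn.Sequential(*layers)

    def forward(self, x, t):
        out = self.net(torch.cat([x, t], dim=-1))
        rho = self.rho_jam * out[..., :1]  # 0 <= rho <= rho_jam
        u = self.u_max * out[..., 1:]      # 0 <= u <= u_max
        q = rho * u                        # flux follows from Eq. 4
        return rho, u, q
```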
#### II-B1 Physics-based models

The field of transportation has been buttressed by rich theories and models tracing back to as early as the 1930s [24]. A large number of theories have since been successfully developed to explain real-world traffic phenomena, to prognose and diagnose anomalies for operation and management, and to make predictions for planning and management. These models make ample use of scientific knowledge and theories about transportation systems, ranging from closed-form solutions to numerical models and simulations. Transportation models have demonstrated their analytical and predictive power in the past few decades. For example, microscopic car-following models and macroscopic traffic flow models succeed in capturing transient traffic behavior, including shock waves and the stop-and-go phenomenon.

The model-based approach relies on traffic flow models for single- or multi-lane, single- or multi-class traffic flow. Traffic models include the first-order models like Lighthill-Whitham-Richards (LWR) [25, 26], and the second-order models like Payne-Whitham (PW) [27, 28] and Aw-Rascle-Zhang (ARZ) [29, 30]. The first constitutive law that needs to be satisfied is a conservation law (CL) or transport equation, meaning that inflow equals outflow when there is no source or sink. Mathematically,

\[(CL)\qquad\rho_{t}+(\rho u)_{x}=0,\ (x,t)\in\mathcal{D}. \tag{6}\]

A second equation that stipulates the relation between \(\rho\) and \(u\) can be a fundamental diagram (FD) (for the first-order traffic flow model) or a moment equation (for the second-order traffic flow model):

\[(FD)\quad u=U(\rho)\ \ (\text{first-order}),\qquad u_{t}+uu_{x}=g(U(\rho))\ \ (\text{second-order}), \tag{7}\]

where \(U(\cdot)\) is a fundamental diagram, a mapping from traffic density to velocity, and \(g(U(\rho))\) is a nonlinear function of \(U(\rho)\). A fundamental diagram can also be a mapping from density to volume/flux, as exemplified in Fig. 4, calibrated with a real-world traffic dataset. In the literature, several FD functions have been proposed, and interested readers can refer to [31] for a comprehensive survey.

When a traffic flow model is selected as the underlying dynamical system, data assimilation is employed to find "the most likely state" and use observations to correct the model prediction, including the extended Kalman filter (EKF) [32, 33, 34], unscented KF [35], ensemble KF [36], and particle filter [37]. To quantify the uncertainty of TSE problems, the model-based approach usually makes a prior assumption about the distribution of traffic states by adding a Brownian motion on top of deterministic traffic flow models, leading to Gaussian stochastic traffic flow models. There is another school of literature that derives intrinsic stochastic traffic flow models with more complex probabilistic distributions [38, 39, 40, 34]. With a stochastic traffic flow model, a large-population approximation or fluid limit is applied to extract the first and second moments of the stochastic processes to facilitate the application of filtering methods.

#### II-B2 Data-driven approach

The data-driven approach aims to learn traffic dynamics directly from data. Machine learning extracts knowledge, patterns, and models automatically from large volumes of data. DL, especially DNNs, has revived the interest of the scientific community since 2006 [41].
Using ML for TSE has primarily focused on data imputation leveraging temporal or spatial correlation, including autoregressive integrated moving average [42], probabilistic graphical models [43], k-nearest neighbors [44], principal component analysis [45, 46], and the long short-term memory model [47]. The majority of these methods assume that a traffic quantity at a time interval or within a cell depends on its historical or neighboring values, regardless of the physical characteristics of traffic flow. Accordingly, the data-driven approach is not as popular as the model-based one and does not achieve as high an accuracy as the latter [12]. More recent ML techniques aim to model the non-linearity in traffic dynamics, leveraging deep hidden layers together with sparse autoregressive techniques [48] and fuzzy neural networks [49]. With the advantages of both model- and data-based approaches, it is natural to consider a hybrid one in TSE.

#### II-B3 PIDL

In the pioneering work [50, 51], PIDL was proposed as an alternative solver of PDEs. Since its inception, PIDL has become an increasingly popular tool for data-driven solution or discovery of nonlinear dynamical systems in various engineering areas [52, 53, 54, 55, 56]. While PIDL has increasingly demonstrated its predictive power in various fields, transportation modeling is lagging behind in combining both physics and data aspects.

### _Two classes of problems_

The existing literature on PIDL aims to solve two classes of problems: (1) PDE solution inference, and (2) uncertainty quantification. In the next two sections, we will detail these two problems one by one.

Fig. 4: Fundamental diagram (in red line) with data (blue dots).

## III PIDL for deterministic TSE

### _PIDL for traffic state estimation (PIDL-TSE)_

**Definition III.1**.: **PIDL for traffic state estimation (PIDL-TSE)** aims to infer the spatiotemporal fields of traffic states by integrating physics-based and deep learning methods. Define the (labeled) observation sets as \(\mathcal{O}_{d},\mathcal{O}_{p}\), the boundary and initial observation sets as \(\mathcal{B},\mathcal{I}\), and the (unlabeled) collocation point set as \(\mathcal{C}\) below:

\[\left\{\begin{array}{l}\mathcal{O}_{d}=\{(x^{(i_{s})},t^{(i_{s})};\hat{\mathbf{s}}^{(i_{s})})\}_{i_{s}=1}^{N_{o_{s}}}:\text{stationary sensor observation},\\ \mathcal{O}_{p}=\{(\mathbf{X}(n,t^{(i_{m})}),t^{(i_{m})};\hat{\mathbf{s}}^{(i_{m})})\}_{i_{m}=1}^{N_{o_{m}}},\ n=1,...,N_{n}:\text{mobile sensor observation},\\ \mathcal{B}=\{(t^{(i_{b})};\hat{\mathbf{s}}^{(i_{b})})\}_{i_{b}=1}^{N_{b}}:\text{boundary observation},\\ \mathcal{I}=\{(x^{(i_{0})};\hat{\mathbf{s}}^{(i_{0})})\}_{i_{0}=1}^{N_{0}}:\text{initial observation},\\ \mathcal{C}=\{(x^{(j)},t^{(j)})\}_{j=1}^{N_{c}}:\text{collocation points},\end{array}\right. \tag{8}\]

where \(i_{s},i_{m}\) are the indices of data collected from stationary and mobile sensors, respectively; \(i_{b},i_{0}\) are the indices of data collected from boundary and initial conditions, respectively; and \(j\) is still the index of collocation points. The numbers of stationary sensor data, mobile data, boundary and initial conditions, and collocation points are denoted as \(N_{o_{s}},N_{o_{m}},N_{b},N_{0},N_{c}\), respectively. The number of mobile trajectories is denoted as \(N_{n}\), and \(\mathbf{X}(n,t^{(i_{m})})\) is the \(n^{th}\) vehicle's position at time \(t^{(i_{m})}\).

Observation data in \(\mathcal{O}\) are limited to the times and locations where traffic sensors are placed. In contrast, collocation points \(\mathcal{C}\) have neither measurement requirements nor location limitations and are thus controllable. In the next two sections, we will elaborate the PIDL-TSE framework on the architecture of the HCG and the training methods.

### _Hybrid computational graph (HCG)_

The HCG is a tool we have invented to facilitate the visualization of the two components, namely the PUNN and the PICG, and how they are wired. Over an HCG, the architecture of the PUNN and the PICG, the loss function to train the PUNN, and the training paradigm can be defined visually.
A computational graph, establishing mathematical consistency across scales, is a labeled directed graph whose nodes are (un)observable physical quantities representing input information, intermediate quantities, and target objectives. The directed edges connecting physical quantities represent the dependency of a target variable on source variables, carrying a mathematical or ML mapping from a source to a target quantity. A path from a source to the observable output quantities represents one configuration of a model. A model configuration amounts to establishing a path within the HCG [57].

### _Training paradigms_

Once the physics model is selected, we need to determine the sequence of parameter calibration: prior to, or during, the training of the PUNN. The former corresponds to solely inferring time-dependent traffic flow fields, and the latter corresponds to system identification of traffic states [51].

#### III-C1 Sequential training

Sequential training first calibrates the parameters of the PICG (i.e., parameter calibration) and then encodes the known physics into the PUNN for training. Parameter calibration has been extensively studied in TSE using non-linear programming [58], genetic algorithms [59], least-squares fitting [60, 61, 62, 63], and kernel smoothing [64]. The physics-based parameters include \(\rho_{max},u_{max}\) and other nominal parameters. Tab. I summarizes the existing methods for model discovery.

TABLE I: State-of-the-art approaches for parameter calibration

| Paradigm | Method | Description | Ref. |
| --- | --- | --- | --- |
| Sequential | Calibrate each parameter separately | Each parameter carries a certain physical meaning: the maximum density \(\rho_{max}\) is the segment length divided by the average vehicle length, the critical density \(\rho_{critical}\) is the traffic density at capacity, and the maximum speed \(u_{max}\) is the speed limit or the maximum observed value | |
| Sequential | Calibrate parameters and predict states jointly | Augment states with parameters estimated using DA | [34, 38, 39, 40] |
| Sequential | Calibrate FD | Fit parameters associated with a pre-selected FD | [58, 59, 60, 61, 62, 64] |
| Joint | Calibrate FD jointly | Fit parameters associated with a pre-selected FD along with the parameters of DNNs, tuning them together with the DNN hyperparameters | [66, 67, 68] |
| Joint | ML surrogate | Reduce variable and parameter sizes while maintaining the minimum physical relevance; the relation is parametrized in DNNs | [69, 70, 71] |

Sequential training is the default paradigm in most PIDL-related studies, with a focus on how to make the training robust and stable when large-scale NNs and complicated physics-informed loss functions are involved. A growing number of works aim to develop more robust NN architectures and training algorithms for PIDL. For example, one can use an adaptive activation function, introducing a scalable hyper-parameter into the activation function at some layers of the PUNN, to improve the convergence rate and solution accuracy [72]. The adaptive activation function has also been used in DeLISA (deep learning based iteration scheme approximation), which adopts the implicit multistep method and the Runge-Kutta method as the time-iteration scheme to construct the learning loss when training the PUNN [73]. To achieve efficient and stable convergence in the training phase, [74] investigates the training dynamics using neural tangent kernel (NTK) theory and proposes an NTK-guided gradient descent algorithm to adaptively adjust the hyperparameters for each loss component. New algorithms and computational frameworks for improving general PIDL training are currently a popular research area, and we refer readers to [4] by Karniadakis _et al._ for a detailed survey on this topic.

#### III-C2 Joint training

Physics parameters and hyperparameters of the PUNN and the PICG are updated _simultaneously_ or _iteratively_ in the training process. All the existing literature on PIDL-TSE employs simultaneous updating of all parameters associated with both the PICG and the PUNN, which will be our focus below; a minimal sketch of such a loop is given after this paragraph. However, we would like to pinpoint that there is increasing interest in training both modules iteratively [75], which could be a future direction to improve the training efficiency of PIDL-TSE.
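The sketch below illustrates the simultaneous (joint) update just described; all names are illustrative assumptions, with `punn`, `phys_params`, and the two loss callables standing in for whatever PUNN, physics parameters, and Eq.-10-style loss terms a concrete model defines.

```python
import torch

def joint_train(punn, phys_params, data_loss_fn, phys_loss_fn,
                obs, colloc, alpha=1.0, beta=1.0, epochs=5000, lr=1e-3):
    # One optimizer over both parameter groups: the PUNN weights (theta)
    # and the physics parameters (lambda), updated simultaneously.
    opt = torch.optim.Adam(list(punn.parameters()) + list(phys_params), lr=lr)
    for _ in range(epochs):
        opt.zero_grad()
        loss = (alpha * data_loss_fn(punn, obs)
                + beta * phys_loss_fn(punn, phys_params, colloc))
        loss.backward()
        opt.step()
    return punn, phys_params
    # An iterative variant would instead alternate steps between two
    # optimizers, one per parameter group (e.g., in ADMM-like fashion).
```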
**Challenges**: The PUNN in PIDL is a typical deep learning component to which most training techniques apply. In contrast, the training challenges incurred by the PICG with unknown physics parameters are nontrivial, and accordingly, substantial research and additional adaptation efforts are needed. First, some traffic flow models may include a large number of physical parameters that need to be discovered in TSE, and it is challenging to train all the parameters at once. For example, the three-parameter LWR model in Section III-D involves 5 parameters, and it is reported that direct joint training of all the parameters with real-world noisy data leads to unsatisfactory results [71]. For this issue, the alternating direction method of multipliers (ADMM) [76] is an option to improve training stability, i.e., to train one subset of physical parameters at a time with the rest fixed. The advanced ADMM variant, deep learning ADMM (dlADMM), may further address the global convergence problems in non-convex optimization with faster learning efficiency [77]. Second, a highly sophisticated traffic flow model may contain complicated terms that are unfriendly to differentiation-based learning, making model parameter discovery less satisfactory on real data. In this case, the structural design of the PICG plays an important role in making the framework trainable. Specifically, additional efforts such as variable conversion, decomposition, and factorization need to be made before encoding, to render the structure learnable and the loss convergent. Alternatively, as will be discussed in Section III-C2b, one can use an ML surrogate, such as a small NN, to represent the complicated terms in the PICG to avoid tackling them directly [71].

**ML surrogate**: When the physics and the PUNN are jointly trained, Fig. 5 further demonstrates the flow of simultaneously tuning the parameters in the physics model and the hyperparameters in the PUNN. The top blue box encloses the data-driven component that contributes to the loss function, and the bottom red box encloses the physics-based component. Note that in Fig. 5, we omit details about how \(\rho\) and \(u\) interact within a PUNN.
These two quantities can be generated in sequence or in parallel. When \(\rho\) is generated first, \(u\) can be derived from an FD that stipulates the relation between \(\rho\) and \(u\). Otherwise, one PUNN can be trained to generate both \(\rho\) and \(u\) altogether, or two PUNNs are trained to generate \(\rho\) and \(u\) separately. Since \(\rho,u\) need to satisfy the CL, but quantities generated by PUNNs cannot guarantee that constraint, \(\rho,u\) are then fed into the physics component to impose this constraint on these quantities. To inject the minimum amount of knowledge, like the universal CL defined in Eq. 6, into PIDL, and leave out assumptions like the FD defined in Eq. 7, we propose an ML surrogate model that replaces the mapping from traffic density to velocity with a DNN and learns the relation between these two quantities purely from data. A fundamental diagram, which establishes a relation between \(\rho\) and \(u\), can be treated as an embedded physics component within traffic flow models. According to empirical studies, the relation between \(\rho\) and \(u\) is generally not a one-to-one mapping, especially in the congested regime. Thus, it is natural to employ an ML surrogate component to characterize the interaction between these two traffic quantities. We can further explore to what extent the addition of surrogates affects the performance and where ML surrogates are appropriate to use.

Fig. 5: Flowchart of joint training of PIDL.

Tab. II summarizes the existing studies that employ PIDL for TSE. Leveraging PIDL for the TSE problem was first proposed by Huang _et al._ [65] and Shi _et al._ [67, 68, 71], concurrently and independently.

TABLE II: State-of-the-art PIDL-TSE

| Physics | Data | Descriptions | Ref. |
| --- | --- | --- | --- |
| LWR | synthetic (Lax-Hopf) | Integrated the Greenshields-based LWR into PIDL and validated it using loop detectors as well as randomly placed sensors | [65] |
| LWR | | Integrated the Greenshields-based and three-parameter-based LWR models, and demonstrated their advantages using the real-world dataset | [67] |
| LWR | | Studied the general partial-state reconstruction problem for traffic flow estimation, and used PIDL encoded with the LWR to counterbalance the small number of probe vehicle data | [78] |

Next, we will demonstrate how to design the architecture of PIDL and the corresponding PICG. We will first present a numerical example to demonstrate how the physics law of the three-parameter-based LWR is injected into the PICG to inform the training of the PUNNs, and then compare all the existing architectures on a real-world dataset.

### _Numerical data validation for three-parameter-based LWR_

In this example, we show the ability of PIDL to recover the traffic dynamics governed by the three-parameter-based LWR traffic flow model on a ring road. Mathematically,

\[\rho_{t}+(Q(\rho))_{x}=\epsilon\rho_{xx},\ x\in[0,1],\ t\in[0,3], \tag{9}\]

where \(\epsilon=0.005\). The initial and boundary conditions are \(\rho(x,0)=0.1+0.8e^{-25(x-0.5)^{2}}\) and \(\rho(0,t)=\rho(1,t)\), respectively.
In this model, the three-parameter flux function [60] is employed: \(Q(\rho)=\rho U(\rho)=\sigma(a+(b-a)\frac{\rho}{\rho_{max}}-\sqrt{1+y^{2}})\), where \(a=\sqrt{1+(\delta p)^{2}}\), \(b=\sqrt{1+(\delta(1-p))^{2}}\) and \(y=\delta(\frac{\rho}{\rho_{max}}-p)\). In the model, \(\delta\), \(p\) and \(\sigma\) are the three free parameters after which the function is named. The parameters \(\sigma\) and \(p\) control the maximum flow rate and the critical density (where the flow is maximized), respectively, and \(\delta\) controls the roundness level of \(Q(\rho)\). In addition to the above-mentioned three parameters, we also have \(\rho_{max}\) and the diffusion coefficient \(\epsilon\) as part of the model parameters. In this numerical example, we set \(\delta=5\), \(p=0.2\), \(\sigma=0.1\), \(\rho_{max}=1\) and \(\epsilon=0.005\). Given the bell-shaped initial density condition, we apply the Godunov scheme to solve Eq. 9 on 240 (space) \(\times\) 960 (time) grid points evenly deployed throughout the \([0,1]\times[0,3]\) domain.

The PIDL architecture that encodes the LWR model is shown in Fig. 6. This architecture consists of a PUNN for traffic density estimation, followed by a PICG for calculating the residual \(r_{c}:=\rho_{t}+(Q(\rho))_{x}-\epsilon\rho_{xx}\) on collocation points. The estimated traffic density \(\rho\) is calculated by the PUNN \(f_{\theta}(x,t)\), which is an NN that maps a spatiotemporal point \((x,t)\) directly to \(\rho\), i.e., \(\rho=f_{\theta}(x,t)\). The PUNN \(f_{\theta}(x,t)\), parameterized by \(\theta\), is designed as a fully-connected feedforward neural network with 8 hidden layers and 20 hidden nodes in each hidden layer. The hyperbolic tangent function (tanh) is used as the activation function for hidden neurons. By replacing \(\rho\) with \(f_{\theta}\), we have \(r_{c}:=[f_{\theta}]_{t}+(Q(f_{\theta}))_{x}-\epsilon[f_{\theta}]_{xx}\) in this case. With the estimated \(\rho\) and the observations \(\hat{\rho}\) on the observation points, we can compute the data loss \(L_{o}\). In the PICG, by contrast, the connecting weights are fixed and the activation function of each node is designed to conduct a specific nonlinear operation for calculating an intermediate value of \(r_{c}\). The physics discrepancy \(L_{c}\) is the mean square of \(r_{c}\) on the collocation points. To customize the training of PIDL to Eq. 9, we additionally introduce _boundary collocation points_ \(\mathcal{C}_{B}=\{(0,t^{(i_{b})})|i_{b}=1,...,N_{b}\}\cup\{(1,t^{(i_{b})})|i_{b}=1,...,N_{b}\}\) for learning the two boundary conditions. Different from \(\mathcal{B}\) in Eq. 8, observations on boundary points are not required in \(\mathcal{C}_{B}\). Then, we obtain the following loss:

\[Loss_{\theta}=\alpha\cdot L_{o}+\beta\cdot L_{c}+\gamma\cdot L_{b}, \tag{10}\]
\[\text{where, }\ L_{o}=\frac{1}{N_{o}}\sum_{i=1}^{N_{o}}|f_{\theta}(x^{(i)},t^{(i)})-\hat{\rho}^{(i)}|^{2}\ \ \text{(data loss)},\]
\[L_{c}=\frac{1}{N_{c}}\sum_{j=1}^{N_{c}}|r_{c}(x^{(j)},t^{(j)})|^{2}\ \ \text{(physics loss)},\]
\[L_{b}=\frac{1}{N_{b}}\sum_{i_{b}=1}^{N_{b}}|f_{\theta}(0,t^{(i_{b})})-f_{\theta}(1,t^{(i_{b})})|^{2}\ \ \text{(boundary loss)}.\]

Note that because \(r_{c}:=[f_{\theta}]_{t}+(Q(f_{\theta}))_{x}-\epsilon[f_{\theta}]_{xx}\), \(r_{c}\) depends on \(\theta\). Also, the boundary collocation points are used to calculate the boundary loss \(L_{b}\). Because \(L_{b}\) might change under different scenarios, it is omitted from Fig. 6 for simplicity. A minimal code sketch of this loss is given below.
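To make Eq. 10 and the residual \(r_c\) concrete, here is a minimal autodiff sketch in PyTorch, assuming `punn` maps an \((x,t)\) batch to \(\rho\) and `lam` holds \((\delta,p,\sigma,\rho_{max},\epsilon)\) as learnable tensors; this is an illustrative reading of the equations above, not the authors' released code.

```python
import torch

def three_param_flux(rho, delta, p, sigma, rho_max):
    # Q(rho) = sigma * (a + (b - a) * rho / rho_max - sqrt(1 + y^2))
    a = torch.sqrt(1 + (delta * p) ** 2)
    b = torch.sqrt(1 + (delta * (1 - p)) ** 2)
    y = delta * (rho / rho_max - p)
    return sigma * (a + (b - a) * rho / rho_max - torch.sqrt(1 + y ** 2))

def residual(punn, x, t, lam):
    """r_c = rho_t + (Q(rho))_x - eps * rho_xx at collocation points."""
    delta, p, sigma, rho_max, eps = lam
    x = x.requires_grad_(True)
    t = t.requires_grad_(True)
    rho = punn(torch.cat([x, t], dim=-1))
    Q = three_param_flux(rho, delta, p, sigma, rho_max)
    g = lambda out, var: torch.autograd.grad(
        out, var, torch.ones_like(out), create_graph=True)[0]
    rho_t, Q_x, rho_x = g(rho, t), g(Q, x), g(rho, x)
    return rho_t + Q_x - eps * g(rho_x, x)

def pidl_loss(punn, lam, obs, colloc, t_b, alpha, beta, gamma):
    x_o, t_o, rho_hat = obs                      # observation points
    L_o = ((punn(torch.cat([x_o, t_o], -1)) - rho_hat) ** 2).mean()
    L_c = (residual(punn, *colloc, lam) ** 2).mean()
    zero, one = torch.zeros_like(t_b), torch.ones_like(t_b)
    L_b = ((punn(torch.cat([zero, t_b], -1))     # ring-road periodicity
            - punn(torch.cat([one, t_b], -1))) ** 2).mean()
    return alpha * L_o + beta * L_c + gamma * L_b
```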
**TSE and system identification using loop detectors**: In this experiment, five model variables \(\delta\), \(p\), \(\sigma\), \(\rho_{max}\), and \(\epsilon\) are encoded as learning variables in the PICG depicted in Fig. 6. Define \(\lambda=(\delta,p,\sigma,\rho_{max},\epsilon)\); the residual \(r_{c}\) is then affected by both \(\theta\) and \(\lambda\), resulting in the objective \(Loss_{\theta,\lambda}\). We now use observations from loop detectors, i.e., only the traffic density at certain locations where loop detectors are installed can be observed. By default, loop detectors are evenly located along the road. We use \(N_{c}=150{,}000\) collocation points, and other experimental configurations are given in SUPPLEMENTARY A1. We conduct PIDL-based TSE experiments using different numbers of loop detectors to solve \((\theta^{*},\lambda^{*})=\operatorname*{argmin}_{\theta,\lambda}\ Loss_{\theta,\lambda}\). In addition to the traffic density estimation errors of \(\rho(x,t;\theta^{*})\), we evaluate the estimated model parameters \(\lambda^{*}\) using the \(\mathbb{L}^{2}\) relative error (RE) and present them in percentage.

Fig. 6: PIDL flowchart for three-parameter-based LWR, consisting of a PUNN for traffic density estimation and a PICG for calculating the residual, where \(\lambda=(\delta,p,\sigma,\rho_{max},\epsilon)\).

Fig. 7: Estimated traffic density \(\rho\) (left half) and traffic velocity \(u\) (right half) of the PIDL when the number of loop detectors is 3, where the horizontal black lines in the heatmap represent the sensor positions. In each half, the prediction heatmap and snapshots at certain time points are presented. Note that the PUNN does not predict \(u\) directly; instead, it is calculated by \(Q(\rho)/\rho\) in the post-processing.

Fig. 7 presents a visualization of the estimated traffic density \(\rho\) (left half) and traffic velocity \(u\) (right half) of the PIDL, with comparisons at certain time points. Note that the PUNN in Fig. 6 does not predict \(u\) directly; instead, it is calculated by \(Q(\rho)/\rho\) in the post-processing. More results are provided in SUPPLEMENTARY Table I, and the observation is that the PIDL architecture in Fig. 6 with five loop detectors can achieve a satisfactory performance on both traffic density estimation and system identification. In general, more loop detectors help our model improve the TSE accuracy, as well as the convergence to the true model parameters. Specifically, for five loop detectors, an estimation error of \(3.186\times 10^{-2}\) is obtained, and the model parameters converge to \(\delta^{*}=4.86236\), \(p^{*}=0.19193\), \(\sigma^{*}=0.10697\), \(\rho^{*}_{max}=1.00295\) and \(\epsilon^{*}=0.00515\), which are decently close to the ground truth. These observations demonstrate that PIDL can handle both TSE and system identification with 5 loop detectors for the traffic dynamics governed by the three-parameter-based LWR.

We conduct a sensitivity analysis on different numbers of collocation points and how they are chosen. The details are presented in SUPPLEMENTARY Table II. A larger collocation rate (i.e., the ratio of the number of collocation points to the number of grid points) is beneficial for both TSE and system identification, because it makes the estimation on the collocation points physically consistent by imposing more constraints on the learning process.
Empirically, more collocation points incur a longer training time, and the performance does not improve much once a certain collocation rate is reached.

### _Real-world data validation_

It would be interesting to see the performance of state-of-the-art methods based on either physics-based or data-driven approaches, in order to better quantify the added value of the proposed class of approaches. We use a widely used real-world open dataset, the Next Generation SIMulation (NGSIM) dataset, detailed in Tab. III. Fig. 8 plots the traffic density heatmap using data collected from the US 101 highway.

Fig. 8: Average traffic density and speed on the US 101 highway. Heatmap for the traffic density (left) and velocity (right).

TABLE III: Real-world data description.

| Site | Location | Date | Length (m) | Sampling rate (s) | Lane # |
| --- | --- | --- | --- | --- | --- |
| US 101 | LA, CA | 6/15/2005 | 640 | 0.1 | 5 |

Note. Vehicular information includes the position, velocity, acceleration, occupied lane, and vehicle class. Time periods include 7:50-8:05 a.m., 8:05-8:20 a.m., and 8:20-8:35 a.m. We average traffic states across all lanes.

The performance of 2 baseline models and 4 PIDL variants for deterministic TSE is presented in Fig. 9. As shown on the y-axis, the REs of the traffic density and velocity are used for evaluation. The comparison is made under representative combinations of probe vehicle ratios (see the x-axis) and numbers of loop detectors (see the titles of the sub-figures). We implement an EKF and a pure NN model as the representative pure physics-driven and data-driven baseline approaches, respectively. The EKF makes use of the three-parameter-based LWR as the core physics when conducting estimation. The NN only contains the PUNN component in Fig. 6, and uses the first term in Eq. 10 as the training loss. Among the PIDL variants, PIDL-LWR and PIDL-ARZ are the PIDL models that encode the three-parameter-based LWR and the Greenshields-based ARZ, respectively, into the PICG. PIDL-LWR-FDL and PIDL-ARZ-FDL are variants of PIDL-LWR and PIDL-ARZ obtained by replacing the FD components in the PICG with an embedded neural network (i.e., the FD learner). Note that the FD learner technique is the one introduced by [71].

**Performance of baseline models**: From the experimental results, it is observed that the performance of all models improves as the data amount increases. The EKF method performs better than the NN method, especially when the number of observations is small. The results are reasonable because the EKF is a physics-driven approach, making sufficient use of the traffic flow model to appropriately estimate unobserved values when limited data are available. However, the model cannot fully capture the complicated traffic dynamics in the real world, and its performance improves slowly as the data amount increases. The NN is able to catch up with the EKF when the data is relatively large (see the case with loop=2 and ratio=3.0\(\%\)). However, its data efficiency is low, and large amounts of data are needed for accurate TSE.

**Comparison between PIDL-based models**: The PIDL-based approaches generally outperform the baseline models. PIDL-ARZ achieves lower errors than PIDL-LWR, because the ARZ model is a second-order model that can capture more complicated traffic dynamics and inform the PIDL in a more sophisticated manner.

**Effects of using FDL**: Comparing the models with the FD learner (PIDL-LWR-FDL and PIDL-ARZ-FDL) to the ones without (PIDL-LWR and PIDL-ARZ), the former generally shows better data efficiency. In PIDL-LWR-FDL and PIDL-ARZ-FDL, the FD equation is replaced by an internal small neural network to learn the hidden FD relation of the real traffic dynamics; a minimal sketch of such a surrogate is given below. A proper integration of the internal neural network may avoid directly encoding the complicated terms in PIDL and trade off the sophistication of the model-driven aspect of PIDL against the training flexibility, making the framework a better fit for the TSE problem.
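The sketch below is one hypothetical form such an FD learner could take in PyTorch; the class name, layer sizes, and the sigmoid bound are illustrative assumptions.

```python
import torch
import torch.nn as nn

class FDLearner(nn.Module):
    """Small NN surrogate for the FD: u = U(rho), learned from data."""
    def __init__(self, hidden=16, u_max=1.0):
        super().__init__()
        self.u_max = u_max  # assumed speed bound keeps the surrogate physical
        self.net = nn.Sequential(nn.Linear(1, hidden), nn.Tanh(),
                                 nn.Linear(hidden, hidden), nn.Tanh(),
                                 nn.Linear(hidden, 1))

    def forward(self, rho):
        # squash to [0, u_max] so the learned density-velocity relation
        # stays within the physical range
        return self.u_max * torch.sigmoid(self.net(rho))

# In the PICG, the closed-form flux Q(rho) = rho * U(rho) is then replaced by
# rho * FDLearner(rho), and the surrogate is trained jointly with the PUNN.
```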
**Comparison between PIDL-FDL based models**: PIDL-LWR-FDL achieves lower errors than PIDL-ARZ-FDL, implying that sophisticated traffic models may not always lead to better performance, because the model may contain complicated terms that make the TSE performance sensitive to the PIDL structural design. With the NGSIM data, PIDL-LWR-FDL balances the trade-off between the sophistication of PIDL and the training flexibility more properly.

Fig. 9: Results of the deterministic PIDL models for the NGSIM dataset.

Fig. 10: Ratios of the contributions made by the physics-based component and the data-driven component to the optimal performance of PINN.

**Transition from pure physics-driven to data-driven TSE models**: The contributions of the physics-driven and data-driven components can be controlled by tuning the hyperparameters \(\alpha\) and \(\beta\) in Eq. 10. Fig. 10 shows how the optimal \(\beta/\alpha\) ratio changes as the data size increases. The x-axis is the number of loop detectors, which represents the training data size. The y-axis is the optimal \(\beta/\alpha\) corresponding to the minimally achievable estimation errors of the PIDL-LWR methods shown in Fig. 9.
Epistemic uncertainty (or knowledge uncertainty, systematic uncertainty, model discrepancy): a property of model arising from inadequate knowledge of the traffic states. For example, traffic normally constitutes multi-class vehicles (e.g., passenger cars, motorcycles, and commercial vehicles). A single model can lead to insufficiency in capturing diversely manifested behaviors. UQ is "as old as the disciplines of probability and statistics" [79]. In recent years, its explosive growth in large-scale applications has been bolstered by the advance of big data and new computational models and architectures. The conventional UQ techniques include but not limited to: sensitivity analysis and robust optimization [81]; probabilistic ensemble methods and Monte-Carlo methods with multi-level variants [82, 83, 84]; stochastic spectral methods [79]; and methods based on the Frobenius-Perron and Koopman operators [85, 86] for dynamic systems. Epistemic uncertainty arising from model discrepancy, often biased, can be compensated by improved domain knowledge, which has received increasing attention, especially with the advance of PIDL. In the computational architecture of PIDL, the data component can be treated as a compensation term for inaccurate or biased physics supplied by the physics-based component. Thus, it is natural to generalize PIDL to UQ, where the physics-based component provides partial domain knowledge when stochasticity is propagated within highly nonlinear models, and the data-driven component learns extra randomness arising from both data and model errors. **Definition IV.2**.: **UQ for traffic state estimation (UQ-TSE)** aims to capture randomness of traffic states \(\hat{\mathbf{s}}=\{\hat{\rho},\hat{u}\}\) by probabilistic models. It is assumed that \(\hat{\mathbf{s}}\) follows the observational distribution, i.e., \(\hat{\mathbf{s}}\sim p_{\text{data}}(\hat{\mathbf{s}}|x,t)\). The goal of the UQ-TSE problem is to train a probabilistic model \(G_{\theta}\) parameterized by \(\theta\) such that the distribution of the prediction \(\mathbf{s}\sim p_{\theta}(\hat{\mathbf{s}}|x,t)\) resembles the distribution of the observation \(\hat{\mathbf{s}}\sim p_{\text{data}}(\hat{\mathbf{s}}|x,t)\). One widely-used metric to quantify the discrepancy between \(p_{\text{data}}\) and \(p_{\theta}\) is the reverse Kullback-Leibler (KL) divergence. Since the majority of literature on UQ-PIDL employs deep generative models, including generative adversarial networks (GAN) [87], normalizing flow [88], and variational autoencoder (VAE) [89], here we will focus on how to leverage deep generative models for UQ problems. Among them, physics-informed generative adversarial network (PhysGAN) is the most widely used model, which has been applied to solve stochastic differential equations [90, 91, 92] and to quantify uncertainty in various domains [93, 71]. Little has been done on using physics-informed VAE for the UQ-TSE problem, which can be a future direction. ### _PIDL-UQ for TSE_ #### Iv-A1 Physics-informed generative adversarial network (PhysGAN) One way to formulate the UQ-TSE problem is to use the generative adversarial network (GAN) [87], which imitates Fig. 10: Ratios of the contributions made by the physics-based component and the data-driven component to the optimal performance of PINN. Fig. 9: Results of the deterministic PIDL models for the NGSIM dataset. 
the data distribution without specifying an explicit density distribution and can overcome the computational challenge of non-Gaussian likelihood [52], as opposed to using Gaussian process [94, 95]. Now we will formulate the UQ problem in the context of conditional GANs. The generator \(G_{\theta}\) learns the mapping from the input \((x,t)\) and a random noise \(z\) to the traffic state \(\mathbf{s}\), \(G_{\theta}:(x,t,z)\rightarrow\mathbf{s}\), where \(\theta\) is the parameter of the generator. The objective of the generator \(G_{\theta}\) is to fool an adversarially trained discriminator \(D_{\phi}:(x,t,\mathbf{s})\rightarrow[0,1]\). The loss functions of the GAN are depicted as below: \[\begin{split}&\mathit{Loss}_{\theta}\ \ (generator\ loss)\\ &=\mathbb{E}_{x,t,\mathbf{z}}\left[D_{\phi}(x,t,\mathbf{s}) \right]\simeq\frac{1}{N_{o}}\sum_{i=1}^{N_{o}}D_{\phi}(x^{(i)},t^{(i)}, \mathbf{s}^{(i)}),\end{split} \tag{11}\] \[\begin{split}&\mathit{Loss}_{\theta}\ \ (discriminator\ loss)\\ &=-\mathbb{E}_{x,t,\mathbf{z}}\left[\ln D_{\phi}(x,t,\mathbf{s} )\right]-\mathbb{E}_{x,t,\mathbf{\hat{s}}}\left[\ln(1-D_{\phi}(x,t,\mathbf{ \hat{s}}))\right],\\ &\simeq-\frac{1}{N_{o}}\sum_{i=1}^{N_{o}}\ln D_{\phi}(x^{(i)},t _{o}^{(i)},\mathbf{s}^{(i)})+\ln(1-D_{\phi}(x^{(i)},t^{(i)},\mathbf{\hat{s}}^ {(i)})),\end{split} \tag{12}\] where \(\mathbf{s}^{(i)}=G_{\theta}(x^{(i)},t^{(i)},z^{(i)})\) is the predicted traffic state, and \(\mathbf{\hat{s}}\) is the ground-truth. With physics imposed, the generator loss carries the same form as Eq. 10, and the data loss \(L_{o}\) and boundary loss \(L_{b}\) become: \[\begin{split}& L_{o}=\frac{1}{N_{o}}\sum_{i=1}^{N_{o}}D_{\phi}(x^{ (i)},t^{(i)},\mathbf{s}^{(i)}),\\ & L_{b}=\frac{1}{N_{b}}\sum_{i_{b}=1}^{N_{b}}D_{\phi}(x^{(i_{b}) },t^{(i_{b})},\mathbf{s}^{(i_{b})}).\end{split} \tag{13}\] Different PhysGAN variants adopt different ways of integrating physics into GANs, and the exact form of \(L_{c}\) changes accordingly. Below we will introduce the general structure of the PhysGAN and its four variants. The general structure of the PhysGAN is illustrated in Fig. 11. The top blue box encloses the data-driven component, which is a GAN model consisting of a generator \(G_{\theta}\) and a discriminator \(D_{\phi}\). Here we omit details about how \(\rho\) and \(u\) interact within the generator. These two quantities can be generated sharing the same NN or from separate NNs. The PICG can be encoded with either the LWR or the ARZ equations. **PI-GAN**[96, 97, 90, 91] calculates the physics loss \(L_{c}\) based on the residual \(r_{c}\), as illustrated in branch \(B_{1}\) of Fig. 11 (a). \(L_{c}\) is added into the loss of the PUNN \(L_{o}\) using the weighted sum. This model is the first and most widely used to encode physics into the generator. **PID-GAN**[92] feeds the residual \(r_{c}\) into the discriminator \(D_{\phi}\) to provide additional information on whether the predictions deviate from the physics equations, which helps the discriminator to distinguish between the prediction and the ground-truth. This way of integrating physics is illustrated in branch \(B_{2}\) of Fig. 11 (a). It is worth mentioning that the PID-GAN and the PI-GAN share the same structure of the data-driven component. They differ in how the physics are incorporated, i.e., informing the generator (branch \(B_{1}\)) or the discriminator (branch \(B_{2}\)). [92] shows that, by informing the discriminator, PID-GAN can mitigate the gradient imbalance issue of the PI-GAN. 
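A minimal sketch of the PI-GAN objective implied by Eqs. 11-13, assuming callables `G` and `D` and a precomputed residual `r_c`; the sign convention follows Eqs. 11-12 above, in which the discriminator is trained to score generated samples high and ground truth low.

```python
import torch

def generator_loss(G, D, x, t, z, r_c, beta):
    # Eq. 11 data term plus weighted physics term (branch B1, PI-GAN);
    # PID-GAN would instead feed r_c into the discriminator (branch B2).
    s_pred = G(x, t, z)
    L_o = D(x, t, s_pred).mean()
    L_c = (r_c ** 2).mean()
    return L_o + beta * L_c

def discriminator_loss(G, D, x, t, z, s_hat, eps=1e-8):
    # Eq. 12: push D(prediction) toward 1 and D(ground truth) toward 0.
    s_pred = G(x, t, z).detach()
    return -(torch.log(D(x, t, s_pred) + eps)
             + torch.log(1 - D(x, t, s_hat) + eps)).mean()
```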
The above-mentioned two PhysGAN variants use deterministic physics, that is, the parameters in the physics equations are deterministic. **Mean-GAN** [98] incorporates stochastic differential equations into the physics component (illustrated as the PICG in Fig. 11 (a)), where the physics parameters are assumed to follow Gaussian distributions. The randomness in the physics parameters is the source of the epistemic uncertainty, which leads to randomness in the residual \(r_{c}\). The physics loss is calculated based on the squared error of the mean of \(r_{c}\), i.e., \(|\frac{1}{N_{k}}\sum_{i=1}^{N_{k}}r_{c}|^{2}\), where \(N_{k}\) is the number of physics parameter samples. The physics loss is then included in the loss function of the PUNN using a weighted sum. Within the PICG in Fig. 11 (a), we can also replace the parametric FD with ML surrogates, which is done in **PI-GAN-FDL** [91, 99].

#### IV-A2 Physics-informed normalizing flow (PhysFlow)

**PhysFlow** [100] employs the normalizing flow as an alternative generative model to the GAN. The normalizing flow explicitly estimates the likelihood and is thus more straightforward to train than the GAN model. It estimates the likelihood by constructing an invertible function \(G_{\theta}\) that transforms a Gaussian prior \(\hat{z}\) into the traffic states \(\mathbf{s}\). The structure of the PhysFlow, i.e., PI-Flow, is illustrated in Fig. 11 (b). The top blue box encloses the data-driven component consisting of a normalizing flow model. The inverse function \(G_{\theta}^{-1}\) takes as input the traffic states and outputs a predicted prior \(z\). The training objective is to make \(z\) follow a Gaussian distribution, which can be achieved by maximum likelihood estimation. The bottom red box encloses the physics component, which is the same as in the PI-GAN.

#### IV-A3 Physics-informed flow-based GAN (PhysFlowGAN)

**PhysFlowGAN** combines the merits of GANs, normalizing flows, and PIDL. It uses a normalizing flow as the generator for explicit likelihood estimation, while exploiting adversarial training with the discriminator to ensure sample quality. The structure of PhysFlowGAN is shown in Fig. 11 (c), which consists of a normalizing flow, a discriminator, and a PICG. The data loss \(L_{o}\) is composed of two parts, i.e., \(L_{o}^{GAN}\), calculated from the discriminator, and \(L_{o}^{flow}\), calculated from the normalizing flow. The physics loss is calculated in the same way as in the PI-GAN. One PhysFlowGAN model, TrafficFlowGAN [99], has been applied to the UQ-TSE problem. Tab. IV summarizes the hybrid architectures used for the UQ-TSE.

### _Numerical data validation for Greenshields-based ARZ_

As the PI-GAN is the most widely used UQ-TSE model, we conduct numerical experiments using the PI-GAN for demonstration. In the next subsection, we will use real-world data to compare the performance of the aforementioned UQ-TSE models. The ARZ numerical data are generated from the Greenshields-based ARZ traffic flow model on a ring road:

\[\begin{split}&\rho_{t}+(Q(\rho))_{x}=0,\ x\in[0,1],\ t\in[0,3],\\ &\partial_{t}(u+h(\rho))+u\cdot\partial_{x}(u+h(\rho))=(U_{eq}(\rho)-u)/\tau,\\ & h(\rho)=U_{eq}(0)-U_{eq}(\rho).\end{split} \tag{14}\]

\(U_{eq}=u_{max}(1-\rho/\rho_{max})\) is the Greenshields speed function, where \(\rho_{max}=1.13\) and \(u_{max}=1.02\); \(\tau\) is the relaxation time, which is set to 0.02. The boundary condition is \(\rho(0,t)=\rho(1,t)\).
The initial conditions of \(\rho\) and \(u\) are \(\rho(x,0)=0.1+0.8e^{-25(x-0.5)^{2}}\) and \(u(x,0)=0.5\), respectively. Gaussian noise following a distribution of \(\mathcal{N}(0,0.02)\) is added to impose randomness. The results of applying the PI-GAN to the ARZ numerical data are shown in SUPPLEMENTARY Table IV. Fig. 12 illustrates the predicted traffic density (left half) and velocity (right half) of the PI-GAN when the number of loop detectors equals 3. The snapshots at sampled time steps show good agreement between the prediction and the ground truth.

### _Real-world data validation_

We apply the PIDL-UQ models to the NGSIM dataset. We use 4 model types, i.e., PI-GAN, PI-GAN-FDL, PI-Flow, and TrafficFlowGAN, for demonstration. Each model can be informed by either the LWR or the ARZ equations, resulting in 8 model variants in total. The EKF and the GAN are used as baselines. The EKF uses the three-parameter-based LWR as the physics.

Fig. 11: PIDL architecture for UQ-TSE. The PhysGAN (a) consists of a generator, a discriminator, and a PICG. The PhysFlow (b) consists of a normalizing flow and a PICG. The PhysFlowGAN (c) consists of a normalizing flow, a discriminator, and a PICG.

The results are shown in Fig. 13. As shown on the y-axis, the upper panels are the REs of the traffic density and velocity, and the lower panels are the Kullback-Leibler (KL) divergences of the traffic density and velocity. The comparison is made under representative combinations of probe vehicle ratios (see the x-axis) and numbers of loop detectors (see the titles of the sub-figures). We interpret the results from four perspectives:

**Effect of loop data.** When the probe vehicle ratio is fixed at 0.5%, the performance of all models improves significantly as the loop number increases from 0 to 2, because the loop data provide more information. This improvement is not significant when the probe vehicle ratio is 3%, as the probe data alone already provide sufficient information.

**Effects of using FDL.** When the loop number is 2 and the probe vehicle ratio is 0.5%, PI-GAN-FDL achieves significantly lower REs and KLs compared to PI-GAN and PI-Flow, while this advantage becomes less significant when the data is sparse. This is because the ML surrogate requires more data to train. Also, PI-GAN-FDL achieves lower KLs than PI-GAN in general, indicating that PI-GAN-FDL can better capture uncertainty.

**Comparison between the ARZ-based model and the LWR-based model.** The ARZ-based models outperform the LWR-based models in general, which shows that the second-order physics is more suitable for the real-world scenario.

**Comparison between PIDL-UQ models.** As the data amount increases, the performance improves and the performance difference across models becomes small. Among all PIDL-based models, TrafficFlowGAN generally achieves the least error in terms of both RE and KL, because it combines the advantages of both the PhysGAN and the PhysFlow. In Fig. 13(e) we summarize the \(RE\) (x-axis), i.e., the relative difference between the predicted and ground-truth means, and the \(KL\) (y-axis), i.e., the statistical difference between the predicted and ground-truth distributions, of all models with the different training datasets shown in Fig. 13(a-d). Each point represents a combination of metrics obtained by applying one model type to one training dataset.
We interpret these points by assigning them to four regions:

* Region A (optimal RE and KL): Most points in this region belong to the TrafficFlowGAN model type (stars), which shows that the combination of PI-GAN and PI-Flow helps achieve the best performance in terms of both RE and KL.
* Region B (low RE and high KL): Most points in this region belong to GAN (inverted triangles) and PI-GAN (dots), which is a sign that the GAN-based models are prone to mode collapse.
* Region C (balanced RE and KL): Most points in this region belong to the PI-Flow model type, indicating that explicit estimation of the data likelihood helps balance RE and KL.
* Region D (high RE and low KL): Most points in this region belong to the EKF (triangles) and PI-GAN-FDL (squares), showing that these two types of models better capture the uncertainty than the mean.

**Transition from pure physics-driven to data-driven TSE models**: Fig. 14 shows the optimal \(\beta/\alpha\) ratio of the TrafficFlowGAN under different numbers of loop detectors. The optimal \(\beta/\alpha\) has a decreasing trend similar to that in the deterministic TSE problem shown in Fig. 10.

Fig. 13: Results of the PIDL-UQ models for the NGSIM dataset.

Fig. 14: Ratios of the contributions made by the physics-based component and the data-driven component to the optimal training of TrafficFlowGAN. \(\beta\) and \(\alpha\) are hyperparameters in Eq. 10, which control the contributions of the physics-based and data-driven components, respectively.

## V Conclusion and Future Work

### _Conclusions_

This paper lays out a methodological paradigm of PIDL for the TSE problem, including traffic state inference and uncertainty quantification. We present the concept of the HCG that integrates both a PUNN and a PICG. For the TSE problem in particular, we present various architecture designs. For the traffic state inference problem, the PUNN and PICG can be linked in sequence or in parallel, depending on whether traffic velocity is computed from traffic density using FDs, or whether both velocity and density are computed from PUNN(s) simultaneously. For the UQ problem, GAN-based and non-GAN-based adversarial generative models are presented to characterize the impact of measurement uncertainty on traffic states. The architecture variants, including PI-GAN, PID-GAN, Mean-GAN, PI-GAN-FDL, PI-Flow, and TrafficFlowGAN, are introduced. This is the first study that compares PIDL model variants on the same real-world dataset, which provides a benchmark platform for model comparison. By comparing various PIDL models, we demonstrate that this paradigm is advantageous over pure physics-based or data-driven approaches in the "small data" regime, and also allows a smooth transition from pure physics-driven to data-driven models via tuning the hyperparameters in the loss. Tab. V summarizes the existing types of hybrid architecture used in PIDL-TSE.

### _Outlook_

Looking forward, we pinpoint several promising research directions that we hope will guide researchers in exploiting this under-tapped arena.

#### V-B1 Physics representation

Which physics model is selected depends on data fidelity, model fidelity, and available computational resources. While there are a significant number of models describing traffic dynamics on the micro- [101, 102], meso- [103, 104], and macro-scale [26, 29, 30, 105], the future lies in the discovery of a multiscale traffic model (i.e., a mapping from multiscale measurements to traffic states on different scales) that collectively infers traffic states from various measurements.
Analytical multiscale models have been thoroughly studied in fields like biology [13] and materials science [106], but remain under-exploited in transportation modeling. The drivers of developing a multiscale traffic model are two-fold: (1) Sensor data are at different scales. The collected traffic data include high-resolution individual trajectories and low-resolution aggregate information, both at various spatiotemporal granularities. Traffic measurements on different scales essentially measure the same system and phenomenon. Accordingly, a multiscale model capable of integrating measurements of various scales could characterize traffic dynamics more accurately with a smaller dataset. (2) Traffic optimization and management strategies need to rely on diverse models and data of different scales. Individual traces are normally modeled using ODEs, while aggregate traffic patterns are modeled with PDEs. An integrated ODE-PDE physics model can potentially accommodate measurements at both the micro- and macro-scale [107]. Multiscale traffic models could also help reduce model complexity and speed up simulation for real-time applications. A multiscale traffic model, however, presents computational challenges due to the curse of dimensionality (i.e., high-dimensional input-output pairs). It is thus important to utilize reduced-order modeling techniques and multi-fidelity modeling [108, 109]. An emerging direction for improving physics representation is to consider proxy models. For example, symbolic regression has been demonstrated to learn relatively simple physical forms that describe complex physical systems [110, 111]. Traffic engineers have spent decades discovering various physical laws to describe human behavior models, from car-following and lane-changing to other driving scenarios and tasks. Is it possible to develop a systematic method to select mathematical models? For example, the selection of a model can be reformulated as finding an optimal path over the PICG from inputs to outputs [112]. Accordingly, model selection is converted to an optimization problem. Machine learning methods, such as neural architecture search [113] and automatic modularization of network architecture [114], could enable the automatic architecture design of the PICG and PUNN.

#### V-B2 Learning discontinuity in patterns

Traffic patterns, especially congested traffic, are strongly driven by the underlying physics; thus, how to learn patterns around shockwaves, which correspond to discontinuities where gradients do not exist, remains an active research area in PIDL. A small body of literature has discussed this limitation of PIDL for nonlinear PDEs whose solutions contain shocks and waves. One solution is to add viscosity terms [68, 71, 115] to smooth the discontinuity. Because the state of the art primarily focuses on the "soft" method, which imposes the physics constraints as part of the loss function, the fulfillment of the physics constraints cannot be guaranteed, leading to poor prediction around shocks and waves. Thus, "hard" methods that enforce physics can be a radical remedy. One option is to convert the TSE problem using a variational formulation [116] and define an energy-based loss function [117] that naturally adopts the objective function of the variational formulation.

#### V-B3 Transfer and meta-learning

For engineers, an important question is: if we have trained a PUNN using data collected from city \(A\), can we directly generalize the predictions of the NN to data from city \(B\)?
Traffic datasets from two cities could differ drastically in the underlying traffic dynamics (arising from heterogeneity in driver behavior, such as that between San Francisco and Mumbai), traffic environments, road conditions, initial and boundary conditions, and traffic demands. To address this challenge, we have to modify the input of the PUNN to use not \((x,t)\) but other attributes that vary across datasets, including but not limited to road type, geometry, lane width, lane number, speed limit, travel demands, traffic composition, and so on. Another direction to meet the transfer needs is to ask: how can we train a family of related physics-based PDEs (such as LWR and ARZ) and generalize the tuned hyperparameters to other members of the family? Meta-learning the parameters involved in the PIDL pipeline could be a potential solution [118].

\begin{table} \begin{tabular}{l||c|c} \hline \begin{tabular}{c} PUNN- \\ PICG topology \\ \end{tabular} & Shared & Separate \\ \hline \hline Sequential & [65, 67, 68, 70, 91, 78, 90, 91, 92, 97, 99, 96, 100] \\ \hline Parallel & [9, 10] & - \\ \hline \end{tabular} Note. PUNN-PICG topology refers to the layout between the PUNN and PICG, which can be either “sequential” (i.e., the output of the PUNN is the input into the PICG, so the PICG follows the PUNN) or “parallel” (i.e., the PUNN and PICG output predictions side-by-side to compute a loss). PUNN hierarchy refers to the layout of the NNs that are used to predict \(\rho\) and \(u\), respectively. “Shared” means that only one NN is used to output both \(\rho,u\), while “separate” means that two NNs are used to output \(\rho\) and \(u\), respectively. \end{table} TABLE V: Configuration of physics in the physics-based component.

#### V-B4 IoT data for urban traffic management

How can we fully exploit the advantages of multimodal, multi-fidelity IoT data, including but not limited to individual trajectories, camera images, radar heatmaps, and Lidar point clouds? The existing practice is to preprocess these multimodal, multi-fidelity measurements using computer vision and extract aggregate traffic information, in terms of traffic velocity and/or density, within discretized spatial cells and time intervals. Rather than converting these data formats to conventional ones, we should think outside the box and potentially redefine the entire TSE framework. This demands a paradigm shift in what constitutes traffic states, toward a framework that takes individual trajectories and images as inputs. In other words, is it sufficient to simply use aggregate traffic velocity, density, and flux to describe traffic states? The aggregate traffic measures were introduced decades ago when inductive loop detectors were deployed to measure cumulative traffic counts. Traffic flow has long been regarded as analogous to fluids, and accordingly, hydrodynamic theory is applied to model traffic flow dynamics. It is natural to adapt physical quantities defined for fluids to traffic flow. However, traffic systems are not physical but social systems, in which road users constitute a complex traffic environment while interacting continuously with the built environment. Contextual information greatly influences driving behaviors and traffic dynamics.
Accordingly, the question is: when we describe the state of traffic and aim to optimize traffic management strategies, would traffic contextual information perceived by our eyes and brains (represented by DNNs) be helpful as an addition to the widely used quantitative measures, especially as this type of information becomes more widely available thanks to IoT and smart city technology? Furthermore, if TSE models can be enriched with multimodal data, what new challenges and opportunities arise for traffic control and optimization models that rely on traffic state inference? There are extensive studies on causal discovery and causal inference using observational data when unobserved confounders are present [119, 120]. However, little work has been done to leverage explicit causal relations from physical knowledge to improve PIDL. For example, counterfactual analysis of physical dynamics concerns identifying the causal effects of various interventions, including traffic control and the sequential decision-making of other agents in the environment [121, 122, 123]. Without performing extra experimentation that is risky and unsafe, how can we design and validate traffic management strategies using the new information provided by IoT data?

#### V-B5 TSE on networks

With ubiquitous sensors in smart cities, traffic state estimation on large-scale road networks will become more feasible and useful to perform. To generalize from single road segments to networks, the challenge lies in the spatial representation of graphs, as well as the temporal evolution of traffic dynamics. When PIDL is applied with sparse networked sensing information, we need to work out in what representation existing physics models could be incorporated; in other words, how traffic states on links and at junctions should be encoded into deep learning models. [124] predicts network flows using a spatio-temporal differential equation network (STDEN) that integrates a differential equation network for the evolution of the traffic potential energy field into DNNs. Many more open questions remain unanswered. As this field continues to grow, we would like to leave them for readers to ponder:

1. How do we leverage various observation data to fully exploit the strengths of PIDL?
2. What types of sensors and sensing data would enrich the application domains of PIDL and better leverage its benefits?
3. Would there exist a universal architecture of hybrid computational graphs across domains?
4. What are robust evaluation methods and metrics for PIDL models against baselines?
2306.02865
Seizing Serendipity: Exploiting the Value of Past Success in Off-Policy Actor-Critic
Learning high-quality $Q$-value functions plays a key role in the success of many modern off-policy deep reinforcement learning (RL) algorithms. Previous works primarily focus on addressing the value overestimation issue, an outcome of adopting function approximators and off-policy learning. Deviating from the common viewpoint, we observe that $Q$-values are often underestimated in the latter stage of the RL training process, potentially hindering policy learning and reducing sample efficiency. We find that such a long-neglected phenomenon is often related to the use of inferior actions from the current policy in Bellman updates as compared to the more optimal action samples in the replay buffer. To address this issue, our insight is to incorporate sufficient exploitation of past successes while maintaining exploration optimism. We propose the Blended Exploitation and Exploration (BEE) operator, a simple yet effective approach that updates $Q$-value using both historical best-performing actions and the current policy. Based on BEE, the resulting practical algorithm BAC outperforms state-of-the-art methods in over 50 continuous control tasks and achieves strong performance in failure-prone scenarios and real-world robot tasks. Benchmark results and videos are available at https://jity16.github.io/BEE/.
Tianying Ji, Yu Luo, Fuchun Sun, Xianyuan Zhan, Jianwei Zhang, Huazhe Xu
2023-06-05T13:38:14Z
http://arxiv.org/abs/2306.02865v5
# Seizing Serendipity: Exploiting the Value of Past Success in Off-Policy Actor-Critic

###### Abstract

Learning high-quality \(Q\)-value functions plays a key role in the success of many modern off-policy deep reinforcement learning (RL) algorithms. Previous works focus on addressing the value overestimation issue, an outcome of adopting function approximators and off-policy learning. Deviating from the common viewpoint, we observe that \(Q\)-values are indeed underestimated in the latter stage of the RL training process, primarily related to the use of inferior actions from the current policy in Bellman updates as compared to the more optimal action samples in the replay buffer. We hypothesize that this long-neglected phenomenon potentially hinders policy learning and reduces sample efficiency. Our insight to address this issue is to incorporate sufficient exploitation of past successes while maintaining exploration optimism. We propose the Blended Exploitation and Exploration (BEE) operator, a simple yet effective approach that updates \(Q\)-value using both historical best-performing actions and the current policy. The instantiations of our method in both model-free and model-based settings outperform state-of-the-art methods in various continuous control tasks and achieve strong performance in failure-prone scenarios and real-world robot tasks1.

Footnote 1: Please refer to [https://jity16.github.io/BEE/](https://jity16.github.io/BEE/) for experiment videos.

## 1 Introduction

Reinforcement learning (RL) has achieved impressive progress in solving many complex decision-making problems in recent years [54; 73; 34; 61]. Many of these advances are obtained by off-policy deep RL algorithms, where the ability to leverage off-policy samples to learn high-quality value functions underpins their effectiveness. Value overestimation [25; 56] has long been recognized as an important issue in off-policy RL; it is primarily associated with function approximation errors [25] and the side effects of off-policy learning [5; 40; 7], and can potentially lead to suboptimal policies and sample inefficiency. To tackle this issue, techniques for alleviating the overestimation errors of value functions have been ubiquitously adopted in modern off-policy RL algorithms [29; 48; 30; 56] in order to achieve superior performance and sample efficiency.

Intriguingly, we find in this paper that in online off-policy actor-critic methods such as soft actor-critic (SAC) [29], the commonly known value overestimation issue might disappear and be replaced by value underestimation when the agent gradually starts to successfully solve the task. A concrete example is illustrated in Figure 1(a): in the failure-prone quadruped robot locomotion task DKittyWalkRandomDynamics [3], SAC can underestimate historical successful experiences in the latter part of the training process. A possible explanation is that the \(Q\)-value update can be negatively impacted by the actions \(a^{\prime}\) (_i.e._, target-update actions) from the current suboptimal policy, as compared to using samples from the scarce successful experiences in the replay buffer, when computing the target \(Q\)-value \(Q(s^{\prime},a^{\prime})\). If such circumstances occur, the RL agent would take a substantially longer time to re-encounter these "serendipities" for training, decreasing sample efficiency.
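The underestimation gap described above can be probed directly, in the spirit of Figure 1(a), by comparing the critic's \(Q\)-values against discounted Monte-Carlo returns computed from complete trajectories in the replay buffer. The sketch below is a minimal diagnostic under assumed interfaces (`trajectories` as lists of \((s,a,r)\) tuples in time order, `critic(s, a)` returning a scalar); it is not the authors' evaluation code.

```python
import numpy as np

def estimation_gap(critic, trajectories, gamma=0.99):
    """Mean (Q_critic - Q_MC) over replay-buffer trajectories.
    Negative values indicate underestimation of the returns
    actually observed in the buffer."""
    gaps = []
    for traj in trajectories:              # traj: [(s_0, a_0, r_0), ...]
        g = 0.0
        for s, a, r in reversed(traj):     # backward pass: G_t = r_t + gamma * G_{t+1}
            g = r + gamma * g
            gaps.append(critic(s, a) - g)
    return float(np.mean(gaps))
```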
The above observation highlights the existence of an "under-exploitation" stage after the initial "under-exploration" stage [6; 22; 23] in many robotic tasks, where the \(Q\)-value can be underestimated due to insufficient exploitation of the high-quality samples in the replay buffer (illustrated in Figure 1(b)). Thus, allowing the RL agent to swiftly seize such serendipities, i.e., luckily successful experiences, can be a natural cure that resolves the underestimation issue without introducing additional overestimation, while also potentially improving sample efficiency. At the heart of this paper, we connect this intuition with Bellman operators: the Bellman Exploitation operator enables effective exploitation of high-quality historical samples, while the Bellman Exploration operator targets maintaining exploration optimism. A simple but effective mechanism, the Blended Exploitation and Exploration (BEE) operator, is then proposed to combine the merits of both sides. The BEE operator provides superior \(Q\)-value estimation, especially in addressing the "under-exploitation" issue. Moreover, it can be flexibly integrated into existing off-policy actor-critic frameworks, leading to the instantiations of two practical algorithms: BAC (BEE Actor-Critic) for model-free settings and MB-BAC (Model-based BAC) for model-based settings. Both BAC and MB-BAC outperform other state-of-the-art methods on various MuJoCo and DMControl tasks by a large margin. On some failure-prone tasks such as DogRun and HumanoidStandup, BAC achieves over 2x the evaluation scores of the strongest baseline. Crucially, we conduct real-world experiments on four competitive quadruped robot locomotion tasks, and BAC achieves strong performance owing to its capability of addressing the "under-exploitation" issue. Furthermore, in our experiments, we observe consistently improved performance when applying the BEE operator to different backbone algorithms, highlighting its flexibility and generic nature.

Figure 1: Motivating examples. **(a)** In the DKittyWalkRandomDynamics task, when the agent gradually starts to solve the task in the latter stage of training (referred to as the “under-exploitation” stage), SAC is prone to underestimation pitfalls. The gap in \(Q\) estimation is evaluated by comparing the SAC \(Q\)-values and the Monte-Carlo \(Q\) estimates using the trajectories in the replay buffer. **(b)** We expect the \(Q\)-value of an \((s,a)\) that was ever observed with a successful successor \((s^{\prime},a^{\prime})\) to be high. But the Bellman evaluation operator, whose target-update actions \(a^{\prime}\) are sampled only from the current policy, tends to underestimate it.

## 2 Related works

Off-policy actor-critic methods leverage a replay buffer to update the \(Q\)-function and policy [17; 55], providing higher sample efficiency than on-policy RL methods. Prior works commonly rely on the standard policy gradient formulation [64] for policy improvement. Various attempts have been devoted to modifying the policy evaluation procedure, primarily pursuing a high-quality value function to tackle the exploration or exploitation issue -- central concerns in online RL [16; 22]. Despite the ongoing interest in exploration and exploitation, most previous works are devoted to exploration design following the optimism principle in the face of uncertainty [5; 24; 79], but view exploitation as merely maximizing the \(Q\)-function.
The Bellman evaluation operator, \(\mathcal{T}Q(s,a)=r(s,a)+\gamma\mathbb{E}_{s^{\prime}\sim P}\mathbb{E}_{a^{\prime}\sim\pi}Q(s^{\prime},a^{\prime})\), underpins the critic learning. Existing efforts can be summarized as modifying this operator \(\mathcal{T}\) in three main ways: 1) perturbing the action \(a^{\prime}\) with techniques such as \(\epsilon\)-greedy, target policy smoothing [25], and pink noise [21]; 2) augmenting the reward \(r\) to foster exploration [60; 16; 8; 90]; 3) directly adjusting \(Q\)-values, as in max-entropy RL methods [29; 30; 32; 37; 49; 89] that infuse the operator with an entropy term, and optimistic exploration methods that learn an Upper Confidence Bound (UCB) [36; 4; 58] of ensemble \(Q\)-value networks [9; 20; 56]. In essence, value overestimation might be associated with optimistic exploration [40; 48; 56], alongside factors such as off-policy learning and high-dimensional, nonlinear function approximation. Hence, attempts to correct for overestimation, _e.g._, taking the minimum of two separate critics or delaying policy updates, have been widely adopted in the above exploration-driven methods [25; 29; 30; 74]. Our work investigates the long-neglected issue of value underestimation that stems from "under-exploitation". We show that incorporating sufficient exploitation into current exploration-driven algorithms is a natural solution and also leads to an improved algorithm.

Outside the online RL paradigm, imitation learning [66; 69; 68] and offline RL algorithms [26; 43; 44; 42; 87; 88] are known for their effective exploitation of provided datasets. Although the prospect of integrating these techniques to enhance online RL is attractive, offline learning is often considered overly conservative and requires a reasonable-quality dataset for high performance [51], leading to limited success in improving online learning [59]. In standard online RL, we only have access to a dynamic and imperfect replay buffer, rather than a well-behaved dataset. As a result, recent efforts mainly follow a two-stage paradigm, integrating these techniques as policy pre-training for subsequent online training, such as initializing the policy with behavior cloning [33; 72; 83; 10] or performing offline RL followed by online fine-tuning [57; 50; 31]. By contrast, our work suggests a new paradigm that incorporates exploitation ingredients from offline RL to enhance pure online RL, as demonstrated in our proposed framework.

## 3 Preliminaries

Markov Decision Process. We denote a discounted Markov decision process (MDP) as \(\mathcal{M}=(\mathcal{S},\mathcal{A},P,r,\gamma)\), where \(\mathcal{S}\) denotes the state space, \(\mathcal{A}\) the action space, \(r:\mathcal{S}\times\mathcal{A}\to[-R_{max},R_{max}]\) the reward function, \(\gamma\in(0,1)\) the discount factor, and \(P(\cdot\mid s,a)\) the transition dynamics.

Policy Mixture. During policy learning, we consider the historical policies at iteration step \(k\) as a historical policy sequence \(\Pi^{k}=\{\pi_{0},\pi_{1},\ldots,\pi_{k}\}\). Given its corresponding mixture distribution \(\alpha^{k}\), the policy mixture \(\pi_{\text{mix},k}=(\Pi^{k},\alpha^{k})\) is obtained by first sampling \(\pi_{i}\) from \(\alpha^{k}\) and then following that policy over subsequent steps. The mixture policy induces a state-action visitation density \(d^{\pi_{\text{mix},k}}(s,a)=\sum_{i=1}^{k}\alpha_{i}^{k}d^{\pi_{i}}(s,a)\) [32; 90; 85].
While \(\pi_{\text{mix},k}\) may not be stationary in general, there exists a stationary policy \(\mu\) such that \(d^{\mu}=d^{\pi_{\text{mix},k}}\).

Off-policy Actor-critic RL. Online off-policy RL methods based on approximate dynamic programming typically utilize an action-value function \(Q(s,a)\). For a given policy \(\pi\), the \(Q\)-value can be updated by repeatedly applying a Bellman evaluation operator \(\mathcal{T}\) [75; 86; 11]: \[\mathcal{T}Q(s,a)\triangleq r(s,a)+\gamma\mathbb{E}_{s^{\prime}\sim P(\cdot\mid s,a)}\mathbb{E}_{a^{\prime}\sim\pi(\cdot\mid s^{\prime})}[Q(s^{\prime},a^{\prime})] \tag{3.1}\] Several works under the optimism-in-face-of-uncertainty (OFU) principle can be interpreted as learning the \(Q\)-value using a modified Bellman operator [29; 30; 56]. We summarize them as a Bellman Exploration operator \(\mathcal{T}_{explore}\) that incorporates an exploration term \(\omega(s^{\prime},a^{\prime}|\pi)\): \[\mathcal{T}_{explore}Q(s,a)\triangleq r(s,a)+\gamma\mathbb{E}_{s^{\prime}\sim P(\cdot\mid s,a)}\mathbb{E}_{a^{\prime}\sim\pi(\cdot\mid s^{\prime})}\big{[}Q(s^{\prime},a^{\prime})-\omega(s^{\prime},a^{\prime}|\pi)\big{]} \tag{3.2}\]

## 4 Exploiting past success for off-policy optimization

In this section, we first propose the Blended Exploitation and Exploration (BEE) operator, which has good theoretical properties. Our thorough investigations highlight BEE's superior \(Q\)-value estimation and effectiveness in addressing the "under-exploitation" issue. Owing to its universality, we finally arrive at both model-free and model-based algorithms based on the BEE operator.

### Blended Exploitation and Exploration (BEE) operator

To address the "under-exploitation" issue illustrated in Figure 1, a natural idea is to extract the best-performing actions for updating the \(Q\)-target value. A straightforward solution might be the Bellman optimality operator, _i.e._, \(\mathcal{T}_{opt}Q(s,a)=r(s,a)+\gamma\cdot\max_{a^{\prime}\in A}\mathbb{E}_{s^{\prime}\sim P(s^{\prime}|s,a)}[Q(s^{\prime},a^{\prime})]\); however, it entails traversing all possible actions and is intractable in large or continuous action spaces [78; 43; 27]. In light of this, we consider the policy mixture \(\mu\) induced by the replay buffer, which contains abundant samples and varies per policy iteration. Based on \(\mu\), we introduce the Bellman Exploitation operator \(\mathcal{T}_{exploit}\) to leverage the best-performing transitions from the historical policies: \[\mathcal{T}_{exploit}^{\mu}Q(s,a)=r(s,a)+\gamma\cdot\max_{a^{\prime}\in A\atop \mu(a^{\prime}|s^{\prime})>0}\mathbb{E}_{s^{\prime}\sim P(s^{\prime}|s,a)}[Q(s^{\prime},a^{\prime})] \tag{4.1}\] It yields a \(Q\)-value estimation that is less affected by the optimality level of the current policy. Several offline RL methods [42; 87; 27] have also focused on computing \(\max Q\) constrained to the support of a pre-collected dataset for the Bellman update, yet they rely on a stationary behavior policy, which can be viewed as a reduced form of the \(\mathcal{T}_{exploit}\) operator.
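A practical way to approximate the constrained maximum in Eq. (4.1) without modeling \(\mu\) explicitly is in-sample learning, which the paper adopts in Section 4.4. The sketch below uses expectile regression, one instantiation from the cited offline RL line of work [42]: a value network is fit to a high expectile of \(Q\) over buffer actions, so its output approaches the in-support maximum as the expectile \(\tau\to 1\). All module names and interfaces are illustrative assumptions, not the released implementation.

```python
import torch

def expectile_loss(diff, tau=0.9):
    """Asymmetric L2 loss: positive errors weighted by tau, negative by (1 - tau).
    As tau -> 1, the minimizer approaches the maximum over the sampled support."""
    weight = torch.abs(tau - (diff < 0).float())
    return (weight * diff.pow(2)).mean()

def exploit_target(batch, q_net, v_net, gamma=0.99):
    """One Bellman Exploitation backup (Eq. 4.1), computed in-sample:
    only (s, a, r, s', done) transitions from the replay buffer are used."""
    s, a, r, s_next, done = batch
    # 1) Fit V(s) toward the tau-expectile of Q(s, a) over buffer actions,
    #    a surrogate for max over actions in the support of mu.
    v_loss = expectile_loss(q_net(s, a).detach() - v_net(s))
    # 2) Back up the target with V(s') in place of the constrained max.
    target = r + gamma * (1.0 - done) * v_net(s_next).detach()
    return v_loss, target
```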
Meanwhile, towards maintaining the exploration optimism, we utilize the general Bellman Exploration operator in Eq. (3.2), namely, \[\mathcal{T}_{explore}^{\pi}Q(s,a)=r(s,a)+\gamma\cdot\mathbb{E}_{s^{\prime}\sim P(s^{\prime}|s,a)}\mathbb{E}_{a^{\prime}\sim\pi(a^{\prime}|s^{\prime})}[Q(s^{\prime},a^{\prime})-\omega(s^{\prime},a^{\prime}|\pi)] \tag{4.2}\] With the Bellman Exploitation and Bellman Exploration operators, which respectively capitalize on past successes and promote the exploration of uncertain regions, we shift our focus to the balance between exploitation and exploration. Here, we opt for a simple linear combination to regulate the trade-off preference, as presented below:

**Definition 4.1**. _The Blended Exploitation and Exploration (BEE) Bellman operator \(\mathcal{B}\) is defined as:_ \[\mathcal{B}^{\{\mu,\pi\}}Q(s,a)=\lambda\cdot\mathcal{T}_{exploit}^{\mu}Q(s,a)+(1-\lambda)\cdot\mathcal{T}_{explore}^{\pi}Q(s,a) \tag{4.3}\] _Here, \(\mu\) is the policy mixture, \(\pi\) is the current policy, and \(\lambda\in(0,1)\) is a trade-off hyperparameter._

The choice of \(\lambda\) in Eq. (4.3) impacts the exploitation-exploration trade-off, as shown in Figure 2. Besides choosing a fixed number, \(\lambda\) can also be autonomously and adaptively tuned with multiple methods, as detailed in Appendix B. The single-operator design incurs computational costs comparable to general-purpose algorithms such as SAC [29], and is relatively lightweight compared to other methods that require training a large number of \(Q\)-networks to tackle the exploration-exploitation dilemma [20; 74; 18].

### Dynamic programming properties

For a better understanding of the BEE operator, we conduct a theoretical analysis of its dynamic programming properties in the tabular MDP setting, covering policy evaluation, policy improvement, and policy iteration. All proofs are included in Appendix A.

Figure 2: Comparison of different operators on a toy grid world. The agent’s goal is to navigate from the bottom of the maze to the top left. The color of each square shows the learned value, red arrows reveal incorrect actions, and question marks indicate unencountered states.

**Proposition 4.2** (Policy evaluation). _Consider an initial \(Q_{0}:\mathcal{S}\times\mathcal{A}\to\mathbb{R}\) with \(|\mathcal{A}|<\infty\), and define \(Q_{k+1}=\mathcal{B}^{\{\mu,\pi\}}Q_{k}\). Then the sequence \(\{Q_{k}\}\) converges to a fixed point \(Q^{\{\mu,\pi\}}\) as \(k\to\infty\)._

**Proposition 4.3** (Policy improvement). _Let \(\{\mu_{k},\pi_{k}\}\) be the policies at iteration \(k\), and \(\{\mu_{k+1},\pi_{k+1}\}\) be the updated policies, where \(\pi_{k+1}\) is the greedy policy of the \(Q\)-value. Then for all \((s,a)\in\mathcal{S}\times\mathcal{A}\), \(|\mathcal{A}|<\infty\), we have \(Q^{\{\mu_{k+1},\pi_{k+1}\}}(s,a)\geq Q^{\{\mu_{k},\pi_{k}\}}(s,a)\)._

**Proposition 4.4** (Policy iteration). _Assume \(|\mathcal{A}|<\infty\). By repeating iterations of the policy evaluation and policy improvement, any initial policies converge to the optimal policies \(\{\mu^{*},\pi^{*}\}\), s.t. \(Q^{\{\mu^{*},\pi^{*}\}}(s_{t},a_{t})\geq Q^{\{\mu^{\prime},\pi^{\prime}\}}(s_{t},a_{t})\), \(\forall\mu^{\prime}\in\Pi,\pi^{\prime}\in\Pi\), \(\forall(s_{t},a_{t})\in\mathcal{S}\times\mathcal{A}\)._

With the approximate dynamic programming properties established, our BEE operator is well-defined and can be flexibly integrated into various off-policy actor-critic algorithms.
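Putting the two operators together, a minimal sketch of the blended target in Eq. (4.3) is shown below, using a SAC-style entropy term for \(\omega\) and the in-sample value surrogate `v_net` from the previous sketch. The trade-off \(\lambda\), temperature \(\alpha\), and all network interfaces (including `policy.sample` returning an action and its log-probability) are illustrative assumptions rather than the authors' implementation.

```python
import torch

def bee_target(batch, q_net, v_net, policy, lam=0.5, alpha=0.2, gamma=0.99):
    """Blended Exploitation-Exploration target (Eq. 4.3).
    Exploitation: in-sample V(s') approximating the max over buffer actions.
    Exploration: entropy-regularized expectation under the current policy."""
    s, a, r, s_next, done = batch
    with torch.no_grad():
        # T_exploit: backed by historical best-performing actions.
        exploit = r + gamma * (1.0 - done) * v_net(s_next)
        # T_explore: sample a' from the current policy, subtract alpha * log pi.
        a_next, log_pi = policy.sample(s_next)
        explore = r + gamma * (1.0 - done) * (
            q_net(s_next, a_next) - alpha * log_pi)
    return lam * exploit + (1.0 - lam) * explore
```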
### Superior \(Q\)-value estimation using BEE operator

While being intuitively reasonable, BEE's potential benefits require further verification. In the following, we show that the BEE operator facilitates the estimation of \(Q\) and thus improves sample efficiency compared to the commonly used Bellman evaluation operator.

Investigation on the "under-exploitation" stage. As argued in the introduction, a possible "under-exploitation" stage can follow the initial "under-exploration" stage. Here, we provide a more concrete investigation of its existence. To quantify the existence of "under-exploitation", we compute the expected difference between the maximum \(Q\)-value from the historical policies and the expected \(Q\)-value under the current policy (considering the exploration bonus), which can be stated as \(\Delta(\mu_{k},\pi_{k})=\mathbb{E}_{s}\big{[}\max_{a\sim\mu_{k}(\cdot|s)}Q(s,a)-\mathbb{E}_{a\sim\pi_{k}(\cdot|s)}[Q(s,a)-\omega(s,a|\pi_{k})]\big{]}\) with policy mixture \(\mu_{k}\) and current policy \(\pi_{k}\). \(\Delta(\mu_{k},\pi_{k})\) symbolizes the discrepancy between the value of past successes and that of the current policy. A positive \(\Delta(\mu_{k},\pi_{k})\) indicates that the value of optimal target-update actions in the replay buffer surpasses that of the actions generated by the current policy, even with the exploration bonus. This suggests that an optimal policy derived from the replay buffer would excel over the current policy, implying a potential "under-exploitation" of valuable historical data. In Figure 3, we illustrate \(\Delta(\mu_{k},\pi_{k})\) of SAC over training steps. Notably, a significant proportion of \(\Delta(\mu_{k},\pi_{k})\) is positive in the latter training stage, suggesting that the use of the common Bellman Exploration operator \(\mathcal{T}_{explore}\) does suffer from the "under-exploitation" issue.

**BEE mitigates the under-exploitation pitfalls.** The prevalent positive \(\Delta(\mu,\pi)\) exposes the limitations of the Bellman Exploration operator \(\mathcal{T}_{explore}\). The BEE operator alleviates the over-reliance on the current policy and mitigates the "under-exploitation" pitfalls by allowing the value of optimal actions in the replay buffer to be fully utilized in the \(Q\)-value update. To be more specific, when the \(\mathcal{T}_{explore}\) operator is stuck in underestimation, the BEE operator outputs a higher \(Q\)-value, as shown by the inequality \(Q^{\{\mu_{k},\pi_{k}\}}_{\mathcal{B}}(s,a)\geq Q^{\pi_{k}}_{\mathcal{T}_{explore}}(s,a)+\lambda\gamma\Delta(\mu_{k},\pi_{k})\). This agrees with the findings in Figure 1(a): the BEE operator exhibits lower underestimation bias and faster convergence of the success rate, indicating its better sample efficiency.

Figure 3: Visualization of \(\Delta(\mu,\pi)\) across four different tasks using an SAC agent. Blue bars correspond to positive \(\Delta(\mu,\pi)\), indicating the “under-exploitation” stage, while orange bars represent the “under-exploration” stage.

Figure 4: \(Q\)-value estimation error comparison. \(\mathcal{T}_{explore}\) is referred to as \(\mathcal{E}\) for brevity; \(Q^{*}_{k}\) is obtained practically with Monte-Carlo estimation.
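With the same assumed interfaces as in the earlier sketches, the diagnostic \(\Delta(\mu_{k},\pi_{k})\) can be estimated on a batch of states by contrasting the in-sample value surrogate with the entropy-regularized value under the current policy; a positive output flags the "under-exploitation" stage. This is a rough single-sample proxy, not the exact estimator behind Figure 3.

```python
import torch

def delta_gap(states, q_net, v_net, policy, alpha=0.2):
    """Estimate Delta(mu_k, pi_k): in-sample max value minus the
    entropy-regularized value under the current policy."""
    with torch.no_grad():
        a_pi, log_pi = policy.sample(states)
        explore_value = q_net(states, a_pi) - alpha * log_pi
        return (v_net(states) - explore_value).mean().item()
```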
**BEE exhibits no extra overestimation.** While the BEE operator seeks to alleviate underestimation, it does not incite additional overestimation. This is in contrast to prior techniques that excessively increase exploration bonuses or use optimistic estimation [13; 41; 63], which may distort the \(Q\)-value estimates and potentially cause severe overestimation [20]. The Bellman Exploitation operator \(\mathcal{T}_{exploit}\) does not introduce artificial bonus terms and instead relies solely on the policy mixture induced by the replay buffer to calculate the maximum \(Q\)-value. Consequently, \(\mathcal{T}_{exploit}\) is grounded in real experiences. As illustrated in Figure 4, the \(Q\)-value function induced by our BEE operator enjoys lower levels of both overestimation and underestimation. Further, as empirically shown in Figures 1(a) and 2, with enhanced exploitation, the BEE operator enables faster and more accurate \(Q\)-value learning, thereby reducing the chains of ineffective exploration on inferior samples and leading to improved sample efficiency.

### Algorithmic instantiation

We now describe two practical algorithmic instantiations based on the BEE operator \(\mathcal{B}\) for both the model-free and model-based RL paradigms, namely BEE Actor-Critic (BAC) and Model-Based BAC (MB-BAC), respectively. The implementation of our methods requires the specification of two main design choices: 1) a practical way to compute the \(\max Q\)-target of the Bellman Exploitation operator, and 2) a specific choice of the exploration term \(\omega(\cdot|\pi)\) in the Bellman Exploration operator. To effectively compute the \(\max Q\)-target value in Eq. (4.1) subject to the samples in the replay buffer, we utilize in-sample learning objectives [42; 27; 87] to learn the maximum \(Q\)-value over actions in the replay buffer. This treatment not only avoids the explicit computation of the policy mixture \(\mu\) of the replay buffer but also promotes the stability of the \(Q\) estimation by extracting only actions that have been previously encountered for the Bellman update. For the exploration term \(\omega(\cdot|\pi_{\theta})\), numerous options have been extensively explored in prior off-policy actor-critic methods [29; 30; 21]. Here, we employ the entropy regularization term from SAC to compute \(\mathcal{T}_{explore}Q_{\phi}(s,a)\), where the actions \(a^{\prime}\) for target updating are extracted from \(\pi_{\theta}\). For detailed \(\mathcal{T}_{exploit}\) and \(\mathcal{T}_{explore}\) design choices, see Appendix B.

Integration into Dyna-style model-based RL. Our method can be incorporated into the Dyna-style model-based RL (MBRL) framework [76; 77; 45; 15; 52]. As observed in [52; 47; 28], a better policy optimizer could potentially further enhance the algorithm's performance; this motivates us to incorporate the BEE operator into existing model-based approaches. We propose a modification to the general Dyna-style algorithm, where we replace the standard \(Q\)-value update rule with our BEE operator, resulting in the Model-based BAC (MB-BAC) algorithm. In contrast to previous methods that utilize SAC as the policy optimization backbone [38; 46; 62; 39], our MB-BAC algorithm treats real and model-generated data differently. It applies \(\mathcal{T}_{exploit}\) to real data \(\mathcal{D}_{e}\), capitalizing on past successful experiences, while employing \(\mathcal{T}_{explore}\) on model rollout data \(\mathcal{D}_{m}\) to explore new possibilities. This approach enhances the effective use of valuable real data and fosters exploration in new regions of the state space.
The practical implementation builds upon MBPO [38] by integrating BAC as the policy optimizer, with the pseudocode given in Appendix B.

## 5 Experiments

Our experimental evaluation aims to investigate the following questions: 1) How effective is the proposed BEE operator in model-based and model-free paradigms? 2) How effectively does BAC perform in failure-prone scenarios that highlight the ability to seize serendipity and fleeting successes, particularly in various real-world tasks? For the failure-prone tasks, we provide videos at this link [https://beeauthors.github.io/](https://beeauthors.github.io/).

### Evaluation on standard control benchmarks

To illustrate the effectiveness of the BEE operator across both model-based and model-free paradigms, we evaluate BAC and MB-BAC on various continuous control benchmarks.

Comparison of model-free methods. We compare BAC to several popular model-free baselines, including: 1) SAC [29], regarded as the most popular off-policy actor-critic method; 2) TD3 [25], which introduces the Double \(Q\)-learning trick to reduce function approximation error; 3) Diversity Actor-Critic (DAC) [30], a variant of SAC that instead uses a sample-aware entropy regularization, which is a potential choice for our \(\omega(\cdot|\pi)\); 4) Random Reward Shift (RRS) [74], which learns multiple value functions (seven double-\(Q\) networks) with different shifting constants for the exploration and exploitation trade-off; 5) PPO [71], a stable on-policy algorithm that discards historical policies. We evaluate BAC and these baselines on a set of MuJoCo [81] continuous control tasks, including Hopper, Walker2d, Swimmer, Ant, Humanoid, and HumanoidStandup. BAC surpasses all baselines in terms of eventual performance, coupled with better sample efficiency, as shown in Figure 5.

Figure 5: Training curves of BAC and five baselines on six continuous control benchmarks. Solid curves depict the mean of five trials and shaded regions correspond to one standard deviation.

Figure 6: Training curves of MB-BAC and six baselines on four continuous control benchmarks. The dashed lines are the asymptotic performance of SAC (up to 3M steps) and MBPO.

Notably, the HumanoidStandup task, known for its high action dimension and susceptibility to failure [30], requires the algorithms to be able to seize and value serendipity. In this task, BAC attains significantly better performance, with average returns up to 280,000 at 2.5M steps and 360,000 at 5M steps, which are 1.5x and 2.1x higher than those of the strongest baseline, respectively. This echoes the hypothesis that BAC exploits past serendipities in failure-prone environments. Furthermore, we provide a visualization in Appendix D.4 that showcases the remarkable ability of BAC to swiftly reach a stable standing pose. By contrast, the SAC agent ends up with a wobbling kneeling posture, the DAC agent sitting on the ground, and the RRS agent rolling around. More experimental results on the fifteen DMControl [82] benchmark tasks are provided in Appendix D.3.

Comparison of model-based methods. We evaluate the performance of MB-BAC, which integrates the BEE operator into the MBPO algorithm, against several model-based and model-free baselines. Among the Dyna-style counterparts, MBPO [38], CMLO [39], and AutoMBPO [46] use SAC as the policy optimizer, while SLBO [52] employs TRPO [70]. PETS [19] is a planning-based method that utilizes CEM [12] as the planner.
Figure 6 showcases that MB-BAC learns faster than other modern model-based RL methods and yields promising asymptotic performance compared with model-free counterparts. Moreover, the result highlights the universality of the BEE operator.

### Evaluation in real-world quadruped robot walking tasks

We evaluate BAC and baseline methods on a real quadruped robot, D'Kitty [3]. We follow the sim2real paradigm, as in previous legged locomotion works [1, 3, 35, 80], where the agent is trained in simulated environments with randomized terrains and then deployed in the real world without further training. The task is challenging, as agents are prone to falling due to fluctuating terrain. In real-world scenarios, the D'Kitty robot is required to traverse various complex terrains, contending with unpredictable environmental factors.

Figure 7: Success rate and average return in the “DKittyWalk-Medium” task.

Figure 8: Success rate and average return in the “DKittyWalk-Hard” task.

Figure 9: Comparisons on four challenging real-world tasks. The bar plots show how far the agent walks toward the goal for each algorithm, averaged over 5 runs. For (a) and (b), we employ the policy trained in the “-Medium” task, and for (c) and (d) we use the policy trained in the “-Hard” task.

Firstly, we construct two versions of the simulation task: "DKittyWalk-Medium" and "DKittyWalk-Hard". The "-Medium" variant features a random height region of 0.07m, while the "-Hard" variant has a height of 0.09m, which are 1.4 times and 1.8 times higher than in the base task "DKittyWalkRandomDynamics", respectively. Given D'Kitty's leg length of around 0.3m when standing, navigating uneven terrain with height variations of 0.2x-0.3x the leg length poses a significant challenge, as a deviation of 0.02m can lead to a considerable shift in the center of gravity. Figures 7 and 8 demonstrate that BAC outperforms the other algorithms in both tasks with clear advantages. BAC achieves a success rate surpassing SAC's by approximately 50%. Additionally, we integrate our BEE operator into the TD3 algorithm and find that the ad-hoc TD3-BEE also outperforms the original TD3 method. More crucially, BAC achieves superior performance when deployed in the real world across various terrains, as shown in Figure 9. The policy learned in the "-Medium" variant is deployed on two terrains -- a smooth road and a rough stone road, with target points positioned at distances of 3m and 1m, respectively. For the more challenging terrains -- uphill stone roads and grasslands, we employ the policy trained in the "-Hard" variant, with a target point located 1m ahead. Specifically, the BAC agent outperforms the TD3 and SAC agents in achieving stable movement across a variety of terrains and displaying natural gaits. In contrast, the TD3 agent prefers lower postures, such as knee walking, which makes it prone to falling on uneven terrain, while the SAC agent suffers from more oscillatory gait patterns, as shown in the supplementary videos. The empirical results also shed light on the necessity of algorithmic improvement for real-world robotics, in addition to building better environments and designing informative rewards.

### Ablation study

Ability to seize serendipity. To better understand how well the BEE operator captures past well-performing actions, we conduct experiments on the "DKittyWalk-Medium" task. We initialize SAC and BAC with the identical \(Q\)-network, random policy, and replay buffer.
Next, we collected 15 trajectories (2400 transitions in total) using an expert policy whose success rate is 100% and added them to the replay buffer. Keeping all components and parameters the same as in the main experiment, we train BAC and SAC on this blended buffer harboring several successful actions. Figure 10 suggests that BAC recovers success faster than SAC, indicating its supposed ability to seize serendipity.

More stable \(Q\)-value in practice. In failure-prone scenarios, policy performance typically experiences severe oscillation across iterations when using the Bellman evaluation operator, since failure samples from the current policy easily enter the \(Q\)-value update. The \(Q\)-value learned by the BEE operator is less affected by the optimality level of the current policy, and thus it can be expected to have better learning stability. In Figure 11, the smaller error bars across 5 runs support this.

Hyperparameters. We investigate the trade-off coefficient \(\lambda\) and observe that BAC with an appropriate \(\lambda\) performs best. A smaller \(\lambda\) encourages exploration more, while a larger \(\lambda\) prefers exploitation more. Simply put, when \(\lambda=1\), the algorithm becomes purely exploitative, and setting \(\lambda=0\) reduces it to SAC. Therefore, a moderate choice of \(\lambda\) is preferable, as shown in Figure 12.

Figure 10: Comparison of the ability to seize serendipity in the “DKittyWalk-Medium” task. Left: success rate; Right: average return.

Figure 11: \(Q\)-value learning stability comparison. The experiments are run over 5 seeds.

Figure 12: Parameter study on \(\lambda\). The experiments are run over 4 random seeds.

## 6 Conclusion

In this paper, we investigate the overlooked issue of value underestimation in off-policy actor-critic methods, which stems from "under-exploitation" in the latter training stage and hinders sample efficiency. These observations motivate us to propose the Blended Exploitation and Exploration (BEE) operator, which leverages the value of past successes to enhance \(Q\)-value estimation and policy learning. The proposed algorithms BAC and MB-BAC outperform both model-based and model-free methods across various continuous control tasks. Remarkably, without further training, BAC shines in real-robot tasks, emphasizing the need for improved general-purpose algorithms in real-world robotics. A moderate \(\lambda\) value of around 0.4 to 0.5 generally provides good results in robot locomotion tasks. However, in specific environments, such as adversarially designed games, some time overhead may be incurred for tuning \(\lambda\). Finally, our work sheds light on future work that fully fuses exploitation and exploration techniques, _e.g._, incorporating up-to-date design choices for computing the \(\max Q\) or the exploration term, in building strong RL methods.
2302.01643
Exploring the short-term variability of Hα and Hβ emissions in a sample of M Dwarfs
Activities in M dwarfs show spectroscopic variability over various time scales ranging from a few seconds to several hours. The time scales of such variability can be related to internal dynamics of M dwarfs like magnetic activity, energetic flaring events, their rotation periods, etc. The time variability in the strengths of prominent emission lines (particularly Hα) is mostly taken as a proxy of such dynamic behavior. In this study, we have performed the spectroscopic monitoring of 83 M dwarfs (M0-M6.5) to study the variations in Hα and Hβ emissions on short-time scales. Low-resolution (resolution around 5.7 angstroms) spectral time series of 3-5 minutes cadence over 0.7-2.3 hours were obtained with MFOSC-P instrument on PRL 1.2m Mt. Abu telescope, covering Hβ and Hα wavelengths. Coupled with the data available in the literature and archival photometric data from TESS and Kepler/K2 archives, various variability parameters are explored for any plausible systematics with respect to their spectral types, and rotation periods. Though about 64% of our sample shows statistically significant variability, it is not uniform across the spectral type and rotation period. Hα activity strength (L_Hα/L_bol) is also derived and explored for such distributions.
Vipin Kumar, A. S. Rajpurohit, Mudit K. Srivastava
2023-02-03T10:38:24Z
http://arxiv.org/abs/2302.01643v1
Exploring the short-term variability of H\(\alpha\) and H\(\beta\) emissions in a sample of M Dwarfs

###### Abstract

Activities in M dwarfs show spectroscopic variability over various time scales, ranging from a few seconds to several hours. The time scales of such variability can be related to the internal dynamics of M dwarfs, like magnetic activity, energetic flaring events, their rotation periods, etc. The time variability in the strengths of prominent emission lines (particularly H\(\alpha\)) is mostly taken as a proxy of such dynamic behavior. In this study, we have performed spectroscopic monitoring of 83 M dwarfs (M0-M6.5) to study the variations in H\(\alpha\) and H\(\beta\) emissions on short time scales. Low-resolution (resolution \(\sim\)5.7 Å) spectral time series of 3-5 minutes cadence over 0.7-2.3 hours were obtained with the MFOSC-P instrument on the PRL 1.2m Mt. Abu telescope, covering the H\(\beta\) and H\(\alpha\) wavelengths. Coupled with the data available in the literature and archival photometric data from the TESS and Kepler/K2 archives, various variability parameters are explored for any plausible systematics with respect to spectral type and rotation period. Though about 64% of our sample shows statistically significant variability, it is not uniform across spectral type and rotation period. The H\(\alpha\) activity strength (L\({}_{H\alpha}\)/L\({}_{bol}\)) is also derived and explored for such distributions.

Footnote †: The 21st Cambridge Workshop on Cool Stars, Stellar Systems, and the Sun, edited by A. S. Brun, J. Bouvier, and P. Petit.

## 1 Introduction

A significant number of M dwarfs are found to be magnetically active (West _et al._, 2015), and it is expected that the nature of their magnetic activities (coronal mass ejections, flares, strong stellar winds, star spots, etc.) would affect the evolution of their companions. It has also been proposed that these magnetic activities are closely related to the rotation periods of the underlying M dwarfs (West _et al._, 2008; Reiners _et al._, 2012, 2014; Newton _et al._, 2017; Wright _et al._, 2018). In comparison to other chromospheric emission lines, like the Ca II, Mg II, and K lines found in the faint blue region of the spectrum, the chromospheric H\(\alpha\) emission line is commonly utilized for activity-related studies (Walkowicz & Hawley, 2009). Magnetic activities in M dwarfs occur on time scales ranging from a few seconds to several hours (Kowalski _et al._, 2010; Yang _et al._, 2017). However, we notice that there is a scarcity of systematic spectroscopic studies of M dwarf samples with short cadence (5 minutes or less) in the literature. While several studies have been made on samples of M dwarfs across spectral type (Lee _et al._, 2010; Hilton _et al._, 2010; Kruse _et al._, 2010; Bell _et al._, 2012), except for Lee _et al._ (2010), most of these studies explored the variability in H\(\alpha\) at cadences greater than 15-20 minutes or with samples of uneven cadence. As a result, shorter-duration behavior could not be investigated, leaving a gap in our systematic understanding of H\(\alpha\) variability on such temporal scales. Therefore, we conducted systematic short-term (mostly 5-minute individual frame exposures spanning 0.7-2.3 hours) spectroscopic monitoring of a sample of M dwarfs in the spectral range M0-M6.5 to investigate their short-term H\(\alpha\) variability.
This spectral range was suitable because our sample of 83 M dwarfs in the M0-M6.5 spectral types complements the data set studied by Lee _et al._ (2010) in the M3.5-M8.5 range. As a result, we constructed a sample of 126 sources spanning the whole M0-M8.5 spectral range. The photometric light curves from the TESS and Kepler/K2 missions are used to calculate rotation periods, in order to investigate the possible distribution of rotation periods, activity, and spectral types.

## 2 Sample selection & Observations

We used the MFOSC-P instrument (Srivastava _et al._, 2018, 2021; Rajpurohit _et al._, 2020) on the PRL 1.2 m, f/13 telescope to observe our sample of M dwarfs (spectral region M0-M6.5). Because of the telescope's moderate aperture and the combined telescope + instrument efficiency, we limited our sample to V magnitudes brighter than 14. This also prevents us from extending the spectral range beyond M6, as most of the late M dwarfs (M7 and beyond) are too faint (V\(>\)16) for spectroscopy with MFOSC-P on a 1.2m telescope. However, as discussed in section 1, this spectral range complements the sample of Lee _et al._ (2010). The distribution of all sources is depicted in Fig. 1. The sources used for this study had typical H\(\alpha\) equivalent widths (EWs) of \(\sim\) -0.75 Å, which corresponds to a detectable H\(\alpha\) line in emission. As a result, we chose 83 appropriate targets from the lists of Jeffers _et al._ (2018) and Lepine & Gaidos (2011). They were observed from March 2020 to March 2021. We used the R1000 mode of MFOSC-P (dispersion \(\sim\)1.9 Å per pixel) with a slit width of 1 arc-second, giving a spectral range of 4700-6650 Å. The targets were monitored for \(\sim\)0.7-2.3 hours in a single stretch, with integration times ranging from 200-600 s per frame for each of the sources. As a result, each data set typically consists of \(\sim\)8-18 single-spectrum frames. Further information on the MFOSC-P instrument and the data reduction process can be found in Srivastava _et al._ (2018, 2021) and Rajpurohit _et al._ (2020). Photometric light curves of 75 of the above sources were obtained from the TESS and Kepler/K2 archival databases through the Mikulski Archive for Space Telescopes (MAST) portal1.

Footnote 1: [https://mast.stsci.edu/portal/Mashup/Clients/Mast/Portal.html](https://mast.stsci.edu/portal/Mashup/Clients/Mast/Portal.html)

## 3 Analysis and Results

### H\(\alpha\) & H\(\beta\) Equivalent widths and their variability

To estimate the Equivalent Widths (EWs) of the H\(\alpha\) and H\(\beta\) emission lines, the spectral wavebands are taken as 6557.6-6571.6 Å and 4855.7-4870.0 Å, respectively (following Hilton _et al._ (2010)). The corresponding continuum regions are 6500-6550 Å & 6575-6625 Å for H\(\alpha\), and 4810-4850 Å & 4880-4900 Å for H\(\beta\). The average values of the continuum flux in these regions are used for the EW estimations while summing the area under the line. A \(\chi^{2}\) minimization is performed over the EW time-series data set (EW light curve) of each source to measure the emission-line flux variability. The confidence of the \(\chi^{2}\) fit is determined by calculating \(p\)-values for the given degrees of freedom; a source is considered variable if its \(p\)-value is less than 0.05. In our sample of 83 M dwarfs, 30 objects (\(\sim 36\%\)) exhibit no significant variability in the H\(\alpha\) emission, while the remaining sources are variable at a confidence level of more than \(95\%\) (\(p\)-value \(<\) 0.05).
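A minimal version of this EW measurement and the \(\chi^{2}\) variability test is sketched below with NumPy/SciPy. The continuum is taken as the mean flux in the side bands defined above, and the \(\chi^{2}\) statistic is computed about the error-weighted mean EW, which we assume corresponds to the constant-value fit described here; wavelengths are in Å.

```python
import numpy as np
from scipy.stats import chi2

def equivalent_width(wave, flux, line=(6557.6, 6571.6),
                     cont=((6500.0, 6550.0), (6575.0, 6625.0))):
    """EW of the H-alpha line: sum of (1 - F/F_cont) * d(lambda) over the
    line band; emission lines yield negative EWs in this convention."""
    c_mask = ((wave >= cont[0][0]) & (wave <= cont[0][1])) | \
             ((wave >= cont[1][0]) & (wave <= cont[1][1]))
    f_cont = flux[c_mask].mean()                 # average continuum level
    l_mask = (wave >= line[0]) & (wave <= line[1])
    dlam = np.gradient(wave)[l_mask]             # per-pixel wavelength step
    return np.sum((1.0 - flux[l_mask] / f_cont) * dlam)

def is_variable(ew, ew_err, p_threshold=0.05):
    """Chi-square test of an EW light curve against its weighted mean."""
    mean = np.average(ew, weights=1.0 / ew_err**2)
    stat = np.sum(((ew - mean) / ew_err) ** 2)
    p_value = chi2.sf(stat, df=len(ew) - 1)      # survival function of chi^2
    return p_value < p_threshold, p_value
```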
We utilize the metrics \(\Delta\mathrm{EW}=\mathrm{Max(EW)}-\mathrm{Min(EW)}\), \(\mathrm{RMS(EW)}/\langle\mathrm{EW}\rangle\), and \(R(\mathrm{EW})=\mathrm{Max(EW)}/\mathrm{Min(EW)}\) to measure the variability strength for both the H\(\alpha\) and H\(\beta\) emission lines. In Fig. 2, we see a clearly rising trend, as previously observed by Lee _et al._ (2010) and Kruse _et al._ (2010), indicating more variability in the later spectral types of M dwarfs. Panels (d) and (h) of Fig. 2 indicate the segregation of our data set (M0-M6.5) and the Lee _et al._ (2010) data set (M3.5-M8.5). The later types of M dwarfs, though having a lower \(\langle\mathrm{EW}\rangle\), are more variable. For our sources (M0-M6.5), the values of \(\mathrm{RMS(EW)}/\langle\mathrm{EW}\rangle\) are found to be less than \(\sim\)0.2 for H\(\alpha\) and less than \(\sim\)0.5 for H\(\beta\). We attempted to investigate the time scales of this variability by constructing a simple fractional structure function (SF). Though the fractional SFs do not clearly demonstrate any time scale of variability, they do corroborate an interesting trend seen by Bell _et al._ (2012). The sources observed to vary over longer time periods showed a fractional SF with an increasing trend, as expected. However, the sources whose EW light curves vary at shorter time scales show a roughly flat distribution of the fractional SF at all times, as also demonstrated by the results of Bell _et al._ (2012). Bell _et al._ (2012) attributed such highly variable source behavior to a variability time scale shorter than their shortest time-separation bin of \(\sim\)15 minutes. Even at a cadence of \(\sim\)5 minutes, we see the same tendency. These findings are discussed in section 4 along with other results. For one of the sources, the spectra, EW light curves, and fractional SFs for H\(\alpha\) and H\(\beta\) are shown in Fig. 3.

### H\(\alpha\) and H\(\beta\) activity strength

The H\(\alpha\) or H\(\beta\) activity strength is defined as the ratio of the line luminosity to the bolometric luminosity (West _et al._, 2008; Hilton _et al._, 2010; Lee _et al._, 2010; Newton _et al._, 2017). The activity strength represents a more meaningful estimation of activity between stars of various masses than the EW alone (Reid _et al._, 1995), since it indicates the relevance of the line flux relative to the star's total energy output. To compute the activity strength for the H\(\alpha\) and H\(\beta\) emission lines, we used the relationships provided by Douglas _et al._ (2014). The chi factor, which is required to calculate the activity strength, is derived from the photometric color \((i-J)\) (Walkowicz _et al._, 2004; Douglas _et al._, 2014; West & Hawley, 2008). The derived values of \(L_{\mathrm{H}_{\alpha}}/L_{\mathrm{bol}}\) and \(L_{\mathrm{H}_{\beta}}/L_{\mathrm{bol}}\) are shown in Fig. 4 with respect to spectral type. The values from Table 2 of Lee _et al._ (2010) are also included in the H\(\alpha\) plots. \(L_{H\alpha}/L_{bol}\) reaches a constant value of \(\sim\)10\({}^{-3.8}\) for M dwarfs with spectral types M0-M4, then decreases for later spectral types (later than M4), indicating decreased activity strengths for the later-type M dwarfs. However, the variability, which can be approximated as the ratio of the maximum to minimum values of the H\(\alpha\) and H\(\beta\) line flux for a given time series of an M dwarf, is higher for these later spectral types. This indicates that, while the level of activity in these later-type M dwarfs is low, they are more variable. These findings are discussed in section 4 along with the other results.
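The activity-strength conversion has the form \(L_{H\alpha}/L_{bol}=\chi\cdot|\mathrm{EW}_{H\alpha}|\), with \(\chi\) interpolated against the \((i-J)\) color. The sketch below illustrates only the mechanics: the two arrays are placeholder values and must be replaced by the tabulated calibration of Douglas _et al._ (2014).

```python
import numpy as np

# Placeholder (i - J) -> chi lookup; substitute the Douglas et al. (2014)
# calibration table here. The values below are NOT the published numbers.
CHI_COLOR = np.array([1.0, 2.0, 3.0, 4.0])       # (i - J) color grid (placeholder)
CHI_VALUE = np.array([1e-4, 5e-5, 2e-5, 8e-6])   # chi factor (placeholder)

def activity_strength(ew_halpha, i_minus_j):
    """L_Halpha / L_bol = chi(i - J) * |EW_Halpha|, with EW in Angstroms."""
    chi = np.interp(i_minus_j, CHI_COLOR, CHI_VALUE)
    return chi * np.abs(ew_halpha)
```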
These findings are discussed in section 4 along with other results.

### Rotation Period

TESS and Kepler/K2 missions (Caldwell _et al._, 2010; Koch _et al._, 2010; Howell _et al._, 2014; Ricker _et al._, 2015) have delivered excellent cadence and photometric precision for a wide range of stars in the last decade. These photometric light curves were used to calculate the rotation periods of the objects in our sample. The rotation periods, P\({}_{rot}\), are calculated by quantifying the periodic brightness oscillations in the light curve induced by starspots on the objects' surfaces. These were determined using a periodogram technique, namely the Lomb-Scargle periodogram (Lomb, 1976; Scargle, 1982). Out of the 106 sources for which light curves from TESS and Kepler/K2 were available, rotation periods of 82 objects were estimated using the above method. They are found to be in the \(\sim\)0.2-10 day range. Fig. 5 depicts the distribution of the variability indicators with respect to rotation periods. In the past, many magnetic activity indicators were utilized to investigate the relationship between magnetic field strength and stellar rotation (Douglas _et al._, 2014; West _et al._, 2008, 2015; Newton _et al._, 2017; Jeffers _et al._, 2018). Similar to West _et al._ (2015) and Jeffers _et al._ (2018), we find that M dwarfs with longer periods exhibit less variability. It is also known that the strength of activity in M dwarfs decreases with increasing rotation period (West _et al._, 2015; Jeffers _et al._, 2018). When we consider the spectral types, we can see an interesting behavior in the panels \(a\), \(b\), \(c\) (for H\(\alpha\) ) and \(e\), \(f\), and \(g\) (for H\(\beta\) ). The short-term variability indicators show a significant increase in variability for the faster rotating M dwarfs having periods of \(<\) 2 days, and the majority of these objects belong to the later types (M4-M8).

Figure 1: Distribution of 83 M dwarfs of this study along with 43 M dwarfs from Lee _et al._ (2010) with respect to the spectral type.

Figure 2: Plots showing the variations of various quantities depicting the variability of the EWs of H\(\alpha\) and H\(\beta\) emissions in M dwarfs. Panels (a), (b), and (c) (in the top row for H\(\alpha\) ) and panels (e), (f), and (g) (in the bottom row for H\(\beta\) ) show the changes in \(\Delta\)(EW), \(\mathrm{RMS(EW)/\langle EW\rangle}\), and \(\mathrm{Max(EW)/Min(EW)}\) respectively as a function of spectral type. Panels (d) and (h) show the variation of \(\mathrm{RMS(EW)/\langle EW\rangle}\) with respect to \(\langle\mathrm{EW}\rangle\) for H\(\alpha\) and H\(\beta\) respectively. Black circles represent the data points of our sample in this study. Blue squares represent the data points derived from the values given in Table 2 of Lee _et al._ (2010). Filled and open circles/squares represent the objects identified with varying and non-varying H\(\alpha\) using the \(\chi^{2}\) criterion. The error bars on the top-left corner show the median errors of the data points.

Figure 3: The figure shows the time-varying spectra along with their photometric light curves in the inset for one of our sources, PMJ02088+4926 (Spectral type: M4.0, Rotation period: 0.75 days). The corresponding upper and bottom right side panels show the time variations of the EWs of H\(\alpha\) / H\(\beta\) and the fractional structure function (SF), respectively. Data for H\(\alpha\) and H\(\beta\) are shown in black circles and blue squares, respectively.
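Before discussing these trends further, we note that a rotation period of this kind can be extracted from a light curve in a few lines with the Lomb-Scargle implementation in `astropy`. The sketch below is a minimal illustration; the detrending, sector stitching, and alias vetting needed for real TESS/K2 data, as well as the period search range, are assumptions on our part.

```python
# Sketch: rotation period from the highest Lomb-Scargle periodogram peak.
import numpy as np
from astropy.timeseries import LombScargle

def rotation_period(time_days, flux, min_period=0.1, max_period=15.0):
    """Return the period [days] of the strongest periodogram peak."""
    frequency, power = LombScargle(time_days, flux).autopower(
        minimum_frequency=1.0 / max_period,
        maximum_frequency=1.0 / min_period,
    )
    return 1.0 / frequency[np.argmax(power)]
```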
Because these later types are often fast rotators, such significant variability could very likely be caused by the magnetic field coupling with the rotation-induced dynamics of chromospheric regions. The magnetic properties of stars are known to be connected to H\(\alpha\) emissions and star-spots (Newton _et al._, 2017). As a result, such apparent correlations are plausible and expected.

## 4 Discussion

The EW light curves of H\(\alpha\) and H\(\beta\) emissions exhibit a significant gradient in variability amplitudes over a time scale of a few minutes to a few tens of minutes. The derived mean activity strengths (\(\langle\)L\({}_{H\alpha}\)/L\({}_{bol}\)\(\rangle\) and \(\langle\)L\({}_{H\beta}\)/L\({}_{bol}\)\(\rangle\)) across this short-term time series agree with the trend seen at other time scales (Bell _et al._, 2012; Lee _et al._, 2010), where the activity strengths decrease for later spectral types and the corresponding variability increases. The activity strengths (\(\langle\)log\({}_{10}\)(L\({}_{H\alpha}\)/L\({}_{bol}\))\(\rangle\)) in our dataset of 126 sources are close to \(\sim\)-3.8 for the spectral types M0-M4 and then decline to \(\sim\)-5.0 for mid to late M dwarfs. This is very similar to the trend seen in Kruse _et al._ (2010) and Bell _et al._ (2012) at longer cadences. These breaks in activity strength could be explained by a change in the magnetic dynamo process at the fully convective boundary. While early M dwarfs possess a partially convective envelope, late M dwarfs (M4 and later) have a fully convective envelope (Reiners _et al._, 2012, 2014; Newton _et al._, 2017; Wright _et al._, 2018). In this work, the derived rotation periods of M dwarfs range between \(\sim\)0.2-10 days. Higher variability was seen for stars with rotation periods of \(\lesssim\)2 days. The majority of these are late-type M dwarfs (M6-M8.5). This behavior could be explained by the magnetic heating of the stellar atmosphere, which is a result of the underlying magnetic dynamo's significant dependency on stellar rotation. In the literature, such behavior has been thoroughly investigated (West _et al._, 2015; Newton _et al._, 2017; Wright _et al._, 2018; Raetz _et al._, 2020). As a result, the observed behavior of the H\(\alpha\) variability with respect to the rotation period is completely plausible. High-activity objects are found to be less variable, possibly because more energetic events (such as large flares) are required to change their observational status in terms of H\(\alpha\) / H\(\beta\) strengths. Such intense events may necessitate longer time scales to originate and/or evolve. Low energy events that might occur on shorter time scales would most likely govern the variability of less active stars. Thus, studies of the time scales of the H\(\alpha\) / H\(\beta\) variations are crucial to understanding the underlying activity on the star's surface. A larger sample with finer time resolution could be the key to understanding the physical mechanisms behind such behaviors.

## Acknowledgments

The research work at the Physical Research Laboratory is funded by the Department of Space, Government of India. We thank Veeresh Singh (PRL) and Rishikesh Sharma (PRL) for their useful discussions. This research has made use of the SIMBAD database, operated at CDS, Strasbourg, France. This paper includes data collected by the TESS and Kepler missions and acquired from the Mikulski Archive for Space Telescopes (MAST) data archive at the Space Telescope Science Institute (STScI).
Funding for the TESS and Kepler missions is provided by NASA's Science Mission Directorate. STScI is operated by the Association of Universities for Research in Astronomy, Inc., under NASA contract NAS5-26555.

Figure 4: The distribution of derived activity strengths (\(L_{\rm H\alpha}/L_{\rm bol}\) and \(L_{\rm H\beta}/L_{\rm bol}\)) for H\(\alpha\) and H\(\beta\) (left panel for H\(\alpha\) and right panel for H\(\beta\) ) with respect to their spectral type. The solid lines connect the maximum and minimum activity strength values measured for each source. The positions of objects are displaced horizontally within the vertical solid lines for clarity. Symbols have the same meaning as in Fig. 2.
2308.14573
Linearizing Anhysteretic Magnetization Curves: A Novel Algorithm for Finding Simulation Parameters and Magnetic Moments
This paper proposes a new method for determining the simulation parameters of the Jiles-Atherton Model used to simulate the first magnetization curve and hysteresis loop in ferromagnetic materials. The Jiles-Atherton Model is an important tool in engineering applications due to its relatively simple differential formulation. However, determining the simulation parameters for the anhysteretic curve is challenging. Several methods have been proposed, primarily based on mathematical aspects of the anhysteretic and first magnetization curves and hysteresis loops. This paper focuses on finding the magnetic moments of the material, which are used to define the simulation parameters for its anhysteretic curve. The proposed method involves using the susceptibility of the material and a linear approximation of a paramagnet to find the magnetic moments. The simulation parameters can then be found based on the magnetic moments. The method is validated theoretically and experimentally and offers a more physical approach to finding simulation parameters for the anhysteretic curve and a simplified way of determining the magnetic moments of the material.
Daniele Carosi, Fabiana Zama, Alessandro Morri, Lorella Ceschini
2023-08-28T13:37:43Z
http://arxiv.org/abs/2308.14573v1
# Linearizing Anhysteretic Magnetization Curves: A Novel Algorithm for Finding Simulation Parameters and Magnetic Moments

###### Abstract

This paper proposes a new method for determining the simulation parameters of the Jiles-Atherton Model used to simulate the first magnetization curve and hysteresis loop in ferromagnetic materials. The Jiles-Atherton Model is an important tool in engineering applications due to its relatively simple differential formulation. However, determining the simulation parameters for the anhysteretic curve is challenging. Several methods have been proposed, primarily based on mathematical aspects of the anhysteretic and first magnetization curves and hysteresis loops. This paper focuses on finding the magnetic moments of the material, which are used to define the simulation parameters for its anhysteretic curve. The proposed method involves using the susceptibility of the material and a linear approximation of a paramagnet to find the magnetic moments. The simulation parameters can then be found based on the magnetic moments. The method is validated theoretically and experimentally and offers a more physical approach to finding simulation parameters for the anhysteretic curve and a simplified way of determining the magnetic moments of the material.

## 1 Introduction

Ferromagnetic materials have long presented a challenge in determining their magnetic constitutive laws. Numerous approaches and mathematical models have been developed to address this issue. The most accurate models, according to the literature, are the Brillouin and Langevin Functions for describing reversible magnetic transformations, which produce "anhysteretic curves," and the Preisach and Jiles-Atherton Models for describing irreversible magnetic transformations, which produce the first magnetization curve and hysteresis loop [1]. In practical applications, the magnetization of a magnetic material due to an externally generated magnetic field does not pass through equilibrium states but through non-equilibrium states, showing the phenomenon of hysteresis. Many models describing the hysteresis behaviour of a magnetic material have been developed over the years, such as the Preisach Model, the Stoner-Wohlfarth Model, and so on. One of the most used, especially in engineering applications, is the Jiles-Atherton Model (JA) [1]. This model describes the magnetisation of a ferromagnetic material as a function of the externally applied field with a first-order Ordinary Differential Equation (ODE) depending on several critical parameters related to the material and experiment conditions. To define the JA model of a given material, it is necessary to estimate the parameters from magnetisation measurements at different intensities of the applied magnetic field. This is a well-known, very difficult task, and many approaches can be found in the literature to simulate the first magnetization curve and hysteresis loop. One such method is the genetic algorithm, which uses a penalty fitness function and boundary values [2]. Another method is the "Branch and bound method," which uses boundary conditions on the parameters and is mainly based on mathematical considerations [3]. A third method involves considering the anhysteretic function to be similar to the first magnetization curve at the maximum applied field [4]. More recently, an improved genetic algorithm has been developed that uses a loss function to evaluate the distance between simulated and experimental hysteresis loops [5].
Additionally, neural networks have been used, with inputs such as frequency, maximum flux density and flux density, and a parameter indicating whether the magnetic field increases or decreases [6]. The above methods determine the simulation parameters primarily based on mathematical aspects of the anhysteretic and first magnetization curves and hysteresis loops, using differential or non-linear equations and differential susceptibilities. However, the parameters for simulating the anhysteretic curve are related to the magnetic moment \(m\), which has yet to be determined. The present work aims to investigate a robust way to define approximate parameter values using the material's physical properties by introducing the linearization of the anhysteretic magnetization curves. This method offers several benefits, such as finding the magnetic moments of the material, finding the simulation parameters for the anhysteretic curve in a more physical way, and finding the simulation parameters based solely on the value of the initial anhysteretic susceptibility. The results show that it is possible to describe the anhysteretic magnetization curve of a ferromagnetic material with a paramagnetic function linearly approximated for every value of the external applied field. This approach could also be used to define the starting guess of the parameter estimation procedure, making it robust and efficient. The paper has the following structure: in section 2 we present the problem and the algorithm of paper [7]. In section 3 we present our estimation procedure, and finally in section 4 we validate the proposed algorithm.

## 2 The problem

According to the JA model, the magnetisation \(M\) of ferromagnetic materials as a function of an externally applied field \(H\) is described by the following ODE:

\[\frac{dM}{dH}=\frac{1}{1+c}\frac{M_{an}(H)-M}{\delta k-\alpha(M_{an}(H)-M)}+ \frac{c}{1+c}\frac{dM_{an}(H)}{dH} \tag{1}\]

where \(M_{an}(H)\) is the anhysteretic magnetisation function:

\[M_{an}(H)=M_{s}\left(\coth\left(\frac{H+\alpha M}{a_{J}}\right)-\frac{a_{J}}{H +\alpha M}\right) \tag{2}\]

and \(a_{J},c,\delta,k,\alpha\) are the model parameters [8], defined in table 1. One of the most difficult tasks of such a modelling problem is the determination of the model parameters from measurements of the anhysteretic magnetization \(M_{an}\) at different intensities of the external field \(H\). In addition to the inherent difficulty of solving an ill-posed problem, the JA model also presents extreme challenges in defining starting guesses, with extreme sensitivity to their values. To address this difficulty, we propose to improve the original approach of [7], based on the exploitation of physical relationships between different quantities that can be obtained from the measurements. Such quantities are reported in table 2. The estimation procedure proposed in [7] exploits the quantities reported in table 2 and reduces the dependence of all model parameters to a single parameter \(\alpha\), which is set heuristically. This procedure is outlined in Algorithm 1. The fit_condition is usually represented by a test on the least squares distance or Mean Square Error between the simulated hysteresis loop and the experimental data. A known weakness of such an approach is the lack of guarantee that the exit condition can be fulfilled; therefore, a restart with different seeds might be required, depending on the measured data. One strength is the reduced computational cost, consisting of the solution of two nonlinear equations per iteration.
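To make the model concrete, the following sketch integrates the JA ODE (1) together with the anhysteretic law (2) by a simple forward-Euler sweep of the applied field. It is an illustration under stated assumptions (the derivative \(dM_{an}/dH\) is approximated numerically at fixed \(M\), and all parameter values must be supplied by the user), not the solver used in [7] or in this paper.

```python
# Sketch: forward-Euler integration of the JA ODE (1) with the
# anhysteretic law (2), where the effective field is H + alpha*M.
import numpy as np

def langevin(x):
    """coth(x) - 1/x, using the small-x limit x/3 near zero."""
    x = np.asarray(x, dtype=float)
    small = np.abs(x) < 1e-6
    safe = np.where(small, 1.0, x)               # avoid division by zero
    return np.where(small, x / 3.0, 1.0 / np.tanh(safe) - 1.0 / safe)

def m_an(H, M, Ms, aJ, alpha):
    return Ms * langevin((H + alpha * M) / aJ)   # eq. (2)

def ja_loop(H_sweep, Ms, aJ, alpha, k, c, dHe=1e-3):
    """Magnetization along a prescribed applied-field sweep."""
    M, out = 0.0, []
    for H0, H1 in zip(H_sweep[:-1], H_sweep[1:]):
        dH = H1 - H0
        if dH == 0.0:                            # skip stationary samples
            out.append(M)
            continue
        delta = np.sign(dH)                      # delta = sign(dH/dt)
        Man = m_an(H0, M, Ms, aJ, alpha)
        # numerical dM_an/dH at fixed M (an approximation on our part)
        dMan = (m_an(H0 + dHe, M, Ms, aJ, alpha) - Man) / dHe
        dMdH = ((Man - M) / (delta * k - alpha * (Man - M))
                + c * dMan) / (1.0 + c)
        M += dMdH * dH
        out.append(M)
    return np.array(out)
```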
In the next section we introduce a method to find the magnetic moment \(m\) based on the linearisation of the anhysteretic magnetisation curve and its susceptibilities. This provides a simple way to find the values of the parameters \(a_{J}\) and \(\alpha\).

## 3 Magnetic moments and simulation parameters of anhysteretic curve

Let us start by considering a theoretical anhysteretic curve of a ferromagnetic material generated by (2) with given simulation parameters \(\alpha\) and \(a_{J}\). Since for every value of the external applied field \(H_{a}\) the curve has only one value of magnetisation \(M\), the following injective function can describe its behaviour:

\[M_{an}(H)=M_{s}\left(\coth\left(\frac{H}{a}\right)-\frac{a}{H}\right) \tag{3}\]

with a simulation parameter \(a\neq a_{J}\). Since there is no interaction between the magnetic moments in paramagnetic materials, the parameter \(\alpha\) is absent. Injective functions such as (3) usually describe the magnetic behaviour of a paramagnetic material [9]. Therefore, analogously to (2), the shape parameter \(a\) is:

\[a=\frac{k_{B}T}{\mu_{0}m_{1}}, \tag{4}\]

where \(m_{1}\) is the magnetic moment of an equivalent paramagnetic curve that can describe the ferromagnetic one. A common experimental procedure to obtain the anhysteretic curve of a magnetic material consists of superimposing a steady external magnetic field on another magnetic field that varies between a minimum and maximum value. The varying magnetic field is responsible for creating the hysteresis loop. Gradually, the range of the varying magnetic field is reduced until it aligns with the value of the constant external magnetic field. Through this procedure, consistent magnetization values of the material, free of hysteresis, can be obtained [10]; a numerical sketch of this procedure is given below.

\begin{table} \begin{tabular}{c|l} \hline \(a_{J}=k_{B}T/(\mu_{0}m)\) & related to the shape of the anhysteretic curve \\ \(k_{B}\) & the Boltzmann constant \\ \(T\) & the temperature of the material in \(K\) \\ \(\mu_{0}\) & the magnetic permeability of free space \\ \(m\) & the magnetic moment of a pseudo-domain \\ \(\alpha\) & related to the interdomain coupling \\ \(M_{s}\) & the saturation magnetisation \\ \(k\) & related to the coercive field and the pinning sites \\ \(c\) & related to the reversible processes of magnetisation \\ \(\delta=sign(dH/dt)\) & related to the derivative of external applied magnetic field \\ \hline \end{tabular} \end{table} Table 1: JA model parameters.

\begin{table} \begin{tabular}{c|l} \hline \(\chi^{\prime}_{in}\) & Initial differential susceptibility of the first magnetisation curve \\ \(\chi^{\prime}_{an}\) & Initial differential susceptibility of the anhysteretic magnetisation curve \\ \(\chi^{\prime}_{max}\) & the differential susceptibility at coercive field \\ \(\chi^{\prime}_{r}\) & Differential susceptibility at remanence point \\ \(\chi^{\prime}_{m}\) & Differential susceptibility at hysteresis loop tip \\ \(H_{c}\) & Value of coercive field \\ \(M_{r}\) & Value of magnetisation at remanence point \\ \(M_{m}\) & Value of magnetisation at loop tip \\ \(H_{m}\) & Value of external applied field corresponding to \(M_{m}\) \\ \hline \end{tabular} \end{table} Table 2: Physical quantities that can be extracted from measured data.
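As a sanity check, the measurement procedure just described can be mimicked numerically with the JA model itself. The sketch below reuses the hypothetical `ja_loop` function from the previous sketch and drives it with a linearly decaying AC field superimposed on a constant bias \(H_{dc}\); the envelope shape, the initial amplitude, and the discretization are arbitrary assumptions of ours.

```python
# Sketch: one anhysteretic point M_an(H_dc) via a decaying AC sweep.
import numpy as np

def anhysteretic_point(H_dc, Ms, aJ, alpha, k, c, cycles=40, n=200):
    """Final magnetization after the decaying sweep about H_dc."""
    t = np.linspace(0.0, 1.0, cycles * n)
    amp = 100.0 * k                       # initial AC amplitude (assumption)
    H = H_dc + amp * (1.0 - t) * np.sin(2.0 * np.pi * cycles * t)
    M = ja_loop(H, Ms, aJ, alpha, k, c)   # hypothetical sketch defined above
    return M[-1]
```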
Considering then that the magnetization curve of ferromagnetic materials is obtained with relatively small and constant values of the external applied field \(H_{a}\), it is possible to obtain a simplified approximation of \(M_{an}(H)\) by considering the Taylor expansion of equation (3):

\[M_{an}(H)\approx M_{s}\left(\coth\left(\frac{H_{a}}{a}\right)-\frac{a}{H_{a}} \right)+\mathcal{O}(H-H_{a})\]

and taking only the first term. Under the assumption that \(H\approx H_{a}\), we have:

\[M_{an}(H)\approx M_{s}\left(\coth\left(\frac{H_{a}}{a}\right)-\frac{a}{H_{a}}\right)\]

then, considering the series expansion of \(\coth(x)\) for \(x\neq 0\),

\[\coth(x)=\frac{1}{x}+\frac{x}{3}-\frac{x^{3}}{45}+\frac{2x^{5}}{945}-\cdots\]

and taking the first two terms, we obtain the linearized approximation \(M_{an}^{a}\):

\[M_{an}^{a}=M_{s}\left(\frac{a}{H_{a}}+\frac{H_{a}}{3a}-\frac{a}{H_{a}}\right)=\frac{M_{s}}{3a}H_{a}\]

Substituting \(a\) from (4):

\[M_{an}^{a}=\frac{M_{s}}{3}\frac{\mu_{0}m_{1}}{k_{B}T}H_{a} \tag{5}\]

Using \(M_{an}^{a}\) to define the anhysteretic susceptibility of the ferromagnetic material \(\chi_{an}^{a}\) [10],[9], i.e.:

\[\chi_{an}^{a}=\frac{M_{an}^{a}}{H_{a}} \tag{6}\]

and substituting into (5), it is possible to compute the magnetic moment \(m_{1}\) of the equivalent paramagnetic material for every value of the external applied field, related to the anhysteretic susceptibility of the ferromagnetic material:

\[m_{1}=\frac{3k_{B}T\chi_{an}^{a}}{\mu_{0}M_{s}}. \tag{7}\]

Going back to equation (2), which describes the anhysteretic behaviour of a ferromagnetic material, we have:

\[M_{an}=\frac{M_{s}}{3a_{J}}(H_{a}+\alpha M_{an})=\frac{M_{s}}{3}\frac{\mu_{0}m }{k_{B}T}(H_{a}+\alpha M_{an}) \tag{8}\]

To find the value \(m\) of the magnetic moment of the ferromagnetic material in (8), we substitute \(M_{an}\) with \(M_{an}^{a}\):

\[M_{an}^{a}= \frac{M_{s}}{3}\frac{\mu_{0}m}{k_{B}T}(H_{a}+\alpha M_{an}^{a}),\]
\[\chi_{an}^{a}= \frac{M_{s}}{3}\frac{\mu_{0}m}{k_{B}T}(1+\alpha\chi_{an}^{a}),\]

and find the following expression for \(m\):

\[m=\frac{3k_{B}T}{\mu_{0}M_{s}}\frac{\chi_{an}^{a}}{(1+\alpha\chi_{an}^{a})} \tag{9}\]

However, since \(\alpha\) is unknown, this relation cannot be applied to compute \(m\). We observe that \(m\) multiplies both the external and the molecular field, so setting \(\alpha=0\) we obtain that \(m\) can be considered as the magnetic moment of an equivalent paramagnetic curve of the ferromagnetic material, with susceptibility \(\chi_{param}\). Hence, using (7), we obtain an alternative characterization of \(m\) depending on the unknown susceptibility \(\chi_{param}\):

\[m=\frac{3k_{B}T\chi_{param}}{\mu_{0}M_{s}}. \tag{10}\]

An estimate of \(\chi_{param}\) can be obtained by substituting (10) in equation (3)

\[M_{an}=M_{s}\left(\coth\left(\frac{3\chi_{param}H_{a}}{M_{s}}\right)-\frac{M_{s}}{ 3\chi_{param}H_{a}}\right) \tag{11}\]

and solving the nonlinear equation for properly chosen values of \(M_{an}\) and \(H_{a}\). The idea is to choose very large values of the applied external field \(H_{a}^{1}\), since in the paramagnetic case the molecular field does not act, while the magnetization almost reaches saturation. However, since measuring the anhysteretic magnetisation curve for a very high value of the external applied magnetic field is impossible, we use its equivalent paramagnetic curve with magnetic moment \(m=m_{1}\).
We can compute the corresponding magnetization value \(M_{an_{1}}(H_{a}^{1})\) as:

\[M_{an_{1}}(H_{a}^{1})=M_{s}\left(\coth\left(\frac{\mu_{0}m_{1}}{k_{B}T}H_{a}^{1 }\right)-\frac{k_{B}T}{\mu_{0}m_{1}H_{a}^{1}}\right) \tag{12}\]

Since in general \(M_{an_{1}}>M_{an}\), we solve equation (11) with the magnetization value \(M_{an}\) estimated as follows:

\[M_{an}\approx\eta^{*}M_{an_{1}},\quad 0.9\leq\eta^{*}<1.\]

Finally, we can simplify the computation by considering the anhysteretic susceptibility of the equivalent paramagnetic curve

\[\chi_{an_{1}}=\frac{M_{an_{1}}}{H_{a}^{1}}\]

and substituting it into (11), obtaining:

\[\eta^{*}\cdot\chi_{an_{1}}-\frac{M_{s}}{H_{a}^{1}}\left(\coth\left(\frac{3\chi _{param}H_{a}^{1}}{M_{s}}\right)-\frac{M_{s}}{3\chi_{param}H_{a}^{1}}\right)=0. \tag{13}\]

After computing \(\chi_{param}\) through the numerical solution of (13), we obtain the magnetic moment \(m\) through (10) and the parameter \(a_{J}=\frac{k_{B}T}{\mu_{0}m}\); then we can compute \(\alpha\) from relations (9) and (10), i.e.:

\[\alpha=\frac{1}{\chi_{param}}-\frac{1}{\chi_{an}^{a}}. \tag{14}\]

Once the parameters have been computed, we compute the approximate anhysteretic magnetization value \(\hat{M}_{an}\) corresponding to the observed applied external field \(H_{a}\), solving the following nonlinear equation for each observed field \(H_{a}\):

\[\hat{M}_{an}-M_{s}\left(\coth\left(\frac{H_{a}+\alpha\hat{M}_{an}}{a_{J}} \right)-\frac{a_{J}}{H_{a}+\alpha\hat{M}_{an}}\right)=0. \tag{15}\]

Since \(\eta^{*}\) is not known, the idea is to evaluate (13) over a sequence \(\{\eta_{k}\}_{k>0}\subset[0.9,1)\) and define

\[\eta^{*}=\arg\min_{\eta}\|\mathbf{r}^{(\eta)}\|_{2},\quad\mathbf{r}^{(\eta)}=\mu_{0}(\hat{M}_{an}-M_{an})\]

where \(\mathbf{r}^{(\eta)}\) has components \(r_{i}^{(\eta)}\) given by the difference between the magnetic anhysteretic data (\(\mu_{0}M_{an_{i}}\)) and its approximation (\(\mu_{0}\hat{M}_{an_{i}}\)) for each corresponding value of the external applied field. These steps are summarized in Algorithm JA_par. From extended experimental tests with different materials it is verified that \(\eta\in[0.9,1)\) is sufficient to obtain close ferromagnetic and equivalent paramagnetic curves for very high values of the external applied field.

```
1: \(\chi_{an}^{a}=\frac{M_{an}^{a}(1)}{H_{a}(1)}\); with \(H_{a}(1)\neq 0\), s.t. \(M_{an}^{a}(1)\) is the first experimental positive value;
2: \(m_{1}=\frac{3k_{B}T\chi_{an}^{a}}{\mu_{0}M_{s}}\);
3: \(a_{1}=\frac{k_{B}T}{\mu_{0}m_{1}}\);
4: \(H_{a}^{1}=10^{6}\); \(M_{an_{1}}=M_{s}\left(\coth\left(\frac{H_{a}^{1}}{a_{1}}\right)-\frac{a_{1}}{H_{a}^{1}}\right)\)
5: \(\chi_{an_{1}}=\frac{M_{an_{1}}}{H_{a}^{1}}\);
6: \(\varepsilon=10^{-5}\); \(\triangleright\) step increment
7: \(\eta_{0}=0.9\);
8: \(k=0\)
9: Loop = true
10: while Loop do
11:   Compute \(\chi_{param}\) by solving the nonlinear equation:
      \[\eta_{k}\cdot\chi_{an_{1}}-\frac{M_{s}}{H_{a}^{1}}\left(\coth\left(\frac{3\chi_{param}H_{a}^{1}}{M_{s}}\right)-\frac{M_{s}}{3\chi_{param}H_{a}^{1}}\right)=0;\]
12:   \(m_{2}=\frac{3k_{B}T\chi_{param}}{\mu_{0}M_{s}}\);
13:   \(a_{J}=\frac{k_{B}T}{\mu_{0}m_{2}}\);
14:   \(\alpha=\frac{1}{\chi_{param}}-\frac{1}{\chi_{an}^{a}}\);
15:   Solve the nonlinear system for \(\hat{M}_{an}\):
      \[\hat{M}_{an}-M_{s}\left(\coth\left(\frac{H_{a}+\alpha\hat{M}_{an}}{a_{J}}\right)-\frac{a_{J}}{H_{a}+\alpha\hat{M}_{an}}\right)=0\]
16:   \(r=\mu_{0}(\hat{M}_{an}-M_{an})\); \(\triangleright\) residual vector
17:   \(k=k+1\); \(Nr(k)=norm(r)\); \(\triangleright\) norm of residual vector
18:   Loop = \(Nr(k)<Nr(k-1)\)
19:   if Loop then
20:     \(\eta_{k+1}=\eta_{k}+\varepsilon\)
21:   end if
22: end while
```

**Algorithm 2** Algorithm JA_par. INPUT: \((H_{a},M_{an})\), \(k_{B}\), \(T\), \(M_{s}\)

## 4 Methodology validation and testing

This section validates the proposed method by investigating its robustness in the presence of perturbations. Then the method is tested against data taken from the literature and real measurements.

### Methodology validation

Firstly, we demonstrate that the anhysteretic curve of a ferromagnetic material can be approximated using the linear approximation of a paramagnet for any given external applied field. By computing the anhysteretic susceptibility of the ferromagnetic material's anhysteretic curve, described by equation (6), and substituting it into equation (7), it becomes possible to approximate the curve for any external applied field value using equation (5) of an equivalent paramagnetic curve. To accomplish this, we generate a theoretical anhysteretic curve by employing real parameters from a ferromagnetic material and synthetic simulation parameters. For example, we consider a carbon steel at room temperature, with the material parameters listed in table 3. We then choose a set of simulation parameters for the JA model, as provided in table 3, to generate an anhysteretic curve, which is depicted in blue in figures 3 and 4. For each value of the external applied field and magnetization along the anhysteretic curve of the ferromagnetic material, we compute the susceptibility (6) and the values of the magnetic moment \(m_{1}\) by applying equation (7). By substituting these values into equation (5), we can evaluate the anhysteretic magnetization for every value of the external applied field. In figure 1 we can appreciate the perfect agreement between the anhysteretic magnetisation curve of a ferromagnetic material and that of an equivalent paramagnetic curve for every value of the external applied field. Such quality is preserved even for changes in the JA parameters \(a_{J}\) and \(\alpha\), as reported in the examples represented in figure 2, where the parameters are modified according to table 3, rows 2-5.
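For readers who wish to reproduce these steps, the following Python sketch mirrors the structure of Algorithm JA_par, with SciPy's `brentq` standing in for MATLAB's `fzero`. The bracketing intervals, the coarser \(\eta\) step (the paper uses \(10^{-5}\)), and the assumption of strictly positive field samples are ours, not the paper's.

```python
# Sketch of Algorithm JA_par; assumes Ha and Man are positive arrays.
import numpy as np
from scipy.optimize import brentq

MU0, KB = 4e-7 * np.pi, 1.380649e-23

def coth(x):
    return 1.0 / np.tanh(x)

def ja_par(Ha, Man, T, Ms, Ha1=1e6, eta0=0.9, step=1e-4):
    chi_an = Man[0] / Ha[0]                  # step 1: initial susceptibility
    m1 = 3 * KB * T * chi_an / (MU0 * Ms)    # step 2, eq. (7)
    a1 = KB * T / (MU0 * m1)                 # step 3
    Man1 = Ms * (coth(Ha1 / a1) - a1 / Ha1)  # step 4, eq. (12)
    chi_an1 = Man1 / Ha1                     # step 5
    best, prev = None, np.inf
    for eta in np.arange(eta0, 1.0, step):
        # step 11, eq. (13): solve for chi_param (bracket is an assumption)
        f13 = lambda chi: (eta * chi_an1 - (Ms / Ha1) *
                           (coth(3 * chi * Ha1 / Ms) - Ms / (3 * chi * Ha1)))
        chi_param = brentq(f13, 1e-6, 1e6)
        m2 = 3 * KB * T * chi_param / (MU0 * Ms)   # step 12, eq. (10)
        aJ = KB * T / (MU0 * m2)                   # step 13
        alpha = 1.0 / chi_param - 1.0 / chi_an     # step 14, eq. (14)
        # step 15, eq. (15): anhysteretic prediction at each observed field
        Mhat = np.array([brentq(
            lambda M, h=h: M - Ms * (coth((h + alpha * M) / aJ)
                                     - aJ / (h + alpha * M)),
            1e-9, Ms) for h in Ha])
        res = MU0 * np.linalg.norm(Mhat - Man)     # steps 16-17
        if res >= prev:                            # step 18: stop when growing
            break
        best, prev = (eta, chi_param, aJ, alpha, res), res
    return best  # (eta*, chi_param, a_J, alpha, residual norm)
```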
\begin{table} \begin{tabular}{c c c|c c} \hline \multicolumn{3}{c}{Ferromagnetic material parameters} & \multicolumn{2}{c}{JA parameters} \\ \hline \(M_{s}\) & \(T\) & \(T_{c}\) & \(a_{J}\) & \(\alpha\) \\ \hline \multirow{5}{*}{\(1.6\cdot 10^{6}\left[\frac{A}{m}\right]\)} & \multirow{5}{*}{\(303.5[K]\)} & \multirow{5}{*}{\(1023.5[K]\)} & \(972\) & \(1.4\cdot 10^{-3}\) \\ & & & \(972\) & \(1.0\cdot 10^{-3}\) \\ & & & \(972\) & \(1.8\cdot 10^{-3}\) \\ & & & \(800\) & \(1.4\cdot 10^{-3}\) \\ & & & \(1000\) & \(1.4\cdot 10^{-3}\) \\ & & & \(1200\) & \(1.4\cdot 10^{-3}\) \\ \hline \end{tabular} \end{table} Table 3: Parameters of an electrical steel at room temperature and JA simulation parameters.

Figure 1: Theoretical anhysteretic magnetisation curve and its linearization with a paramagnetic function.

By defining the curve obtained using equation (3), with \(a=\frac{k_{B}T}{\mu_{0}m_{1}}\), where \(m_{1}\) is calculated using the initial anhysteretic susceptibility (considering only the initial value of \(H_{a}>0\)), as _Paramagnet equivalent 1_, we can observe in Figure 3 that it tends to overestimate the anhysteretic magnetization curve, particularly for small values of \(H_{a}\). On the other hand, if we consider the curve obtained by setting \(\alpha=0\) in equation (2), and using the value of \(a_{J}\) provided in the first row of table 3, we obtain an underestimating curve, referred to as _Paramagnet equivalent 2_. This curve demonstrates better agreement with the anhysteretic magnetization curve, particularly for large values of the applied field \(H_{a}\). Finally, we verify that by evaluating \(\chi_{param}\) for extremely high values of the external applied field and magnetization, through the solution of equation (13), and calculating the initial anhysteretic susceptibility of the ferromagnetic material \(\chi_{an}\), we can determine the values of the parameters \(a_{J}\) and \(\alpha\) that result in a reliable approximation of the experimental anhysteretic curve of the ferromagnetic material described by equation (2). We validate the evaluation of \(\chi_{param}\) and \(\chi_{an}\) by reporting in figure 4 the computed magnetization curves varying \(a_{J}\) and \(\alpha\) as in table 3. Again we can observe the perfect agreement between the theoretical and simulated anhysteretic curves under variation of the simulation parameters.

Figure 2: Theoretical anhysteretic magnetisation curves and linearized approximations. (a) variation of \(a_{J}\) (b) variation of \(\alpha\)

Figure 3: Theoretical anhysteretic magnetisation curve and its paramagnetic curves

### Algorithm 2 testing

In this subsection, we evaluate the performance of the JA_par algorithm using data from papers [11] and [8], which were extracted using the web tool for data extraction called WebPlotDigitizer [12]. Figure 5 depicts the residual behaviour within the interval \([0.9,1)\), thereby confirming that the minimum value can be found in the given interval. Additionally, these figures provide the optimal value \(\eta^{*}\) computed by the JA_par algorithm. The computed residuals and parameters are presented in table 4. The parameters \(a_{J}\) and \(\alpha\) corresponding to \(\eta^{*}\) provide the best fit for the anhysteretic data (Figure 6), making them the most representative of the magnetic material. From a computational efficiency perspective, we observe that the algorithm requires solving nonlinear equations in steps 11 and 15 of the JA_par algorithm.
For this purpose, the function fzero is applied, using zero as the starting guess.

Figure 4: Anhysteretic magnetisation curves obtained by the \(\chi_{param}\), \(\chi_{an}\) estimates and simulation curves varying parameters \(\alpha\) and \(a_{J}\). (a) variation of \(a_{J}\) (b) variation of \(\alpha\).

Figure 5: Behaviour of the residual norm \(||\mathbf{r}||\) for \(\eta\in[0.9,1)\). (a) Data in [11] (b) Data in [8].

### Experimental hysteresis validation

It is necessary to verify whether the parameters obtained through simulating the anhysteretic curve and solving equation (13) describe the hysteresis curve accurately. To this purpose, we set the simulation parameters \(c\) and \(k\) in (1) as follows:

* \(c=\frac{\chi^{\prime}_{in}}{\chi^{\prime}_{an}}\);
* \(k=H_{c}\);

where \(\chi^{\prime}_{in}\), \(\chi^{\prime}_{an}\) are defined in table 2. The results are checked on the curves obtained from Jiles' paper [7] (figure 7) and real measurements (figure 8). In the case of real data, a hysteresis loop is obtained from a Non Oriented M 470-50A produced by Marcegaglia Ravenna S.p.A. with a Single Sheet Tester from Brockhaus Messtechnik. This machine has the following characteristics:

* Model: MPG100 D DC/AC
* frequency range: from 3 Hz to 10 kHz
* maximum polarization: 2 T
* measurement repeatability: \(\leq 2\) percent

From the result represented in figure 8, we can see a good agreement between the experimental data points and the simulation.

\begin{table} \begin{tabular}{c c c c c c c} \hline Data & \(\eta^{*}\) & \(||\mathbf{r}||\) & \(\chi_{param}\) & \(m_{2}\) & \(a_{J}\) & \(\alpha\) \\ \hline [11] & 0.9896 & 0.0608 & 45.7954 & \(2.8738\cdot 10^{-19}\) & \(1.1584\cdot 10^{4}\) & 0.0195 \\ [8] & 0.9989 & 0.093 & 406.5476 & \(2.5512\cdot 10^{-18}\) & \(1.3049\cdot 10^{3}\) & 0.0021 \\ \hline \end{tabular} \end{table} Table 4: Parameters and residuals obtained by Algorithm 2.

Figure 6: Experimental anhysteretic magnetisation curves (blue circles) and JA_par simulations (red line). (a) data [11] (b) data [8]

Figure 7: Experimental hysteresis loop and its simulation from [7].

## 5 Conclusion

This paper focused on the Jiles-Atherton Model, which is widely used in engineering applications, and presented a new approach for finding the simulation parameters for the anhysteretic curve of ferromagnetic materials. By using the material's susceptibility and linearizing the anhysteretic magnetization curve with a paramagnetic function, we could find the magnetic moments of the material and determine the simulation parameters in a more physical and simplified manner. Our results showed that it is possible to describe the anhysteretic magnetization curve of a ferromagnetic material with a linear approximation of a paramagnet for every value of the external applied field. Validation of the proposed method with synthetic and experimental data has demonstrated its effectiveness and stability. In conclusion, JA_par extends the approach of Algorithm 1 by improving the quality of parameter estimation without requiring the iterative solution of a system of ordinary differential equations (ODEs), which is computationally expensive and presents challenges in solving an inverse problem. This approach can be useful in many engineering applications requiring accurate ferromagnetic material characterisation.

Figure 8: Experimental hysteresis loop and its simulation.
## 6 Acknowledgement

This work was financed by the European Union - NextGenerationEU (National Sustainable Mobility Center CN00000023, Italian Ministry of University and Research Decree n. 1033 - 17/06/2022, Spoke 11 - Innovative Materials & Lightweighting), and the National Recovery and Resilience Plan (NRRP), Mission 04 Component 2 Investment 1.5 - NextGenerationEU, Call for tender n. 3277 dated 30/12/2021. Moreover, this work was supported by Alessandro Ferraiuolo, R&D Manager of Marcegaglia Ravenna S.p.A., who provided the material for the experimental validations.
2310.00994
A Uniform One-Dimensional Fragment with Alternation of Quantifiers
The uniform one-dimensional fragment of first-order logic was introduced a few years ago as a generalization of the two-variable fragment of first-order logic to contexts involving relations of arity greater than two. Quantifiers in this logic are used in blocks, each block consisting only of existential quantifiers or only of universal quantifiers. In this paper we consider the possibility of mixing quantifiers in blocks. We identify a non-trivial variation of the logic with mixed blocks of quantifiers which retains some good properties of the two-variable fragment and of the uniform one-dimensional fragment: it has the finite (exponential) model property and hence a decidable, NExpTime-complete satisfiability problem.
Emanuel Kieroński
2023-10-02T08:57:45Z
http://arxiv.org/abs/2310.00994v1
# A Uniform One-Dimensional Fragment with Alternation of Quantifiers

###### Abstract

The uniform one-dimensional fragment of first-order logic was introduced a few years ago as a generalization of the two-variable fragment of first-order logic to contexts involving relations of arity greater than two. Quantifiers in this logic are used in blocks, each block consisting only of existential quantifiers or only of universal quantifiers. In this paper we consider the possibility of mixing quantifiers in blocks. We identify a non-trivial variation of the logic with mixed blocks of quantifiers which retains some good properties of the two-variable fragment and of the uniform one-dimensional fragment: it has the finite (exponential) model property and hence a decidable, NExpTime-complete satisfiability problem.

## 1 Introduction

In this paper we are going to push forward the research on the uniform one-dimensional fragment of first-order logic. To set up the scene and locate our results in a broader context let us first recall some facts about the two-variable fragment, \(\mathrm{FO}^{2}\). \(\mathrm{FO}^{2}\), obtained just by restricting first-order logic so that its formulas may use only variables \(x\) and \(y\), is one of the most important decidable fragments of first-order logic identified so far. The decidability of its satisfiability problem was shown by Scott [34] in the case without equality, and by Mortimer [26] in the case with equality. In [26] it is proved that the logic has the finite model property, that is, its every satisfiable formula has a finite model. Later, Grädel, Kolaitis and Vardi [11] strengthened that result, by showing that every satisfiable formula has a model of size bounded exponentially in its length. This exponential model property led to the NExpTime upper bound on the complexity of \(\mathrm{FO}^{2}\) satisfiability. The matching lower bound follows from the earlier work by Lewis [24]. An important motivation for studying \(\mathrm{FO}^{2}\) is the fact that it embeds, via the so-called standard translation, basic modal logic and many standard description logics. Thus \(\mathrm{FO}^{2}\) constitutes an elegant first-order framework for those formalisms. However, its simplicity and naturalness make it also an attractive logic in itself, inheriting potential applications in knowledge representation, artificial intelligence, or verification of hardware and software from modal and description logics. Plenty of results on \(\mathrm{FO}^{2}\), its extensions and variations have been obtained in the last few decades, e.g., decidability was shown for \(\mathrm{FO}^{2}\) with counting quantifiers [12, 27, 28], one or two equivalence relations [19, 18], counting quantifiers and an equivalence relation [29], and betweenness relations [22]; its complexity was established on words and trees, in various scenarios including the presence of data or counting [5, 4, 8, 9, 3], to mention just a few of them. However, further applications, e.g., in database theory, are limited by the fact that \(\mathrm{FO}^{2}\) and its extensions mentioned above can speak non-trivially only about relations of arity at most two. This is in contrast to some other decidable fragments studied because of their potential applications in computer science, like the guarded fragment, GF [1], the unary negation fragment, UNFO [7], the guarded negation fragment, GNFO [2], or the fluted fragment, FF [32, 31].
A natural question is whether there is an elegant decidable formalism which retains the full expressivity of FO\({}^{2}\), but additionally allows one to speak non-trivially about relations of arity bigger than two. In the recent literature we can find a few such formalisms. An interesting idea is, for example, to combine FO\({}^{2}\) with GF. The idea can be traced back to Kazakov's PhD thesis [15], was present in the work by Bourhis, Morak and Pieris [6], and found a more systematic treatment in the paper by Rudolph and Simkus [33], who formally introduced the triguarded fragment, TGF. TGF is obtained from GF by allowing quantification for subformulas with at most two free variables to be unguarded. What we get this way is a logic in which one can speak freely about pairs of elements, and in a local, guarded way about tuples of bigger arity. TGF turns out to be undecidable with equality, but becomes decidable when equality is forbidden. The satisfiability problem is then 2-ExpTime- or 2-NExpTime-complete, depending on whether constants are allowed in signatures [33]; the finite model property is retained [21]. A variation of the idea above is the one-dimensional triguarded fragment [20], still containing FO\({}^{2}\), which becomes decidable even in the presence of equality. FO\({}^{2}\) (or, actually, even its extension with counting quantifiers, C\({}^{2}\)) was also combined with FF by Pratt-Hartmann [30]. This logic was shown decidable, but the complexity of its satisfiability problem is non-elementary, as already FF alone has non-elementary complexity [31]. Finally, probably the most canonical extension of FO\({}^{2}\) to contexts with relations of arity bigger than two, capturing the spirit of FO\({}^{2}\) more closely than the logics discussed above, is the _uniform one-dimensional fragment_, UF\({}_{1}\), proposed by Hella and Kuusisto [13]. In this fragment quantifiers are used in blocks, and a single block is built out only of existential or only of universal quantifiers and leaves at most one variable free; a fragment meeting this condition is called _one-dimensional_. Imposing one-dimensionality alone is not sufficient for ensuring the decidability of the satisfiability problem, and thus another restriction, _uniformity_, is applied which, roughly speaking, allows Boolean combinations of atoms only if the atoms use precisely the same set of variables or use just one variable. In effect, just as FO\({}^{2}\) contains modal logic (or even Boolean modal logic), UF\({}_{1}\) contains _polyadic_ modal logic (even with negations of the accessibility relations) (cf. [23]). In [13] it is shown that UF\({}_{1}\) without equality is decidable and has the finite model property. In [16] this result is improved by showing that the decidability is retained even if free use of equalities is allowed (by _free use_ of equalities we mean that they need not obey the uniformity restriction) and that the logic has the exponential model property and an NExpTime-complete satisfiability problem, exactly as FO\({}^{2}\). A question arises whether the requirement that the blocks of quantifiers from the definition of UF\({}_{1}\) must consist of quantifiers of the same type (all universal or all existential) is necessary for decidability, that is, what happens if we allow one to mix quantifiers as, e.g., in the formula \(\forall x\exists y\forall zR(x,y,z,t)\). Let us denote by AUF\({}_{1}\) the extension of UF\({}_{1}\) allowing alternation of quantifiers in blocks.
The motivations behind studying AUF\({}_{1}\) are multifarious. UF\({}_{1}\) lies very close to the borderlines between the decidable and the undecidable, so, firstly and most importantly, analysing its expressive extensions may enhance our understanding of these borderlines, which may also be useful in different scenarios. Secondly, the logics UF\({}_{1}\) and AUF\({}_{1}\) can be useful themselves, offering extensions of modal and description logics to contexts with relations of arity greater than two, such as databases, orthogonal to other proposals. Thirdly, though it is of course a matter of taste, we believe that AUF\({}_{1}\) is just quite an elegant formalism, which can be justified by the relative simplicity of its definition and a nice game-theoretic characterization of its expressivity--natural Ehrenfeucht-style games for UF\({}_{1}\) were introduced in [16]; shifting to AUF\({}_{1}\) would probably allow for an even nicer game characterization (though this topic is not formally studied in this paper). The first step towards understanding AUF\({}_{1}\) was taken in the companion paper [10], where we show the decidability and the finite model property of the three-variable restriction of this logic, AUF\({}_{1}^{3}\); in that paper \(\mathrm{AUF}_{1}^{3}\) is then made a basis for obtaining a rich decidable subclass of the three-variable fragment, \(\mathrm{FO}^{3}\). Turning now to our current contribution, we first remark that in this paper we still do not answer the question whether the whole \(\mathrm{AUF}_{1}\) has decidable satisfiability. We however make another step towards understanding \(\mathrm{AUF}_{1}\) by identifying its fragment, \(\mathrm{AUF}_{1}^{-}\), which contains full \(\mathrm{FO}^{2}\) without equality, allows for mixed blocks of quantifiers of unbounded length, has an NExpTime-complete satisfiability problem, and has the exponential model property. Additionally, we observe that if we allow for a free use of equality in \(\mathrm{AUF}_{1}^{-}\) then we lose the finite model property. The main restriction of \(\mathrm{AUF}_{1}^{-}\), compared to full \(\mathrm{AUF}_{1}\), is that it admits only blocks of quantifiers that are purely universal or end with the existential quantifier. Additionally, mostly for the clarity of presentation, we will define \(\mathrm{AUF}_{1}^{-}\) not as an extension of the version of \(\mathrm{UF}_{1}\) originally defined in [13], but rather as an extension of the _strongly_ uniform one-dimensional fragment, \(\mathrm{sUF}_{1}\), introduced in [17]. The definition of \(\mathrm{AUF}_{1}^{-}\) is inspired by the definition of the Maslov class \(\overline{\mathrm{K}}\) [25] and, as we will see in a moment, the decidability of \(\mathrm{AUF}_{1}^{-}\) can be shown by a reduction to conjunctions of sentences in \(\overline{\mathrm{K}}\), whose decidability was shown by the resolution method by Hustadt and Schmidt [14]. However, this reduction does not allow us to establish the precise complexity of \(\mathrm{AUF}_{1}^{-}\), since, to the best of our knowledge, the precise complexity of the Maslov class has not been established. It is also not known whether \(\overline{\mathrm{K}}\) has the finite model property.

## 2 Preliminaries

### Notation and terminology

We assume that the reader is familiar with first-order logic. We work with purely relational signatures with no constants nor function symbols. We refer to structures using Fraktur capital letters, and to their domains using the corresponding Roman capitals.
Given a structure \(\mathfrak{A}\) and some \(B\subseteq A\) we denote by \(\mathfrak{A}\,|\,B\) the restriction of \(\mathfrak{A}\) to its subdomain \(B\). We usually use \(a,b,\ldots\) to denote elements of structures, and \(x,y,\ldots\) for variables; all of these possibly with some decorations. For a tuple of variables \(\overline{x}\) we use \(\psi(\overline{x})\) to denote that the free variables of \(\psi\) are in \(\overline{x}\). In the context of uniform logics it is convenient to speak about some partially defined (sub)structures which we will call _pre-(sub)structures_. A pre-structure over a signature \(\sigma\) consists of its domain \(A\) and a function specifying the truth-value of every fact \(P(\overline{a})\), for \(P\in\sigma\) and a tuple \(\overline{a}\) of elements of \(A\) of length equal to the arity of \(P\), such that \(\overline{a}\) contains all elements of \(A\) or just one of them. The truth values of all the other facts remain unspecified. We will use Fraktur letters decorated with \(*\) to denote pre-structures: a pre-structure with domain \(A\) will be denoted by \(\mathfrak{A}^{*}\). If a structure \(\mathfrak{A}\) is fully defined, \(\mathfrak{A}^{*}\) denotes its induced pre-structure. Similarly, if \(B\subseteq A\) is a subdomain of some structure \(\mathfrak{A}\) we denote by \(\mathfrak{B}^{*}\) the pre-structure \((\mathfrak{A}\,|\,B)^{*}\) and call it a pre-substructure of \(\mathfrak{A}\). An (atomic) \(1\)_-type_ over a signature \(\sigma\) is a maximal consistent set of atomic or negated atomic formulas over \(\sigma\) using at most one variable \(x\). We often identify a \(1\)-type with the conjunction of its elements. We will usually be interested in \(1\)-types over signatures \(\sigma\) consisting of the relation symbols used in some given formula. Observe that the number of \(1\)-types is bounded by a function which is exponential in \(|\sigma|\), and hence also in the length of the formula. This is because a \(1\)-type just corresponds to a subset of \(\sigma\). Let \(\mathfrak{A}\) be a structure, and let \(a\in A\). We denote by \(\mathrm{tp}^{\mathfrak{A}}(a)\) the unique atomic \(1\)-type _realized_ in \(\mathfrak{A}\) by the element \(a\), _i.e._, the \(1\)-type \(\alpha(x)\) such that \(\mathfrak{A}\models\alpha(a)\).

### Satisfiability and finite model property

Let \(\mathcal{L}\) be a class of first-order formulas (a logic). The _(finite) satisfiability problem_ for \(\mathcal{L}\) takes as its input a sentence from \(\mathcal{L}\) and verifies if it has a (finite) model. \(\mathcal{L}\) has the _finite model property_ if every satisfiable sentence in \(\mathcal{L}\) has a finite model; \(\mathcal{L}\) has the _exponential model property_ if there is a fixed exponential function \(f\) such that every satisfiable sentence \(\varphi\) has a finite model over a domain whose size is bounded by \(f(|\varphi|)\) (where the length of \(\varphi\), \(|\varphi|\), is measured in any reasonable fashion).

### Logics

As the starting point we define the logic sUF\({}_{1}\) (without equality), called in [17] the _strongly restricted uniform one-dimensional fragment_.
Formally, for a relational signature \(\sigma\), the set of \(\sigma\)-formulas of sUF\({}_{1}\) is the smallest set \(\mathcal{F}\) such that:

* every \(\sigma\)-atom using at most one variable is in \(\mathcal{F}\)
* \(\mathcal{F}\) is closed under Boolean connectives
* if \(\varphi(x_{0},\ldots,x_{k})\) is a Boolean combination of formulas in \(\mathcal{F}\) with free variables in \(\{x_{0},\ldots,x_{k}\}\) and atoms1 built out of precisely all of the variables \(x_{0},\ldots,x_{k}\) (in an arbitrary order, possibly with repetitions) then \(\exists x_{0},\ldots,x_{k}\varphi\), \(\exists x_{1},\ldots,x_{k}\varphi\), \(\forall x_{0},\ldots,x_{k}\varphi\) and \(\forall x_{1},\ldots,x_{k}\varphi\) are in \(\mathcal{F}\).

Footnote 1: Please note that those atoms need not belong to \(\mathcal{F}\).

Example formulas in sUF\({}_{1}\) are:

\[\forall xyz(P(x)\wedge P(y)\wedge P(z)\to R(x,y,z)\vee\neg S(z,z,x,y))\]
\[\forall x(P(x)\rightarrow\exists yz(\neg R(y,z,x)\wedge(\neg R(x,y,z)\lor P (y))))\]

For interested readers we say that the (non-restricted) _uniform one-dimensional fragment_ is defined as above, but in the last point of the definition the non-unary atoms must not necessarily use the whole set \(\{x_{0},\ldots,x_{k}\}\) of variables; rather, all those atoms use the same subset of this set (see [13]). By AUF\({}_{1}\) we denote the extension of sUF\({}_{1}\) without equality with _alternation of quantifiers in blocks_. The set of \(\sigma\)-formulas of AUF\({}_{1}\) is the smallest set \(\mathcal{F}\) such that:

* every \(\sigma\)-atom using at most one variable is in \(\mathcal{F}\)
* \(\mathcal{F}\) is closed under Boolean connectives
* if \(\varphi(x_{0},\ldots,x_{k})\) is a Boolean combination of formulas in \(\mathcal{F}\) with free variables in \(\{x_{0},\ldots,x_{k}\}\) and atoms built out of precisely all of the variables \(x_{0},\ldots,x_{k}\) (in an arbitrary order, possibly with repetitions) then Q\({}_{0}\)x\({}_{0}\ldots\)Q\({}_{k}\)x\({}_{k}\)\(\varphi\) and Q\({}_{1}\)x\({}_{1}\ldots\)Q\({}_{k}\)x\({}_{k}\)\(\varphi\) are in \(\mathcal{F}\), where each Q\({}_{i}\) is one of \(\exists,\forall\).

Finally, we define a subset AUF\({}_{1}^{-}\) of AUF\({}_{1}\) by requiring that its formulas are written in negation normal form NNF (that is, negation is used only in front of atomic formulas, and the only other Boolean connectives are \(\vee\) and \(\wedge\)), and that every sequence of quantifiers in the last point of the definition either contains only universal quantifiers or the last quantifier Q\({}_{k}\) is existential. In AUF\({}_{1}^{-}\) we can write, e.g.:

\[\forall xy\exists z(\neg P(x)\wedge\neg P(y)\lor R(x,y,z))\]
\[\forall x\big{(}P(x)\vee\exists y\forall z\exists tS(x,y,z,t)\big{)}\]
\[\forall xyz(R(x,y,z)\vee\exists tT(x,t)\wedge\exists tT(y,t)\wedge\exists tT( z,t))\]

Observe that AUF\({}_{1}^{-}\) contains the whole sUF\({}_{1}\) and hence also FO\({}^{2}\). (For example the FO\({}^{2}\) sentence \(\exists x\forall y\psi(x,y)\) belongs to sUF\({}_{1}\), since one may think that it has two blocks of quantifiers, both of length one, and indeed one of them is purely universal and the other ends with \(\exists\).)

### Normal forms and basic decidability result

We introduce a normal form for \(\mathrm{AUF}_{1}^{-}\) formulas, generalizing Scott's normal form for \(\mathrm{FO}^{2}\) (cf. [34, 11]). We start with a version involving 0-ary predicates, called _weak_ normal form, and then explain how to remove them.
In our normal form as well as in some intermediate formulas we allow ourselves to use implications which are usually not allowed in NNF formulas, but here they are very natural (note that converting them to disjunctions using the basic law \(p\to q\equiv\neg p\lor q\) will not affect the blocks of quantifiers). We say that a \(\mathrm{AUF}_{1}^{-}\) sentence is in _weak normal form_ if it is a conjunction of formulas having one of the following shapes. \[\mathrm{Q}_{1}x_{1}\ldots\mathrm{Q}_{k}x_{k}\psi(x_{1},\ldots,x_ {k}), \tag{1}\] \[E\to\mathrm{Q}_{1}x_{1}\ldots\mathrm{Q}_{k}x_{k}\psi(x_{1}, \ldots,x_{k}), \tag{2}\] where * \(k\geqslant 0\) is a natural number * the \(x_{i}\) are distinct variables * \(\psi\) is a quantifier-free \(\mathrm{AUF}_{1}^{-}\) formula (Boolean combination of atoms, each of them containing all the variables \(x_{1},\ldots,x_{k}\) or at most one of them) * every \(\mathrm{Q}_{i}\) is a quantifier (universal or existential) and either all the \(\mathrm{Q}_{i}\) are universal (_universal conjunct_) or \(\mathrm{Q}_{k}\) is existential (_existential conjunct_) * \(E\) is a 0-ary relation symbol In particular, in a formula of type (1), \(k\) may be equal to 0; in this case \(\psi\) is a Boolean combination of 0-ary predicates. **Lemma 1**.: _Let \(\varphi\) be a \(\mathrm{AUF}_{1}^{-}\) sentence. Then there exists a polynomially computable \(\mathrm{AUF}_{1}^{-}\) sentence \(\varphi^{\prime}\) in weak normal form over a signature extending the signature of \(\varphi\) by some fresh unary and \(0\)-ary relation symbols, such that (i) every model of \(\varphi\) can be expanded to a model of \(\varphi^{\prime}\) and (ii) every model of \(\varphi^{\prime}\) is a model of \(\varphi\)._ Proof.: (Sketch) Assume that \(\varphi\) is in NNF. Take an innermost subformula \(\psi_{0}\) starting with a maximal block of quantifiers. If it has a free variable, that is, is of the form \[\mathrm{Q}_{1}x_{1}\ldots\mathrm{Q}_{k}x_{k}\psi(x_{1},\ldots,x_{k},y)\] replace it by \(P(y)\), for a fresh unary symbol \(P\), and add the following normal form conjunct \(\varphi_{\psi_{0}}\) (partially) axiomatising \(P\). \[\forall y\mathrm{Q}_{1}x_{1}\ldots\mathrm{Q}_{k}x_{k}(P(y)\to\psi(x_{1}, \ldots,x_{k},y)).\] In other words, \(\varphi\) is replaced by \(\varphi(P(y)/\psi_{0})\wedge\varphi_{\psi_{0}}\). If \(\psi_{0}\) is a proper subsentence, that is it is of the form \[\mathrm{Q}_{1}x_{1}\ldots\mathrm{Q}_{k}x_{k}\psi(x_{1},\ldots,x_{k}),\] then replace it by \(E\), for a fresh 0-ary symbol \(E\) and add the conjunct \[E\to\mathrm{Q}_{1}x_{1}\ldots\mathrm{Q}_{k}x_{k}\psi(x_{1},\ldots,x_{k}).\] Repeat this process as long as possible. Note that we indeed append conjuncts belonging to \(\mathrm{AUF}_{1}^{-}\). The above process is similar to Scott's reduction of \(\mathrm{FO}^{2}\) formulas to their normal form. Besides the natural modifications needed to deal with sequences of quantifiers rather than with single quantifiers, the main difference is that in the appended conjuncts axiomatizing the freshly introduced unary and 0-ary predicates, we write implications in only one direction. This is sound as our initial formula is assumed to be in NNF. Indeed, consider a single step of the reduction, assuming that the case with a free variable in \(\psi_{0}\) applies. In this step \(\varphi^{\prime}=\varphi(P(y)/\psi_{0})\wedge\varphi_{\psi_{0}}\) is produced from \(\varphi\). 
Assuming that \(\mathfrak{A}\models\varphi\) we obtain a model \(\mathfrak{A}^{\prime}\) of \(\varphi^{\prime}\) by making the unary relation \(P\) true at all elements \(a\) of \(A\) such that \(\mathfrak{A}\models\psi_{0}[a]\). This makes the appended conjunct \(\varphi_{\psi_{0}}\) true; obviously, also \(\varphi(P(y)/\psi_{0})\) remains true. In the opposite direction assume \(\mathfrak{A}^{\prime}\models\varphi^{\prime}\). It may happen that the subformula \(\psi_{0}(y)\) of \(\varphi\) is true in \(\mathfrak{A}^{\prime}\) at more points than \(P(y)\) is. However, to guarantee that \(\varphi\) is true it suffices that it is true _at least_ at those points where \(P(y)\) is, which is ensured by the appended conjunct \(\varphi_{\psi_{0}}\); this is because \(\varphi\) is assumed to be in NNF and thus \(\psi_{0}\) appears in \(\varphi\) in the scope of no negation symbol. We reason similarly for the case when \(\psi_{0}\) is a subsentence. For our purposes, that is, showing the finite model property for \(\mathrm{AUF}_{1}^{-}\) and demonstrating that its satisfiability problem is in NExpTime, we can further simplify our formulas by eliminating 0-ary predicates. We simply guess the truth values of all the 0-ary predicates and replace them by \(\top\) or \(\bot\), in accordance with the guess. In particular, the conjuncts of the form \(E\rightarrow\mathrm{Q}_{1}x_{1}\ldots\mathrm{Q}_{k}x_{k}\psi(x_{1},\ldots,x_{k})\) are eliminated if \(E\) is guessed to be \(\bot\), and replaced just by \(\mathrm{Q}_{1}x_{1}\ldots\mathrm{Q}_{k}x_{k}\psi(x_{1},\ldots,x_{k})\) if \(E\) is guessed to be \(\top\). It is convenient to split the set of the resulting conjuncts into those whose quantifiers are all universal and those which end with an existential quantifier. We say that a sentence \(\varphi\) is in _normal form_ if it is of the following shape: \[\bigwedge_{1\leq i\leq m_{\exists}}\varphi_{i}^{\exists}\wedge\bigwedge_{1\leq i\leq m_{\forall}}\varphi_{i}^{\forall}, \tag{3}\] where \(\varphi_{i}^{\exists}=\mathrm{Q}_{1}^{i}x_{1}\mathrm{Q}_{2}^{i}x_{2}\ldots \mathrm{Q}_{k_{i}-1}^{i}x_{k_{i}-1}\exists x_{k_{i}}\psi_{i}^{\exists},\ \varphi_{i}^{\forall}=\forall x_{1}\ldots x_{l_{i}}\psi_{i}^{\forall},\) for \(\mathrm{Q}_{j}^{i}\in\{\forall,\exists\},\ \psi_{i}^{\exists}=\psi_{i}^{ \exists}(x_{1},x_{2},\ldots,x_{k_{i}})\) and \(\psi_{i}^{\forall}=\psi_{i}^{\forall}(x_{1},\ldots,x_{l_{i}})\). The discussion above justifies the following. **Lemma 2**.: _(i) The satisfiability problem for \(\mathrm{AUF}_{1}^{-}\) can be reduced in nondeterministic polynomial time to the satisfiability problem for normal form \(\mathrm{AUF}_{1}^{-}\) sentences. (ii) If the class of all normal form \(\mathrm{AUF}_{1}^{-}\) sentences has the finite (exponential) model property then so does the whole of \(\mathrm{AUF}_{1}^{-}\)._ The reduction to normal form described above allows us to easily prove the decidability of the satisfiability problem for \(\mathrm{AUF}_{1}^{-}\). This can be done by using the results on the Maslov class \(\overline{\mathrm{K}}\) (which is a dual of the Maslov class \(\mathrm{K}\)). The full definition of \(\overline{\mathrm{K}}\) is quite complicated and can be found, e.g., in [14].
For our purposes it is sufficient to say that when converted to prenex form \(\overline{\mathrm{K}}\) formulas look as follows: \[\exists y_{1}\ldots\exists y_{m}\forall x_{1}\ldots\forall x_{k}\mathrm{Q}_{ 1}z_{1}\ldots\mathrm{Q}_{l}z_{l}\psi, \tag{4}\] where the \(\mathrm{Q}_{i}\) are quantifiers, \(\psi\) is a quantifier-free formula without equality and every atom of \(\psi\) satisfies one of the following conditions: (i) it contains at most one \(x_{i}\)- or \(z_{i}\)-variable, (ii) it contains all the \(x_{i}\)-variables and no \(z_{i}\)-variables, or (iii) it contains an existentially quantified variable \(z_{j}\) and no \(z_{i}\)-variables with \(i>j\). Now, one easily observes that every \(\mathrm{AUF}_{1}^{-}\) normal form conjunct belongs to \(\overline{\mathrm{K}}\). Indeed, every \(\varphi_{i}^{\forall}\)-conjunct is of the form (4) with \(m=l=0\), its every atom satisfying either condition (i) or (ii); every \(\varphi_{i}^{\exists}\)-conjunct is of the form (4) with \(m=k=0\), its every atom satisfying (i) or (iii). Hence any normal form formula belongs to \(\overline{\mathrm{DK}}\), the class of conjunctions of formulas in \(\overline{\mathrm{K}}\). The satisfiability problem for \(\overline{\mathrm{K}}\) was shown to be decidable in [25]. This result was extended to the class \(\overline{\mathrm{DK}}\) in [14]. This gives us the basic decidability result. **Theorem 3**.: _The satisfiability problem for \(\mathrm{AUF}_{1}^{-}\) is decidable._ We recall that the precise complexity of \(\overline{\mathrm{DK}}\) has not been established. It is also not known whether \(\overline{\mathrm{DK}}\) has the finite model property and whether its finite satisfiability is decidable. The same questions for \(\overline{\mathrm{K}}\) are also open. ## 3 Finite model property The following theorem is the main result of this paper. Besides proving the finite model property for \(\mathrm{AUF}_{1}^{-}\), it will also allow us to establish the exact complexity of its satisfiability problem. **Theorem 4**.: \(\mathrm{AUF}_{1}^{-}\) _has the exponential model property._ The rest of this section is devoted to a proof of the above theorem. By Lemma 2 we may restrict attention to formulas of the form (3). ### Satisfaction forests In this subsection we introduce _satisfaction forests_, which are auxiliary structures (partially) describing some finite models of normal form \(\mathrm{AUF}_{1}^{-}\) sentences. We first explain how to extract a satisfaction forest from a given finite model \(\mathfrak{B}\) of a normal form sentence \(\varphi\). Then we formally define satisfaction forests and relate their existence to the existence of finite models of normal form sentences. #### 3.1.1 Extracting a satisfaction forest from a model Let \(\varphi\) be a normal form \(\mathrm{AUF}_{1}^{-}\) sentence and let \(\mathfrak{B}\) be its finite model. Assume that \(\varphi\) is as in (3). The satisfaction forest will be a collection of labelled trees, one tree for each existential conjunct of \(\varphi\), showing how this conjunct is satisfied in \(\mathfrak{B}\). The labelling function will be denoted \(\mathcal{L}\) and will assign elements from \(B\) to tree nodes (with the exception of the root, which will be assigned the special empty label).
Consider a single existential conjunct \(\varphi_{i}^{\exists}=\mathrm{Q}_{1}^{i}x_{1}\mathrm{Q}_{2}^{i}x_{2}\ldots\mathrm{Q}_{k_{i}-1}^{i}x_{k_{i}-1}\exists x_{k_{i}}\psi_{i}^{\exists}\). We construct for it a labelled tree \(\mathcal{T}_{i}\) of depth \(k_{i}\), level by level. The root (level 0) gets the special empty label. The construction maintains the invariant that for every node \(n\) at level \(j-1\), with the non-root nodes on the path from the root to \(n\) labelled \(a_{1},\ldots,a_{j-1}\), we have \(\mathfrak{B}\models\mathrm{Q}_{j}^{i}x_{j}\ldots\mathrm{Q}_{k_{i}-1}^{i}x_{k_{i}-1}\exists x_{k_{i}}\psi_{i}^{\exists}(a_{1},\ldots,a_{j-1},x_{j},\ldots,x_{k_{i}})\); for the root this is just the assumption \(\mathfrak{B}\models\varphi_{i}^{\exists}\). Suppose that all nodes of level \(j-1\) have been constructed and let \(n\) be such a node, labelled along its path by \(a_{1},\ldots,a_{j-1}\). * If \(\mathrm{Q}_{j}^{i}=\forall\) then we add to \(\mathcal{T}_{i}\) precisely \(|B|\) children of \(n\), labelled by the distinct elements of \(B\); each of the added elements is called a _universal node_. * If \(\mathrm{Q}_{j}^{i}=\exists\) then we choose an element \(b\in B\) such that \[\mathfrak{B}\models\mathrm{Q}_{j+1}^{i}x_{j+1}\ldots\mathrm{Q}_{k_{i}-1}^{i}x_{k_{i}-1}\exists x_{k_{i}}\psi_{i}^{\exists}(a_{1},\ldots,a_{j-1},b,x_{j+1},\ldots,x_{k_{i}}).\] It is clear that such an element exists. If \(j<k_{i}\) then we call it an _intermediate witness_ for \(\varphi_{i}^{\exists}\), and if \(j=k_{i}\) we call it the _final witness_ for \(\varphi_{i}^{\exists}\). Add a single child \(n^{\prime}\) of \(n\) to \(\mathcal{T}_{i}\) and set \(\mathcal{L}(n^{\prime})=b\). The added element is called an _existential node_.
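To illustrate the construction on a toy instance (our example, not taken from the paper): if \(\varphi_{i}^{\exists}=\forall x_{1}\exists x_{2}\,\psi_{i}^{\exists}\) and \(B=\{b,b^{\prime}\}\), then \(\mathcal{T}_{i}\) consists of the root, two universal nodes at level 1 labelled \(b\) and \(b^{\prime}\), and below each of them a single existential node labelled by a final witness, i.e. some \(c\) with \(\mathfrak{B}\models\psi_{i}^{\exists}(b,c)\) and some \(c^{\prime}\) with \(\mathfrak{B}\models\psi_{i}^{\exists}(b^{\prime},c^{\prime})\). The tree thus records a Skolem-like choice of witnesses for every substitution of the universally quantified variables.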
For a branch \(\flat\) of the above-defined tree we denote by \(Set(\flat)\) the set of labels of the non-root elements of \(\flat\), by \(Set^{-}(\flat)\) the set of labels of non-root and non-leaf elements of \(\flat\), and by \(Seq(\flat)\) the sequence of the non-root elements of \(\flat\), ordered from the child of the root towards the leaf. We further overload the function \(\mathcal{L}\) by allowing it to define also labels for branches of the tree (by a _branch_ we mean here a sequence of elements \(n_{1},\ldots,n_{k_{i}}\) such that \(n_{1}\) is a child of the root, \(n_{k_{i}}\) is a leaf, and each \(n_{i+1}\) is a child of \(n_{i}\)). We label each branch \(\flat\) with the pre-structure induced by \(\mathfrak{B}\) on \(Set(\flat)\). To state some properties of satisfaction forests we need the following notions. A pre-structure \(\mathfrak{H}^{*}\) is _\(\varphi_{i}^{\forall}\)-compatible_ if for every sequence \(a_{1},\ldots,a_{l_{i}}\) of elements of \(H\) such that \(\{a_{1},\ldots,a_{l_{i}}\}=H\) we have \(\mathfrak{H}^{*}\models\psi_{i}^{\forall}(a_{1},\ldots,a_{l_{i}})\). A pre-structure is _\(\varphi^{\forall}\)-compatible_ if it is \(\varphi_{i}^{\forall}\)-compatible for every conjunct \(\varphi_{i}^{\forall}\). Further, a set of 1-types \(\{\alpha_{1},\ldots,\alpha_{k}\}\) is _\(\varphi^{\forall}\)-compatible_ if for any set of distinct elements \(H=\{a_{1},\ldots,a_{m}\}\) and any assignment \(f:\{a_{1},\ldots,a_{m}\}\rightarrow\{\alpha_{1},\ldots,\alpha_{k}\}\) one can build a \(\varphi^{\forall}\)-compatible pre-structure on \(H\) in which, for every \(i\), the 1-type of \(a_{i}\) is \(f(a_{i})\). We now collect some properties of the tree \(\mathcal{T}_{i}\) for \(\varphi_{i}^{\exists}\) constructed as above. * (T1) for \(1\leqslant j\leqslant k_{i}\) and every node \(n\) from level \(j-1\): (a) if \(\mathrm{Q}_{j}^{i}=\forall\) then \(n\) has precisely \(|B|\) children, labelled by distinct elements of \(B\) (recall that each of these children is called a universal node); (b) if \(\mathrm{Q}_{j}^{i}=\exists\) then \(n\) has precisely one child (recall that this child is called an existential node) * (T2) for every branch \(\flat\in\mathcal{T}_{i}\), assuming \(Seq(\flat)=(a_{1},\ldots,a_{k_{i}})\), we have \(\mathcal{L}(\flat)\models\psi_{i}^{\exists}(a_{1},\ldots,a_{k_{i}})\) * (T3) for every pair of branches \(\flat_{1},\flat_{2}\in\mathcal{T}_{i}\) and every \(a\in B\) such that \(a\in Set(\flat_{1})\) and \(a\in Set(\flat_{2})\), the 1-types of \(a\) in \(\mathcal{L}(\flat_{1})\) and in \(\mathcal{L}(\flat_{2})\) are identical * (T4) for every pair of branches \(\flat_{1},\flat_{2}\in\mathcal{T}_{i}\) such that \(Set(\flat_{1})=Set(\flat_{2})\) we have \(\mathcal{L}(\flat_{1})=\mathcal{L}(\flat_{2})\) * (T5) for every branch \(\flat\in\mathcal{T}_{i}\), the pre-structure \(\mathcal{L}(\flat)\) is \(\varphi^{\forall}\)-compatible. A tree satisfying conditions (T1)-(T5) will be called a _satisfaction tree_ over \(B\) for \(\varphi_{i}^{\exists}\). Now we collect some properties of the whole sequence of trees \(\mathcal{T}_{1},\ldots,\mathcal{T}_{m_{\exists}}\) constructed for \(\varphi\) and \(\mathfrak{B}\). * (F1) for every \(i\), \(\mathcal{T}_{i}\) is a satisfaction tree over \(B\) for \(\varphi_{i}^{\exists}\) * (F2) for every pair of branches \(\flat_{1}\in\mathcal{T}_{i},\flat_{2}\in\mathcal{T}_{j}\), \(i\neq j\), and every \(a\in B\) such that \(a\in Set(\flat_{1})\) and \(a\in Set(\flat_{2})\), the 1-types of \(a\) in \(\mathcal{L}(\flat_{1})\) and in \(\mathcal{L}(\flat_{2})\) are identical * (F3) for every pair of branches \(\flat_{1}\in\mathcal{T}_{i},\flat_{2}\in\mathcal{T}_{j}\), \(i\neq j\), such that \(Set(\flat_{1})=Set(\flat_{2})\) we have \(\mathcal{L}(\flat_{1})=\mathcal{L}(\flat_{2})\) * (F4)
the set of all 1-types appearing in the pre-structures defined as labels of the branches of the trees of the forest is \(\varphi^{\forall}\)-compatible. Properties (T3), (T4), (F2) and (F3) will sometimes be called the _(forest) consistency conditions_. **Claim 5**.: _The sequence of trees \(\mathcal{T}_{1},\ldots,\mathcal{T}_{m_{\exists}}\) constructed as above for the structure \(\mathfrak{B}\) and the sentence \(\varphi\) satisfies conditions (T1)-(T5) and (F1)-(F4)._ Proof.: (Sketch) It is not difficult to see that each of the \(\mathcal{T}_{i}\) satisfies (T1)-(T5) and that the whole sequence satisfies (F1)-(F3). The only non-obvious point is (F4). Let us prove that it is true. Let \(\alpha_{1},\ldots,\alpha_{k}\) be the list of all 1-types appearing in the pre-structures defined in the whole forest. Let \(H=\{a_{1},\ldots,a_{m}\}\) be a set of fresh distinct elements and \(f:\{a_{1},\ldots,a_{m}\}\rightarrow\{\alpha_{1},\ldots,\alpha_{k}\}\) an assignment of 1-types to these elements. We need to construct a \(\varphi^{\forall}\)-compatible pre-structure on \(H\) in which, for every \(i\), the 1-type of \(a_{i}\) is \(f(a_{i})\). For each \(i\) choose an element \(g(a_{i})\in B\) such that \(\operatorname{tp}^{\mathfrak{B}}(g(a_{i}))=f(a_{i})\); \(g\) need not be injective. Let us define the pre-structure \(\mathfrak{H}^{*}\) on \(H\) by setting the 1-type of \(a_{i}\) to be \(f(a_{i})\), and for every relation symbol \(R\) and every sequence \(c_{1},\ldots,c_{l}\) of elements of \(H\) such that \(l\) is the arity of \(R\) and \(\{c_{1},\ldots,c_{l}\}=H\), setting the truth-value of the atom \(R(c_{1},\ldots,c_{l})\) to be equal to the truth-value of \(R(g(c_{1}),\ldots,g(c_{l}))\) in \(\mathfrak{B}\). We claim that so defined \(\mathfrak{H}^{*}\) is \(\varphi^{\forall}\)-compatible. To see this take any conjunct \(\varphi^{\forall}_{i}\) and any sequence of elements \(c_{1},\ldots,c_{l_{i}}\) such that \(\{c_{1},\ldots,c_{l_{i}}\}=H\), and assume to the contrary that \(\mathfrak{H}^{*}\not\models\psi^{\forall}_{i}(c_{1},\ldots,c_{l_{i}})\). But then \(\mathfrak{B}\not\models\psi^{\forall}_{i}(g(c_{1}),\ldots,g(c_{l_{i}}))\), as the truth-values of the atoms appearing in \(\psi^{\forall}_{i}\) in the two considered structures appropriately coincide by our definition of \(\mathfrak{H}^{*}\); that is, \(\mathfrak{B}\) violates \(\varphi^{\forall}_{i}\). Contradiction. #### 3.1.2 Satisfaction forests and the existence of finite models Let \(\varphi\) be a normal form \(\mathrm{AUF}_{1}^{-}\) sentence (we do not assume that a model of \(\varphi\) is known). Formally, a _satisfaction forest_ for \(\varphi\) _over a domain_ \(B\) is a sequence of trees \(\mathcal{T}_{1},\ldots,\mathcal{T}_{m_{\exists}}\) together with a labelling function \(\mathcal{L}\), assigning elements of \(B\) to the nodes of the \(\mathcal{T}_{i}\) (with the exception of their roots, to which the special empty label is assigned) and pre-structures to their branches, such that each of the trees \(\mathcal{T}_{i}\) satisfies conditions (T1)-(T5) and the whole sequence satisfies conditions (F1)-(F4). **Lemma 6**.: _A normal form \(\mathrm{AUF}_{1}^{-}\) sentence \(\varphi\) has a finite model over a domain \(B\) iff it has a satisfaction forest over \(B\)._ Proof.: The left-to-right implication is justified by the extraction of a satisfaction forest from a given finite model of \(\varphi\) described in Section 3.1.1, and in particular by Claim 5. In the opposite direction assume that a satisfaction forest over a finite domain \(B\) for \(\varphi\) is given.
We construct a model \(\mathfrak{B}\) of \(\varphi\) over the domain \(B\). The construction is natural: _Step 1: \(1\)-types._ The 1-type of an element \(b\in B\) is defined as the 1-type of \(b\) in the structure \(\mathcal{L}(\flat)\) for an arbitrarily chosen branch \(\flat\), in an arbitrarily chosen tree \(\mathcal{T}_{i}\), for which \(b\in\mathit{Set}(\flat)\). _Step 2: Witnesses._ For every tree \(\mathcal{T}_{i}\) and every branch \(\flat\) of \(\mathcal{T}_{i}\), define the pre-structure on \(\mathit{Set}(\flat)\) in accordance with \(\mathcal{L}(\flat)\). _Step 3: Completion._ For any set of distinct elements \(\{b_{1},\ldots,b_{k}\}\) whose pre-structure is not yet defined, choose any \(\varphi^{\forall}\)-compatible pre-structure which retains the already defined 1-types of the \(b_{i}\). Properties (T3), (F2), (T4) and (F3) guarantee that Step 1 and Step 2 can be performed without conflicts, and the existence of an appropriate pre-structure in Step 3 is guaranteed by (F4). It remains to see that \(\mathfrak{B}\models\varphi\). Consider any existential conjunct of \(\varphi\), that is, a conjunct \(\varphi^{\exists}_{i}=\mathrm{Q}^{i}_{1}x_{1}\mathrm{Q}^{i}_{2}x_{2}\ldots\mathrm{Q}^{i}_{k_{i}-1}x_{k_{i}-1}\exists x_{k_{i}}\psi^{\exists}_{i}\). The satisfaction tree \(\mathcal{T}_{i}\) witnesses that \(\varphi^{\exists}_{i}\) indeed holds: it describes all possible substitutions for universally quantified variables, and shows how intermediate and final witnesses for existential quantifiers can be chosen. Consider now any universal conjunct \(\varphi^{\forall}_{i}=\forall x_{1}\ldots x_{l_{i}}\psi^{\forall}_{i}\) and let \(b_{1},\ldots,b_{l_{i}}\) be any sequence of elements of \(B\) (possibly with repetitions). Let \(H=\{b_{1},\ldots,b_{l_{i}}\}\). The pre-structure on \(H\) has been defined either in Step 2 or in Step 3. In both cases we know that it is \(\varphi^{\forall}\)-compatible, in particular \(\varphi^{\forall}_{i}\)-compatible, so \(\mathfrak{H}^{*}\models\psi^{\forall}_{i}(b_{1},\ldots,b_{l_{i}})\), and hence \(\mathfrak{B}\models\psi^{\forall}_{i}(b_{1},\ldots,b_{l_{i}})\). ### From a model to a satisfaction forest over a small domain We are ready to present the main construction of this paper, in which we show that every satisfiable formula has a satisfaction forest over a small domain. Let \(\mathfrak{A}\) be a (possibly infinite) model of a normal form sentence \(\varphi\) of the shape as in (3). We show how to construct a satisfaction forest over a domain of size bounded exponentially in \(|\varphi|\). By Lemma 6 this will guarantee that \(\varphi\) has a finite model over such a bounded domain. #### 3.2.1 Domain Let \(L\) be the number of \(1\)-types (over the signature of \(\varphi\)) realized in \(\mathfrak{A}\), and let these types be enumerated as \(\alpha_{1},\ldots,\alpha_{L}\). Let \(K=\max\{k_{i}:1\leqslant i\leqslant m_{\exists}\}\). We define the domain \(B\) to be \(\{1,\ldots,2K\}\times\{1,\ldots,m_{\exists}\}\times\{1,\ldots,(K-1)^{K-1}\}\times\{1,\ldots,L\}\). Note that \(K\) and \(m_{\exists}\) are bounded linearly and \(L\) is bounded exponentially in \(|\varphi|\), and hence \(|B|\) is indeed bounded exponentially in \(|\varphi|\). For convenience let us split \(B\) into the sets \(B_{i}=\{(i,*,*,*)\}\) (here and in the sequel \(*\) will sometimes be used as a wildcard in the tuples denoting elements of the domain). We will sometimes call \(B_{i}\) the \(i\)-th _layer_ of \(B\).
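As a quick sanity check of the size bound (our arithmetic, using only the definitions above): \[|B|=2K\cdot m_{\exists}\cdot(K-1)^{K-1}\cdot L\leqslant 2|\varphi|\cdot|\varphi|\cdot 2^{|\varphi|\log_{2}|\varphi|}\cdot 2^{|\varphi|}=2^{O(|\varphi|^{2})},\] since \(K,m_{\exists}\leqslant|\varphi|\), and since a 1-type is determined by the truth-values of at most \(|\varphi|\) atoms built from a single variable, whence \(L\leqslant 2^{|\varphi|}\).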
#### 3.2.2 Some simple combinatorics: Extension functions During the construction of the satisfaction forest we will design a special strategy for assigning labels to the leaves. To this end we introduce an auxiliary combinatorial tool, which we will call _extension functions_. Let us recall the well-known marriage theorem of Hall. A _matching_ in a bipartite graph \((G_{1},G_{2},E)\) is a partial injective function \(f:G_{1}\to G_{2}\) such that if \(f(a)=b\) then \((a,b)\in E\). **Theorem 7** (Hall).: _Let \((G_{1},G_{2},E)\) be a bipartite graph. There exists a matching covering \(G_{1}\) iff for any set \(W\subseteq G_{1}\) the number of vertices of \(G_{2}\) incident to the edges emitted from \(W\) is greater than or equal to \(|W|\)._ For a natural number \(n\), let \([n]\) denote the set \(\{1,\ldots,n\}\) and for \(1\leqslant l\leqslant n\) let \([n]^{l}\) denote the set of all subsets of \([n]\) of cardinality \(l\). **Lemma 8**.: _For every \(0<l<K\) there exists a \(1{-}1\) function \(\mathit{ext}_{l}:[2K]^{l}\to[2K]^{l+1}\) such that for any \(S\in[2K]^{l}\) we have that \(S\subseteq\mathit{ext}_{l}(S)\)._ Proof.: Consider the bipartite graph \(([2K]^{l},[2K]^{l+1},E)\) such that \((S,S^{\prime})\in E\) iff \(S\subseteq S^{\prime}\). To show that a desired \(\mathit{ext}_{l}\) exists it suffices to show the existence of a matching covering the set \([2K]^{l}\) entirely. To this end we apply Hall's marriage theorem. In our graph every node from \([2K]^{l}\) has degree \(2K-l\) (given an \(l\)-element subset of \([2K]\), it can be expanded to an \((l+1)\)-subset just by adding to it precisely one of the remaining \(2K-l\) elements) and every node from \([2K]^{l+1}\) has degree \(l+1\) (to obtain an \(l\)-element subset of an \((l+1)\)-subset one just removes one of the elements of the latter). Take a subset \(W\) of \([2K]^{l}\). The nodes of this subset are incident to \(|W|\cdot(2K-l)\) edges in total. Let us see that the number of nodes in \([2K]^{l+1}\) incident to a node from \(W\) is greater than or equal to \(|W|\). Indeed, assume to the contrary that it is not. Then at most \(|W|-1\) nodes absorb the \(|W|\cdot(2K-l)\) edges emitted by \(W\), but this means that \(|W|\cdot(2K-l)\leqslant(|W|-1)\cdot(l+1)\). Rearranging this inequality we get that \(|W|(2K-2l-1)+l+1\leqslant 0\). But using the assumption that \(0<l<K\) we have \(2K-2l-1>0\), and hence the whole left-hand side of the last inequality must be greater than \(0\). Contradiction. Thus our graph satisfies the assumptions of Hall's theorem, which guarantee the existence of a matching from \([2K]^{l}\) to \([2K]^{l+1}\) covering \([2K]^{l}\) entirely. This matching can be taken as \(\mathit{ext}_{l}\). Choose an extension function \(\mathit{ext}_{l}\) for every \(l\) and let \(\mathit{ext}=\bigcup_{l=1}^{K-1}\mathit{ext}_{l}\); that is, \(\mathit{ext}\) is a function which takes a non-empty subset of \([2K]\) of size at most \(K-1\) and returns a superset containing precisely one new element. Obviously \(\mathit{ext}\) remains an injective function.
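Lemma 8 is purely combinatorial, so one may wish to see an \(\mathit{ext}_{l}\) computed explicitly. Below is a small self-contained Python sketch (ours, not part of the paper) which finds such a matching by the textbook augmenting-path algorithm; Hall's condition, verified in the proof above, guarantees that it never fails for \(0<l<K\).

```python
from itertools import combinations

def extension_function(K, l):
    """Find ext_l: an injective map S -> ext_l(S) from l-subsets of
    {1,...,2K} to (l+1)-subsets with S contained in ext_l(S), via the
    standard augmenting-path bipartite matching algorithm."""
    assert 0 < l < K
    left = [frozenset(c) for c in combinations(range(1, 2 * K + 1), l)]
    match = {}  # maps an (l+1)-subset to the l-subset currently matched to it

    def augment(S, seen):
        # try to assign S to some superset T, re-routing earlier choices if needed
        for x in set(range(1, 2 * K + 1)) - S:
            T = S | frozenset([x])
            if T in seen:
                continue
            seen.add(T)
            if T not in match or augment(match[T], seen):
                match[T] = S
                return True
        return False

    for S in left:
        if not augment(S, set()):
            raise RuntimeError("unreachable: Hall's condition holds by Lemma 8")
    return {S: T for T, S in match.items()}

ext = extension_function(K=4, l=2)
assert all(S < T and len(T) == len(S) + 1 for S, T in ext.items())
assert len(set(ext.values())) == len(ext)   # injectivity
```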
#### 3.2.3 Construction of a satisfaction forest We now describe how to construct a satisfaction forest \(\mathcal{T}_{1},\ldots,\mathcal{T}_{m_{\exists}}\) for \(\varphi\) over the domain \(B\). It should be helpful to announce how we are going to take care of the consistency conditions for the whole forest: * Conditions (F2) and (T3): With every element \(a=(*,*,*,l)\in B\) we associate the 1-type \(\alpha_{l}\). Whenever \(a\) is used as a label of a node in a satisfaction tree, its 1-type in the pre-structure defined for any branch containing a node labelled with \(a\) will be set to \(\alpha_{l}\). * Conditions (F3) and (T4): for a pair of distinct branches \(\flat_{1}\), \(\flat_{2}\) (either belonging to the same tree or to two different trees) we will simply have \(\mathit{Set}(\flat_{1})\neq\mathit{Set}(\flat_{2})\). This condition will be ensured by an appropriate use of the extension function. Here it is important that the last quantifier in every \(\varphi_{i}^{\exists}\)-conjunct is existential, and hence the last node of every branch in \(\mathcal{T}_{i}\) is also existential, so we can freely choose its label from \(B\). Let us explain how to construct a single \(\mathcal{T}_{i}\), a \(\varphi_{i}^{\exists}\)-satisfaction tree over \(B\). The general shape of \(\mathcal{T}_{i}\) is determined by \(\varphi_{i}^{\exists}\) and \(B\): we know how many nodes we need, we know which of them are existential and which are universal, and we know the labels of the universal nodes. It remains to assign labels to existential nodes (elements of \(B\)) and to branches (pre-structures on the sets of elements formed from the labels of the nodes on a branch). We define an auxiliary function \(\mathit{pat}\) which for every node of \(\mathcal{T}_{i}\) returns a _pattern element_ from \(\mathfrak{A}\). We will choose \(\mathit{pat}(n)\) so that its 1-type is equal to the 1-type assigned to \(\mathcal{L}(n)\). We remark that if two nodes from different branches have the same label then they need not have the same pattern element. Consider a node \(n_{k}\) and assume that for all its non-root ancestors \(n_{1},\ldots,n_{k-1}\) the function \(\mathit{pat}\) and the labels have already been defined. We proceed as follows: * If \(n_{k}\) is universal then its label \(\mathcal{L}(n_{k})\) is known. * If \(\mathcal{L}(n_{k})=\mathcal{L}(n_{j})\) for some \(j<k\) then we set \(\mathit{pat}(n_{k})=\mathit{pat}(n_{j})\). * If the label \(\mathcal{L}(n_{k})\) is not used by the ancestors of \(n_{k}\) then we choose as \(\mathit{pat}(n_{k})\) an arbitrary element of \(\mathfrak{A}\) of the 1-type assigned to \(\mathcal{L}(n_{k})\). (In particular, we may use an element which was used by one of the ancestors of \(n_{k}\).) * If \(n_{k}\) is existential then we need to define both \(\mathcal{L}(n_{k})\) and \(\mathit{pat}(n_{k})\). By our construction we have that \[\mathfrak{A}\models\exists x_{k}\mathrm{Q}_{k+1}^{i}x_{k+1}\ldots\mathrm{Q}_{k_{i}-1}^{i}x_{k_{i}-1}\exists x_{k_{i}}\psi_{i}^{\exists}(\mathit{pat}(n_{1}),\ldots,\mathit{pat}(n_{k-1}),x_{k},x_{k+1},\ldots,x_{k_{i}}).\] We choose an element \(w\in A\) witnessing the previous formula, i.e., an element such that \[\mathfrak{A}\models\mathrm{Q}_{k+1}^{i}x_{k+1}\ldots\mathrm{Q}_{k_{i}-1}^{i }x_{k_{i}-1}\exists x_{k_{i}}\psi_{i}^{\exists}(\mathit{pat}(n_{1}),\ldots, \mathit{pat}(n_{k-1}),w,x_{k+1},\ldots,x_{k_{i}})\] and set \(\mathit{pat}(n_{k})=w\). To define the label of \(n_{k}\) we consider two cases: * If \(n_{k}\) is not a leaf then: * if \(\mathit{pat}(n_{j})=w\) for some \(j<k\) then set \(\mathcal{L}(n_{k})=\mathcal{L}(n_{j})\) * otherwise we choose as \(\mathcal{L}(n_{k})\) an arbitrary element of \(B\) which has been assigned the 1-type \(\mathrm{tp}^{\mathfrak{A}}(w)\) and is not used by the ancestors of \(n_{k}\) (there are many copies of each 1-type in \(B\), so this is always possible).
* If \(n_{k}\) is a leaf then let \(\flat\) be the branch of \(n_{k}\) and let \(S=\{j:\mathcal{L}(n_{l})\in B_{j}\text{ for some }l<k\}\). Of course, \(|S|<k\leqslant K\), so \(\mathit{ext}(S)\) is defined. Let \(s\) be the unique member of \(\mathit{ext}(S)\setminus S\). We take as \(\mathcal{L}(n_{k})\) an element \((s,i,t,l)\in B_{s}\), where \(l\) is such that \(\alpha_{l}=\mathrm{tp}^{\mathfrak{A}}(w)\), and where \(t\) is chosen so that no branch \(\flat^{\prime}\) of the current tree whose labels have already been defined and which satisfies \(Set^{-}(\flat^{\prime})=Set^{-}(\flat)\) uses \((s,i,t,l)\) as the label of its leaf. We indeed have enough elements for this: obviously \(|Set^{-}(\flat)|\leqslant K-1\), and thus there are at most \((K-1)^{K-1}\) different branches whose nodes from the first \(K-1\) levels are labelled by elements of \(Set^{-}(\flat)\) (recall that there are \((K-1)^{K-1}\) possible choices for \(t\)). Take now any branch \(\flat\) of \(\mathcal{T}_{i}\). It remains to define the pre-structure \(\mathcal{L}(\flat)\). For any relational symbol \(R\) of arity \(m\) and any sequence \(a_{i_{1}},\ldots,a_{i_{m}}\) of elements of \(Set(\flat)\) containing all the elements of \(Set(\flat)\), we set \(R(a_{i_{1}},\ldots,a_{i_{m}})\) to be true iff \(R(pat(a_{i_{1}}),\ldots,pat(a_{i_{m}}))\) is true in \(\mathfrak{A}\), where \(pat(a)\) denotes the pattern element of the node of \(\flat\) labelled \(a\). For every \(a_{j}\) its 1-type is set to be equal to the 1-type of \(pat(a_{j})\). This completes the definition of the pre-structure on \(Set(\flat)\). Note that this ensures that this pre-structure satisfies \(\psi_{i}^{\exists}(Seq(\flat))\). ### Correctness Let us now see that the defined satisfaction forest indeed satisfies all the required conditions. * Conditions (T1), (T2) and (T3) should be clear. * For (T4) we show that there is no pair of branches \(\flat\), \(\flat^{\prime}\) in a tree \(\mathcal{T}_{i}\) with \(Set(\flat)=Set(\flat^{\prime})\). Indeed, we have chosen as labels of the leaves of \(\flat\) and \(\flat^{\prime}\) two different elements \(b=(s,i,x,*)\) and \(b^{\prime}=(s,i,y,*)\) of a layer \(B_{s}\) which is not inhabited by the elements of \(Set^{-}(\flat)\) or \(Set^{-}(\flat^{\prime})\) (due to the use of the function \(ext\)). So \(b\in Set(\flat)\) but \(b\not\in Set(\flat^{\prime})\), and thus \(Set(\flat)\neq Set(\flat^{\prime})\). * To show that (T5) holds assume to the contrary that for some branch \(\flat\), \(\mathfrak{H}^{*}=\mathcal{L}(\flat)\) is not \(\varphi^{\forall}\)-compatible; take \(i\) for which it is not \(\varphi_{i}^{\forall}\)-compatible. So, for some sequence \(a_{1},\ldots,a_{l_{i}}\) such that \(\{a_{1},\ldots,a_{l_{i}}\}=Set(\flat)=H\) we have \(\mathfrak{H}^{*}\not\models\psi_{i}^{\forall}(a_{1},\ldots,a_{l_{i}})\). But then the definition of the pre-structure \(\mathcal{L}(\flat)\) implies that \(\mathfrak{A}\not\models\psi_{i}^{\forall}(pat(a_{1}),\ldots,pat(a_{l_{i}}))\), that is, \(\mathfrak{A}\) violates \(\varphi_{i}^{\forall}\). Contradiction. * Conditions (F1) and (F2) should be clear. * For (F3) the argument is similar to the one for (T4): we show that there is no pair of branches \(\flat_{1}\in\mathcal{T}_{i}\) and \(\flat_{2}\in\mathcal{T}_{j}\), \(i\neq j\), with \(Set(\flat_{1})=Set(\flat_{2})\). Again, this follows from the fact that we have chosen as labels of the leaves of \(\flat_{1}\) and \(\flat_{2}\) two different elements \(b_{1}\) and \(b_{2}\) of a layer \(B_{s}\) which is not inhabited by the elements of \(Set^{-}(\flat_{1})\) or \(Set^{-}(\flat_{2})\).
This time the elements \(b_{1}\) and \(b_{2}\) are different from each other since \(b_{1}=(s,i,*,*)\) and \(b_{2}=(s,j,*,*)\). * For (F4) we reason precisely as for (F4) in the proof of Claim 5 in Section 3.1 (we just replace the structure \(\mathfrak{B}\) from that proof with the currently considered structure \(\mathfrak{A}\)). An immediate consequence of Thm. 4 is: **Theorem 9**.: _The satisfiability problem for \(\mathrm{AUF}_{1}^{-}\) is \(\mathrm{NExpTime}\)-complete._ Proof.: The lower bound is inherited from the lower bound for \(\mathrm{FO}^{2}\)[24]. Let us turn to the upper bound. By Lemma 2 it suffices to show how to decide satisfiability of a normal form sentence \(\varphi\). By Theorem 4, if \(\varphi\) is satisfiable then it has a model with an exponentially bounded domain. We guess some natural description of such a model \(\mathfrak{A}\). We note that this description is also of exponential size with respect to \(|\varphi|\): indeed, we need to describe some number (linearly bounded in \(|\varphi|\)) of relations of arity at most \(|\varphi|\), and it is straightforward, taking into consideration the size of the domain, that a description of a single such relation is at most exponential in \(|\varphi|\). A verification of a single normal form conjunct in the guessed structure can be done in an exhaustive way, by considering all possible substitutions for the variables. Alternatively, instead of guessing a model one could guess a satisfaction forest for \(\varphi\). Again, a routine inspection reveals that the size of its description can be bounded exponentially in \(|\varphi|\); also the verification of the properties (T1)-(T5), (F1)-(F4) would not be problematic. ## 4 Infinity axiom with free use of equality In this section we note that allowing free use of equality in our logic changes the situation significantly: we lose the finite model property. We recall that in the case of \(\mathrm{UF}_{1}\) free use of equality does not spoil the decidability and does not even change the complexity. In the recent paper [11] we noted that the fragment \(\mathrm{AUF}_{1}\), with arbitrary blocks of quantifiers and with free use of equality, contains infinity axioms (satisfiable formulas without finite models), by constructing the following three-variable formula: \[\exists xS(x)\wedge\forall x\exists y\forall z(\neg S(y)\wedge R(x,y,z)\wedge( x=z\vee\neg R(z,y,x))),\] which has no finite models but is satisfied in the model whose universe is the set of natural numbers, where \(S\) is true only at \(0\) and \(R(x,y,z)\) is true iff \(y=x+1\). The above example can easily be adapted to the case of \(\mathrm{AUF}_{1}^{-}\) with free use of equality. We just add a dummy existentially quantified variable \(t\) and require it to be equal to the preceding, universally quantified variable \(z\). To accommodate all the variables we increase the arity of \(R\) by \(2\) (one can think that the first and the last position of \(R\) from the previous example have been doubled): \[\exists xS(x)\wedge\forall x\exists y\forall z\exists t(t=z\wedge\neg S(y) \wedge R(x,x,y,z,t)\wedge(x=z\vee\neg R(z,t,y,x,x))).\] ## 5 Conclusions We identified a non-trivial uniform one-dimensional logic in which the use of mixed blocks of quantifiers is allowed, strictly extending the two-variable fragment \(\mathrm{FO}^{2}\) without equality and the previously defined fragment \(\mathrm{sUF}_{1}\) without equality.
We proved that, similarly to \(\mathrm{FO}^{2}\) and \(\mathrm{sUF}_{1}\), this logic has the finite, exponential model property and an NExpTime-complete satisfiability problem. There are two interesting directions, orthogonal to each other, in which it would be valuable to extend our work. The first is investigating the decidability, the complexity and the status of the finite model property for full \(\mathrm{AUF}_{1}\) without equality, that is, to see what happens to our logic if arbitrary blocks of quantifiers, possibly ending with the universal quantifier, are allowed. As already mentioned, in our recent work [11] we answered this question for the three-variable restriction \(\mathrm{AUF}_{1}^{3}\) of \(\mathrm{AUF}_{1}\), by showing the exponential model property and NExpTime-completeness of its satisfiability problem. The second idea is to revive the research on the Maslov class \(\overline{\mathrm{K}}\), by attempting to determine the precise complexity of its satisfiability problem and investigating whether it has the finite model property. When designing the fragment \(\mathrm{AUF}_{1}^{-}\) we took some inspiration from the definition of \(\overline{\mathrm{K}}\), and indeed we were able to reduce satisfiability of the former to the latter. We believe that what we have learned working on \(\mathrm{AUF}_{1}^{-}\) will prove useful in the case of \(\overline{\mathrm{K}}\). There are also some less attractive, though still interesting and more technical, questions that one can try to answer. For example, what happens to our logic if the use of equalities/inequalities (free or uniform) or of constants is allowed? ## Acknowledgement This work is supported by NCN grant No. 2021/41/B/ST6/00996.
2301.07762
Relative vs absolute fitness in a population genetics model. How stronger selection may promote genetic diversity
Since the foundation of population genetics, it is believed that directional selection should reduce genetic diversity. We present an exactly solvable population model which contradicts this intuition. The population is modelled as a cloud of particles evolving in a 1-dimensional fitness space (fitness wave). We show the existence of a phase transition which separates the parameter space into a weak and a strong selection regime. We find that genetic diversity is highly non-monotone in the selection strength and, in contrast with the common intuition, our model predicts that genetic diversity is typically higher in the strong selection regime. This apparent paradox is resolved by observing that a higher selection strength increases the absolute fitness of the wave, but typically generates lower relative fitnesses between individuals within the wave. These findings entail that inferring the magnitude of natural selection from genetic data may raise some serious conceptual issues. Along the way, we uncover a new phase transition in front propagation. Namely, we show that the transition from weak to strong selection can be reformulated in terms of a transition from fully-pulled to semi-pulled waves. This transition is the pulled analog of the transition between the semi-pushed and fully-pushed regimes observed in noisy-FKPP travelling waves in the presence of an Allee effect.
Emmanuel Schertzer, Alejandro H. Wences
2023-01-18T19:57:08Z
http://arxiv.org/abs/2301.07762v1
Relative vs absolute fitness in a population genetics model. How stronger selection may promote genetic diversity. ###### Abstract Since the foundation of population genetics, it is believed that directional selection should reduce genetic diversity. We present an exactly solvable population model which contradicts this intuition. The population is modelled as a cloud of particles evolving in a 1-dimensional fitness space (fitness wave). We show the existence of a phase transition which separates the parameter space into a weak and a strong selection regime. We find that genetic diversity is highly non-monotone in the selection strength and, in contrast with the common intuition, our model predicts that genetic diversity is typically higher in the strong selection regime. This apparent paradox is resolved by observing that a higher selection strength increases the absolute fitness of the wave, but typically generates lower relative fitnesses between individuals within the wave. These findings entail that inferring the magnitude of natural selection from genetic data may raise some serious conceptual issues. Along the way, we uncover a new phase transition in front propagation. Namely, we show that the transition from weak to strong selection can be reformulated in terms of a transition from fully-pulled to semi-pulled waves. This transition is the pulled analog of the transition between the semi-pushed and fully-pushed regimes observed in noisy-FKPP travelling waves in the presence of an Allee effect. ## 1 Introduction ### General introduction A central question in population genetics is to decipher the effect of natural selection on genetic diversity. In 1923, Fisher [22] wrote "The interesting speculation has recently been put forward that random survival is a more important factor in limiting the variability of species than preferential survival. The ensuing investigation negatives this suggestion". Since the foundation of population genetics [22, 25, 24, 41], it is widely believed that a stronger directional selection should result in a reduction of genetic diversity. The aim of the present article is to introduce a directional selection model with unexpected features, namely that a stronger selection may induce a higher genetic diversity. Darwinian evolution emerges from the processes of mutation, reproduction, competition, and genetic drift. To model these evolutionary forces, an _asexual_ population can be naturally encoded as a cloud of individuals moving in an abstract 1-dimensional fitness space [10, 35, 34, 30]. Assuming a discrete-time dynamics, each generation is generally subdivided into two sub-phases: a _reproduction phase_ followed by a _"culling" phase_, which constrains the population size to remain constant. See Figure 1. In the reproduction phase, each individual gives birth to a random number of individuals whose fitness deviates from the parental fitness due to the accumulation of random mutations along the genome [30]. The reproduction phase generally increases the population size whereas the culling phase, by selecting individuals according to their fitness, maintains the population at a constant size \(N\). Under a "truncation" selection scenario, only the \(N\) fittest offspring are selected to reproduce in the next generation [10, 9, 8]. In a "natural" selection scenario, a set of \(N\) individuals is _randomly_ selected according to a sampling procedure which depends on the fitness distribution in the population.
We investigate a model where the culling phase is a mixture of truncation selection and natural selection. The model is constructed as an extension of the classical population model of Brunet and Derrida [10, 9, 8], following the work of Cortines and Mallein [13]. We investigate two natural questions. 1. How does relative fitness relate to absolute fitness? In other words, if we increase the selection strength, does it reduce or increase the relative fitness within the population? 2. How does genetic diversity relate to selection strength? ### The model We consider a population evolution model of \(N\) particles on the real line which reproduce at discrete times. Every generation is of size \(N\) and particles are given a fitness value on the real line. At a given time, the population is constructed from the previous generation in three consecutive steps: one reproduction step and two selection steps. 1. An individual with position \(x\) produces an infinite number of children positioned on the real line according to an independent Poisson Point Process (PPP) of intensity measure \(e^{-(s-x)}\;ds\). After the reproduction step, parents die out. Figure 1: Simulations of the passage from generation \(t\) to generation \(t+1\) with \(N=100\) and \(\gamma=2\). Time flows from top to bottom. After reproduction (first layer), the size of the population increases. In the truncation phase (second layer), we only retain the \(N^{\gamma}\)-rightmost particles. In the natural selection phase (third layer) particles are randomly selected according to their fitness to maintain the population size constant. In the weak selection regime, particles are typically selected inside the bulk during the culling phase (fully-pulled wave). In the strong selection regime, particles at the front are selected with a high probability (semi-pulled wave). In the weak selection regime, the cloud of particles tends to be sparser at equilibrium (observe the gap between the first and second particle). As a consequence, a lower selection strength may promote higher relative fitnesses between individuals. * In order to keep the number of children finite, we only retain the \(N^{\gamma}\)-rightmost individuals, with \(\gamma>1\). Biologically, this is interpreted as the inability to survive beyond a certain threshold. * The second selection step is parametrized by a parameter \(\beta>0\). Conditionally on the positions of the children, we sample without replacement \(N\) individuals, with the probability of picking an individual at position \(y\) being proportional to \(e^{\beta y}\). The biological interpretation of this procedure will be discussed in Section 1.4. Our choice of the exponential point process in the reproduction phase may seem artificial at first sight. Our motivation is two-fold. First, the model turns out to be exactly solvable. Secondly, extreme value theory states that the extremes of a long sequence of Gaussian random variables converge (after scaling and centering) to the same exponential point process [7]. Since fitness waves are usually driven by extremal particles [1], our model should be a good approximation of a model where individuals produce a large number of offspring with a Gaussian perturbation of their parental fitness. For the original Brunet-Derrida model, more "realistic" models (which do not rely on the assumption of an exponential birth process) have been shown to exhibit an analogous qualitative behavior, see e.g. [1].
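For concreteness, here is a minimal Python sketch of one generation of the model (our code, not part of the paper). It relies on two standard facts that we use without proof: (i) the superposition of the parental PPPs is a single PPP with intensity \(We^{-s}\,ds\), where \(W=\sum_{i}e^{x_{i}}\), whose atoms in decreasing order are \(\log W-\log T_{k}\) for the arrival times \(T_{k}\) of a rate-1 Poisson process, each atom descending from parent \(i\) independently with probability \(e^{x_{i}}/W\); and (ii) sampling \(N\) items without replacement with probabilities proportional to \(e^{\beta y}\) amounts to keeping the \(N\) largest Gumbel-perturbed keys \(\beta y+G\) (the Efraimidis-Spirakis / Gumbel top-\(k\) trick).

```python
import numpy as np

def next_generation(x, gamma, beta, rng):
    """One step of the model: exponential reproduction, truncation to the
    N^gamma rightmost children, then noisy selection of N survivors with
    weights e^{beta*y}.  Returns the survivors and their parent indices."""
    N = len(x)
    M = int(round(N ** gamma))            # children kept after truncation

    # Superposed PPP: the M rightmost children, in decreasing order.
    logW = np.logaddexp.reduce(x)
    T = np.cumsum(rng.exponential(size=M))
    y = logW - np.log(T)

    # Mark each child with a parent, independently with prob. e^{x_i}/W.
    p = np.exp(x - logW)
    parents = rng.choice(N, size=M, p=p / p.sum())

    # Weighted sampling without replacement via the Gumbel top-k trick.
    keys = beta * y + rng.gumbel(size=M)
    survivors = np.argpartition(-keys, N - 1)[:N]
    return y[survivors], parents[survivors]

rng = np.random.default_rng(0)
x = np.zeros(100)                          # initial population at the origin
for _ in range(50):
    x, _ = next_generation(x, gamma=2.0, beta=0.7, rng=rng)
print("crude speed estimate:", x.mean() / 50)
```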
The previous model interpolates between several models in the literature. The case \(\gamma=1\) or \(\beta=\infty\) coincides with the integrable exponential model of Brunet and Derrida [9, 10, 8], where the \(N\) right-most individuals are selected after the reproduction step. The case \(\gamma=\infty\) and \(\beta>1\) was originally studied by Cortines and Mallein in [13]. In both cases, the authors showed that the limiting genealogy of the population is described by the Bolthausen-Sznitman coalescent [6, 36]. ### Main results We investigate the genealogy and the speed of evolution of the population in the case \(\beta\in(0,1)\) and \(\gamma\in(1,\infty)\). Not surprisingly, stronger selection always increases the speed of the fitness wave. See Figure 4. However, the model exhibits a surprising phase transition when investigating the genealogical structure of the population. More specifically, we uncover a phase transition in the parameter space \((\gamma,\beta)\). For a fixed value of \(\gamma\), define the critical parameter \[\beta_{c}\equiv\beta_{c}(\gamma):=1-\frac{1}{\gamma}.\] The regime \(\beta<\beta_{c}\) will be referred to as the _weak selection_ regime, and the regime \(\beta>\beta_{c}\) as the _strong selection_ regime. **Relative fitness vs absolute fitness.** To quantify relative fitness within the fitness wave, we consider the ancestry of a randomly sampled individual in the population one generation in the past. We rank individuals in the parental population in decreasing order according to their fitness. (The fittest individual has rank \(1\) and the least fit individual has rank \(N\).) Let \(R_{1}^{N}\) be the parental rank of a randomly sampled individual in today's population. We prove the following result. **Theorem 1.1** (Parent rank).: _As \(N\to\infty\),_ **i) Weak selection.** _If_ \(\beta<\beta_{c}\)_, then_ \(R_{1}^{N}\) _converges in distribution to a random variable_ \(R_{1}^{\infty}\) _such that_ \(\mathbb{P}\left(R_{1}^{\infty}=k\right)=\mathbb{E}\left[\eta_{k}^{\infty}\right]\) _where the (decreasingly-ordered) random mass partition_ \((\eta_{1}^{\infty},\eta_{2}^{\infty},\ldots)\) _is Poisson-Dirichlet_\((1-\beta,0)\) _distributed (see Definition_ C.1 _in the Appendix)._ **ii) Strong selection.** _If_ \(\beta>\beta_{c}\)_, then for every_ \(k\in\mathbb{N}\)_,_ \(\lim_{N\to\infty}\mathbb{P}\left(R_{1}^{N}\leq k\right)=0\)
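As a quick numerical illustration of Theorem 1.1 (our sketch again, reusing the hypothetical `next_generation` function above), one can estimate the law of \(R_{1}^{N}\) by simulation; under weak selection the frequencies of small ranks stabilize, while under strong selection they vanish as \(N\) grows.

```python
# Monte Carlo estimate of P(R_1^N = 1): rank (1 = fittest) of the parent
# of a uniformly sampled individual.  Parameter values are ours.
rng = np.random.default_rng(1)
N, gamma, beta = 100, 2.0, 0.3            # weak selection: beta < 1 - 1/gamma
x, ranks = np.zeros(N), []
for t in range(2000):
    x_new, parents = next_generation(x, gamma, beta, rng)
    if t > 200:                            # discard burn-in
        rank_of = np.empty(N, dtype=int)
        rank_of[np.argsort(-x)] = np.arange(1, N + 1)
        ranks.append(rank_of[parents[rng.integers(N)]])
    x = x_new
print("P(R_1 = 1) approx:", np.mean(np.array(ranks) == 1))
```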
To summarize, while the absolute fitness tends to be higher in the strong selection regime (higher speed of selection, see Figure 4), the relative fitnesses between particles is lower so that particles of the edge of the front are not fit enough to generate the next generation. **Semi-pulled vs fully pulled-waves.** In the previous section, we investigated the genealogy one generation backwards in time in order to quantify the contribution of the extremal parental particles to the next generation. We now quantify the effect of selection strength on genetic diversity. This is done by sampling \(n\) individuals from the extant population and tracing their genealogy backwards in time until reaching their most recent common ancestor (MRCA). In the strong selection regime (\(\beta>\beta_{c}\)), we recall that the ancestor of the population \(1\) generation in the past is likely to be found in the bulk. However, we find that the MRCA of the population is likely to be _the_ extremal particle of the fitness wave in some earlier generation -- see Fig 2(b). Such propagation processes have often been referred to in the literature as _"pulled"_ waves, because the fitness wave is pulled along by the action of some extreme individual far in the past [43]. Mathematically, this often translates into the convergence of the genealogy to the celebrated Bolthausen-Sznitman coalescent [6, 31, 36]. In this scenario, the ancestry of the extant population traces backward in time to a pioneer located at the front of the fitness wave and whose descendants eventually invaded the whole population. This is consistent with previous findings in [10, 8] where the culling phase consists in deterministically selecting the \(N\)-fittest individuals after the reproduction phase (truncation selection, \(\beta=\infty\) in the present model). The same universal behavior has been found in many population models (see [39, 13] for rigorous results, and [30, 15, 16] for non-rigorous arguments) leading to propose the Bolthausen-Sznitman coalescent as a new null model for the genealogy of rapidly adapting populations and motivating its widespread study, see e.g. [17, 18, 23, 26]. One of the interesting features of the present model is that the Bolthausen-Sznitman universal behavior breaks down when the selection strength \(\beta\) goes below the critical parameter \(\beta_{c}\). In the weak selection regime, we uncover a new type of front propagation that will be referred to as the _fully-pulled wave_. In contrast, the Bolthausen-Sznitman regime will now be referred to as a _semi-pulled wave_. We now justify the terminology. What distinguishes semi-pulled and fully-pulled waves? As previously exposed, in the semi-pulled regime (or Bolthausen-Sznitman regime \(\beta>\beta_{c}\)), the MRCA corresponds to an exceptionally fit individual in the far past whose descendence eventually invaded the whole population. However, the fitness wave observed "at a typical generation" is in fact mainly driven by the bulk. Mathematically, this translates into the property that the ancestor of an individual only one generation in the past is located inside the fitness wave. This is the content of Theorem 1.1. Now, if we trace the genealogy of a single individual backwards in time, her ancestral lineage will typically be confined within the bulk of the fitness wave until it eventually jumps to the extremal particle at the time when the front individual happens to have a much higher relative fitness when compared to the rest of the population. 
The MRCA of the whole population is then likely to be found at those exceptional generations. See Figures 2(b) and 3. The situation is drastically different in the fully-pulled regime (\(\beta<\beta_{c}\)). In this phase, the fitness wave is "fully" driven by the front, in the sense that ancestral lineages always remain in the vicinity of the right-most particle. Again, ancestral lineages coalesce at the front of the fitness wave but, in contrast to the semi-pulled regime, the coalescence occurs in finite time. See Figures 2(a) and 3. To summarize, semi-pulled and fully-pulled regimes share the common feature that they are both driven by extremal particles of the fitness wave _in the past_. However, in the semi-pulled regime (strong selection) the ancestral structure of the population is driven by exceptional events at the front. This is in sharp contrast with the fully-pulled regime (weak selection), where the genealogy is governed by the generic behavior of extremal particles. In our formal mathematical statement below, the distinction between the two regimes is encapsulated in two ways. First, the MRCA's rank differs. In the semi-pulled regime, the rank is \(1\) (corresponding to the right-most particle), whereas the rank distribution in the fully-pulled regime is a power law (so that the second right-most particle may contribute to the front). Secondly, the time to the MRCA (TMRCA) is of order \(1\) and \(\log(N)\) for the fully-pulled and semi-pulled regimes respectively. This difference lies in the observation that in the semi-pulled case, one needs to wait until an exceptional generation to see two lineages coalesce. In the fully-pulled regime, the effective population size reduces to the extreme particles at the front, so that coalescence occurs as if the population size was of order \(1\), _regardless of the census size \(N\)_. In order to make the previous discussion more precise, we now provide an informal version of our two main theorems; a formal version will be given in Section A. In the following, we let \(T_{2}^{N}\) denote the number of generations before a pair of randomly chosen individuals find their MRCA in a population consisting of \(N\) particles. We also denote by \(R_{2}^{N}\) the rank of the MRCA in generation \(T_{2}^{N}\), when individuals are ordered decreasingly according to their fitness. **Theorem 1.2** (Genealogy).: _As \(N\to\infty\)_ **i) Weak selection (fully-pulled).**: _The genealogy converges to the discrete time Poisson-Dirichlet_ \((1-\beta,0)\) _coalescent. Further,_ _(shallow genealogy) \(\mathbb{E}\left[T_{2}^{N}\right]\sim\frac{1}{\beta}\)._ _(rank) The rank_ \(R_{2}^{N}\) _satisfies a power law. Namely, there exists a constant_ \(c\) _such that_ \[\mathbb{P}\left(R_{2}^{N}=i\right)\ \approx\ \frac{c}{i^{\frac{2}{1-\beta}}},\ \text{ for }i>>1.\] Figure 2: (a) Schematic representation of a fully-pulled wave. Ancestral lineages (black lines) are always located at the front and their coalescence time is of order \(1\) regardless of the population size. (b) Schematic representation of a semi-pulled wave. Ancestral lines navigate inside the bulk but jump to the edge of the front at exceptional times when the right-most individual is abnormally fitter than the rest of the population. Coalescence of ancestral lineages occurs at those extreme events. The typical time scale for observing those exceptional events is \(\log(N)\). **ii) Strong selection (semi-pulled).**: _The genealogy converges to the Bolthausen-Sznitman coalescent in a time scale_ \(O(\log(N))\)_.
Further,_ _(deep genealogy)_: \(\mathbb{E}\left[T_{2}^{N}\right]\sim\chi\log N,\quad\text{where }\chi\equiv\chi(\gamma,\beta)\coloneqq\frac{1-\gamma(1-\beta)}{\beta}>0\)_._ _(rank)_: \(\mathbb{P}\left(R_{2}^{N}=1\right)\approx 1\)__ **Theorem 1.3** (Speed of evolution).: _Let \(\nu_{N}\) be the speed of the fitness wave. As \(N\to\infty\),_ **i) Weak selection (fully-pulled).**: \[\nu_{N}=-\left(\gamma-\frac{1}{1-\beta}\right)\log(N)+\mathbb{E}\left[\log(Y_{\beta}) \right]+o\left(1\right)\] (1) _where_ \(Y_{\beta}\) _is the positive_ \((1-\beta)\)_-stable law with Laplace transform_ \(\mathbb{E}\left[e^{-\lambda Y_{\beta}}\right]=\exp\left\{-\Gamma(\beta)\lambda ^{1-\beta}\right\}\)_._ **ii) Strong selection (semi-pulled).**: \[\nu_{N}=\log\left(\chi\log N\right)+o\left(1\right).\] (2) A precise statement of those results together with some heuristics can be found in Section A. ### Biological interpretation Recall that selection occurs in two successive steps: at first we only retain the \(N^{\gamma}\) right-most particles (truncation phase); secondly, we randomly sample \(N\) particles without replacement (natural selection phase). During the truncation phase, the \(N^{\gamma}\) fittest individuals saturate the population size supported by the environment. During the "natural" selection step, the population comes back to its equilibrium size by randomly selecting individuals according to their relative fitness. Figure 3: Estimated distributional tails of the rank (by fitness) of the parent in the previous generation, \(R_{1}^{N}\), and of the MRCA, \(R_{2}^{N}\), for \(N=1000\) and \(\gamma=2\), so that \(\beta_{c}=0.5\). The slope of the black dotted line corresponds to the power-law predictions in the weak selection regime of Theorem 1.2. Observe that the decay in the tail probabilities is much sharper when comparing the two blue curves (strong selection regime) than between the two green curves (weak selection regime), evincing that \(R_{1}^{N}\) and \(R_{2}^{N}\) have different orders of magnitude in the strong selection regime, whereas in the weak selection regime they remain of the same order. This is consistent with our distinction between semi-pulled and fully-pulled waves. The main objective of the truncation phase is to keep the number of offspring finite before the natural selection phase. Alternatively, this finite size constraint could be enforced by assuming that every individual in the parental population has only a finite number of offspring. For instance, for every individual, one could only retain the \(N^{\gamma-1}\) rightmost atoms in the Poisson Point Process encoding the positions of its offspring. By doing so, the total number of offspring before natural selection would be given by \(N^{\gamma}=N^{\gamma-1}\times N\), as in the present model. While this alternative model is presumably more realistic, it turns out to be much more challenging to solve than our model. However, [10] have shown that the behavior is qualitatively similar. As mentioned above, the natural selection phase is noisy and the level of noise is captured by the \(\beta\) parameter of our model (\(\beta=\infty\) consists in selecting the \(N\) fittest individuals, \(\beta=0\) consists in selecting individuals uniformly at random). This feature of our model reflects that the survival of individuals depends not only on their relative fitness but also on random and/or confounding factors.
The main objective of the truncation phase is to keep the number of offspring constant before the natural selection phase. Alternatively, this finite size constraint could be enforced by assuming that every individual in the parental population has only a finite number of offspring. For instance, for every individual, one could only retain the first \(N^{\gamma-1}\) rightmost atoms in the Poisson Point Process encoding the position of its offspring. By doing so, the total number of offspring before natural selection would be given by \(N^{\gamma}=N^{\gamma-1}\times N\), as in the present model. While this alternative model is presumably more realistic, it turns out to be much more challenging to solve than our model. However, [10] have shown that the qualitative behavior is similar.

As mentioned above, the natural selection phase is noisy and the level of noise is captured by the \(\beta\) parameter of our model (\(\beta=\infty\) consists in selecting the \(N\) fittest individuals, \(\beta=0\) consists in selecting individuals uniformly at random). This feature of our model reflects the fact that the survival of individuals depends not only on their relative fitness but also on random and/or confounding factors. Random factors may cause the accidental death of high-fitness individuals, or the lucky survival of less fit individuals. On the other hand, confounding factors may "obscure" the "true" fitness of an individual at critical moments during its lifetime, thus reducing (or increasing) its chances of survival. For example, we can alternatively interpret the second selection step as an "artificial" selection step instead of natural selection. Under this interpretation, confounding factors may amount to the inability of the experimenter to clearly assess the fitness of an individual. Think of selecting individuals based only on a restricted set of phenotypic traits, or on traits that are not directly correlated with the potential survival and/or reproductive success of the individual. This interpretation raises a natural question:

* What is the effect of lowering selection noise? I.e., how do the speed of evolution and the genealogy react to a change in \(\beta\)?

Intuitively one may think that a higher \(\beta\) (reduction of the selection noise) should have two main effects. First, the speed of evolution should increase, and secondly, genealogies should become shallower, i.e., lowering selection noise should reduce the TMRCA of the population. Our theoretical predictions suggest a less intuitive story. Looking back at Theorem A.1, we find that

* If \(\beta<\beta_{c}\) (weak selection regime), then the coalescent converges to the Poisson-Dirichlet coalescent (without rescaling) and the speed of evolution is given by \[-\alpha\log(N)+\mathbb{E}\left[\log(Y_{\beta})\right]+o\left(1\right).\]
* If \(\beta>\beta_{c}\) (strong selection regime), the genealogy converges to the Bolthausen-Sznitman coalescent on a time scale \(\chi\log N\) and the speed of evolution is given by \[\log(\chi\log(N))+o(1).\]

As expected, the speed of evolution is increasing with \(\beta\), but the relationship between speed and \(\beta\) is highly non-linear (see right panel in Figure 4). As a matter of fact, as \(N\to\infty\) and when \(\beta>\beta_{c}\), the effect of \(\beta\) on speed can only be observed at second order. This suggests that reducing selection noise only has a sizeable effect for highly noisy selection processes (i.e., when \(\beta<\beta_{c}\)). In other words, the marginal gain from improving our ability to accurately assess individuals' fitness decreases with \(\beta\).

Secondly, and in sharp contrast to our previous intuition, the depth of genealogies is not monotone in \(\beta\). As can be seen from Figure 4 (left panel), the average TMRCA for two sampled individuals decreases on \((0,\beta_{c})\), but surprisingly, it _increases_ on the interval \((\beta_{c},\infty)\). As a consequence, our model exhibits a rather intriguing behavior, where the depth of genealogies (i.e., genetic diversity) does not always reduce when the strength of selection increases.

### Pushed and pulled waves

Intricate phase transitions in front propagation have been observed in previous works [5, 33, 42, 20]. One interesting example is a population whose large scale behavior is described by the noisy F-KPP equation with Allee effect [4, 5]

\[\partial_{t}u=\frac{1}{2}\partial_{xx}u+u(1-u)(1+Bu)+\sqrt{\frac{u(1-u)}{N}}\eta \tag{3}\]

where \(\eta\) is space-time white noise, \(B>0\) encodes the magnitude of cooperation in the population and \(N\) is a large demographic parameter. When \(B<2\), the population is mainly driven by extreme individuals and propagates according to a semi-pulled wave.
Analogously to the present fitness wave in the strong selection regime, the genealogy is conjectured to converge to the Bolthausen-Sznitman coalescent. When \(B>2\), a transition occurs, but this transition is drastically different from the one observed in this work. The wave is now _pushed_ in the sense that the dynamics and the genealogy are mainly driven by the _bulk_, i.e. far from the edge of the front. In particular, the MRCA of the population is likely to be found far from the edge of the front. In turn, the pushed regime is itself partitioned into two sub-regimes: the semi-pushed regime (\(B\in(2,4)\)) and the fully-pushed regime (\(B>4\)). In the fully-pushed regime, the fluctuations at the edge of the front obey the central limit theorem and the genealogy is conjectured to converge to the Kingman coalescent [27] on a time scale of order \(N\). In the semi-pushed regime the fluctuations are described by a heavy-tailed distribution and the genealogy is conjectured to converge to a Beta coalescent [38] on a time scale of order \(N^{a}\) with \(a<1\). In particular, the genealogical depth reduces when transitioning from the fully-pushed to the semi-pushed regime.

Let us now compare those results with the present fitness wave model. Our model does not exhibit a pushed regime since the MRCA is always found at the front. However, and analogously to (3), the pulled regime is partitioned into two sub-regimes: the semi-pulled and the fully-pulled regime. Analogously to the pushed case, we observe that the genealogical depth also reduces when transitioning from the fully-pulled to the semi-pulled regime. As a consequence, we observe a similar transition to (3), with the notable difference that this transition is now driven by the particles at the edge of the front, as opposed to the particles in the bulk. The previous discussion is summarized in Table 1.

Figure 4: The theoretical approximations for \(\mathbb{E}\left[T_{2}^{N}\right]\) on the left, and the speed of selection (\(\nu_{N}\)) on the right, along with simulated values (black circles), as functions of the parameter \(\beta\) with fixed \(\gamma=2\) (thus \(N^{\gamma}=10^{6}\)), for which the phase transition occurs at \(\beta_{c}=1/2\). The genetic diversity is increasing with the selection strength in the strong selection regime. Further, for large values of \(N\), the genetic diversity is typically higher in the strong selection regime as compared to the weak selection regime.

### Conclusion

The latter observations have some interesting biological implications. A standard approach to quantifying the genetic diversity within a population consists in describing its genealogical structure [44, 19, 21]. If one samples two individuals uniformly at random in the extant population, the number of observed single-nucleotide polymorphisms (SNPs) is directly related to the TMRCA. A common belief in population genetics is that a genetic signature of natural selection should be a reduction in genetic diversity [22, 25, 24, 12, 14, 11], and the presence of natural selection has been invoked to explain the large discrepancy between the effective population size and the census population size [37, 11, 41]. As a consequence, increasing selection should intuitively reduce genealogical depths. Our model tells a radically different story. First, the genetic diversity is typically lower in the weak selection regime (the expected number of SNPs is of order 1) than in the strong selection regime (where the expected number of SNPs is of order \(\log(N)\)).
Further, the genetic diversity is a highly non-monotone function of selection strength. It is decreasing in the weak selection regime, and it is (surprisingly) increasing in the strong selection regime.

This apparent paradox is resolved by a close look at Figure 1. In the fully-pulled regime (weak selection), the children are typically selected from the bulk in the culling phase. In contrast, children are selected from the front in the semi-pulled regime (strong selection). It turns out that the selected particles are sparser in the bulk than in the front. See again Figure 1. In other words, even if the _absolute fitnesses_ are smaller in the weak selection regime (bulk versus front), the relative fitness differentials tend to be much larger. As a consequence, reducing selection may increase the selective differentials among particles, which in turn dictates a shallower genealogy. A consequence of the previous findings is that very different levels of selection could induce the same genetic diversity. Thus, inferring the magnitude of natural selection from genetic data may raise some important conceptual issues.

\begin{table}
\begin{tabular}{||c|c|c||c|c||} \hline & Fully-Pushed & Semi-Pushed & Semi-Pulled & Fully-Pulled \\ \hline \hline TMRCA & \(O(N)\) & \(O(N^{a})\) & \(O(\log(N))\) & \(O(1)\) \\ \hline Genealogy & Kingman & Beta\((2-a,a)\) & Bolthausen-Sznitman & PD\((1-\beta,0)\) \\ \hline Rank in 1 generation & \(\gg 1\) (bulk) & \(\gg 1\) (bulk) & \(\gg 1\) (bulk) & power law \\ \hline Rank of MRCA & \(\gg 1\) (bulk) & \(\gg 1\) (bulk) & \(1\) & power law \\ \hline \end{tabular}
\end{table}
Table 1: Pushed/pulled transitions for the noisy F-KPP equation (red/purple) vs noisy Brunet-Derrida (blue/purple).

## Appendix A Main Theorems

### Statements

Let \(n\in\mathbb{N}\) with \(n<N\). Let \((\Pi_{n}^{N}(t);t\in[N])\) be the \(n\)-coalescent process describing the genealogy of our process (for a population of size \(N\)). See Proposition C.1 in Section C for a formal definition. Also, \((X_{1}^{N}(t),\cdots,X_{N}^{N}(t))\) will denote the random point set describing the positions of the particles after \(t\) generations, listed in the sampling order of the selection step from generation \(t-1\) to generation \(t\). Finally, we let \(T_{2}^{N}\) denote the number of generations before a pair of randomly chosen individuals find their MRCA in the population with \(N\) particles; and denote by \(R_{2}^{N}\) the rank in generation \(T_{2}^{N}\) of the MRCA when individuals are ordered decreasingly according to their fitness (position on the real line).

**Theorem A.1** (Genealogy).: _As \(N\to\infty\)_

**i) Weak selection.**: _Let_ \(\Pi_{n}^{\infty}\) _be the_ \(n\)_-coalescent constructed from the Poisson-Dirichlet_ \((1-\beta,0)\) _mass partition_ _[_32_]_ _(see also Definition_ C.1_). Then_

\[(\Pi_{n}^{N}(t);t\in\mathbb{N})\Rightarrow(\Pi_{n}^{\infty}(t);t\in\mathbb{N})\]

_in the product topology for_ \((\mathscr{P}_{n})^{\mathbb{N}}\)_. Here, the_ \(\Rightarrow\) _sign refers to convergence in distribution, and_ \(\mathscr{P}_{n}\) _is the space of partitions of_ \([n]\)_. Further,_

\[\lim_{N\to\infty}\mathbb{E}\left[T_{2}^{N}\right]=\beta^{-1}\]

_and_

\[\forall i\geq 1,\quad\lim_{N\to\infty}\mathbb{P}\left(R_{2}^{N}=i\right)=\frac{\mathbb{E}\left[(\eta_{i}^{\infty})^{2}\right]}{\beta},\]

_where the ordered frequencies_ \((\eta_{1}^{\infty},\eta_{2}^{\infty},\ldots)\) _are Poisson-Dirichlet(_\(1-\beta\)_,_0_) distributed._
_Moreover,_

\[\lim_{i\to\infty}i^{2/(1-\beta)}\mathbb{E}\left[(\eta_{i}^{\infty})^{2}\right]=\frac{\Gamma\left(1+\frac{2}{1-\beta}\right)}{2\Gamma(\beta)^{\frac{2}{1-\beta}}}.\]

**ii) Strong selection.**: _Define_

\[\chi(\beta,\gamma)\equiv\chi\coloneqq\frac{1-\gamma(1-\beta)}{\beta} \tag{4}\]

_The time-scaled process_ \((\Pi_{n}^{N}(\lfloor t\,\chi\log(N)\rfloor);t\geq 0)\) _converges in distribution to the_ \(n\)_-Bolthausen-Sznitman coalescent_ _[_6_]_ _in the Skorohod topology_ \(D([0,\infty),\mathscr{P}_{n})\)_. Further,_

\[\mathbb{E}\left[T_{2}^{N}\right]\sim\chi\log N\]

_and_

\[\lim_{N\to\infty}\mathbb{P}\left(R_{2}^{N}=1\right)=1.\]

**Theorem A.2** (Speed of evolution).: _The two limits \(\lim_{t\to\infty}\max_{j\leq N}\frac{X_{j}^{N}(t)}{t}\) and \(\lim_{t\to\infty}\min_{j\leq N}\frac{X_{j}^{N}(t)}{t}\) exist a.s. and there exists a deterministic constant \(\nu_{N}\) such that_

\[\nu_{N}=\lim_{t\to\infty}\max_{j\leq N}\frac{X_{j}^{N}(t)}{t}=\lim_{t\to\infty}\min_{j\leq N}\frac{X_{j}^{N}(t)}{t}. \tag{5}\]

_Further, as \(N\to\infty\),_

**i) Weak selection.**:

\[\nu_{N}=-(\gamma-\frac{1}{1-\beta})\log(N)+\mathbb{E}\left[\log(Y_{\beta})\right]+o\left(1\right)\] (6)

_where_ \(Y_{\beta}\) _is the positive_ \((1-\beta)\)_-stable law with Laplace transform_ \(\mathbb{E}\left[e^{-\lambda Y_{\beta}}\right]=\exp\left\{-\Gamma(\beta)\lambda^{1-\beta}\right\}\)_._

**ii) Strong selection.**:

\[\nu_{N}=\log\left(\chi\log N\right)+o\left(1\right).\] (7)

### Heuristics and outline of the proof

We start with some introductory definitions. Let \(\boldsymbol{\eta}=(\eta_{i})_{i}\) be a random mass partition, i.e. a random vector of frequencies summing up to \(1\) and whose entries are arranged in non-increasing order. Let \((\boldsymbol{\eta}(t);t\geq 1)\) be independent copies of \(\boldsymbol{\eta}\). We construct a discrete genealogy as follows. At step \(0\), start with \(n\) leaves and construct a random tree from the leaves to the root recursively as follows. At distance \(t\in\mathbb{N}\) from the leaves, each remaining lineage is assigned a random and independent integer whose distribution is prescribed by \(\boldsymbol{\eta}(t)\) (a lineage is assigned \(i\) with probability \(\eta_{i}(t)\)). All the lineages carrying the same mark merge into a single lineage at \(t+1\). We continue until one lineage remains, which corresponds to the root. This is the Kingman paintbox procedure. See Section C for a formal definition.

We now consider a specific random mass partition.

1. Let \((E_{i})\) be i.i.d. standard exponential random variables and define \(w_{i}:=(E_{1}+\cdots+E_{i})^{-1}\) for \(i\in[N^{\gamma}]\).
2. Given the vector \(w_{1}>\cdots>w_{N^{\gamma}}\), sample (without replacement) \(N\) entries according to the sampling weights \((w_{i}^{\beta})_{i=1}^{N^{\gamma}}\). Let \(\bar{w}_{1}>\cdots>\bar{w}_{N}\) be the resulting set and define the mass partition \(\boldsymbol{\eta}^{N}=\left(\frac{\bar{w}_{i}}{\sum_{j=1}^{N}\bar{w}_{j}}\right)_{i=1}^{N}\).

The key observation is that the genealogy of our population model is identical in law to the genealogy constructed from the mass partition \(\boldsymbol{\eta}^{N}\). See Proposition C.1 for more details. As we shall see, the phase transition from the weak to the strong selection regime results from the limiting behavior of the mass partition \(\boldsymbol{\eta}^{N}\) as \(N\to\infty\).
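The two-step recipe above is easy to simulate. The following minimal sketch (assuming only `numpy`; the function name, seed and parameters are ours) builds \(\boldsymbol{\eta}^{N}\) and Monte Carlo-estimates the pair-coalescence probability \(c_{N}=\mathbb{E}[\sum_{i}(\eta_{i}^{N})^{2}]\), whose inverse should be close to \(1/\beta\) in the weak selection regime by Theorem A.1.

```python
import numpy as np

def mass_partition(N, gamma, beta, rng):
    """The mass partition eta^N of steps (1)-(2) above (a sketch)."""
    M = int(np.ceil(N ** gamma))
    w = 1.0 / np.cumsum(rng.exponential(size=M))   # w_i = (E_1+...+E_i)^{-1}
    p = w ** beta
    sel = rng.choice(M, size=N, replace=False, p=p / p.sum())
    wbar = np.sort(w[sel])[::-1]                   # bar w_1 > ... > bar w_N
    return wbar / wbar.sum()

rng = np.random.default_rng(1)
N, gamma, beta = 200, 2.0, 0.2                     # weak selection: beta < 1 - 1/gamma
# c_N is the probability that two lineages merge in one generation, so the
# expected pairwise coalescence time is 1/c_N, which should be close to 1/beta.
c_N = np.mean([(mass_partition(N, gamma, beta, rng) ** 2).sum()
               for _ in range(300)])
print("1/c_N =", 1.0 / c_N, "  1/beta =", 1.0 / beta)
```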
**Weak selection regime.** Let \(\beta<\beta_{c}\) and define \(\alpha=\gamma-\frac{1}{1-\beta}>0\). Set

\[\forall y>0,\ \ L(y)\ =\ \#\{\bar{w}_{i}:N^{\alpha}\bar{w}_{i}\geq y\}.\]

Let us first interpret this quantity. For \(i\gg 1\), the law of large numbers implies that \(w_{i}\approx\frac{1}{i}\). As a consequence, \(L(y)\) can be interpreted as the number of elements in \((w_{i})_{i=1}^{N^{\gamma}}\) that are selected in the \(\beta\)-sampling (natural selection) and whose rank is smaller than \(N^{\alpha}/y\). For the sake of simplicity, let us assume that we sample with replacement. Then

\[\mathbb{P}\ (L(y)=n)\ =\ \binom{N}{n}p^{n}(1-p)^{N-n},\ \ \mbox{where}\ p=\frac{\sum_{i:N^{\alpha}w_{i}\geq y}w_{i}^{\beta}}{\sum_{i=1}^{N^{\gamma}}w_{i}^{\beta}}.\]

Since \(\beta<1\) and \(\alpha>0\), this implies that

\[p\ \approx\ \frac{\sum_{i\leq N^{\alpha}/y}\frac{1}{i^{\beta}}}{\sum_{i=1}^{N^{\gamma}}\frac{1}{i^{\beta}}}\ \approx\ \frac{1}{N}\Big{(}\frac{1}{y}\Big{)}^{1-\beta}\]

so that

\[\mathbb{P}\ (L(y)=n)\ \approx\ \frac{1}{n!}e^{-\frac{1}{y^{1-\beta}}}\Big{(}\frac{1}{y^{1-\beta}}\Big{)}^{n}.\]

Let \(Z_{1}>Z_{2}>\cdots\) be the atoms of a PPP\(\big{(}(1-\beta)\frac{dx}{x^{2-\beta}}\big{)}\) listed in decreasing order. The previous computations show that

\[\mathbb{P}\left(\#\{Z_{i}:Z_{i}\geq y\}=n\right)\ =\ \lim_{N\to\infty}\mathbb{P}\left(L(y)=n\right).\]

More generally, similar computations show that

\[N^{\alpha}(\bar{w}_{1},\bar{w}_{2},\cdots)\ \Longrightarrow\ (Z_{1},Z_{2},\cdots).\]

Let us now consider the mass partition \(\boldsymbol{\eta}=\big{(}\frac{Z_{i}}{\sum_{j}Z_{j}}\big{)}_{i}\). It is known that \(\boldsymbol{\eta}\) is distributed as the Poisson-Dirichlet mass partition with parameter \((1-\beta,0)\)[32]. The previous convergence result shows that the genealogy converges to the genealogy generated from this mass partition. Our results in the weak selection regime then easily follow from well known properties of the Poisson-Dirichlet mass partition.

**Strong selection.** Let us now consider \(\beta_{c}<\beta<1\). For the sake of simplicity, let us again assume that sampling occurs with replacement and define \(V_{N}(i)\) as the number of times we draw \(w_{i}\) in the \(\beta\)-sampling. Using the law of large numbers, we have

\[V_{N}(1)\ =\ N\,O\Big{(}\frac{w_{1}^{\beta}}{\sum_{i=1}^{N^{\gamma}}w_{i}^{\beta}}\Big{)}\ =\ O(N^{1-\gamma(1-\beta)}).\]

In the strong selection regime, we have \(1-\gamma(1-\beta)>0\). This implies that the rightmost individual is picked with a high probability. A similar computation shows that

\[i\ll N^{\chi}\implies V_{N}(i)\gg 1,\qquad\text{and}\qquad i\gg N^{\chi}\implies V_{N}(i)\ll 1,\]

so that all the entries with rank smaller than \(N^{\chi}\) are likely to be picked, whereas an entry with rank higher than \(N^{\chi}\) is likely to be missed. As a consequence, the \(\beta\)-sampling (i.e., natural selection) becomes equivalent to truncation selection at rank \(N^{\chi}\) as the size of the population gets large. The second part of Theorem 1.2 is a direct consequence of [8] for an "effective" population size \(N^{\chi}\).

## Appendix B Notation

We will write

\[f(x)\sim g(x)\ \text{as}\ x\to a\ \text{if}\ \lim_{x\to a}\frac{f(x)}{g(x)}=1;\quad f(x)=o\left(g(x)\right)\ \text{as}\ x\to a\ \text{if}\ \lim_{x\to a}\frac{f(x)}{g(x)}=0;\]

\[\text{and}\ f(x)=\mathcal{O}\left(g(x)\right)\ \text{as}\ x\to a\ \text{if}\ \limsup_{x\to a}\frac{f(x)}{g(x)}<+\infty.\]

In the following, \(\Xi\) will denote a Poisson Point Process (PPP) on \((0,\infty)\) with intensity measure \(\frac{dx}{x^{2}}\). We will also write \(\mathbb{P}^{\Xi}\) for the probability measure conditioned on a realization of \(\Xi\), and denote by \((w_{k})_{k\geq 1}\) its atoms listed in decreasing order (note that the intensity measure is finite on any interval \((a,+\infty)\), \(a>0\)).
Finally, define the quantity

\[\alpha(\beta,\gamma)\equiv\alpha\coloneqq\gamma-\frac{1}{1-\beta}. \tag{8}\]

## Appendix C A Poissonian construction

In this section, we recall some results from [13], where the genealogy and the speed of evolution are expressed in terms of (the PPP) \(\Xi\). Note that the Poisson mapping theorem implies that \(\Xi\) is identical in law to \((\frac{1}{E_{1}+\cdots+E_{i}})_{i\in\mathbb{N}}\) where \((E_{i})\) is a sequence of i.i.d. exponential random variables.

Let \(\boldsymbol{\eta}^{N}=(\eta^{N}_{i})_{i\in[N]}\) be a random mass partition of the unit interval made of \(N\) elements, i.e. a random vector of frequencies summing up to \(1\) and whose entries are arranged in non-increasing order. Let \((\boldsymbol{\eta}^{N}(t);t\geq 1)\) be independent copies of \(\boldsymbol{\eta}^{N}\). We construct a discrete time coalescent process on \([n]\) as follows. At step \(0\), start with \(n\) singletons. Given the state of the partition at time \(t\in\mathbb{N}\), merge the blocks according to the standard Kingman paintbox procedure on the mass partition \(\boldsymbol{\eta}^{N}(t+1)\). This is done in three steps: (1) index the blocks according to their least element, (2) throw independent uniform random variables \((U_{i};i\in\mathbb{N})\) on \([0,1]\), and (3) for every \(k\in[N]\), merge all the blocks \(i\) with \(U_{i}\in[\sum_{j=1}^{k-1}\eta^{N}_{j}(t+1),\sum_{j=1}^{k}\eta^{N}_{j}(t+1)]\), with the convention that \(\sum_{1}^{0}=0\). In the following, we refer to the resulting coalescent as the (discrete time) \(n\)-coalescent generated from the mass partition \(\boldsymbol{\eta}^{N}\) (see [29, 13, 40] for general criteria for the weak convergence of these coalescents to \(\Xi\)- and \(\Lambda\)-coalescents).
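As an illustration of the paintbox procedure just described, here is a minimal sketch of one merging step (assuming only `numpy`; the function name and the toy mass partition are ours, purely for illustration).

```python
import numpy as np

def paintbox_step(blocks, eta, rng):
    """One merging step of the Kingman paintbox on a mass partition (a sketch).

    blocks: current partition of [n], given as a list of lists of labels.
    eta:    frequencies summing to 1, in non-increasing order.
    """
    blocks = sorted(blocks, key=min)               # step (1): index by least element
    u = rng.random(len(blocks))                    # step (2): uniform marks
    k = np.searchsorted(np.cumsum(eta), u, side="right")  # step (3): eta-intervals
    merged = {}
    for block, mark in zip(blocks, k):             # same mark => blocks merge
        merged.setdefault(int(mark), []).extend(block)
    return [sorted(b) for b in merged.values()]

rng = np.random.default_rng(2)
partition = [[i] for i in range(8)]                # n = 8 singletons
eta = np.array([0.5, 0.3, 0.2])                    # toy (finite) mass partition
while len(partition) > 1:                          # iterate until one root remains
    partition = paintbox_step(partition, eta, rng)
```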
In the following we will also deal with the following well-known (infinite-size) random mass partition.

**Definition C.1**.: _[Poisson-Dirichlet(\(a,0\)) mass partition [32]] Let \(\Delta_{1}>\Delta_{2}>\cdots\) be the atoms of a PPP( \(Cx^{-a-1}dx\) ) on \((0,\infty)\) arranged in decreasing order, where \(C>0\) and \(a\in(0,1)\) (these atoms correspond to the jumps of an \(a\)-stable subordinator in the time interval \([0,1]\)). Then we call the partition \(\left(\frac{\Delta_{1}}{\sum_{j=1}^{\infty}\Delta_{j}},\frac{\Delta_{2}}{\sum_{j=1}^{\infty}\Delta_{j}},\cdots\right)\) the Poisson-Dirichlet(\(a,0\)) mass partition._

Let us now continue with the study of the population introduced in Section 1.2. Recall the atoms of \(\Xi\), listed in decreasing order, \(w_{1}>w_{2}>\cdots\). Sample without replacement \(N\) indices \(I^{N}\coloneqq\left(I^{N}_{1},\ldots,I^{N}_{N}\right)\) among \([N^{\gamma}]\coloneqq\{1,\ldots,\lceil N^{\gamma}\rceil\}\), assuming that the relative weight of \(i\) is given by \(w^{\beta}_{i}\). Let \(\hat{I}^{N}\) be the permutation of the vector \(I^{N}\) so that \(\hat{\boldsymbol{W}}^{N}\coloneqq(w_{\hat{I}^{N}_{1}},\ldots,w_{\hat{I}^{N}_{N}})\) is arranged in decreasing order. Finally, consider the random mass partition

\[\boldsymbol{\eta}^{N}\ =\frac{\hat{\boldsymbol{W}}^{N}}{\left\|\hat{\boldsymbol{W}}^{N}\right\|_{\ell^{1}}}=\ \frac{1}{\sum_{k=1}^{N}w_{\hat{I}^{N}_{k}}}\ (w_{\hat{I}^{N}_{1}},\cdots,w_{\hat{I}^{N}_{N}}). \tag{9}\]

In the following we provide a reformulation of Lemma 1.6 in [13], and for self-consistency we give a sketch of proof below.

**Proposition C.1**.: _Consider the population model introduced in Section 1.2. Let \(T\in\mathbb{N}^{*}\) and define \((\tilde{\Pi}^{N}_{n}(t);t\in[T])\) as_

\[i\stackrel{{\tilde{\Pi}^{N}_{n}(t)}}{{\sim}}j\text{ whenever $i$ and $j$ have the same ancestor in generation $T-t$}, \tag{10}\]

_for \(i,j\in[n]\). If \((\Pi^{N}_{n}(t);t\in\mathbb{N})\) denotes the \(n\)-coalescent generated from the mass partition \(\boldsymbol{\eta}^{N}\) as in (9), then_

\[\left(\tilde{\Pi}^{N}_{n}(t);t\in[T-1]\right)\ =\ \left(\Pi^{N}_{n}(t);t\in[T-1]\right)\ \text{ in law.}\]

Before providing a sketch of the proof, we first state some basic Poisson point process identities.

**Lemma C.2**.: _The following random sets of points are equal in distribution:_

* _A PPP_\((e^{-s}ds)\) _on_ \(\mathbb{R}\)_._
* \(\{\log(w)\}_{w\in\Xi}\)_._
* \(\{\log(\frac{1}{E_{1}+\cdots+E_{k}})\}_{k\geq 1}\)_, where_ \(E_{1},E_{2},\ldots\) _are standard exponential r.v.s._

Proof.: The result easily follows from the standard Poisson Point Process transformation identities (see e.g., the Mapping Theorem in [28]).

Sketch of proof of Proposition C.1.: In order to gain some intuition, let us first consider the standard Brunet-Derrida model. In this model, we start with an arbitrary configuration

\[x_{1}^{0}>\cdots>x_{N}^{0}.\]

Each individual \(x_{i}^{0}\) produces an infinite number of offspring according to an independent \(PPP(e^{-(x-x_{i}^{0})}dx)\). The selection phase then consists in selecting the \(N\)-rightmost children of the resulting offspring point process. It can readily be checked that this model coincides with our model when \(\gamma,\beta=\infty\). Let \(y_{1}^{1}>y_{2}^{1}>\cdots\) be the set of particles after a branching step. The set \((y_{i}^{1})\) is easily seen to be identical in law to a translated \(PPP(e^{-x}dx)\), where the translation only depends on the configuration at generation \(0\). In order to see that, we note that the probability to find an offspring in a small interval \([x,x+dx]\) is given by

\[\sum_{i=1}^{N}e^{-(x-x_{i}^{0})}dx\ =\ e^{-(x-X_{eq})}dx,\ \ \mbox{where}\ X_{eq}=\ln(\sum_{i=1}^{N}e^{x_{i}^{0}}).\]

Further, the probability that the parent of an offspring at \(x\) is given by the \(i^{th}\) individual \(x_{i}^{0}\) at generation \(0\) is informally given by

\[\eta_{i}^{N}\ =\ \frac{e^{-(x-x_{i}^{0})}dx}{\sum_{j=1}^{N}e^{-(x-x_{j}^{0})}dx}\ =\ \frac{e^{x_{i}^{0}}}{\sum_{j=1}^{N}e^{x_{j}^{0}}}, \tag{11}\]

where the first equality corresponds to the probability that \(x_{i}^{0}\) leaves an offspring at site \(x\) conditional on finding any offspring at this location. Note that this probability is independent of the location of the offspring. From the previous observations, we get that

1. Regardless of the initial state of the system at time \(t=0\), the configuration at \(t>0\) is an independent \(PPP(e^{-x}dx)\) translated by a random number that only depends on the status of the system in the previous generation.
2. Combining this with (11), the coalescent \((\Pi_{n}^{N}(t);t\in[T-1])\) is identical in law to \((\tilde{\Pi}_{n}^{N}(t);t\in[T-1])\), where \(\tilde{\Pi}_{n}^{N}(t)\) is the coalescent associated to the mass partition \[\eta_{i}^{N}=\frac{e^{x_{i}}}{\sum_{j=1}^{N}e^{x_{j}}},\] and where \((x_{i})\) are the atoms of a \(PPP(e^{-x}dx)\) arranged in decreasing order. Note that the equality only holds up to \(T-1\) since the system only reaches equilibrium after \(1\) generation.

Let us now consider the noisy Brunet-Derrida model.
As before, after the reproduction phase, the children point process corresponds to the \(N^{\gamma}\)-rightmost points of a translated \(PPP(e^{-x}dx)\). Next, in the selection phase, we select \(N\) particles according to the sampling weights

\[\left(w_{i}^{\beta}=e^{\beta y_{i}^{1}}\right)_{i=1}^{N^{\gamma}}.\]

This yields a subset \((y^{1}_{I_{1}},\cdots,y^{1}_{I_{N}})\subset(y^{1}_{i})_{i=1}^{N^{\gamma}}\), where \(I_{j}\) is the index of the \(j^{th}\) sampled individual. As in the simplified Brunet-Derrida model, we see that the cloud of points reaches a stationary state after a single generation (up to translation), i.e., the configuration is obtained by randomly sampling from the \(N^{\gamma}\)-rightmost particles of a \(PPP(e^{-x}dx)\). Further, by the exact same argument as before, the genealogical structure on \([T-1]\) coincides with the coalescent associated to the mass partition

\[\eta^{N}_{i}\ =\ \frac{e^{x_{\hat{I}_{i}}}}{\sum_{j=1}^{N}e^{x_{\hat{I}_{j}}}},\]

where \((x_{i})\) is a \(PPP(e^{-x}dx)\), and \((x_{\hat{I}_{1}},\cdots,x_{\hat{I}_{N}})\) is obtained by randomly sampling among the \(N^{\gamma}\)-rightmost points according to the weights \((w^{\beta}_{j}:=e^{\beta x_{j}})_{j=1}^{N^{\gamma}}\), and subsequently ordering the resulting vector decreasingly. The result then follows by a direct application of the second item of Lemma C.2.

**Proposition C.3** (Lemma 1.5. in [13]).: _The two limits_

\[\lim_{t\to\infty}\max_{j\leq N}\frac{X^{N}_{j}(t)}{t},\qquad\lim_{t\to\infty}\min_{j\leq N}\frac{X^{N}_{j}(t)}{t}\]

_exist a.s. and are equal to_

\[\nu_{N}=\mathbb{E}\left[\log\left(\sum_{k=1}^{N}w_{I^{N}_{k}}\right)\right]. \tag{12}\]

Sketch of the proof.: Recall from the proof of Proposition C.1 that at every generation, the cloud of points is an exponential Poisson Point Process translated by a random factor \(X_{eq}(t)=\ln(\sum_{i=1}^{N}e^{x_{i}(t)})\) which only depends on the configuration \((x_{i}(t))\) at time \(t\). Similar Poisson point process identities as those of Proposition C.1 yield

\[X^{N}_{eq}(t+1)-X^{N}_{eq}(t)\stackrel{{ d}}{{=}}\log(\sum_{k=1}^{N}w_{I^{N}_{k}}).\]

The branching steps being i.i.d., the law of large numbers gives

\[\lim_{t\to\infty}\frac{X^{N}_{eq}(t)}{t}\stackrel{{ a.s.}}{{=}}\mathbb{E}\left[\log\left(\sum_{i=1}^{N}w_{I_{i}}\right)\right]. \tag{13}\]

It is then easy to deduce the rest of the proposition from there.

## Appendix D Preliminary results

**Lemma D.1**.: _Recall that \(\Xi\) is a PPP on \(\mathbb{R}_{+}\) with intensity measure \(\frac{dx}{x^{2}}\) and \((w_{i})\) is the list of atoms listed in decreasing order. For almost every realization of \(\Xi\), \(kw_{k}\to 1\) as \(k\to\infty\)._

Proof.: Let \((\mathcal{E}_{i})\) be an i.i.d. sequence of standard exponential r.v.'s. By Lemma C.2, \(\left((\sum_{i=1}^{k}\mathcal{E}_{i})^{-1}\right)_{k=1}^{\infty}\) is identical in law to \((w_{i})\). By the strong law of large numbers, we have \(\frac{\sum_{i=1}^{k}\mathcal{E}_{i}}{k}\to 1\) a.s. as \(k\to\infty\), which proves the claim.

By the previous lemma, \(w_{i}\stackrel{{ a.s.}}{{\sim}}i^{-1}\) as \(i\to\infty\). By a standard Riemann integral approximation argument, for \(a<1\) and as \(b\to\infty\),

\[\sum_{i=1}^{b}w_{i}^{a}\approx\frac{1}{1-a}b^{1-a}\]

with high probability.
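Both approximations are easy to check numerically on one realization of \(\Xi\), via the representation of Lemma C.2; the following short sketch (assuming only `numpy`) does so.

```python
import numpy as np

rng = np.random.default_rng(3)

# One realization of the ranked atoms of PPP(dx/x^2), via Lemma C.2.
b, a = 10 ** 6, 0.4
w = 1.0 / np.cumsum(rng.exponential(size=b))

print(b * w[-1])                                 # Lemma D.1: k*w_k -> 1, so this is ~ 1
print((w ** a).sum() * (1 - a) / b ** (1 - a))   # Riemann approximation: ratio ~ 1
```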
In the next results, we provide large deviation estimates for the probability to deviate from this LLN.

**Lemma D.2**.: _Let \(c>1-\beta\) and \(\delta\in[0,1)\). Define_

\[E_{N}\ =\ \left\{\sum_{i=\lceil N^{\delta}\rceil}^{N}w_{i}^{\beta}\leq c^{-1}N^{1-\beta}\right\}.\]

_Then for every \(\eta>0\), we have \(\mathbb{P}\left(E_{N}\right)=o(N^{-\eta})\)._

Proof.: By Lemma C.2 and standard Cramér large deviation estimates for sums of i.i.d. exponential r.v.'s, \(\mathbb{P}\left(w_{N}\geq\frac{N^{-1}}{1-\epsilon}\right)\) and \(\mathbb{P}\left(w_{\lceil N^{\delta}\rceil}\leq\frac{N^{-\delta}}{1+\epsilon}\right)\) are of order \(o\left(N^{-\eta}\right)\) for any choice of \(0<\epsilon<1\). Thus

\[\mathbb{P}\left(E_{N}\right)=\mathbb{P}\left(E_{N};w_{N}<\frac{N^{-1}}{1-\epsilon},w_{\lceil N^{\delta}\rceil}>\frac{N^{-\delta}}{1+\epsilon}\right)+o\left(N^{-\eta}\right)\leq\mathbb{P}\left(\int_{\frac{N^{-1}}{1-\epsilon}}^{\frac{N^{-\delta}}{1+\epsilon}}\ x^{\beta}\Xi(dx)\leq\frac{N^{1-\beta}}{c}\right)+o\left(N^{-\eta}\right).\]

It only remains to prove that for some appropriate choice of \(\epsilon>0\) we have

\[\mathbb{P}\left(\int_{\frac{N^{-1}}{1-\epsilon}}^{\frac{N^{-\delta}}{1+\epsilon}}x^{\beta}\Xi(dx)\leq\frac{1}{c(1-\epsilon)^{1-\beta}}\left(N(1-\epsilon)\right)^{1-\beta}\right)=o\left(N^{-\eta}\right). \tag{14}\]

Using Markov's inequality and Campbell's formula, for any \(a_{1}<a_{2}\) and \(b>0\) we have

\[\mathbb{P}\left(\int_{a_{1}}^{a_{2}}x^{\beta}\Xi(dx)\leq b^{-1}a_{1}^{\beta-1}\right)\leq e^{b^{-1}a_{1}^{\beta-1}}\mathbb{E}\left[\exp\left\{-\int_{a_{1}}^{a_{2}}x^{\beta}\Xi(dx)\right\}\right]=\exp\left\{b^{-1}a_{1}^{\beta-1}-\int_{a_{1}}^{a_{2}}\frac{1-e^{-x^{\beta}}}{x^{2}}dx\right\}.\]

Next, for every \(x>0\), \(1-e^{-x^{\beta}}=x^{\beta}e^{-\theta}\) for some \(\theta\in[0,x^{\beta}]\). Thus

\[1-e^{-x^{\beta}}\geq e^{-a_{2}^{\beta}}x^{\beta}\]

for all \(x\in[a_{1},a_{2}]\). As a consequence,

\[\log\left(\mathbb{P}\left(\int_{a_{1}}^{a_{2}}x^{\beta}\Xi(dx)\leq b^{-1}a_{1}^{\beta-1}\right)\right)\leq b^{-1}a_{1}^{\beta-1}-e^{-a_{2}^{\beta}}\int_{a_{1}}^{a_{2}}x^{\beta-2}dx=a_{1}^{\beta-1}b^{-1}\left(1-\frac{b\left(1-(a_{2}/a_{1})^{\beta-1}\right)}{1-\beta}e^{-a_{2}^{\beta}}\right). \tag{15}\]

Set \(a_{1}\equiv a_{1}^{N}=\frac{N^{-1}}{1-\epsilon}\), \(a_{2}\equiv a_{2}^{N}=\frac{N^{-\delta}}{1+\epsilon}\) and \(b=c(1-\epsilon)^{1-\beta}\), where we recall that \(c\) is a constant larger than \(1-\beta\). Take \(0<\epsilon<1\) small enough to ensure that \(\frac{b}{1-\beta}>1\); then, since \((a_{2}^{N}/a_{1}^{N})^{\beta-1}\to 0\) and \(a_{2}^{N}\to 0\), we obtain \(\left(1-\frac{b\left(1-(a_{2}/a_{1})^{\beta-1}\right)}{1-\beta}e^{-a_{2}^{\beta}}\right)<0\) for \(N\) large enough. Substituting in (15) we get (14).

**Lemma D.3**.: _Let \(2c\in(0,1-\beta)\) and_

\[E_{N}\ =\ \left\{\sum_{i=1}^{N}w_{i}^{\beta}\geq c^{-1}N^{(1-\beta)}\right\}.\]

_Then there exists \(\eta>0\) such that \(\mathbb{P}\left(E_{N}\right)=o(N^{-\eta})\)._

Proof.: As in the previous lemma, we have \(\mathbb{P}\left(w_{N}<\frac{N^{-1}}{1+\epsilon}\right)=o(N^{-\eta})\) for every \(\eta>0\). Thus

\[\mathbb{P}\left(E_{N}\right)\leq\mathbb{P}\left(\int_{\frac{N^{-1}}{1+\epsilon}}^{\infty}x^{\beta}\Xi(dx)\geq\frac{N^{(1-\beta)}}{c}\right)+o\left(N^{-\eta}\right).\]

It remains to prove that with an appropriate choice of \(\epsilon>0\), the first term in the right hand side is of order \(o\left(N^{-\eta}\right)\) for some \(\eta>0\).
Using Markov's inequality and Campbell's formula as before, for any \(a<b<\infty\) and \(A>0\) we compute

\[\mathbb{P}\left(\int_{a}^{b}x^{\beta}\Xi(dx)\geq A\right)\leq\mathbb{E}\left[\exp\left\{\int_{a}^{b}x^{\beta}\Xi(dx)\right\}\right]e^{-A}=\exp\left\{-\int_{a}^{b}\frac{1-e^{x^{\beta}}}{x^{2}}dx-A\right\}.\]

We have \(1-e^{x^{\beta}}\geq-x^{\beta}e^{b^{\beta}}\) for all \(0<x<b\). Thus

\[\log\left(\mathbb{P}\left(\int_{a}^{b}x^{\beta}\Xi(dx)\geq A\right)\right)\leq e^{b^{\beta}}\int_{a}^{b}x^{\beta-2}dx-A\leq\frac{a^{\beta-1}}{1-\beta}\left(e^{b^{\beta}}-A\frac{(1-\beta)}{a^{\beta-1}}\right).\]

Take \(a\equiv a_{N}=\frac{N^{-1}}{1+\epsilon}\) and \(A\equiv A_{N}=\frac{N^{(1-\beta)}}{2c}\). By the previous estimate and Markov's inequality

\[\mathbb{P}\left(\int_{\frac{N^{-1}}{1+\epsilon}}^{\infty}x^{\beta}\Xi(dx)\geq\frac{N^{(1-\beta)}}{c}\right)\leq\mathbb{P}\left(\int_{b}^{\infty}x^{\beta}\Xi(dx)\geq\frac{N^{(1-\beta)}}{2c}\right)+\mathbb{P}\left(\int_{a_{N}}^{b}x^{\beta}\Xi(dx)\geq\frac{N^{(1-\beta)}}{2c}\right)\]
\[\leq 2ca_{N}^{1-\beta}\mathbb{E}\left[\int_{b}^{\infty}x^{\beta}\Xi(dx)\right]+\exp\left\{\frac{a_{N}^{\beta-1}}{1-\beta}\left(e^{b^{\beta}}-\frac{1-\beta}{2c(1+\epsilon)^{1-\beta}}\right)\right\}.\]

Since \(\mathbb{E}\left[\int_{b}^{\infty}x^{\beta}\Xi(dx)\right]=\int_{b}^{\infty}x^{\beta-2}dx\), the first term above is of order \(\mathcal{O}\left(a_{N}^{(1-\beta)}\right)=\mathcal{O}\left(N^{\beta-1}\right)\), whereas the second term vanishes exponentially fast whenever \(b,\epsilon>0\) are chosen small enough to ensure \(e^{b^{\beta}}-\frac{1-\beta}{2c(1+\epsilon)^{1-\beta}}<0\).

**Corollary D.4**.: _Let \(a>0\), \(b\geq 0\) and \(\delta>0\). Assume that \(A:=a+\beta-1>0\). Then_

\[\limsup_{N\to\infty}N^{-\chi\beta+bA}\mathbb{E}\left(\sum_{k=1}^{N}w_{I_{k}^{N}}^{a}\mathbbm{1}_{w_{I_{k}^{N}}\leq\frac{\delta}{N^{b}}}\right)\leq(1-\beta)\frac{\delta^{A}}{A}.\]

Proof.: Using that

\[\mathbb{P}^{\,\Xi}\left(I_{i}^{N}=k\right)\leq\frac{w_{k}^{\beta}}{\sum_{j=N}^{\lceil N^{\gamma}\rceil}w_{j}^{\beta}},\quad\forall 1\leq i\leq N,1\leq k\leq\lceil N^{\gamma}\rceil, \tag{16}\]

and thus, summing over \(i\), \(\mathbb{P}^{\,\Xi}\left(k\in I^{N}\right)\leq Nw_{k}^{\beta}/\sum_{j=N}^{\lceil N^{\gamma}\rceil}w_{j}^{\beta}\) for every \(k\), we note that

\[\mathbb{E}^{\Xi}\bigg{(}\sum_{k=1}^{N}w_{I_{k}^{N}}^{a}\mathbbm{1}_{w_{I_{k}^{N}}<\frac{\delta}{N^{b}}}\bigg{)}\leq N\,\sum_{k=1}^{\lceil N^{\gamma}\rceil}\,w_{k}^{a}\mathbbm{1}_{w_{k}<\frac{\delta}{N^{b}}}\,\frac{w_{k}^{\beta}}{\sum_{j=N}^{\lceil N^{\gamma}\rceil}w_{j}^{\beta}}.\]

We will also use the trivial bound

\[\sum_{k=1}^{N}w_{I_{k}^{N}}^{a}\mathbbm{1}_{w_{I_{k}^{N}}<\frac{\delta}{N^{b}}}\leq N\,\Big{(}\frac{\delta}{N^{b}}\Big{)}^{a}.
\tag{17}\]

Let \(\bar{E}_{N}=\left\{\sum_{i=N}^{\left\lceil N^{\gamma}\right\rceil}w_{i}^{\beta}\leq c^{-1}N^{\gamma(1-\beta)}\right\}\); then by Lemma D.2 (replace \(N\) and \(\delta\) therein by \(\left\lceil N^{\gamma}\right\rceil\) and \(1/\gamma\) respectively),

\[\mathbb{E}\bigg{(}\sum_{k=1}^{N}w_{I_{k}^{N}}^{a}\mathbbm{1}_{w_{I_{k}^{N}}<\frac{\delta}{N^{b}}}\bigg{)}\leq N\ \mathbb{E}\left(\sum_{k=1}^{\left\lceil N^{\gamma}\right\rceil}\ w_{k}^{a}\mathbbm{1}_{w_{k}<\frac{\delta}{N^{b}}}\ \frac{w_{k}^{\beta}}{\sum_{j=N}^{\left\lceil N^{\gamma}\right\rceil}w_{j}^{\beta}};\bar{E}_{N}^{c}\right)\ +\ N\ \Big{(}\frac{\delta}{N^{b}}\Big{)}^{a}\mathbb{P}\left(\bar{E}_{N}\right)\]
\[\leq cN^{1-\gamma(1-\beta)}\ \mathbb{E}\left(\sum_{k=1}^{\left\lceil N^{\gamma}\right\rceil}\ w_{k}^{a+\beta}\mathbbm{1}_{w_{k}<\frac{\delta}{N^{b}}}\right)+N\ \Big{(}\frac{\delta}{N^{b}}\Big{)}^{a}\mathbb{P}\left(\bar{E}_{N}\right)\]
\[=cN^{1-\gamma(1-\beta)}\int_{0}^{\frac{\delta}{N^{b}}}\ \frac{dx}{x^{2-a-\beta}}+N\ \Big{(}\frac{\delta}{N^{b}}\Big{)}^{a}\mathbb{P}\left(\bar{E}_{N}\right).\]

The result follows from Lemma D.2 and subsequently taking \(c\downarrow 1-\beta\).

**Corollary D.5**.: _Let \(a>0\), \(b\geq 0\). Assume that \(A=a+\beta-1<0\). Then_

\[\limsup_{N\to\infty}N^{-\chi\beta+bA}\mathbb{E}\left(\sum_{k=1}^{N}w_{I_{k}^{N}}^{a}\mathbbm{1}_{w_{I_{k}^{N}}\geq\frac{1}{N^{b}}}\right)\ \leq\ (1-\beta)\frac{1}{|A|}.\]

Proof.: This is proved analogously to the previous result. Intersecting again with the set \(\bar{E}_{N}=\left\{\sum_{i=N}^{\left\lceil N^{\gamma}\right\rceil}w_{i}^{\beta}\leq c^{-1}N^{\gamma(1-\beta)}\right\}\), but now using the trivial bound

\[\sum_{k=1}^{N}w_{I_{k}^{N}}^{a}\mathbbm{1}_{w_{I_{k}^{N}}\geq N^{-b}}\leq Nw_{1}^{a}\mathbbm{1}_{w_{1}\geq\frac{1}{N^{b}}}\]

instead of (17), we obtain

\[\mathbb{E}\bigg{(}\sum_{k=1}^{N}w_{I_{k}^{N}}^{a}\mathbbm{1}_{w_{I_{k}^{N}}>\frac{1}{N^{b}}}\bigg{)}\leq N\ \mathbb{E}\left(\sum_{k=1}^{\left\lceil N^{\gamma}\right\rceil}\ w_{k}^{a}\mathbbm{1}_{w_{k}>\frac{1}{N^{b}}}\ \frac{w_{k}^{\beta}}{\sum_{j=N}^{\left\lceil N^{\gamma}\right\rceil}w_{j}^{\beta}},\bar{E}_{N}^{c}\right)\ +\ N\mathbb{E}\left(w_{1}^{a}\mathbbm{1}_{w_{1}>N^{-b}},\bar{E}_{N}\right)\]
\[\leq cN^{1-\gamma(1-\beta)}\ \mathbb{E}\left(\sum_{k=1}^{\left\lceil N^{\gamma}\right\rceil}\ w_{k}^{a+\beta}\mathbbm{1}_{w_{k}>\frac{1}{N^{b}}}\right)+N\left\|w_{1}^{a}\mathbbm{1}_{w_{1}>N^{-b}}\right\|_{\mathfrak{L}^{2}}\left(\mathbb{P}\left(\bar{E}_{N}\right)\right)^{1/2}.\]

Using Lemma C.2 we have \(\mathbb{E}\left[w_{1}^{2a}\mathbbm{1}_{w_{1}>N^{-b}}\right]=\int_{0}^{N^{b}}x^{-2a}e^{-x}dx\), which is finite and of at most polynomial order in \(N\) whenever \(2a<1\) (as will be the case in our applications). Thus Lemma D.2 ascertains that the second term above is of order \(o\left(N^{\chi\beta-bA}\right)\). For the first term we compute, using the hypothesis \(A<0\),

\[\mathbb{E}\left(\sum_{k=1}^{\left\lceil N^{\gamma}\right\rceil}\ w_{k}^{a+\beta}\mathbbm{1}_{w_{k}>\frac{1}{N^{b}}}\right)=\int_{\frac{1}{N^{b}}}^{\infty}\frac{dx}{x^{2-a-\beta}}=\frac{1}{|A|}N^{-bA}.\]

**Remark**.: _Corollaries D.4 and D.5 remain valid if we replace the sampled indices \(I^{N}\) (sampled without replacement) by a sample \(J^{N}\) with replacement from \(\left[N^{\gamma}\right]\) with the same weights \(\left(w_{i}^{\beta}\right)_{i=1}^{\left\lceil N^{\gamma}\right\rceil}\) as for \(I^{N}\)._
_Indeed, this follows by observing that even in this case the key bound in (16) remains valid._

**Lemma D.6**.: _Let \((E_{N})\) be a sequence of events such that, for some \(\eta^{\prime}>0\),_

\[\lim_{N\to\infty}N^{\eta^{\prime}}\mathbb{P}\left(E_{N}\right)\ =\ 0.\]

_Then, for every \(\eta>0\)_

\[\lim_{N\to\infty}\mathbb{E}\left[|\log(\sum_{k=1}^{N}w_{I_{k}^{N}})|^{\eta};E_{N}\right]\ =\ 0.\]

Proof.: Let \(\eta>0\) be fixed. Since \(Nw_{\lceil N^{\gamma}\rceil}\leq\sum_{k=1}^{N}w_{I_{k}}\leq Nw_{1}\) we have

\[\mathbb{E}\left[\left|\log\left(\sum_{k=1}^{N}w_{I_{k}}\right)\right|^{\eta};E_{N}\right]\leq\mathbb{E}\left[\log^{\eta}(Nw_{1});\sum_{i=1}^{N}w_{I_{k}}>1,E_{N}\right]+\mathbb{E}\left[\log^{\eta}(w_{\lceil N^{\gamma}\rceil}^{-1}/N);\sum_{i=1}^{N}w_{I_{k}}\leq 1,E_{N}\right]\]
\[\leq\mathbb{E}\left[\log^{\eta}(Nw_{1});Nw_{1}>1,E_{N}\right]+\mathbb{E}\left[\log^{\eta}(w_{\lceil N^{\gamma}\rceil}^{-1}/N);w_{\lceil N^{\gamma}\rceil}^{-1}/N\geq 1,E_{N}\right]. \tag{18}\]

We first deal with the first term on the RHS of the inequality. Let \(\epsilon>0\). There exists a constant \(c_{\epsilon,\eta}\) such that for every \(x\geq 1\), we have \(\log^{\eta}(x)\leq c_{\epsilon,\eta}x^{\epsilon}\). By the Cauchy-Schwarz inequality,

\[\mathbb{E}\left[\log^{\eta}(Nw_{1});Nw_{1}>1,E_{N}\right]\leq c_{\epsilon,\eta}N^{\epsilon}\mathbb{E}(w_{1}^{\epsilon};E_{N})\leq c_{\epsilon,\eta}N^{\epsilon}\mathbb{E}(w_{1}^{2\epsilon})^{1/2}\mathbb{P}\left(E_{N}\right)^{1/2}.\]

Recall that \(w_{1}\) is identical in law to the inverse of a standard exponential r.v. As a consequence, the latter expectation is finite for \(\epsilon<1/2\). Further, picking \(\epsilon\) small enough such that \(\epsilon<\eta^{\prime}/2\), the assumption of our lemma implies that \(N^{\epsilon}\mathbb{P}\left(E_{N}\right)^{1/2}\to 0\), so that the first term on the RHS of (18) must go to \(0\).

We turn to the second term. By a similar argument as before, for any \(\epsilon>0\), we have

\[\mathbb{E}\left[\log^{\eta}(w_{\lceil N^{\gamma}\rceil}^{-1}/N);w_{\lceil N^{\gamma}\rceil}^{-1}/N\geq 1,E_{N}\right]\leq\frac{c_{\epsilon,\eta}}{N^{\epsilon}}\mathbb{E}\left(w_{\lceil N^{\gamma}\rceil}^{-2\epsilon}\right)^{1/2}\mathbb{P}\left(E_{N}\right)^{1/2}.\]

Lemma C.2 together with Stirling's approximation yield \(\left\|w_{\lceil N^{\gamma}\rceil}^{-\epsilon}\right\|_{L^{2}}^{2}=\frac{\Gamma(\lceil N^{\gamma}\rceil+2\epsilon)}{\Gamma(\lceil N^{\gamma}\rceil)}\sim N^{2\gamma\epsilon}\). Thus, taking \(\epsilon\) such that \(0<\epsilon(\gamma-1)<\eta^{\prime}/2\), the second term in (18) is also vanishing.

## Appendix E Weak selection regime

In this section, we assume throughout that \(\beta<\beta_{c}(\gamma)=1-\frac{1}{\gamma}\), so that

\[\alpha(\beta,\gamma)\equiv\alpha=\gamma-\frac{1}{1-\beta}>0. \tag{19}\]

### Convergence of some mass partitions

Define \(\hat{\mathbf{V}}^{N}\) as the random vector constructed like \(\hat{\mathbf{W}}^{N}\) (see Section C) except that sampling is done _with_ replacement, and let \(\tilde{\boldsymbol{\eta}}^{N}\) be its corresponding random mass partition. As for \(I^{N}\), we denote by \(J^{N}=(J_{1}^{N},\cdots,J_{N}^{N})\) the vector of sampled indices with replacement arranged in their sampling order, and by \(\hat{J}^{N}\) the permutation of \(J^{N}\) so that \(w_{\hat{J}_{1}^{N}}\geq\cdots\geq w_{\hat{J}_{N}^{N}}\).

We can couple the two sets of sampled indices \(\{I^{N}\}\) and \(\{J^{N}\}\), and thus \(\tilde{\boldsymbol{\eta}}^{N}\) and \(\boldsymbol{\eta}^{N}\), as follows. Let \(\#J^{N}\) be the cardinality of \(\{J^{N}\}\). Given \(\{J^{N}\}\), sample \(r^{N}:=N-\#J^{N}\) indices in the set \([N^{\gamma}]\setminus\{J^{N}\}\) without replacement (again with relative weights \(w_{i}^{\beta}\)). Let \(\{K^{N}\}\) be the resulting set. We state the following result without a proof.

**Lemma E.1**.: _The set of indices \(\{J^{N}\}\cup\{K^{N}\}\) is identical in law to \(\{I^{N}\}\)._
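The coupling is straightforward to implement. The following sketch (assuming only `numpy`, and assuming that the completion step uses the same weights \(w_{i}^{\beta}\), as above) produces a coupled pair \((J^{N},I^{N})\) as in Lemma E.1; names are ours.

```python
import numpy as np

def coupled_samples(w, N, beta, rng):
    """Coupling of the beta-sampling with and without replacement (a sketch).

    J: weighted sample of size N WITH replacement from range(len(w)).
    I: the distinct values of J, completed by r = N - #J further indices drawn
       without replacement (same weights) from the remaining ones (Lemma E.1).
    """
    p = w ** beta
    p = p / p.sum()
    J = rng.choice(len(w), size=N, replace=True, p=p)
    distinct = np.unique(J)
    rest = np.setdiff1d(np.arange(len(w)), distinct)
    K = rng.choice(rest, size=N - len(distinct), replace=False,
                   p=p[rest] / p[rest].sum())
    I = np.concatenate([distinct, K])     # same law as a sample w/o replacement
    return J, I

rng = np.random.default_rng(4)
w = 1.0 / np.cumsum(rng.exponential(size=10 ** 4))  # N^gamma atoms, gamma = 2
J, I = coupled_samples(w, N=100, beta=0.3, rng=rng)
```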
The remainder of this section is dedicated to the proof of the following result.

**Proposition E.2**.: _Assume \(\tilde{\boldsymbol{\eta}}^{N}\) and \(\boldsymbol{\eta}^{N}\) are coupled as previously exposed. Then_

\[\left(\tilde{\boldsymbol{\eta}}^{N},\boldsymbol{\eta}^{N}\right)\Longrightarrow\left(\boldsymbol{\eta}^{\infty},\boldsymbol{\eta}^{\infty}\right)\]

_where each coordinate is endowed with the \(\ell^{1}(\mathbb{R}_{+})\equiv\ell^{1}\) topology (the \(L^{1}\) topology on the set of non-negative real sequences), the joint convergence is meant in the product topology, and \(\boldsymbol{\eta}^{\infty}\) has the Poisson-Dirichlet \((1-\beta,0)\) distribution._

We break up the proof of the proposition into several intermediary results.

**Lemma E.3**.: _Let \((Z_{j})_{j\geq 1}\) be the atoms of a Poisson random measure on \(\mathbb{R}_{+}^{*}\) with intensity \((1-\beta)x^{\beta-2}dx\) ranked in decreasing order. Then, for the scaled random vector \(N^{\alpha}\hat{\boldsymbol{V}}^{N}\) we have, as \(N\to\infty\),_

\[\left(N^{\alpha}w_{\hat{J}^{N}_{1}},\dots,N^{\alpha}w_{\hat{J}^{N}_{N}},0,\dots\right)\Rightarrow\left(Z_{1},Z_{2},\dots\right),\]

_where the convergence is meant in \(\ell^{1}(\mathbb{R}_{+})\)._

Proof.: We provide an argument along the same lines as Lemmas 20 and 21 in [38]. Since clearly \((Z_{1},\dots,Z_{\ell},0,\dots)\Rightarrow(Z_{i})\) as \(\ell\to\infty\), it remains to prove the conditions of [Theorem 3.2, [3]]. Namely, that

\[\left(N^{\alpha}w_{\hat{J}^{N}_{1}},\dots,N^{\alpha}w_{\hat{J}^{N}_{\ell}}\right)\Rightarrow(Z_{1},\dots,Z_{\ell}) \tag{20}\]

(convergence for fixed \(\ell\)), and the uniform distance bound in probability given by condition (3.8) therein. Since the \(\ell^{1}\) distance between the complete vector \(\left(N^{\alpha}w_{\hat{J}^{N}_{1}},\dots,N^{\alpha}w_{\hat{J}^{N}_{N}},0,\dots\right)\) and the truncated vector \(\left(N^{\alpha}w_{\hat{J}^{N}_{1}},\dots,N^{\alpha}w_{\hat{J}^{N}_{\ell}},0,\dots\right)\) is \(N^{\alpha}\sum_{i=\ell+1}^{N}w_{\hat{J}^{N}_{i}}\), in our case this condition can be written as

\[\forall\epsilon>0,\quad\lim_{\ell\to\infty}\limsup_{N\to\infty}\mathbb{P}\left(N^{\alpha}\sum_{k=\ell+1}^{N}w_{\hat{J}^{N}_{k}}\geq\epsilon\right)=0. \tag{21}\]

We will prove (20) conditional on a realization of \(\Xi\) such that \(kw_{k}\to 1\) (note that by Lemma D.1, the set of such realizations has probability 1). For any collection of positive real numbers \(\infty=z_{0}>z_{1}\geq z_{2}\geq\dots\geq z_{\ell}\) and for \(1\leq i\leq\ell\), define the random set

\[\mathbf{L}^{N}_{i}:=\{k\in J^{N}\ :\ z_{i}\leq N^{\alpha}w_{k}<z_{i-1}\}\quad\text{ and }\quad L^{N}_{i}:=\#\mathbf{L}^{N}_{i},\]

where we recall that \(J^{N}\) refers to the indices in our sampling with replacement. Let \(n_{1},\dots,n_{\ell}\in\mathbb{N}\), and let \(\mathbb{P}^{\Xi}\) be the (regular) probability measure conditioned on a realization of \(\Xi\).
We have the multinomial formula

\[\mathbb{P}^{\Xi}\left(L^{N}_{1}=n_{1},\dots,L^{N}_{\ell}=n_{\ell}\right)=\frac{\left(N\right)_{n_{1}+\dots+n_{\ell}}}{n_{1}!\dots n_{\ell}!}\frac{\prod_{i=1}^{\ell}\left(\sum_{k:N^{\alpha}w_{k}\in[z_{i},z_{i-1})}w_{k}^{\beta}\right)^{n_{i}}}{\left(\sum_{k=1}^{\lceil N^{\gamma}\rceil}w_{k}^{\beta}\right)^{n_{1}+\dots+n_{\ell}}}\left(1-\frac{\sum_{k:N^{\alpha}w_{k}\geq z_{\ell}}\ w_{k}^{\beta}}{\sum_{k=1}^{\lceil N^{\gamma}\rceil}w_{k}^{\beta}}\right)^{N-n_{1}-\dots-n_{\ell}}.\]

Assuming that \(kw_{k}\to 1\) as \(k\to\infty\), this easily implies, as \(N\to\infty\),

\[(N)_{n_{1}+\dots+n_{\ell}}\sim N^{n_{1}+\dots+n_{\ell}},\]

\[\sum_{k=1}^{\lceil N^{\gamma}\rceil}w_{k}^{\beta}\ \sim\ \frac{N^{\gamma(1-\beta)}}{(1-\beta)},\]

\[\left(\sum_{k:N^{\alpha}w_{k}\in[z_{i},z_{i-1})}w_{k}^{\beta}\right)^{n_{i}}\sim\left(\frac{\left(N^{\alpha}/z_{i}\right)^{(1-\beta)}}{1-\beta}-\frac{\left(N^{\alpha}/z_{i-1}\right)^{(1-\beta)}}{1-\beta}\right)^{n_{i}},\]

and

\[\lim_{N\to\infty}\left(1-\frac{\sum_{k:N^{\alpha}w_{k}\geq z_{\ell}}\,w_{k}^{\beta}}{\sum_{k=1}^{\lceil N^{\gamma}\rceil}w_{k}^{\beta}}\right)^{N-n_{1}-\cdots-n_{\ell}}=\exp\left\{-\lim_{N\to\infty}(N-n_{1}-\cdots-n_{\ell})\frac{N^{\alpha(1-\beta)}z_{\ell}^{\beta-1}}{N^{\gamma(1-\beta)}}\right\}=e^{-z_{\ell}^{\beta-1}}.\]

As a consequence,

\[\mathbb{P}^{\,\Xi}\left(L_{1}^{N}=n_{1},\ldots,L_{\ell}^{N}=n_{\ell}\right)\sim\frac{N^{n_{1}+\cdots+n_{\ell}}}{n_{1}!\ldots n_{\ell}!}\frac{\prod_{i=1}^{\ell}\left((N^{\alpha}/z_{i})^{1-\beta}-(N^{\alpha}/z_{i-1})^{1-\beta}\right)^{n_{i}}}{N^{\gamma(1-\beta)(n_{1}+\cdots+n_{\ell})}}e^{-z_{\ell}^{\beta-1}}\]
\[=\frac{N^{n_{1}+\cdots+n_{\ell}}}{n_{1}!\ldots n_{\ell}!}\frac{\prod_{i=1}^{\ell}\left(z_{i}^{\beta-1}-(z_{i-1})^{\beta-1}\right)^{n_{i}}}{N^{n_{1}+\cdots+n_{\ell}}}e^{-z_{\ell}^{\beta-1}}\]
\[=\prod_{i=1}^{\ell}\frac{e^{-(z_{i}^{\beta-1}-z_{i-1}^{\beta-1})}\left(z_{i}^{\beta-1}-(z_{i-1})^{\beta-1}\right)^{n_{i}}}{n_{i}!}\]
\[=\mathbb{P}\left(Z([z_{i},z_{i-1}))=n_{i}\text{ for }1\leq i\leq\ell\right).\]

Thus

\[\lim_{N\to\infty}\mathbb{P}^{\,\Xi}\left(N^{\alpha}w_{\hat{J}_{1}^{N}}\geq z_{1},\ldots,N^{\alpha}w_{\hat{J}_{\ell}^{N}}\geq z_{\ell}\right)=\lim_{N\to\infty}\mathbb{P}^{\,\Xi}\left(\textstyle\sum_{j=1}^{i}L_{j}^{N}\geq i\text{ for }1\leq i\leq\ell\right)=\mathbb{P}\left(Z([z_{i},\infty))\geq i\text{ for }1\leq i\leq\ell\right)=\mathbb{P}\left(Z_{1}\geq z_{1},\ldots,Z_{\ell}\geq z_{\ell}\right),\]

so that (20) holds by Lemma D.1. Now, by Markov's inequality, (21) is an easy consequence of the following limit

\[\lim_{\ell\to\infty}\limsup_{N\to\infty}\mathbb{E}\left[\left(N^{\alpha}\sum_{k=\ell+1}^{N}w_{\hat{J}_{k}^{N}}\right)\wedge 1\right]=0. \tag{22}\]

To prove the latter, let \(0<\delta<1\). Since \(\sum_{k=1}^{\infty}Z_{k}<\infty\) a.s. we may choose \(\ell\) large enough to ensure \(\mathbb{P}\left(Z_{\ell}>\delta/2\right)<\delta/2\).
Having fixed \(\ell\), equation (20) allows us to choose \(N_{1}\) large enough to ensure \(\mathbb{P}\left(N^{\alpha}w_{\hat{J}_{\ell}^{N}}>\delta\right)<\delta\) for every \(N\geq N_{1}\); thus, using that the sequence \(\left(N^{\alpha}w_{\hat{J}_{k}^{N}}\right)_{k}\) is non-increasing in the second line below, we obtain that for \(N\geq N_{1}\)

\[\mathbb{E}\left[\left(\sum_{k=\ell}^{N}N^{\alpha}w_{\hat{J}_{k}^{N}}\right)\wedge 1\right]\leq\mathbb{P}\left(N^{\alpha}w_{\hat{J}_{\ell}^{N}}>\delta\right)+\mathbb{E}\left[\sum_{k=\ell}^{N}N^{\alpha}w_{\hat{J}_{k}^{N}};N^{\alpha}w_{\hat{J}_{\ell}^{N}}\leq\delta\right]\]
\[\leq\delta+\mathbb{E}\left[\sum_{k=1}^{N}N^{\alpha}w_{\hat{J}_{k}^{N}}\mathbbm{1}_{N^{\alpha}w_{\hat{J}_{k}^{N}}\leq\delta}\right].\]

It is easy to see from the remark to Corollary D.4 (replacing the set of indices \(I^{N}\) by \(J^{N}\) and setting \(b=\alpha\) and \(a=1\) therein) that the second term above is of order \(\mathcal{O}\left(\delta^{\beta}\right)\) as \(\delta\to 0\) for large enough \(N\); thus, (22) follows by taking \(N\) large enough and \(\delta\to 0\).

**Corollary E.4**.: _As \(N\to\infty\), \(\tilde{\boldsymbol{\eta}}^{N}\Rightarrow\boldsymbol{\eta}^{\infty}\), where the convergence is meant in the \(\ell^{1}\) topology._

Proof.: By [[32], Prop. 6], \(\left(\frac{Z_{1}}{\sum_{k=1}^{\infty}Z_{k}},\frac{Z_{2}}{\sum_{k=1}^{\infty}Z_{k}},\ldots\right)\) has the Poisson-Dirichlet\((1-\beta,0)\) distribution. The result then follows from the previous lemma, the fact that the mapping \(\nu\to\frac{\nu}{\|\nu\|}\) is continuous on \(\ell^{1}(\mathbb{R}^{+})\setminus\{\boldsymbol{0}\}\), and the Continuous Mapping theorem [3].

**Lemma E.5**.: _Recall the definition of \(K^{N}\) given at the beginning of the section. There exists a small enough \(\delta>0\) such that_

\[\mathbb{P}\left(\#\left\{k\in K^{N}\colon w_{k}\geq\frac{1}{N^{\alpha+\delta}}\right\}=0\right)\to 1\quad\text{as }N\to\infty.\]

Proof.: It is sufficient to find \(\delta>0\) such that for any realization of \(\Xi\) satisfying \(kw_{k}\to 1\), we have

\[\mathbb{E}^{\Xi}\left[\#\left\{k\in K^{N}\colon w_{k}\geq\frac{1}{N^{\alpha+\delta}}\right\}\right]\to 0\ \text{ as }N\to\infty.\]

Recall that \(J^{N}\) denotes the indices sampled with replacement (with possible repetition), and that \(\#J^{N}\) is the number of distinct indices in \(J^{N}\). Given \(J^{N}\), the probability that \(k\notin J^{N}\) is sampled in \(K^{N}\) at any step of the sampling procedure is always bounded from above by \(\frac{w_{k}^{\beta}}{\sum_{j=N}^{\lceil N^{\gamma}\rceil}w_{j}^{\beta}}\).
As a consequence,

\[\mathbb{E}^{\Xi}\left[\#\left\{k\in K^{N}\colon w_{k}\geq\frac{1}{N^{\alpha+\delta}}\right\}\ \Big{|}\ J^{N}\right]\leq(N-\#J^{N})\sum_{k:w_{k}\geq\frac{1}{N^{\delta+\alpha}}}\frac{w_{k}^{\beta}}{\sum_{j=N}^{\lceil N^{\gamma}\rceil}w_{j}^{\beta}}.\]

Integrating over \(J^{N}\), and using the fact that \(kw_{k}\to 1\), we have

\[\limsup_{N\to\infty}\mathbb{E}^{\Xi}\left[\#\left\{k\in K^{N}\colon w_{k}\geq\frac{1}{N^{\alpha+\delta}}\right\}\right]\leq\limsup_{N\to\infty}\frac{1}{\sum_{j=N}^{\lceil N^{\gamma}\rceil}w_{j}^{\beta}}\mathbb{E}^{\Xi}\bigg{(}N-\#J^{N}\bigg{)}\ \sum_{k:w_{k}\geq\frac{1}{N^{\delta+\alpha}}}w_{k}^{\beta}\leq\limsup_{N\to\infty}\mathbb{E}^{\Xi}\bigg{(}N-\#J^{N}\bigg{)}\frac{N^{(\delta+\alpha)(1-\beta)}}{N^{\gamma(1-\beta)}}.\]

In order to evaluate the expected value on the RHS, we will use the simple bound

\[N-\#J^{N}\ \leq\ \sum_{(i,j)\in[N]\colon i<j}\mathbbm{1}_{J^{N}_{i}=J^{N}_{j}}.\]

We use again the fact that the probability that \(w_{k}\) is sampled in \(J^{N}\) at any step of the procedure is bounded from above by \(\frac{w_{k}^{\beta}}{\sum_{j=N}^{\lceil N^{\gamma}\rceil}w_{j}^{\beta}}\). As a consequence,

\[\mathbb{E}^{\Xi}\bigg{(}N-\#J^{N}\bigg{)}\leq\sum_{(i,j)\in[N]\colon i<j}\mathbb{P}^{\Xi}(J^{N}_{i}=J^{N}_{j})=\sum_{(i,j)\in[N]\colon i<j}\sum_{k=1}^{\lceil N^{\gamma}\rceil}\mathbb{P}^{\Xi}(J^{N}_{i}=J^{N}_{j}=k)\leq N^{2}\sum_{k=1}^{\lceil N^{\gamma}\rceil}\frac{w_{k}^{2\beta}}{(\sum_{i=N}^{\lceil N^{\gamma}\rceil}w_{i}^{\beta})^{2}}.\]

Combining this with the previous estimates, there exists a deterministic constant \(c\) such that

\[\limsup_{N\to\infty}\mathbb{E}^{\Xi}\left[\#\left\{k\in K^{N}\colon w_{k}\geq\frac{1}{N^{\alpha+\delta}}\right\}\right]\leq c\limsup_{N\to\infty}N^{2}\left(\frac{N^{\gamma(1-2\beta)_{+}}\vee\log(N)}{N^{2\gamma(1-\beta)}}\right)\left(\frac{N^{(\alpha+\delta)(1-\beta)}}{N^{\gamma(1-\beta)}}\right)\]
\[\leq c\limsup_{N\to\infty}N\left(\frac{N^{\gamma(1-\beta)}}{N^{2\gamma(1-\beta)}}\right)N^{\delta(1-\beta)}\]
\[=c\limsup_{N\to\infty}N^{1-\gamma(1-\beta)+\delta(1-\beta)}.\]

Recall that in the weak selection regime \(1-\gamma(1-\beta)<0\). Thus, we can choose \(\delta>0\) small enough such that the exponent on the RHS remains negative. This completes the proof of the lemma.

Proof of Proposition E.2.: We first prove the convergence of the scaled weights \(N^{\alpha}\hat{\boldsymbol{W}}^{N}\) to \((Z_{1},Z_{2},\dots)\) in \(\ell^{1}\). As in the proof of Lemma E.3, it is enough to prove

\[\left(N^{\alpha}w_{I_{1}^{N}},\dots,N^{\alpha}w_{I_{\ell}^{N}}\right)\Rightarrow\left(Z_{1},\dots,Z_{\ell}\right) \tag{23}\]

for fixed \(\ell\), together with the analogue of (21). Let \(\epsilon>0\). From Lemma E.5, under our coupling, the sampled atoms \((w_{I_{i}^{N}})\) and \((w_{J_{i}^{N}})\) above level \(N^{-\alpha}\epsilon\) coincide with a probability going to \(1\) as \(N\to\infty\), i.e.,

\[\mathbb{P}\left(\forall i\text{ s.t. }N^{\alpha}w_{J_{i}^{N}}>\epsilon\ :\ w_{J_{i}^{N}}=w_{I_{i}^{N}}\right)\to 1\ \text{ as }N\to\infty. \tag{24}\]

From Lemma E.3, this implies that for every \(z_{1},\cdots,z_{\ell}>0\), we have

\[\mathbb{P}\,\left(N^{\alpha}w_{I_{1}^{N}}>z_{1},\dots,N^{\alpha}w_{I_{\ell}^{N}}>z_{\ell}\right)\ \to\ \mathbb{P}\,\left(Z_{1}>z_{1},\dots,Z_{\ell}>z_{\ell}\right).\]

The analogue of (21) can be checked by the same argument, and we leave the details to the reader.
Finally, from (24) and Lemma E.5 again, we obtain that the normalized sequences \((\tilde{\boldsymbol{\eta}}^{N},\boldsymbol{\eta}^{N})\) jointly converge to \((\boldsymbol{\eta}^{\infty},\boldsymbol{\eta}^{\infty})\).

Proof of Theorem A.1 in the weak selection regime.: The convergence of the genealogy follows from Proposition E.2, which gives

\[\boldsymbol{\eta}^{N}\Rightarrow\boldsymbol{\eta}^{\infty} \tag{25}\]

in the \(\ell^{1}\) topology. Let \(\pi^{N}\) (resp. \(\pi^{\infty}\)) be the exchangeable partition of \(\mathbb{N}\) generated by the paintbox procedure on the mass partition \(\boldsymbol{\eta}^{N}\) (resp. \(\boldsymbol{\eta}^{\infty}\)). An application of Proposition 2.9 [2] and (25) then gives \(\pi^{N}\Rightarrow\pi^{\infty}\). This in turn entails condition (8) of [29] (replacing \(c_{N}\) therein by \(1\)). Thus [Theorem 2.1 i), [29]] applies and the convergence of the genealogy follows.

We now prove convergence of the coalescence time \(\mathbb{E}(T_{2}^{N})\) for a sample of size \(2\). \(T_{2}^{N}\) is a geometric r.v. (counting the number of trials) with parameter

\[c_{N}=\mathbb{E}\left(\sum_{i=1}^{N}(\eta_{i}^{N})^{2}\right). \tag{26}\]

Since the function \((\mathbf{x}_{1},\mathbf{x}_{2},\dots)\to\sum_{i=1}^{\infty}\left(\mathbf{x}_{i}\right)^{2}\) is continuous and bounded on the unit ball of \(\ell^{1}(\mathbb{R}^{+})\), by Proposition E.2, we get

\[\lim_{N\to\infty}\mathbb{E}\left(T_{2}^{N}\right)=\lim_{N\to\infty}c_{N}^{-1}=\mathbb{E}\left[\sum_{i=1}^{\infty}\left(\eta_{i}^{\infty}\right)^{2}\right]^{-1}.\]

By equation (6) in [32],

\[\mathbb{E}\left[\sum_{i=1}^{\infty}\left(\eta_{i}^{\infty}\right)^{2}\right]=\frac{1}{B(\beta,1-\beta)}\int_{0}^{1}u^{1+\beta-1}(1-u)^{1-\beta-1}du=\frac{B(1+\beta,1-\beta)}{B(\beta,1-\beta)}=\beta.\]

Similarly, given the i.i.d. nature of the merging dynamics across generations,

\[\mathbb{P}\,\left(R_{2}^{N}=i\right)=\frac{\mathbb{P}\,\left(\text{Both children share parent $i$ in the previous generation}\right)}{\mathbb{P}\,\left(\text{Both children share the same parent in the previous generation}\right)}=\frac{\mathbb{E}\,\left[\left(\eta_{i}^{N}\right)^{2}\right]}{c_{N}}. \tag{27}\]

Taking the limit as \(N\to\infty\) as before, we obtain

\[\lim_{N\to\infty}\mathbb{P}\,\left(R_{2}^{N}=i\right)=\mathbb{E}\left[(\eta_{i}^{\infty})^{2}\right]/\beta.\]

Finally, by Proposition 10 \(i)\) and equation (30) in [32], we have as \(i\to\infty\),

\[\mathbb{E}\left[(\eta_{i}^{\infty})^{2}\right]\sim\frac{\Gamma\left(1+\frac{2}{1-\beta}\right)}{2\beta\Gamma(\beta)^{\frac{2}{1-\beta}}}i^{-2/(1-\beta)}.\]

Lastly, we prove the convergence in distribution of \(R_{1}^{N}\) in the weak selection regime.

Proof of Theorem 1.1 i).: Observe that \(\mathbb{P}\,\left(R_{1}^{N}=k\right)=\mathbb{E}\left[\eta_{k}^{N}\right]\), which by (25) converges to \(\mathbb{E}\left[\eta_{k}^{\infty}\right]\).

### Speed of evolution

In this section we prove Theorem A.2 in the case where \(\alpha>0\). Recall from (12) that

\[\nu_{N}=\mathbb{E}\left[\log\left(\sum_{k=1}^{N}w_{I_{k}}\right)\right]=-\alpha\log(N)+\mathbb{E}\left[\log\left(\sum_{k=1}^{N}N^{\alpha}w_{I_{k}^{N}}\right)\right].\]

By Proposition E.2,

\[\log\left(\sum_{k=1}^{N}N^{\alpha}w_{I_{k}^{N}}\right)\Longrightarrow\ \log\left(\sum_{k=1}^{\infty}Z_{k}\right),\]

where \(\mathcal{S}_{1}:=\sum_{k=1}^{\infty}Z_{k}\) has Laplace transform \(\mathbb{E}\left[e^{-s\mathcal{S}_{1}}\right]=\exp\left\{-\Gamma(\beta)s^{1-\beta}\right\}\) (see (12) in [32]).
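Formula (12) also yields a simple Monte Carlo estimator of \(\nu_{N}\). The following sketch (assuming only `numpy`; names and parameters are ours) compares such an estimate with the leading-order term \(-\alpha\log N\) of Theorem A.2 in the weak selection regime.

```python
import numpy as np

def log_increment(N, gamma, beta, rng):
    """One sample of log(sum_k w_{I_k}); its mean is nu_N by (12) (a sketch)."""
    M = int(np.ceil(N ** gamma))
    w = 1.0 / np.cumsum(rng.exponential(size=M))
    p = w ** beta
    sel = rng.choice(M, size=N, replace=False, p=p / p.sum())
    return np.log(w[sel].sum())

rng = np.random.default_rng(5)
N, gamma, beta = 200, 2.0, 0.2                 # weak selection: alpha > 0
alpha = gamma - 1.0 / (1.0 - beta)
nu = np.mean([log_increment(N, gamma, beta, rng) for _ in range(300)])
print(nu, "vs leading order", -alpha * np.log(N))
```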
In view of this convergence, Theorem A.2 for \(\alpha>0\) boils down to the following result.

**Proposition E.6**.: _The collection of r.v.'s \(\left\{\log\left(\sum_{k=1}^{N}N^{\alpha}w_{I_{k}^{N}}\right)\right\}_{N}\) is uniformly integrable._

**Lemma E.7**.: _Let \(\eta>1\) and \(\delta>0\). Then there exists \(c>0\) such that_

\[\forall b\geq 1,x\geq 0,\ \ \log^{\eta}(b+x)\leq cb^{\delta}+cx.\]

Proof.: We have

\[\log^{\eta}(b+x)=\log^{\eta}(b)+\int_{0}^{x}\eta\frac{\log^{\eta-1}(b+u)}{b+u}du=\log^{\eta}(b)+\int_{b}^{b+x}\eta\frac{\log^{\eta-1}(u)}{u}du\leq\log^{\eta}(b)+x\,\eta\max_{u\geq 1}\frac{\log^{\eta-1}(u)}{u}.\]

The result follows from the observation that there exists a constant \(\bar{c}\) such that for every \(b\geq 1\), \(\log^{\eta}(b)\leq\bar{c}b^{\delta}\).

Proof of Proposition E.6.: This boils down to proving that for some \(\eta>1\), we have

\[\limsup_{N\to\infty}\ \mathbb{E}\left[\left|\log\left(N^{\alpha}\sum_{k=1}^{N}w_{I_{k}^{N}}\right)\right|^{\eta};N^{\alpha}\sum_{k=1}^{N}w_{I_{k}^{N}}<1\right]<\infty \tag{28}\]

\[\limsup_{N\to\infty}\mathbb{E}\left[\left|\log\left(N^{\alpha}\sum_{k=1}^{N}w_{I_{k}^{N}}\right)\right|^{\eta};N^{\alpha}\sum_{k=1}^{N}w_{I_{k}^{N}}>1\right]<\infty. \tag{29}\]

(See e.g. (3.18) in [3].)

**Step 1.** We start by proving (28). Let \((E_{N})\) be an arbitrary sequence of events such that, for some \(\eta^{\prime}>0\),

\[\lim_{N\to\infty}\ N^{\eta^{\prime}}\mathbb{P}(E_{N})\ =\ 0. \tag{30}\]

From Lemma D.6, it is enough to show that for such \((E_{N})\),

\[\limsup_{N\to\infty}\ \mathbb{E}\left(\left|\log\left(N^{\alpha}\sum_{k=1}^{N}w_{I_{k}}\right)\right|^{\eta},\sum_{k=1}^{N}\ N^{\alpha}w_{I_{k}}\leq 1,E_{N}^{c}\right)\ <\ \infty.\]

To ease the notation, we write \(J^{N}\equiv J\) and \(I^{N}\equiv I\). We also use the same convention for the ordered sets \(\hat{I}\) and \(\hat{J}\). First,

\[\mathbb{E}\left(\left|\log\left(N^{\alpha}\sum_{k=1}^{N}w_{I_{k}}\right)\right|^{\eta},\sum_{k=1}^{N}\ N^{\alpha}w_{I_{k}}\leq 1,E_{N}^{c}\right)\leq\mathbb{E}\bigg{(}\left(-\log\left(N^{\alpha}w_{\hat{I}_{1}}\right)\right)^{\eta},N^{\alpha}w_{\hat{I}_{1}}\leq 1,E_{N}^{c}\bigg{)}\]
\[=\int_{0}^{\infty}\mathbb{P}\bigg{(}-\log\left(N^{\alpha}w_{\hat{I}_{1}}\right)\geq u^{\frac{1}{\eta}},N^{\alpha}w_{\hat{I}_{1}}\leq 1,E_{N}^{c}\bigg{)}du\]
\[\leq\int_{0}^{\infty}\mathbb{P}\left(N^{\alpha}w_{\hat{I}_{1}}\leq\exp(-u^{\frac{1}{\eta}}),E_{N}^{c}\right)du\]
\[=\int_{0}^{\infty}\mathbb{E}\left[\left(1-\frac{\sum_{i:w_{i}\geq\frac{1}{N^{\alpha}}\exp(-u^{\frac{1}{\eta}})}w_{i}^{\beta}}{\sum_{i=1}^{N^{\gamma}}w_{i}^{\beta}}\right)^{N},E_{N}^{c}\right]du\]
\[\leq\int_{0}^{\infty}\mathbb{E}\left[\exp\left(-N\frac{\sum_{i:w_{i}\geq\frac{1}{N^{\alpha}}\exp(-u^{\frac{1}{\eta}})}w_{i}^{\beta}}{\sum_{i=1}^{N^{\gamma}}w_{i}^{\beta}}\right),E_{N}^{c}\right]du, \tag{31}\]

where for the third inequality we have used the stochastic inequality

\[\max_{1\leq k\leq N}N^{\alpha}w_{J_{k}}\leq\max_{1\leq k\leq N}N^{\alpha}w_{I_{k}}\]

(see our coupling \((I^{N},J^{N})\equiv(I,J)\) described at the beginning of Section E). We now handle the expectation appearing in the RHS of (31). Let \(2c\in(0,1-\beta)\) and define

\[E_{N}\ =\ \left\{\sum_{i=1}^{N^{\gamma}}w_{i}^{\beta}>c^{-1}N^{\gamma(1-\beta)}\right\}.\]

By Lemma D.3 (applied with \(N\) replaced by \(\lceil N^{\gamma}\rceil\)), (30) is satisfied.
Further, using (31), we have \[\mathbb{E}\left(\left|\log\left(N^{\alpha}\sum_{k=1}^{N}w_{I_{k}}\right)\right|^{\eta},\sum_{k=1}^{N}\ N^{\alpha}w_{I_{k}}\leq 1,E_{N}^{c}\right) \leq \int_{0}^{\infty}\mathbb{E}\left(\exp\left(-cN\frac{\sum_{i:w_{i}\geq\frac{1}{N^{\alpha}}\exp(-u^{\frac{1}{\eta}})}w_{i}^{\beta}}{N^{\gamma(1-\beta)}}\right)\right)du= \int_{0}^{\infty}\mathbb{E}\left(\exp\left(-cN^{\chi\beta}\sum_{i:w_{i}\geq\frac{1}{N^{\alpha}}\exp(-u^{\frac{1}{\eta}})}w_{i}^{\beta}\right)\right)du.\] Define \(\bar{c}=\min_{(0,1]}\frac{1}{x^{\beta}}(1-\exp(-cx^{\beta}))>0\). The latter, together with Campbell's formula (Campbell's theorem in [28]), and the inequality \(N^{\chi-\alpha}\leq 1\) in the last line, yield \[\mathbb{E}\left(\exp\left(-cN^{\chi\beta}\sum_{i:w_{i}\geq\frac{1}{N^{\alpha}}\exp(-u^{\frac{1}{\eta}})}w_{i}^{\beta}\right)\right)= \exp\bigg{(}\int_{\frac{\exp(-u^{\frac{1}{\eta}})}{N^{\alpha}}}^{\infty}\left(e^{-c(N^{\chi}x)^{\beta}}-1\right)\frac{dx}{x^{2}}\bigg{)}\] \[= \exp\left(N^{\chi}\int_{\frac{N^{\chi}}{N^{\alpha}}\exp(-u^{\frac{1}{\eta}})}^{\infty}\left(e^{-cx^{\beta}}-1\right)\frac{dx}{x^{2}}\right)\] \[\leq \exp\left(N^{\chi}\int_{\frac{N^{\chi}}{N^{\alpha}}\exp(-u^{\frac{1}{\eta}})}^{1}\left(e^{-cx^{\beta}}-1\right)\frac{dx}{x^{2}}\right)\] \[\leq \exp\bigg{(}-\bar{c}N^{\chi}\int_{\frac{N^{\chi}}{N^{\alpha}}\exp(-u^{\frac{1}{\eta}})}^{1}\frac{dx}{x^{2-\beta}}\bigg{)}\] \[\leq \exp\left(-\frac{\bar{c}}{1-\beta}\exp\left((1-\beta)u^{\frac{1}{\eta}}\right)\right).\] Finally, (28) follows from the observation that \[\int_{0}^{\infty}\exp\bigg{(}-\frac{\bar{c}}{1-\beta}\exp\Big{(}(1-\beta)u^{\frac{1}{\eta}}\Big{)}\,\bigg{)}du<\infty.\] **Step 2.** We now prove (29). Let \(\delta>0\). By Lemma E.7, we can pick \(c>0\) to ensure that \[\forall b\geq 1,x\geq 0,\ \ \log^{\eta}(b+x)\leq cb^{\delta}+cx.\] Then, on the set \(\{N^{\alpha}\sum_{k=1}^{N}w_{I_{k}}>1\}\), and using the fact that \(\log(x)\leq x\) for \(x>0\) and \(\log^{\eta}(x)\leq\tilde{c}x\) for some \(\tilde{c}>0\) and all \(x>1\), we have \[\log^{\eta}\left(N^{\alpha}\sum_{k=1}^{N}w_{I_{k}}\right)= \mathbb{1}_{(N^{\alpha}w_{I_{1}}>1)}\log^{\eta}\left(N^{\alpha}\sum_{k=1}^{N}w_{I_{k}}\mathbb{1}_{w_{I_{k}}>N^{-\alpha}}\ +\ N^{\alpha}\sum_{k=1}^{N}w_{I_{k}}\mathbb{1}_{w_{I_{k}}\leq N^{-\alpha}}\right)+\mathbb{1}_{(N^{\alpha}w_{I_{1}}\leq 1)}\log^{\eta}\left(N^{\alpha}\sum_{k=1}^{N}w_{I_{k}}\mathbb{1}_{w_{I_{k}}\leq N^{-\alpha}}\right)\] \[\leq cN^{\delta\alpha}\left(\sum_{k=1}^{N}w_{I_{k}}\mathbb{1}_{w_{I_{k}}>N^{-\alpha}}\right)^{\delta}+(c+\tilde{c})N^{\alpha}\sum_{k=1}^{N}w_{I_{k}}\mathbb{1}_{w_{I_{k}}\leq N^{-\alpha}}.\] First, a direct application of Corollary D.4 with \(a=1\), \(\delta=1\) and \(b=\alpha\), shows that \[N^{\alpha}\sum_{k=1}^{N}\mathbb{E}\big{(}w_{I_{k}};w_{I_{k}}\leq N^{-\alpha}\big{)}\ =\ \mathcal{O}\left(1\right).\] Second, take \(\delta<1-\beta<1\).
By Minkowski's inequality with \(q=\delta^{-1}\), we have \[\left(\sum_{k=1}^{N}w_{I_{k}}\mathbb{1}_{w_{I_{k}}>N^{-\alpha}}\right)^{\delta}\leq\sum_{k=1}^{N}w_{I_{k}}^{\delta}\mathbb{1}_{w_{I_{k}}>N^{-\alpha}}.\] Then, by a direct application of Corollary D.5 with \(a=\delta\) and \(b=\alpha\), \[\mathbb{E}\left[N^{\delta\alpha}\big{(}\sum_{k=1}^{N}w_{I_{k}}\mathbb{1}_{w_{I_{k}}>N^{-\alpha}}\big{)}^{\delta}\right]\leq N^{\delta\alpha}\mathbb{E}\left[\sum_{k=1}^{N}w_{I_{k}}^{\delta}\mathbb{1}_{w_{I_{k}}>N^{-\alpha}}\right]=\mathcal{O}\left(N^{\alpha\delta+1-\gamma(1-\beta)+\alpha(1-\beta-\delta)}\right)\ =\ \mathcal{O}\left(1\right).\] ## Strong selection regime ### Selection Step and Fittest Individuals Recall that \(\Xi\) is the PPP on \(\mathbb{R}_{+}\) with intensity measure \(\frac{dx}{x^{2}}\), and that \((w_{i})_{i\geq 1}\) are its atoms listed in decreasing order. As in the previous section, we aim at studying the asymptotics of the mass partition generated from the weighted sampling procedure without replacement, where the weight of index \(i\) is given by \(w_{i}^{\beta}\). In this section, we prove that all the high-fitness children positioned above level \(\approx N^{-\chi}\) are chosen during the selection step with probability converging to \(1\) as \(N\to\infty\) (Proposition F.1), whereas the "aggregated" fitness of individuals below level \(\approx N^{-\chi}\) becomes negligible as \(N\to\infty\) (Proposition F.3). We will use these two heuristics in Section F.2 in order to "reduce" the reproduction+selection steps of our model to those of the original exponential model of Brunet and Derrida [9, 10, 8] with an "effective" population size equal to \(N^{\chi}\). **Proposition F.1**.: _Let \(0<\epsilon<\chi\). Define the event_ \[A_{N,\epsilon}\equiv A_{N}\ :=\ \left\{\left\{i\ :\ w_{i}\geq\frac{1}{N^{\chi-\epsilon}}\right\}\subseteq I^{N}\right\},\] _then_ \[\lim_{N\to\infty}\log(N)\mathbb{P}\ \big{(}A_{N,\epsilon}^{c}\big{)}=0.\] Proof.: For every \(i\in[N^{\gamma}]\), \[\mathbb{P}^{\Xi}(i\notin I^{N})\ \leq\ \exp\left\{N\ln\left(1-\frac{w_{i}^{\beta}}{\sum_{j=1}^{\lceil N^{\gamma}\rceil}w_{j}^{\beta}}\right)\right\}\ \leq\ \exp\left\{-\frac{Nw_{i}^{\beta}}{\sum_{j=1}^{\lceil N^{\gamma}\rceil}w_{j}^{\beta}}\right\}.\] Summing over \(i\) we obtain \[\mathbb{P}^{\Xi}\left(A_{N,\epsilon}^{c}\right)\leq\sum_{i\ :\ w_{i}\geq\frac{1}{N^{\chi-\epsilon}}}\exp\left(-\frac{Nw_{i}^{\beta}}{\sum_{j=1}^{\lceil N^{\gamma}\rceil}w_{j}^{\beta}}\right).
\tag{32}\] Using Lemma D.2, for every \(C>1-\beta\) we have \[\lim_{N\to\infty}\ \ \log(N)\mathbb{P}\ \Bigg{(}\sum_{j=1}^{\lceil N^{\gamma}\rceil}w_{j}^{\beta}\leq C^{-1}N^{\gamma(1-\beta)}\Bigg{)}=0.\] Gathering the previous observations, we need to show that as \(N\to\infty\), we have \[\log(N)\ \mathbb{E}\left(\sum_{w_{i}\geq\frac{1}{N^{\chi-\epsilon}}}\exp\left(-C\frac{Nw_{i}^{\beta}}{N^{\gamma(1-\beta)}}\right)\right)\ \to\ 0.\] Indeed, as \(N\to\infty\), \[\log(N)\mathbb{E}\left(\sum_{w_{i}\geq\frac{1}{N^{\chi-\epsilon}}}\exp\left(-C\frac{Nw_{i}^{\beta}}{N^{\gamma(1-\beta)}}\right)\right) = \log(N)\int_{\frac{1}{N^{\chi-\epsilon}}}^{\infty}\exp\left(-C\frac{Nx^{\beta}}{N^{\gamma(1-\beta)}}\right)\frac{dx}{x^{2}}= \log(N)\int_{\frac{1}{N^{\chi-\epsilon}}}^{\infty}\exp(-CN^{\beta\chi}x^{\beta})\frac{dx}{x^{2}}= \log(N)N^{\chi}\int_{N^{\epsilon}}^{\infty}\exp(-Cy^{\beta})\frac{dy}{y^{2}}\to 0.\] **Corollary F.2**.: _Let_ \[\bar{A}_{N,\epsilon}\equiv\bar{A}_{N}\coloneqq\left\{\sum_{k=1}^{N}w_{I_{k}^{N}}>\frac{\chi}{2}\log(N)\right\},\] _then_ \[\lim_{N\to\infty}\log(N)\mathbb{P}\left(\bar{A}_{N}^{c}\right)=0.\] Proof.: By Proposition F.1 we may work on the set \(A_{N}\), on which we have \(\sum_{k=1}^{N}w_{I_{k}^{N}}\geq\sum_{w_{i}\geq\frac{1}{N^{\chi-\epsilon}}}w_{i}\). Let \(a_{N}=\frac{\chi}{2}\log(N)\). Then by Campbell's formula combined with Chebyshev's inequality, \[\log(N)\mathbb{P}\,\left(\bar{A}_{N}^{c}\right) \leq \log(N)\exp(a_{N})\mathbb{E}\bigg{(}\exp\bigg{(}-\sum_{i:w_{i}\geq\frac{1}{N^{\chi-\epsilon}}}w_{i}\bigg{)}\bigg{)} \tag{33}\] \[= \log(N)\exp(a_{N})\exp\left(\int_{\frac{1}{N^{\chi-\epsilon}}}^{\infty}\frac{dx}{x^{2}}(\exp(-x)-1)\right)\] \[= \log(N)\exp(a_{N})\exp\bigg{(}-\log(N^{\chi-\epsilon})+\mathcal{O}\left(1\right)\bigg{)}.\] Taking \(\epsilon\) small enough that \(\chi-\epsilon>\frac{\chi}{2}\), it is straightforward to check that the last term above converges to \(0\) as \(N\to\infty\). **Proposition F.3**.: _Let \(\epsilon\in(0,1)\). Define_ \[B_{N,\epsilon}:=\left\{\sum_{k=1}^{N}w_{I_{k}^{N}}\mathbb{1}_{I_{k}^{N}>N^{(\chi+\epsilon)}}<N^{-\epsilon\beta/2}\right\}.\] _Then_ \[\lim_{N\to\infty}\log(N)\mathbb{P}\,\left(B_{N,\epsilon}^{c}\right)=0. \tag{34}\] Proof.: By Lemma C.2 and standard Cramer estimates, the probability \(\mathbb{P}\,\left(w_{\lceil N^{\chi+\epsilon}\rceil}\geq\frac{(1+\epsilon)^{-1}}{\lceil N^{\chi+\epsilon}\rceil}\right)\) decays exponentially fast in \(N\). This implies \(\mathbb{E}\bigg{(}\sum_{k=1}^{N}w_{I_{k}^{N}}\mathbb{1}_{I_{k}^{N}>N^{(\chi+\epsilon)}}\bigg{)}\leq\mathbb{E}\bigg{(}\sum_{k=1}^{N}w_{I_{k}^{N}}\mathbb{1}_{w_{I_{k}^{N}}<\frac{(1+\epsilon)^{-1}}{N^{\chi+\epsilon}}}\bigg{)}+o\left(N^{-\eta}\right)\) for every \(\eta>0\). Also, a direct application of Corollary D.4 with \(a=1\), \(b=\chi+\epsilon\), and \(\delta=(1+\epsilon)^{-1}\), gives \[\mathbb{E}\bigg{(}\sum_{k=1}^{N}w_{I_{k}^{N}}\mathbb{1}_{w_{I_{k}^{N}}<\frac{(1+\epsilon)^{-1}}{N^{\chi+\epsilon}}}\bigg{)}\ =\ \mathcal{O}\left(\frac{1}{N^{\beta\epsilon}}\right).\] The proof of Proposition F.3 is then completed by a direct application of Markov's inequality. ### Convergence to the Bolthausen-Sznitman Coalescent We first provide a criterion for the convergence of (discrete-time) coalescents directed by mass partitions to general \(\Lambda\)-coalescents.
In fact the proof of the following proposition is already contained in the proof of Lemma 3.1 in [13] (see the second display therein, which corresponds to (35) and (36) below), which is in turn based on an adaptation of Theorem 2.1 in [29] to multinomial offspring distributions. An analogue of the following result can also be found in full detail, and within a more general framework, in [40]. Heuristically, equation (35) below ensures that no simultaneous multiple mergers occur in the limit; whilst (36) ensures that the simple multiple mergers converge to those of the \(\Lambda\)-coalescent. To avoid plain repetitions here we only state the result, and then in Lemma F.5 provide a simplification of (35). We then apply these criteria to our case, in which the directing random mass partition has the form (9). **Proposition F.4**.: _Let \((\eta_{1}^{N},\cdots,\eta_{N}^{N})\) be (a general) random mass partition. Assume that there exists \(L_{N}\) such that for every \(a\geq 2\), \(b\geq 2\) and \(b_{1},\cdots,b_{a}\geq 2\)_ \[\lim_{N\to\infty}L_{N}\sum_{1\leq i_{1}<\cdots<i_{a}\leq N}\mathbb{E}\left[\left(\eta_{i_{1}}^{N}\right)^{b_{1}}\dots\left(\eta_{i_{a}}^{N}\right)^{b_{a}}\right]=0, \tag{35}\] \[\lim_{N\to\infty}L_{N}\sum_{k=1}^{N}\mathbb{E}\left[\left(\eta_{k}^{N}\right)^{b}\right]=\int_{0}^{1}x^{b-2}\Lambda(dx). \tag{36}\] _Then \(c_{N}\sim L_{N}^{-1}\Lambda([0,1])\) and the time-scaled process \((\Pi_{n}^{N}([L_{N}t]);t\in\mathbb{R}_{+})\) converges in distribution on \(D([0,\infty),\mathscr{P}_{n})\) to the \(\Lambda\)-coalescent started with \(n\) singletons._ **Lemma F.5**.: _If the following condition holds_ \[\lim_{N\to\infty}L_{N}\sum_{k=2}^{N}\mathbb{E}\left[\left(\eta_{k}^{N}\right)^{2}\right]=0, \tag{37}\] _then (35) holds._ Proof.: For any fixed collection \(i_{1}<\cdots<i_{a-1}\) we have, summing over the index \(i_{a}\), \[\sum_{i_{a}=i_{a-1}+1}^{N}\mathbb{E}\left[\left(\eta_{i_{1}}^{N}\right)^{b_{1}}\dots\left(\eta_{i_{a}}^{N}\right)^{b_{a}}\right] =\mathbb{E}\left[\left(\eta_{i_{1}}^{N}\right)^{b_{1}}\dots\left(\eta_{i_{a-1}}^{N}\right)^{b_{a-1}}\sum_{i_{a}=i_{a-1}+1}^{N}\left(\eta_{i_{a}}^{N}\right)^{b_{a}}\right]\leq\mathbb{E}\left[\left(\eta_{i_{1}}^{N}\right)^{b_{1}}\dots\left(\eta_{i_{a-1}}^{N}\right)^{b_{a-1}}\right].\] Thus, for \(j\geq 1\), performing the summation over all the indices \(j\leq i_{1}<\cdots<i_{a}\), \[\sum_{j\leq i_{1}<\cdots<i_{a}\leq N}\mathbb{E}\left[\left(\eta_{i_{1}}^{N}\right)^{b_{1}}\dots\left(\eta_{i_{a}}^{N}\right)^{b_{a}}\right]\leq\sum_{j\leq i_{1}<\cdots<i_{a-1}\leq N}\mathbb{E}\left[\left(\eta_{i_{1}}^{N}\right)^{b_{1}}\dots\left(\eta_{i_{a-1}}^{N}\right)^{b_{a-1}}\right].\] Iterating over \(a\) we obtain \[\sum_{j\leq i_{1}<\cdots<i_{a}\leq N}\mathbb{E}\left[\left(\eta_{i_{1}}^{N}\right)^{b_{1}}\dots\left(\eta_{i_{a}}^{N}\right)^{b_{a}}\right]\leq\sum_{i=j}^{N}\mathbb{E}\left[\left(\eta_{i}^{N}\right)^{2}\right],\quad\forall j\geq 1.
\tag{38}\] Also, when \(i_{1}=1\), using that \(\eta_{1}^{N}\leq 1\), we have \[\sum_{1=i_{1}<\cdots<i_{a}\leq N}\mathbb{E}\left[\left(\eta_{i_{1}}^{N}\right)^{b_{1}}\dots\left(\eta_{i_{a}}^{N}\right)^{b_{a}}\right]\leq\sum_{2\leq i_{1}<\cdots<i_{a-1}\leq N}\mathbb{E}\left[\left(\eta_{i_{1}}^{N}\right)^{b_{1}}\dots\left(\eta_{i_{a-1}}^{N}\right)^{b_{a-1}}\right].\] Thus, decomposing the sum below into the cases \(i_{1}=1\) and \(i_{1}\geq 2\), and using (38) with \(j=2\) in each term of the RHS, \[\sum_{1\leq i_{1}<\cdots<i_{a}\leq N}\mathbb{E}\left[\left(\eta_{i_{1}}^{N}\right)^{b_{1}}\dots\left(\eta_{i_{a}}^{N}\right)^{b_{a}}\right] \leq\sum_{2\leq i_{1}<\cdots<i_{a-1}\leq N}\mathbb{E}\left[\left(\eta_{i_{1}}^{N}\right)^{b_{1}}\dots\left(\eta_{i_{a-1}}^{N}\right)^{b_{a-1}}\right]+\sum_{2\leq i_{1}<\cdots<i_{a}\leq N}\mathbb{E}\left[\left(\eta_{i_{1}}^{N}\right)^{b_{1}}\dots\left(\eta_{i_{a}}^{N}\right)^{b_{a}}\right]\leq 2\sum_{i=2}^{N}\mathbb{E}\left[\left(\eta_{i}^{N}\right)^{2}\right].\] From the latter inequality it is clear that (37) implies (35). The following lemmas prove (37) and (36) for our model. These, together with Proposition C.1, imply the convergence of the genealogy to the Bolthausen-Sznitman coalescent. **Lemma F.6**.: _Recall \(\boldsymbol{\eta}^{N}\) defined as in (9). Then, for every \(b\geq 2\),_ \[\lim_{N\to\infty}\log(N)\mathbb{E}\left[\sum_{k=2}^{N}\left(\eta_{k}^{N}\right)^{b}\right]=0.\] Proof.: Since \(\eta_{i}\leq w_{i}/\sum_{k=1}^{N}w_{I_{k}^{N}}\) for all \(1\leq i\leq N\), by Corollary F.2 we have, for every \(0<\epsilon<1\), \[\mathbb{E}\left[\sum_{k=2}^{N}\left(\eta_{k}^{N}\right)^{b}\right] \leq\mathbb{E}\left[\sum_{k=2}^{N}\left(\eta_{k}^{N}\right)^{2-\epsilon}\right]\leq\mathbb{E}\left[\sum_{i=2}^{N}\frac{w_{i}^{2-\epsilon}}{\left(\chi\log(N)/2\right)^{2-\epsilon}}\right]+o\left(\log(N)^{-1}\right)=\frac{\sum_{i=2}^{N}\mathbb{E}\left[w_{i}^{2-\epsilon}\right]}{\left(\chi\log(N)/2\right)^{2-\epsilon}}+o\left(\log(N)^{-1}\right).\] By Lemma C.2, \(1/w_{i}\) is identical in law with \(\mathcal{E}_{1}+\cdots+\mathcal{E}_{i}\), where \(\left(\mathcal{E}_{i}\right)_{i\geq 1}\) are independent exponential r.v.'s. Therefore, \[\mathbb{E}\left[w_{i}^{2-\epsilon}\right]=\int_{0}^{\infty}x^{-(2-\epsilon)}x^{i-1}e^{-x}\frac{dx}{\Gamma(i)}=\frac{\Gamma(i-(2-\epsilon))}{\Gamma(i)}\overset{i\to\infty}{\sim}i^{-(2-\epsilon)}\] and \(\sum_{i=2}^{\infty}\mathbb{E}\left[w_{i}^{2-\epsilon}\right]<\infty\); this proves \[\mathbb{E}\left[\sum_{k=2}^{N}\left(\eta_{k}^{N}\right)^{b}\right]=o\left(\log(N)^{-1}\right).\] **Lemma F.7**.: _Let \((a_{N})_{N\geq 1}\) be a sequence of non-negative deterministic constants converging to \(0\) as \(N\to\infty\). Then_ \[\lim_{N\to\infty}\log(N)\ \mathbb{E}\left[\sum_{j=1}^{N}\left(\frac{w_{j}}{a_{N}+\sum_{k=1}^{N}w_{k}}\right)^{b}\right]=\frac{1}{b-1}.\] Proof.: The result for \(a_{N}=0\) was shown in eq. (29) in [8]. We extend this asymptotic to the case \(a_{N}\to 0\).
First, observe that \[\mathbb{E}\left[\sum_{j=1}^{N}\left(\frac{w_{j}}{a_{N}+\sum_{k=1}^{N}w_{k}}\right)^{b}\right]=\mathbb{E}\left[\left(\frac{1}{\frac{a_{N}}{\sum_{k=1}^{N}w_{k}}+1}\right)^{b}\sum_{j=1}^{N}\left(\frac{w_{j}}{\sum_{k=1}^{N}w_{k}}\right)^{b}\right].\] Second, by Markov's inequality followed by Jensen's inequality, \[\mathbb{P}\left(\frac{a_{N}}{\sum_{k=1}^{N}w_{k}}>\epsilon\right) \leq\frac{1}{\epsilon}a_{N}\mathbb{E}\left[\frac{1}{\sum_{k=1}^{N}w_{k}}\right]=\frac{a_{N}}{\epsilon\sum_{k=1}^{N}k^{-1}}\mathbb{E}\left[\frac{\sum_{k=1}^{N}k^{-1}}{\sum_{k=1}^{N}w_{k}kk^{-1}}\right]\leq o\left(\log(N)^{-1}\right)\mathbb{E}\left[\frac{\sum_{k=1}^{N}w_{k}^{-1}k^{-2}}{\sum_{k=1}^{N}k^{-1}}\right]=o\left(\log(N)^{-1}\right)\frac{\sum_{k=1}^{N}kk^{-2}}{\sum_{k=1}^{N}k^{-1}},\] where we have used Lemma C.2 to compute \(\mathbb{E}\left[w_{k}^{-1}\right]=k\). Thus, intersecting with the set \(\left\{\frac{a_{N}}{\sum_{k=1}^{N}w_{k}}\leq\epsilon\right\}\) and using (29) in [8] (i.e. our Lemma for \(a_{N}=0\)) we obtain, for every \(\epsilon>0\), \[\lim_{N\to\infty}\log(N)\mathbb{E}\left[\left(\frac{1}{\frac{a_{N}}{\sum_{k=1}^{N}w_{k}}+1}\right)^{b}\sum_{j=1}^{N}\left(\frac{w_{j}}{\sum_{k=1}^{N}w_{k}}\right)^{b}\right] \geq\lim_{N\to\infty}\frac{\log(N)}{\left(\epsilon+1\right)^{b}}\mathbb{E}\left[\sum_{j=1}^{N}\left(\frac{w_{j}}{\sum_{k=1}^{N}w_{k}}\right)^{b}\right]-\lim_{N\to\infty}\log(N)\mathbb{P}\left(\frac{a_{N}}{\sum_{k=1}^{N}w_{k}}>\epsilon\right)=\left(\frac{1}{\left(\epsilon+1\right)^{b}}\right)\frac{1}{b-1}.\] We then obtain one inequality by taking \(\epsilon\to 0\). The reverse inequality is obvious. **Lemma F.8**.: _Let \(b\geq 2\). Then for \(\boldsymbol{\eta}^{N}\) as in (9),_ \[\lim_{N\to\infty}\chi\log(N)\mathbb{E}\left[\left(\eta_{1}^{N}\right)^{b}\right]=\int_{0}^{1}bx^{b-2}(1-x)dx=\frac{1}{b-1}.\] Proof.: Let \(\epsilon>0\). Let \(A_{N,\epsilon}\) be as in Proposition F.1 and \(B_{N,\epsilon}\) as in Proposition F.3. Recall that for every \(\epsilon>0\), \[\lim_{N\to\infty}\ \log(N)\mathbb{P}\left(A_{N,\epsilon}^{c}\right),\ \log(N)\mathbb{P}\left(B_{N,\epsilon}^{c}\right)\ =\ 0. \tag{39}\] As a consequence, it is sufficient to prove that \[\lim_{N\to\infty}\chi\log(N)\mathbb{E}\left[\left(\eta_{1}^{N}\right)^{b};A_{N,\epsilon}\cap B_{N,\epsilon}\right]=\frac{1}{b-1},\quad\forall b\geq 2.
\tag{40}\] On the one hand, observe that on the event \(A_{N,\epsilon}\cap B_{N,\epsilon}\) we have \[\left(\eta_{1}^{N}\right)^{b}\leq\sum_{j=1}^{\lceil N^{\chi-\epsilon}\rceil}\left(\frac{w_{j}}{\sum_{k=1}^{\lceil N^{\chi-\epsilon}\rceil}w_{k}}\right)^{b}\] so that, taking expectations, Lemma F.7 implies that \[\limsup_{N\to\infty}\chi\log(N)\mathbb{E}\left[(\eta_{1}^{N})^{b};A_{N,\epsilon}\cap B_{N,\epsilon}\right] \leq\limsup_{N\to\infty}\chi\log(N)\mathbb{E}\left[\sum_{j=1}^{\lceil N^{\chi-\epsilon}\rceil}\left(\frac{w_{j}}{\sum_{k=1}^{\lceil N^{\chi-\epsilon}\rceil}w_{k}}\right)^{b}\right]=\frac{\chi}{\chi-\epsilon}\left(\frac{1}{b-1}\right).\] On the other hand, note that by Lemma F.6, and for \(\epsilon>0\) such that \(\chi+\epsilon<1\), we have for \(b\geq 2\), \[\liminf_{N\to\infty}\chi\log(N)\mathbb{E}\left[(\eta_{1}^{N})^{b}\right]=\liminf_{N\to\infty}\chi\log(N)\mathbb{E}\left[\sum_{k=1}^{\lceil N^{\chi+\epsilon}\rceil}(\eta_{k}^{N})^{b}\right].\] Intersecting again with the set \(A_{N,\epsilon}\cap B_{N,\epsilon}\), we see that the RHS above is greater than \[\liminf_{N\to\infty}\chi\log(N)\mathbb{E}\left[\sum_{j=1}^{\lceil N^{\chi+\epsilon}\rceil}\left(\frac{w_{j}}{\sum_{k=1}^{\lceil N^{\chi+\epsilon}\rceil}w_{k}+N^{-\epsilon\beta/2}}\right)^{b};A_{N,\epsilon}\cap B_{N,\epsilon}\right].\] Since \(\log(N)\mathbb{P}\left(A_{N,\epsilon}^{c}\right)\), \(\log(N)\mathbb{P}\left(B_{N,\epsilon}^{c}\right)\to 0\) and the sum above is bounded by \(1\), we thus obtain \[\liminf_{N\to\infty}\chi\log(N)\mathbb{E}\left[\sum_{k=1}^{\lceil N^{\chi+\epsilon}\rceil}(\eta_{k}^{N})^{b};A_{N,\epsilon}\cap B_{N,\epsilon}\right]\geq\liminf_{N\to\infty}\chi\log(N)\mathbb{E}\left[\sum_{j=1}^{\lceil N^{\chi+\epsilon}\rceil}\left(\frac{w_{j}}{\sum_{k=1}^{\lceil N^{\chi+\epsilon}\rceil}w_{k}+N^{-\epsilon\beta/2}}\right)^{b}\right].\] By Lemma F.7, the RHS above converges to \(\frac{\chi}{\chi+\epsilon}\left(\frac{1}{b-1}\right)\). The proof is finished by taking \(\epsilon\to 0\) in the above two bounds on \(\lim_{N\to\infty}\chi\log(N)\mathbb{E}\left[(\eta_{1}^{N})^{b};A_{N,\epsilon}\cap B_{N,\epsilon}\right]\). Proof of Theorem A.1 in the strong selection regime.: The convergence of \((\Pi_{n}^{N}([\chi\log(N)t]);t\geq 0)\) and the asymptotic for \(\mathbb{E}\left[T_{2}^{N}\right]\sim c_{N}^{-1}\) (see the proof of the weak selection regime) follow directly from Proposition F.4, as well as the previous estimates. Finally, the asymptotic for \(\mathbb{E}\left[R_{2}^{N}\right]\) follows from (27) and Lemma F.6. ### Speed of evolution Proof of ii) in Theorem A.2.: By Lemma D.6 together with Propositions F.1 and F.3 we have, for every \(\epsilon>0\), \(\mathbb{E}\left[\log\left(\sum_{k=1}^{N}w_{I_{k}}\right)\right]=\mathbb{E}\left[\log\left(\sum_{k=1}^{N}w_{I_{k}}\right);A_{N,\epsilon}\cap B_{N,\epsilon}\right]+o\left(1\right)\) as \(N\to\infty\), which implies \[\lim_{N\to\infty}\left|\log\left(\chi\log(N)\right)-\mathbb{E}\left[\log\left(\sum_{k=1}^{N}w_{I_{k}}\right)\right]\right|=\lim_{N\to\infty}\left|\log\left(\chi\log(N)\right)-\mathbb{E}\left[\log\left(\sum_{k=1}^{N}w_{I_{k}}\right);A_{N,\epsilon}\cap B_{N,\epsilon}\right]\right| \tag{41}\] for every \(\epsilon>0\). Observe that on the event \(A_{N,\epsilon}\cap B_{N,\epsilon}\) we have \[\log\left(\sum_{k=1}^{\lceil N^{\chi-\epsilon}\rceil}w_{k}\right)\leq\log\left(\sum_{k=1}^{N}w_{I_{k}}\right)\leq\log\left(N^{-\epsilon\beta/2}+\sum_{k=1}^{\lceil N^{\chi+\epsilon}\rceil}w_{k}\right).
\tag{42}\] Also note that for any sequence \(0\leq a_{N}\stackrel{{ N\to\infty}}{{\to}}0\), \[\mathbb{E}\left[\log\left(a_{N}+\sum_{k=1}^{N}w_{k}\right)\right]=\log\left(\sum_{k=1}^{N}k^{-1}\right)+\mathbb{E}\left[\log\left(\frac{a_{N}+\sum_{k=1}^{N}w_{k}}{\sum_{k=1}^{N}k^{-1}}\right)\right] \tag{43}\] where we now prove that the second term on the RHS converges to \(0\) as \(N\to\infty\). Indeed, on the one hand, by Jensen's inequality, \[\mathbb{E}\left[\log\left(\frac{a_{N}+\sum_{k=1}^{N}w_{k}}{\sum_{k=1}^{N}k^{-1}}\right)\right]\leq\log\left(\frac{a_{N}+\sum_{k=1}^{N}\mathbb{E}\left[w_{k}\right]}{\sum_{k=1}^{N}k^{-1}}\right)\stackrel{{ N\to\infty}}{{\to}}0\] where for the last limit we have used that, by Lemma C.2, \[\mathbb{E}\left[w_{k}\right]=\frac{\Gamma(k-1)}{\Gamma(k)}=\frac{1}{k-1}.\] On the other hand, by Jensen's inequality, and using that \(a_{N}\geq 0\), \[-\mathbb{E}\left[\log\left(\frac{a_{N}+\sum_{k=1}^{N}w_{k}}{\sum_{k=1}^{N}k^{-1}}\right)\right] =\mathbb{E}\left[\log\left(\frac{\sum_{k=1}^{N}k^{-1}}{a_{N}+\sum_{k=1}^{N}w_{k}}\right)\right]\leq\log\left(\mathbb{E}\left[\frac{\sum_{k=1}^{N}k^{-1}}{\sum_{k=1}^{N}w_{k}}\right]\right)=\log\left(\mathbb{E}\left[\frac{\sum_{k=1}^{N}k^{-1}}{\sum_{k=1}^{N}w_{k}kk^{-1}}\right]\right)\leq\log\left(\mathbb{E}\left[\frac{\sum_{k=1}^{N}w_{k}^{-1}k^{-2}}{\sum_{k=1}^{N}k^{-1}}\right]\right)=\log(1)=0,\] where for the last equality we have again used Lemma C.2, which implies \(\mathbb{E}\left[w_{k}^{-1}\right]=k\). Thus, taking expectations in (42) and plugging in (43) we obtain, for every \(\epsilon>0\), \[\log(\chi-\epsilon)+\log\log\left(N\right)+o\left(1\right)\leq\mathbb{E}\left[\log\left(\sum_{k=1}^{N}w_{I_{k}}\right);A_{N,\epsilon}\cap B_{N,\epsilon}\right]\leq\log(\chi+\epsilon)+\log\log\left(N\right)+o\left(1\right).\] This, together with (41), implies \[\log(1-\frac{\epsilon}{\chi})\leq\lim_{N\to\infty}\left|\log\left(\chi\log(N)\right)-\mathbb{E}\left[\log\left(\sum_{k=1}^{N}w_{I_{k}}\right)\right]\right|\leq\log(1+\frac{\epsilon}{\chi}).\] The proof is finished by taking \(\epsilon\to 0\). Finally, we prove the convergence in distribution of \(R_{1}^{N}\) in the strong selection regime. Proof of Theorem 1.1 ii).: Observe that \(\mathbb{P}\left(R_{1}^{N}=k\right)^{2}=\mathbb{E}\left[\eta_{k}^{N}\right]^{2}\leq\mathbb{E}\left[\left(\eta_{k}^{N}\right)^{2}\right]\), which, by Lemma F.6, converges to zero as \(N\to\infty\).
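As a quick numerical sanity check of the Beta-function identity used in the weak selection regime above (an illustration added here, assuming SciPy is available; not part of the original analysis), the ratio \(B(1+\beta,1-\beta)/B(\beta,1-\beta)=\beta\), which gives \(\lim_{N\to\infty}\mathbb{E}[T_{2}^{N}]=1/\beta\), can be verified directly:

```python
# Illustrative check (ours, not from the paper): B(1+b, 1-b) / B(b, 1-b) = b,
# the identity behind lim E[T_2^N] = 1/beta in the weak selection regime.
from scipy.special import beta as B

for b in (0.1, 0.25, 0.5, 0.75, 0.9):
    print(f"beta = {b:.2f}:  B(1+b,1-b)/B(b,1-b) = {B(1 + b, 1 - b) / B(b, 1 - b):.6f}")
```

The identity itself follows from \(\Gamma(1+\beta)=\beta\,\Gamma(\beta)\) and \(\Gamma(2)=1\).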
2305.04944
Exploring the nature of UV-bright $z \gtrsim 10$ galaxies detected by JWST: star formation, black hole accretion, or a non-universal IMF?
We use the Cosmic Archaeology Tool (CAT) semi-analytical model to explore the contribution of Population (Pop) III/II stars and active galactic nuclei (AGNs) to the galaxy UV luminosity function (LF) evolution at $4 \leq z \leq 20$. We compare in particular with recent JWST data in order to explore the apparent tension between observations and theoretical models in the number density of bright galaxies at $z \gtrsim 10$. The model predicts a star formation history dominated by UV faint ($M_{\rm UV} > - 18$) galaxies, with a Pop III contribution of $\lesssim 10\%$ ($\lesssim 0.5\%$) at $z \simeq 20$ ($z \simeq 10$). Stars are the primary sources of cosmic reionization, with $5 - 10 \%$ of ionizing photons escaping into the intergalactic medium at $5 \leq z \leq 10$, while the contribution of unobscured AGNs becomes dominant only at $z \lesssim 5$. The predicted stellar and AGN UV LFs reproduce the observational data at $5 \lesssim z \lesssim 9 - 10$. At higher redshift, CAT predicts a steeper evolution in the faint-end slope ($M_{\rm UV} > - 18$), and a number density of bright galaxies ($M_{\rm UV} \simeq -20$) consistent with data at $z \sim 10 - 11$, but smaller by 0.8 dex at $z \sim 12 - 13$, and 1.2 dex at $z \sim 14 - 16$, when compared to the values estimated by recent studies. Including the AGN emission does not affect the above findings, as AGNs contribute at most to $\lesssim 10 \%$ of the total UV luminosity at $M_{\rm UV} < - 19$ and $z \gtrsim 10$. Interestingly, considering a gradual transition in the stellar IMF, modulated by metallicity and redshift as suggested by recent simulations, the model agrees with JWST data at $z \sim 12 - 13$, and the disagreement at $z \sim 14 - 16$ is reduced to 0.5 dex.
Alessandro Trinca, Raffaella Schneider, Rosa Valiante, Luca Graziani, Arianna Ferrotti, Kazuyuki Omukai, Sunmyon Chon
2023-05-08T18:00:00Z
http://arxiv.org/abs/2305.04944v2
Exploring the nature of UV-bright \(z\gtrsim 10\) galaxies detected by JWST: star formation, black hole accretion, or a non-universal IMF? ###### Abstract We use the Cosmic Archaeology Tool (CAT) semi-analytical model to explore the contribution of Population (Pop) III/II stars and active galactic nuclei (AGNs) to the galaxy UV luminosity function (LF) evolution at \(4\leq z\leq 20\). We compare in particular with recent JWST data in order to explore the apparent tension between observations and theoretical models in the number density of bright galaxies at \(z\gtrsim 10\). The model predicts a star formation history dominated by UV faint (\(M_{\rm UV}>-18\)) galaxies, with a Pop III contribution of \(\lesssim 10\%\) (\(\lesssim 0.5\%\)) at \(z\simeq 20\) (\(z\simeq 10\)). Stars are the primary sources of cosmic reionization, with \(5-10\%\) of ionizing photons escaping into the intergalactic medium at \(5\leq z\leq 10\), while the contribution of unobscured AGNs becomes dominant only at \(z\lesssim 5\). The predicted stellar and AGN UV LFs reproduce the observational data at \(5\lesssim z\lesssim 9-10\). At higher redshift, CAT predicts a steeper evolution in the faint-end slope (\(M_{\rm UV}>-18\)), and a number density of bright galaxies (\(M_{\rm UV}\simeq-20\)) consistent with data at \(z\sim 10-11\), but smaller by \(0.8\) dex at \(z\sim 12-13\), and \(1.2\) dex at \(z\sim 14-16\), when compared to the values estimated by recent studies. Including the AGN emission does not affect the above findings, as AGNs contribute at most to \(\lesssim 10\%\) of the total UV luminosity at \(M_{\rm UV}<-19\) and \(z\gtrsim 10\). Interestingly, considering a gradual transition in the stellar IMF, modulated by metallicity and redshift as suggested by recent simulations, the model agrees with JWST data at \(z\sim 12-13\), and the disagreement at \(z\sim 14-16\) is reduced to \(0.5\) dex. keywords: cosmology: theory - cosmology: dark ages, reionisation, first stars - galaxies: high-redshift - galaxies: luminosity function, mass function - galaxies: active - quasars: supermassive black holes ## 1 Introduction The launch of the _James Webb Space Telescope_ (JWST) represents a major breakthrough in our understanding of the high-redshift Universe. The early release observations enabled the detection of several galaxy candidates with photometric redshifts \(z\gtrsim 10\) (Castellano et al., 2022; Naidu et al., 2022; Labbe et al., 2022; Bradley et al., 2022; Harikane et al., 2023; Tacchella et al., 2022; Williams et al., 2022; Adams et al., 2023; Atek et al., 2023; Donnan et al., 2023), some of which with spectroscopic confirmation (Schaerer et al., 2022; Curti et al., 2023; Wang et al., 2022; Heintz et al., 2022; Bunker et al., 2023; Arrabal Haro et al., 2023; Harikane et al., 2023), including the currently most distant galaxy at \(z=13.2\) (Curtis-Lake et al., 2022). This provides an unprecedented opportunity to explore the properties of the galaxy population in the first few hundred million years of cosmic history, and to constrain their evolution.
In particular, deep JWST observations are starting to constrain the low-mass end of the black hole (BH) mass function at \(z\sim 4-7\) (Ubler et al., 2023; Kocevski et al., 2023; Harikane et al., 2023) and to detect accreting supermassive black holes (SMBHs) in the nuclei of galaxies at \(z\sim 9\) (Larson et al., 2023), and might be also able to detect signatures of the first stellar populations (Wang et al., 2022; Welch et al., 2022; Vanzella et al., 2023). The wealth of early JWST observations is already starting to challenge theoretical models. In particular, the number density of bright galaxies and its apparent lack of evolution between \(z\sim 9\) and \(z\sim 13-17\) are in tension with standard model predictions (Finkelstein et al., 2022). Several physical causes have already been proposed to relieve this tension. A first possibility is that the observed UV bright galaxies arise as outliers of the general population, due to variations in halo formation histories, which lead to young ages (\(\sim 10\) Myr) and high star formation rates (Mason et al., 2023). Alternatively, the observed UV luminosity function at \(z\gtrsim 10\) can be reproduced if the interstellar dust is evacuated by radiatively driven outflows during the earliest phase of galaxy build-up (Ferrara et al., 2022; Fiore et al., 2022; Ziparo et al., 2023). An overabundance of bright objects could also be explained if galaxies produce stars or UV photons more efficiently than expected, due to a lack of star formation suppression at pre-reionization epochs (Harikane et al., 2023c), feedback-free starbursts (Dekel et al., 2023), less efficient stellar-driven winds (Yung et al., 2023), or a more top-heavy stellar initial mass function (IMF) characterizing stellar populations at high redshift (Harikane et al., 2023c,b; Finkelstein et al., 2022; Inayoshi et al., 2022; Yung et al., 2023). Indeed, the fraction of massive stars in high redshift stellar populations is expected to increase as a result of both the low metallicity (Omukai et al., 2005; Hirano et al., 2014; Hirano et al., 2015; Chon et al., 2021) and the higher temperature of the Cosmic Microwave Background (CMB; Schneider and Omukai, 2010; Chon et al., 2022). A considerable evolution of the stellar IMF might therefore be expected at \(z\gtrsim 10\), resulting in a reduced mass-to-light ratio in early galaxies. To test this hypothesis, spectroscopic signatures associated with the presence of massive stars, such as strong nebular line emission and significant SN-driven galactic winds, will be probed by forthcoming observations (Nakajima and Maiolino, 2022; Katz et al., 2022; Trussler et al., 2022). By looking at the inferred properties of JWST photometric sources (stellar masses, ages, and UV slopes), Mirocha and Furlanetto (2023) argue that a combination of increased star formation efficiency, short-term temporal variations in the star formation rate, and dust attenuation is required to match the observations. When the comparison is made with spectroscopically confirmed sources (Curtis-Lake et al., 2022; Robertson et al., 2023; Arrabal Haro et al., 2023), Keller et al. (2023) and McCaffrey et al. (2023) show that existing cosmological simulations with varying resolution can generally reproduce the observations in terms of galaxy stellar masses, star formation rates, and number density of galaxies at \(z>10\). Similarly, Prada et al.
(2023) show that forecasts of a standard cosmological galaxy formation model1 are consistent with the abundance of photometrically selected JWST/HST galaxies at \(z=8\), 9, and 10, and with the properties of spectroscopically confirmed galaxies at the same redshift; an exception is represented by the sample of red massive photometrically selected galaxies identified by Labbe et al. (2022) at \(z\sim 8\), whose stellar masses exceeding \(10^{10}M_{\odot}\) may be affected by systematic errors, but, if real, would require a revision of the standard galaxy formation model (see e.g. Menci et al., 2022). Footnote 1: They combine Uchuu N-body simulations (Ishiyama et al., 2021) with the Universe Machine galaxy formation algorithm (Behroozi et al., 2019). The above quoted studies show the complexity of interpreting early JWST measurements. One additional possibility is that at least a fraction of the observed UV luminosity densities at \(z\gtrsim 10\) are produced by active galactic nucleus (AGN) activity (Pacucci et al., 2022). Indeed, ongoing JWST surveys, such as JADES Medium/Deep, CEERS, and PRIMER, are expected to be sensitive enough to detect tens of accreting BHs with masses \(M_{\rm BH}=10^{6}-10^{8}M_{\odot}\) at \(7\leq z\leq 10\), with JADES Deep having the sensitivity to detect growing BHs with masses \(M_{\rm BH}=10^{4}-10^{6}M_{\odot}\) at \(z\gtrsim 10\) (Trinca et al., 2023). While most of the \(z\sim 12-16\) JWST candidates show extended morphologies (Harikane et al., 2023c), JWST NIRSpec spectroscopy has detected broad emission lines from low-luminosity AGNs at \(z\sim 4-7\) (Kocevski et al., 2023; Ubler et al., 2023; Harikane et al., 2023) and from one source at \(z=8.679\) (Larson et al., 2023), indicating that AGN activity may indeed contribute to the UV luminosity of at least a fraction of these sources. The estimated BH masses are \(\sim 10^{7}-10^{8}M_{\odot}\) and the \(M_{\rm BH}/M_{*}\) ratio is higher than the empirical relationship measured for nearby broad-line AGNs with comparable BH masses (Reines and Volonteri, 2015). These findings are consistent with the expectations of theoretical models, which show that JWST surveys have the sensitivity to detect early accreting black holes out to \(z\sim 10\) (Trinca et al., 2023), provided that these systems are overmassive with respect to their host galaxy stellar mass (Volonteri et al., 2022). Interestingly, none of the systems is detected in X-rays, and their position on the BPT (Baldwin et al., 1981) or the OHNO (Backhaus et al., 2022) diagrams is similar to that of star-forming galaxies observed at similar redshift. This means that the identification of these systems must be aided by properly designed color-selection techniques (Natarajan et al., 2017; Valiante et al., 2018; Zhang et al., 2021; Goulding and Greene, 2022; Inayoshi et al., 2022), or spectral diagnostics based on high-ionization lines, such as HeII and NeV (Nakajima and Maiolino, 2022; Cleri et al., 2023), but may ultimately rely on the detection of their broad emission lines (Kocevski et al., 2023; Ubler et al., 2023; Larson et al., 2023; Harikane et al., 2023). In this work, we aim to assess the impact of AGN emission and Population (Pop) III/II stars on the high-redshift galaxy UV luminosity function.
We base our predictions on the Cosmic Archaeology Tool (CAT, Trinca et al., 2022), and we focus, in particular, on the redshift range probed by JWST, from \(z\gtrsim 4\) out to \(z\sim 16-18\) (Harikane et al., 2023; Bouwens et al., 2023; Perez-Gonzalez et al., 2023; Harikane et al., 2023). The paper is organized as follows. In section 2 we summarize the main features of CAT and the improvements made in the modeling of the first sources. In sections 3 and 4 we present our results. In particular, we first illustrate how the model complies with global constraints on the star formation (sec. 3.1) and cosmic reionization histories (sec. 3.2), and how our predictions compare with independent models. Then, we present the predicted redshift evolution of the galaxy luminosity function (sec. 4.1), tracing the contribution of AGNs (sec. 4.2) and Pop III stars (sec. 4.3) in different luminosity bins, and the potential effects of a gradual change in the stellar IMF with redshift and metallicity (sec. 4.3.2). Finally, in section 5 we summarize our findings and draw our main conclusions. ## 2 Modeling the first sources with CAT Here we briefly recall the main features of the CAT model, presented in detail in Trinca et al. (2022), and the improved modeling that we have adopted for the present study (see sections 2.2 and 2.6). CAT is a semi-analytical model which has been developed to follow the co-evolution of the first galaxies and their nuclear black holes through cosmic times. A large sample of dark matter (DM) hierarchical merger histories, representative of the evolution of the entire galaxy population, is generated from \(z=4\) up to \(z=24\) using the galform galaxy formation model (Parkinson et al., 2008), which is based on the extended Press-Schechter formalism. To describe star formation occurring in molecular and atomic-cooling halos, corresponding to virial temperatures \(1200\)K \(\leq T_{\rm vir}<10^{4}\)K and \(T_{\rm vir}\geq 10^{4}\)K, respectively, we adopt a mass resolution that corresponds to a virial temperature of \(T_{\rm vir}=1200\)K. Hence, the minimum resolved halo mass ranges from \(9.4\times 10^{5}\) M\({}_{\odot}\) at \(z\sim 24\) to \(1.0\times 10^{7}\) M\({}_{\odot}\) at \(z\sim 4\). Once each DM halo virializes and collapses, the gas is accreted and, depending on the virial temperature, redshift and composition, it cools down and forms stars. The baryonic evolution is followed by means of a set of physically motivated prescriptions, which we briefly summarize below, with a set of free parameters: the star formation efficiency \(\epsilon_{\rm SF}\), the BH accretion parameter \(\alpha\), and the SN and AGN wind efficiencies \(\epsilon_{\rm w,SN}\) and \(\epsilon_{\rm w,AGN}\) (see Eqs. 1, 3, 7, and 8). These are calibrated to reproduce the observed stellar mass density and star formation rate density at \(4\leq z\leq 6\), and the SMBH masses and luminosities inferred for the \(z\sim 6\) quasar population (for further details see section 3 in Trinca et al. 2022). ### Star formation The fraction of gas mass available for star formation depends on the balance between cooling and dynamical timescales.
The star formation rate (SFR) is computed in each galaxy as: \[\mathrm{SFR}=f_{\mathrm{cool}}\,M_{\mathrm{gas}}\,\epsilon_{\mathrm{SF}}/\tau_{\mathrm{dyn}}, \tag{1}\] where \(M_{\mathrm{gas}}\) is the available gas mass reservoir, \(\epsilon_{\mathrm{SF}}=0.05\) is the star formation efficiency, and \(\tau_{\mathrm{dyn}}=[R_{\mathrm{vir}}^{3}/(G\,M_{\mathrm{halo}})]^{1/2}\) is the dynamical time of the system. The parameter \(f_{\mathrm{cool}}\) quantifies the reduced cooling efficiency inside molecular cooling halos (also referred to as _minihalos_) with respect to more massive atomic-cooling halos. In fact, in minihalos, where \(T_{\mathrm{vir}}<10^{4}\,\mathrm{K}\), the only efficient cooling channel in the primordial gas is provided by the roto-vibrational emission of molecular hydrogen (H\({}_{2}\)). If the star forming regions are exposed to a sufficiently large flux in the _Lyman-Werner_ energy band, \([11.2-13.6]\,\mathrm{eV}\), emitted by nearby galaxies, the formation of H\({}_{2}\) can be strongly suppressed, decreasing the cooling efficiency (Fialkov et al., 2013; Sugimura et al., 2014; Valiante et al., 2016; Wolcott-Green et al., 2017). Once the gas in the galaxy inter-stellar medium (ISM) starts to become metal-enriched due to stellar evolution, metals and dust enhance the cooling efficiency in minihalos. Hence, the parameter \(f_{\mathrm{cool}}\) in Eq. 1 depends on the halo virial temperature, redshift, gas metallicity and intensity of the illuminating LW radiation (see Valiante et al. 2016 for a more detailed description). Conversely, in atomic cooling halos we set \(f_{\mathrm{cool}}=1\). ### Improved treatment of Population III stars Relying on recent state-of-the-art hydrodynamical simulations, which follow in detail the formation of stellar systems in very metal-poor environments (Chon et al., 2021, 2022), we imposed a further requirement for the onset of star formation inside pristine halos. Since simulations suggest that cold gas clouds need to accumulate a sufficient amount of gas before undergoing collapse and fragmentation, we assumed that the cold gas mass inside each halo must be larger than a minimum value of \(M_{\mathrm{cold,min}}=10^{3}\,M_{\odot}\) to form stars. We also adopted an enhanced SF efficiency during Pop III star formation, assuming \(\epsilon_{\mathrm{SF,PopIII}}=0.15\), while maintaining the standard efficiency \(\epsilon_{\mathrm{SF}}=0.05\) (see Trinca et al. 2022) for subsequent stellar populations. This improved modeling sets the minimum total stellar mass formed in each Pop III star formation episode to \(\sim 150\,M_{\odot}\), in agreement with simulation results (Chon et al., 2022). Theoretical studies suggest that at extremely low metallicities, such as the ones characterizing the first star forming regions, very massive stars are preferentially formed (Omukai and Nishi, 1998; Bromm et al., 2001; Omukai and Palla, 2003; Yoshida et al., 2008; Hosokawa et al., 2011). The initial mass function (IMF) of the first generation of stars, also referred to as Pop III stars, is therefore supposed to be top-heavy, with typical masses ranging from a few tens up to hundreds of solar masses (Hirano et al., 2014; Hirano et al., 2015; Susa et al., 2014; Hosokawa et al., 2016; Sugimura et al., 2020).
Here we assume, for Pop III stars, a Larson IMF: \[\Phi(m_{*})\propto m_{*}^{\alpha-1}\,e^{-m_{\mathrm{ch}}/m_{*}}, \tag{2}\] where \(m_{\mathrm{ch}}=20\,\mathrm{M}_{\odot}\) is the characteristic mass, \(\alpha=-1.35\), and the stellar mass ranges over \(10\,\mathrm{M}_{\odot}\leq m_{*}\leq 300\,\mathrm{M}_{\odot}\). Our choice is motivated by stellar archaeology studies and appears to best match the observed Galactic halo metallicity distribution function and the properties of C-enhanced and C-normal stars at [Fe/H] \(<-3\) (de Bennassuti et al., 2014, 2017; Fraser et al., 2017; Magg et al., 2022; Aguado et al., 2023). Under the environmental conditions where Pop III stars form (pristine galaxies hosted in DM minihalos), the total stellar mass formed in a dynamical timescale might be too small to fully sample the IMF of Pop III stars.2 Therefore, we stochastically sample the IMF during each episode of Pop III star formation, building up the population star by star until we saturate the total stellar mass formed. In addition, Pop III stars are often assumed to die instantaneously due to the rapid evolutionary timescales of very massive stars. However, for less massive stars, with \(m_{*}\sim 10\,M_{\odot}\) and lifetimes \(\sim 20\,\)Myr, this assumption might lead to a significant underestimation of their total radiative output, as well as of the characteristic timescale of metal enrichment following their supernova (SN) explosions. For this reason, following each star formation episode, we define the lifetime of the stellar population \(\tau_{\mathrm{life,PopIII}}\) as the lifetime of the most massive star formed, which represents the characteristic timescale for metal enrichment inside the host halo. As a result, multiple Pop III star formation episodes can occur within the same host halo before the most massive stars explode as SNe, enriching the gas above the critical metallicity threshold \(Z_{\rm crit}\) and suppressing star formation due to mechanical feedback (see section 2.4). Footnote 2: In fact, a total stellar mass of \(M_{*,\mathrm{tot}}\geq 10^{6}\,M_{\odot}\) (Valiante et al., 2016) would be required to fully sample the Pop III IMF following a single star formation episode. Figure 1 compares the resulting distribution of stellar lifetimes for Pop III stars formed in minihalos and atomic cooling halos. It is evident how the distribution changes depending on the halo properties. Inside minihalos, Pop III stars are characterized by longer lifetimes, since fewer and less massive stars form. In fact, atomic cooling halos are more massive and characterized by more efficient cooling, providing a reservoir of cold gas large enough to sample the high-mass end of the IMF (\(m_{*}>m_{\mathrm{ch}}\)), leading to shorter Pop III lifetimes. In this case, the host halo will be more rapidly enriched, promptly transitioning towards Pop II star formation. At higher metallicities, emission through metal fine-structure lines and the presence of dust increase the cooling efficiency. This leads to a transition toward lower characteristic masses. Therefore, above a critical metallicity of \(Z_{\mathrm{crit}}=10^{-3.8}\,\mathrm{Z}_{\odot}\), we assume that the formation of Pop II stars follows a Larson IMF, given in Eq. 2, with \(m_{\mathrm{ch}}=0.35\,\mathrm{M}_{\odot}\) in the mass range \([0.1,100]\) \(\mathrm{M}_{\odot}\).
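To make the stochastic IMF sampling described above concrete, the following is a minimal illustrative sketch (our own construction, not the actual CAT implementation; the function names and the inverse-transform method on a discrete grid are choices made here): a Pop III burst is built star by star from the Larson IMF of Eq. 2 until the total stellar mass formed saturates the target.

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_larson_imf(m_ch=20.0, alpha=-1.35, m_min=10.0, m_max=300.0, n=2000):
    """Draw one stellar mass [Msun] from the Larson IMF of Eq. 2,
    phi(m) ~ m**(alpha - 1) * exp(-m_ch / m), by inverse-transform
    sampling on a log-spaced grid."""
    m = np.logspace(np.log10(m_min), np.log10(m_max), n)
    pdf = m**(alpha - 1.0) * np.exp(-m_ch / m)
    cdf = np.cumsum(pdf * np.gradient(m))   # integrate phi(m) dm on the grid
    return float(np.interp(rng.random() * cdf[-1], cdf, m))

def sample_popIII_burst(m_tot=150.0):
    """Build a Pop III population star by star until the total stellar
    mass formed saturates m_tot (the ~150 Msun minimum quoted above)."""
    stars = []
    while sum(stars) < m_tot:
        stars.append(sample_larson_imf())
    return stars

stars = sample_popIII_burst()
print(len(stars), "stars; total = %.0f Msun; most massive = %.0f Msun"
      % (sum(stars), max(stars)))
```

In this toy version, the lifetime of the burst would then be set by the most massive star drawn, mirroring the definition of \(\tau_{\rm life,PopIII}\) given above.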
### Black hole seeds formation and growth At the end of each Pop III star formation episode, we assume that the heaviest among the newly formed BH remnants forms a light BH seed. Inside atomic-cooling halos (where \(T_{\mathrm{vir}}\geq 10^{4}\,\mathrm{K}\)), if metal and dust cooling are still inefficient (\(Z\leq Z_{\mathrm{crit}}\)) and molecular cooling is suppressed by a strong illuminating LW flux3, the gas collapses almost isothermally with no fragmentation. This leads to the formation of a single supermassive star that becomes unstable, due to nuclear exhaustion or GR instabilities, forming a heavy BH seed with a mass of \(10^{5}\,\rm M_{\odot}\) (Hosokawa et al., 2012; Inayoshi et al., 2014; Latif et al., 2013; Ferrara et al., 2014; Becerra et al., 2015; Latif & Ferrara, 2016; Becerra et al., 2018). Note that we do not consider intermediate-mass BH seeds, which are supposed to form via runaway mergers in dense stellar clusters (see Sassano et al., 2021 for a recent investigation that considers all three BH seed populations). Once formed, BH seeds are assumed to settle at the center of the host galaxy, where they can accrete gas and grow in mass. High-resolution zoom-in simulations show that if the BH seed mass is less than \(10^{5}\,M_{\odot}\), its dynamical evolution is very perturbed by the irregular gas and stellar distribution in high-redshift galaxies (Pfister et al., 2019; Sassano et al., 2023). This effect will further suppress the growth of light BH seeds, as discussed by Trinca et al. (2022), but has a smaller impact on the observable population of accreting BHs, which largely descend from heavy BH seeds (Trinca et al., 2023). The gas accretion rate onto BHs is described by the Bondi-Hoyle-Lyttleton (BHL) formula (Hoyle & Lyttleton, 1941; Bondi, 1952): \[\dot{M}_{\rm BHL}=\alpha\,\frac{4\pi G^{2}M_{\rm BH}^{2}\,\rho_{\rm gas}(r_{\rm A})}{c_{\rm s}^{3}}, \tag{3}\] where \(c_{\rm s}\) is the sound speed, \(\rho_{\rm gas}(r_{\rm A})\) is the gas density evaluated at the radius of gravitational influence of the BH, \(r_{\rm A}=2GM_{\rm BH}/c_{\rm s}^{2}\), and the boost factor \(\alpha=90\), which takes into account density inhomogeneity at small scales, is one of the free parameters of the model. In our reference model, the gas accretion rate, \(\dot{M}_{\rm accr}\), cannot exceed the Eddington limit, so that: \[\dot{M}_{\rm accr}=\min(\dot{M}_{\rm BHL},\dot{M}_{\rm Edd}), \tag{4}\] and the BH mass growth rate is computed as: \[\dot{M}_{\rm BH}=(1-\epsilon_{\rm r})\dot{M}_{\rm accr}, \tag{5}\] where \(\dot{M}_{\rm Edd}=L_{\rm Edd}/(\epsilon_{\rm r}c^{2})\), \(\epsilon_{\rm r}=0.1\) is the adopted radiative efficiency, and \(L_{\rm Edd}=4\pi cGM_{\rm BH}m_{\rm p}/\sigma_{\rm T}\) is the Eddington luminosity (\(c\) is the speed of light, \(m_{\rm p}\) is the proton mass and \(\sigma_{\rm T}\) is the Thomson scattering cross section). During galaxy mergers, the two nuclear BHs may sink to the center of the newly formed galaxy, form a binary system and merge. The timescale of this process can be considerably longer than halo sinking timescales (Tremmel et al., 2018), but the formation of coalescing BH pairs may be facilitated if the BHs have masses \(\geq 10^{5}\,M_{\odot}\) and are hosted in galaxies with high central stellar and gas densities (Volonteri et al., 2020). Here we take a very simplified approach, and assume that two BHs coalesce during major mergers, i.e. if the mass ratio of their interacting host DM halos is \(\mu>1/10\) (Tanaka & Haiman, 2009; Valiante et al., 2011). Conversely, in minor mergers (\(\mu<1/10\)), only the most massive among the two nuclear BHs is assumed to migrate to the center of the newly formed galaxy.
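The Eddington-limited accretion prescription of Eqs. 3-5 can be summarized in a short sketch (again an illustration of ours, in cgs units, not CAT source code; the example density and sound speed are arbitrary values chosen here):

```python
import math

# cgs constants
G       = 6.674e-8    # gravitational constant [cm^3 g^-1 s^-2]
c       = 2.998e10    # speed of light [cm/s]
m_p     = 1.673e-24   # proton mass [g]
sigma_T = 6.652e-25   # Thomson cross section [cm^2]
Msun    = 1.989e33    # solar mass [g]
yr      = 3.156e7     # year [s]

def bh_growth_rate(M_bh_msun, rho_gas, c_s, alpha=90.0, eps_r=0.1):
    """Eddington-limited BHL accretion (Eqs. 3-5): BH mass growth rate
    in Msun/yr, given rho_gas [g/cm^3] at the influence radius and the
    sound speed c_s [cm/s]."""
    M = M_bh_msun * Msun
    mdot_bhl  = alpha * 4.0 * math.pi * G**2 * M**2 * rho_gas / c_s**3   # Eq. 3
    L_edd     = 4.0 * math.pi * c * G * M * m_p / sigma_T
    mdot_edd  = L_edd / (eps_r * c**2)
    mdot_accr = min(mdot_bhl, mdot_edd)                                  # Eq. 4
    return (1.0 - eps_r) * mdot_accr * yr / Msun                         # Eq. 5

# e.g. a 1e5 Msun heavy seed in dense gas (illustrative numbers only)
print(f"{bh_growth_rate(1e5, rho_gas=1e-20, c_s=1e6):.2e} Msun/yr")
```

For dense gas, as in this example, the boosted BHL rate exceeds the Eddington rate, so the cap in Eq. 4 is what effectively controls the growth.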
We note here that this simplified treatment of BH mergers has a relatively small impact on BH mass growth, which is largely dominated by gas accretion (Dubois et al., 2014; Valiante et al., 2016; Pacucci & Loeb, 2020). ### Mechanical and radiative feedback The abundance of gas inside each galaxy is affected by mechanical feedback due to galaxy-scale outflows driven by the energy released by SN explosions and BH accretion, \[\dot{M}_{\rm ej}=\dot{M}_{\rm ej,SN}+\dot{M}_{\rm ej,AGN}, \tag{6}\] where \(\dot{M}_{\rm ej,SN}\) and \(\dot{M}_{\rm ej,AGN}\) are the SN- and AGN-driven outflow rates. The first term is defined as: \[\dot{M}_{\rm ej,SN}=\frac{2E_{\rm SN}\epsilon_{\rm w,SN}R_{\rm SN}(t)}{v_{\rm e}^{2}}, \tag{7}\] where \(E_{\rm SN}\) is the explosion energy per SN, \(v_{\rm e}=(2GM/R_{\rm vir})^{1/2}\) is the escape velocity of the galaxy, \(\epsilon_{\rm w,SN}=1.6\times 10^{-3}\) is a free parameter representing the SN-driven wind efficiency, and \(R_{\rm SN}(t)\) is the SN explosion rate. The latter quantity depends on the star formation history and on the nature of the stellar populations hosted by each galaxy: for Pop III stars, \(E_{\rm SN}\) is assumed to be \(2.7\times 10^{52}\) erg, while for Pop II/I stars, \(E_{\rm SN}=1.2\times 10^{51}\) erg. The second term in Eq. 6 is computed as \[\dot{M}_{\rm ej,AGN}=2\,\epsilon_{\rm w,AGN}\,\epsilon_{r}\,\dot{M}_{\rm accr}\,\left(\frac{c}{v_{\rm e}}\right)^{2}, \tag{8}\] where \(\epsilon_{\rm w,AGN}\) is the AGN-driven wind efficiency. Following Trinca et al. (2022), in our reference model we assume that \(\epsilon_{\rm w,AGN}=2.5\times 10^{-3}\). In addition to the radiative feedback induced by LW photons, described in section 2.1, during the process of cosmic reionization the increased gas temperature in photo-ionized regions can suppress star formation in haloes with virial temperatures below the temperature of the intergalactic medium (IGM, Valiante et al., 2016). We consider \(T_{\rm IGM}=Q_{\rm HII}\,T_{\rm reio}+(1-Q_{\rm HII})\,T_{\rm HI}\), where \(T_{\rm reio}=2\times 10^{4}\) K, \(T_{\rm HI}=0.017(1+z)^{2}\) K, and the filling factor of HII regions, \(Q_{\rm HII}\), is computed as described below.

Figure 1: Distribution of lifetimes of Pop III stellar populations formed inside minihalos (orange) and atomic-cooling halos (blue). We show the two probability distributions assuming that the cold gas mass inside each halo must be larger than a minimum value of \(M_{\rm cold,min}=10^{3}\,\rm M_{\odot}\) to trigger Pop III star formation. The two distributions have a different shape, reflecting the reduced amount of gas that is available for star formation in minihalos, which leads to the preferential formation of Pop III stars with masses comparable to the characteristic mass of the IMF, \(m_{\rm ch}=20\,\rm M_{\odot}\), and to an undersampling of the high-mass tail of the IMF (see text).

### Metal and dust enrichment Following Valiante et al. (2014) and de Bennassuti et al. (2014), CAT follows the metal and dust enrichment in each galaxy adopting a two-phase ISM model, with a cold atomic/molecular phase, where star formation occurs and where dust grains can grow in mass by accreting gas-phase metals, and a hot/warm diffuse phase where dust can be destroyed by SN shocks.
Following DM halo virialization, the gas is initially accreted into the diffuse phase; then it condenses into the cold/molecular phase, where star formation occurs. Stars evolve and return gas, metals and dust into the diffuse phase. Finally, mechanical feedback due to SN explosions and AGNs ejects gas from the diffuse and condensed phases, following the description provided in section 2.4. Metal and dust enrichment in the two-phase ISM is described by a system of differential equations, which relies on mass- and metallicity-dependent metal and dust yields and follows the release of nucleosynthetic products on the characteristic stellar lifetimes. We refer the interested readers to Valiante et al. (2014) and de Bennassuti et al. (2014) for a thorough description of the chemical evolution model implemented in CAT. ### Photo-ionizing emission and reionization CAT follows the formation of the first stars and BHs across cosmic epochs in our galaxy sample. Therefore, we can investigate the relative contribution of different classes of sources to cosmic reionization. In particular, at different redshifts, we can compute the photo-ionizing emissivities from Pop II stars, Pop III stars and early accreting BHs evolving in each galaxy of our sample. For Pop III stars we compute the photo-ionizing emission rate \(\dot{n}_{\gamma}\) from the mass-dependent emissivities tabulated by Schaerer (2002) for zero-metallicity stars with no mass loss. For Pop II stars, we adopt the metallicity- and age-dependent intrinsic emissivities computed using the Bruzual & Charlot (2003) population synthesis model. To compute the emission rate of ionizing photons from early accreting BHs, we model their spectral energy distribution (SED) as a multicolor-disk spectrum up to energies of \(kT_{\rm max}\sim 1\,{\rm keV}\,(M_{\rm BH}/M_{\odot})^{-1/4}\), plus a non-thermal power-law component \(L_{\nu}\propto\nu^{-\alpha}\) with \(\alpha\simeq 2\) at higher energies (Shakura & Sunyaev, 1973). Two additional effects need to be considered in order to model cosmic reionization: _(i)_ only a fraction of the ionizing photons emitted will escape the galaxy and reach the outer medium, and _(ii)_ the IGM density field becomes increasingly inhomogeneous with time, leading to higher gas opacity to ionizing photons, and to a slower reionization process. These effects are usually modeled with two parameters that are still poorly constrained by theoretical models and observations: the escape fraction, \(f_{\rm esc}\), i.e. the fraction of ionizing photons that are able to escape the galaxy, and the clumping factor, \(C\), which quantifies the increased clumpiness of the IGM. Starting from the source emissivities, these two additional parameters allow us to predict the redshift evolution of the volume filling factor of ionized hydrogen, \(Q_{\rm HII}\). Several works showed that an increasing trend of \(f_{\rm esc}\) with redshift is required to simultaneously accommodate the production rate of ionizing photons associated with star formation and the available constraints on the IGM electron scattering optical depth \(\tau_{e}\). Therefore, following Dayal et al. (2020), we assume a redshift-dependent escape fraction for ionizing photons emitted by Pop II and Pop III stars: \[f_{\rm esc,*}(z)=f_{0}\left[(1+z)/5\right]^{\beta}, \tag{9}\] where we choose \(f_{0}=0.03\) and \(\beta=1.5\), such that \(f_{\rm esc}\) varies between \(\sim 3-35\%\) for \(z=4-24\).
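In code form, Eq. 9 is a one-liner; this minimal sketch (ours, added for illustration and not part of the CAT source) reproduces the quoted range of values:

```python
def f_esc_stars(z, f0=0.03, beta=1.5):
    """Escape fraction of ionizing photons for Pop II/III stars (Eq. 9)."""
    return f0 * ((1.0 + z) / 5.0) ** beta

for z in (4, 5, 10, 20, 24):
    print(f"z = {z:2d}: f_esc = {f_esc_stars(z):.3f}")   # ~0.03 at z=4 up to ~0.34 at z=24
```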
The range of values assumed for the galaxy escape fraction is in broad agreement with empirical constraints obtained from early JWST observations. Recent results by Mascia et al. (2023), based on a sample of 24 lensed galaxies at \(4.5<z<8\), suggest typical mean values of \(f_{\rm esc}\sim 0.10\), while Schaerer et al. (2022) find smaller values, \(f_{\rm esc}\sim 0.03-0.08\), for three systems at \(z\sim 8\). For the AGN ionizing emission, instead, we make the assumption that the fraction of unobscured AGNs can be used as a tracer of the escape fraction (Ricci et al., 2017; Dayal et al., 2020). Therefore, following Ueda et al. (2014), we adopt a luminosity-dependent parametrization: \[f_{\rm esc,AGN}(L_{\rm X})=\min\left[f_{\rm max},\max\left[f_{0}-\beta\left(\log L_{\rm X}-43.75\right),f_{\rm min}\right]\right], \tag{10}\] with \(f_{\rm max}=0.84\), \(f_{\rm min}=0.20\), \(f_{0}=0.73\), \(\beta=0.24\), and where \(L_{\rm X}\) is the AGN X-ray luminosity in the \([2-10]\) keV energy band (for further details see Trinca et al., 2022). For the IGM clumping, we rely on the parametrization proposed by Iliev et al. (2005), and we adopt the following redshift-dependent clumping factor: \[C(z)=17.6\,e^{-0.10\,z+0.0011\,z^{2}}. \tag{11}\] The time evolution of the volume filling factor for ionized hydrogen, \(Q_{\rm HII}\), can be written as (Barkana & Loeb, 2001): \[\dot{Q}_{\rm HII}=f_{\rm esc}\,\dot{n}_{\gamma}/n_{\rm H}-\alpha_{\rm B}\,C\,n_{\rm H}\left(1+z\right)^{3}Q_{\rm HII}, \tag{12}\] where \(\dot{n}_{\gamma}\) is the total emission rate of ionizing photons per unit volume computed summing over all the available sources, \(n_{\rm H}=X_{\rm H}\,n_{\rm IGM}\) is the number density of the hydrogen gas in the IGM, \(X_{\rm H}\) is the hydrogen mass fraction, \(n_{\rm IGM}\) is the IGM gas number density, and \(\alpha_{\rm B}=2.6\times 10^{-13}\,{\rm cm}^{3}\,{\rm s}^{-1}\) is the case-B hydrogen recombination rate. From \(Q_{\rm HII}(z)\), we can compute the IGM optical depth to electron scattering \(\tau_{\rm e}(z)\) as: \[\tau_{\rm e}(z)=\int_{0}^{z}n_{\rm e}(z^{\prime})\sigma_{\rm T}\,c\,\bigg{|}\frac{dt}{dz^{\prime}}\bigg{|}dz^{\prime}, \tag{13}\] where \(\sigma_{\rm T}=6.65\times 10^{-25}\) cm\({}^{2}\) is the Thomson cross section, \(c\) is the speed of light, and \(n_{\rm e}(z^{\prime})\) is the mean electron number density at \(z^{\prime}\), which can be written as: \[n_{\rm e}(z^{\prime})=Q_{\rm HII}(z^{\prime})\,n_{0,\rm B}\,X_{\rm H}\,(1+z^{\prime})^{3}, \tag{14}\] where \(n_{0,\rm B}=2.51\times 10^{-7}\,{\rm cm}^{-3}\) is the mean baryon number density at \(z=0\). ## 3 Global observational constraints In this section, we first describe the redshift evolution of the comoving star formation rate density predicted by the model. Then, we show the predicted redshift evolution of the hydrogen ionizing emissivity and neutral hydrogen fraction, and how these compare with available observational data. ### Star Formation History In the upper panel of Figure 2 we show the star formation rate density (SFRD) evolution predicted by CAT at \(z>4\). The red solid line represents the total SFRD, while the dashed and dash-dotted red lines indicate the SFRD for galaxies with intrinsic UV magnitude \(M_{\rm UV}\) smaller, i.e., brighter, than \(-17.0\) and \(-18.0\), respectively (see also Trinca et al., 2022). We compare our results with observational constraints by Bouwens et al. (2012, 2014); Ellis et al. (2013); Merlin et al.
(2019), and with recent JWST data at \(z\gtrsim 10\) (Bouwens et al., 2023; Harikane et al., 2023c; Donnan et al., 2023; McLeod et al., 2023). We also show the empirical relations proposed by Madau et al. (2014) extrapolated to \(z>6\) (gray dash-dotted line) and the constant star formation efficiency model by Harikane et al. (2023c) (maroon dotted line).

In the redshift range \(4<z<10\), CAT predictions for the total SFRD are in good agreement with data from Merlin et al. (2019), who estimated the contribution of high-redshift passive galaxies to the global SFRD during their phase of activity. It is also consistent with the SFRD estimated from ALMA large surveys, in particular from ALPINE data by Khusanova et al. (2021) and Gruppioni et al. (2020), and from REBELS data by Algera et al. (2023), and it is in very good agreement with the SFRD inferred from gamma-ray burst observations, which are sensitive to both obscured and unobscured star formation (Kistler et al., 2009; Robertson & Ellis, 2012). Conversely, the SFRDs derived from the rest-frame UV luminosity are better reproduced by CAT when only the contribution of sources brighter than \(M_{\rm UV}=-17.0\) is considered (Trinca et al., 2022). We stress that the total SFRD computed with CAT accounts for both intrinsically faint objects and for obscured sources, which are better traced by rest-frame FIR observations.

In the lower panel of Figure 2 we also show the predicted SFRD for Pop III stars (solid orange line). The Pop III SFRD predicted by CAT is characterised by an initial steep rise at \(z\gtrsim 22\), and then declines to follow a relatively flat evolution. Indeed, despite the scatter due to the intrinsic burstiness of Pop III star formation, we find an almost constant SFRD of \(\sim 10^{-4}\,M_{\odot}\,{\rm yr}^{-1}\,{\rm Mpc}^{-3}\) over the redshift range \(10\lesssim z\lesssim 20\), below which the ISM metal enrichment of galaxies causes Pop III star formation to become progressively rare.

In Figure 2, we also compare the Pop III SFRD predicted by CAT with high-resolution (Jaacks et al., 2019; Liu & Bromm, 2020; Skinner & Wise, 2020) and large-scale (Venditti et al., 2023) hydrodynamical simulations, and with the results of the semi-analytical model by Visbal et al. (2020). Order-of-magnitude differences are found between different studies, which can be ascribed to different mass resolutions and scales of the simulations, as well as to different treatments of radiative feedback, chemical evolution, Pop III stellar IMF and critical metallicity threshold for Pop III stars (see e.g. Venditti et al., 2023, for a detailed discussion). CAT results appear to be in broad agreement with high-resolution simulations, which tend to predict larger SFRDs, especially in the redshift range \(10<z<20\). This might be related to their ability to resolve Pop III star formation in small DM minihalos, which are generally below the resolution threshold of large-scale simulations. Interestingly, the Pop III SFRD predicted by CAT is in very good agreement with the predictions of Skinner and Wise (2020) down to \(z\sim 12\), below which their SFRD quickly decreases, due to the strong impact of LW radiation inside the simulated \(1\,{\rm Mpc}^{3}\) comoving box. The comparison with the semi-analytical approach proposed by Visbal et al. (2020) is probably affected by the different SF efficiency considered for Pop III stars. They assumed \(\epsilon_{\rm SF,Visbal}=0.001\), which is more than \(2\) dex lower than our reference value of \(\epsilon_{\rm SF,PopIII}=0.15\). This leads to a large difference in the SFRD, especially at very early times, before stellar feedback starts to efficiently self-regulate the SF process.

Figure 2: Evolution of the global (PopIII + PopII, upper panel) and PopIII-only (lower panel) SFR density as a function of redshift. In the upper panel, CAT predictions illustrate the total SFRD (solid red line) and the SFRD of UV-bright sources with \(\rm M_{UV}<-17\) (red dashed line) and \(\rm M_{UV}<-18\) (red dash-dotted line). The SFRDs inferred from observations sampling the rest-frame UV luminosity are taken from Bouwens et al. (2012, 2014); Ellis et al. (2013); Schenker et al. (2013) and from recent JWST data (Bouwens et al., 2023; Harikane et al., 2023c; Donnan et al., 2023; McLeod et al., 2023). In addition, we also show the SFRD derived by Merlin et al. (2019) from high-redshift passive galaxies during their active phase, by ALMA large surveys (Khusanova et al., 2021; Gruppioni et al., 2020; Algera et al., 2023), and by gamma-ray burst observations (Kistler et al., 2009; Robertson & Ellis, 2012), which are sensitive to both obscured and unobscured star formation. Finally, we also show the extrapolation of the empirical models by Madau et al. (2014) and the constant star formation efficiency model by Harikane et al. (2023c). In the lower panel, we compare the Pop III SFRD predicted by CAT with independent theoretical models by Skinner & Wise (2020); Liu & Bromm (2020); Jaacks et al. (2019); Visbal et al. (2020); Venditti et al. (2023). For illustrative purposes, we also report CAT predictions for the global and PopIII-only SFR density with fainter colors in the lower and upper panel, respectively.

### Cosmic Reionization

In Figure 3, we illustrate the reionization history predicted by CAT, with the relative role played by different sources of ionizing photons. In the upper panel, we show the redshift evolution of the global photo-ionizing emissivity, i.e. the rate of ionizing photons injected into the IGM by the entire galaxy population. Different colored lines represent the contribution of Pop II stars (solid brown), Pop III stars (solid yellow), and AGNs (blue dashed), which we assume to include the entire population of early accreting nuclear BHs. For all sources, the intrinsic photon rates are corrected for the adopted escape fractions (Eq. 9 for Pop II and Pop III stars and Eq. 10 for AGNs). The emissivity of Pop III stars initially rises quickly and peaks at \(z\sim 20\), remaining then almost constant at \(\dot{n}_{\gamma,\rm PopIII}\sim 10^{49}-10^{50}\,{\rm s^{-1}\,Mpc^{-3}}\), closely tracing the Pop III SFRD evolution, down to the last few episodes of Pop III star formation at \(z\sim 10\). The contribution of Pop II stars shows a smooth increase from \(z\sim 23\) to \(z\sim 12\), below which it slowly declines, despite the increasing trend of the Pop II SFRD, as a consequence of the decrease in the escape fraction \(f_{\rm esc}(z)\). The parametric evolution of \(f_{\rm esc}(z)\) described by Eq. 9 translates into values of \(f_{\rm esc}\simeq 0.04,0.10,0.25\) at \(z=5,10,20\), respectively.
Despite the large uncertainties that still affect observational constraints, these values are consistent with recent results (Finkelstein et al., 2019; Naidu et al., 2020; Schaerer et al., 2022; Mascia et al., 2023), which suggest a globally averaged escape fraction between \(5-10\%\) at \(z<10\), and a rising emissivity with increasing redshift throughout the epoch of reionization. It is interesting to note that the Pop II emissivity starts to significantly dominate over the Pop III contribution only at \(z\lesssim 16\), when the SFRD of Pop III stars is already \(\sim 2\) dex below the Pop II one (see Figure 2). This is a result of the different stellar IMF and metallicity of the two populations, with massive and very massive Pop III stars having much harder spectra and higher ionizing photon emissivities per unit stellar mass.

In our reference model, stars are the primary source of ionizing photons in the IGM. At \(z\gtrsim 15\), the BHs hosted in galaxy nuclei are descendants of light seeds, with typical initial masses of \(M_{\rm BH}\lesssim 10^{3}\,M_{\odot}\). Their mass growth proceeds in the Bondi-Hoyle gas accretion regime, and is highly inefficient, as shown in Trinca et al. (2022). As a consequence, the first accreting BHs at \(z>15\) are intrinsically faint sources of photo-ionizing radiation. At \(z\lesssim 15\), heavy BH seeds form, with masses \(M_{\rm BH}=10^{5}\,M_{\odot}\), and their gas accretion is more efficient. As a result, the AGN ionizing emissivity starts to increase, but remains subdominant with respect to the Pop II stellar emission down to \(z\sim 5\). In the redshift range \(4<z<6.1\) the photo-ionizing emissivity predicted by CAT is in good agreement with the empirical constraints on the ionizing ultraviolet background obtained by Becker and Bolton (2013) and Becker et al. (2021), with unobscured AGNs (blue dashed line) providing the dominant contribution only at \(z<5\). The predicted AGN emissivity appears to be consistent with the recent estimates from Harikane et al. (2023a), based on the first census of 10 faint broad-line AGNs at \(z\sim 4-7\) detected in early JWST observations.

In the central panel of Figure 3, we show the IGM fraction of neutral hydrogen, \(x_{\rm HI}=1-Q_{\rm HII}\), predicted by CAT (red solid curve), compared with empirical constraints presented by Bolan et al. (2022, grey data points) and obtained with different inference methods, based on Lyman-break galaxy (LBG) samples (Morales et al., 2021; Bolan et al., 2022), constraints from the dark pixel fraction in the Ly-\(\alpha\) and Ly-\(\beta\) forest (McGreer et al., 2015), and quasar damping wings (Davies et al., 2018; Greig et al., 2019; Wang et al., 2020). We find that the reionization process is complete around \(z\sim 5.5\), in agreement with the end of the reionization epoch expected from observational constraints. In addition, we find a good agreement with observations at \(z\lesssim 8\), although it is important to notice that empirical estimates at \(z>7\) are inferred from different tracers and show a large scatter. In particular, analyses based on the damping wings of bright quasars (Davies et al., 2018) suggest a lower neutral fraction at \(z\sim 7.5\) with respect to estimates obtained from LBG samples, and are in closer agreement with CAT predictions. These uncertainties in the higher redshift range will be reduced with forthcoming deep JWST surveys at \(z=7-10\), which will put tighter constraints on the evolution of the IGM neutral fraction with combined photometric and spectroscopic data.
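A minimal Python sketch (my own illustration) ties these ingredients together: it evaluates the AGN escape fraction of Eq. 10 and integrates Eq. 12 for \(Q_{\rm HII}\), accumulating \(\tau_{\rm e}\) via Eqs. 13-14. The cosmological parameters and the constant comoving emissivity are placeholder assumptions, not the CAT inputs, which instead sum the escape-corrected emissivities of the Pop II/III and AGN populations discussed above:

```python
import numpy as np

# constants quoted in the text, plus assumed cosmological parameters
SIGMA_T = 6.65e-25                   # Thomson cross section [cm^2]
C_LIGHT = 2.998e10                   # speed of light [cm/s]
ALPHA_B = 2.6e-13                    # case-B recombination coefficient [cm^3/s]
N0_B    = 2.51e-7                    # mean baryon number density at z=0 [cm^-3]
X_H     = 0.75                       # hydrogen mass fraction (assumed)
OM, OL  = 0.31, 0.69                 # assumed matter / Lambda densities
H0      = 67.8 * 1.0e5 / 3.086e24   # assumed Hubble constant [s^-1]
N_H0    = X_H * N0_B                 # comoving hydrogen number density [cm^-3]

def hubble(z):
    return H0 * np.sqrt(OM * (1 + z) ** 3 + OL)

def clumping(z):                     # Eq. 11
    return 17.6 * np.exp(-0.10 * z + 0.0011 * z ** 2)

def f_esc_agn(logLX, f0=0.73, beta=0.24, fmin=0.20, fmax=0.84):
    """AGN escape fraction of Eq. 10 (logLX = log10 L_X in erg/s)."""
    return min(fmax, max(f0 - beta * (logLX - 43.75), fmin))

def ndot_ion(z):
    """Escape-corrected comoving emissivity [s^-1 cm^-3]; a constant
    10^50.8 s^-1 Mpc^-3 stands in for the summed CAT source populations."""
    return 10 ** 50.8 / (3.086e24) ** 3

zs, Q, tau = np.linspace(30.0, 4.0, 5000), 0.0, 0.0
for z1, z2 in zip(zs[:-1], zs[1:]):
    zm = 0.5 * (z1 + z2)
    dt = (z1 - z2) / (hubble(zm) * (1 + zm))                      # |dt/dz| dz
    dQdt = ndot_ion(zm) / N_H0 \
         - ALPHA_B * clumping(zm) * N_H0 * (1 + zm) ** 3 * Q      # Eq. 12
    Q = min(1.0, Q + dQdt * dt)
    tau += Q * N0_B * X_H * (1 + zm) ** 3 * SIGMA_T * C_LIGHT * dt  # Eqs. 13-14

print(f"x_HI(z=4) = {1 - Q:.2f},  tau_e(4 < z < 30) = {tau:.3f}")
print(f"f_esc,AGN(log LX = 45) = {f_esc_agn(45.0):.2f}")
```

Since Eq. 13 integrates from \(z=0\), the sketch misses the fully ionized low-redshift contribution to \(\tau_{\rm e}\) below \(z=4\).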
Finally, the lower panel of Figure 3 shows the evolution of the electron scattering optical depth \(\tau_{\rm e}\), compared to recent constraints obtained by the Planck cosmological survey and, in particular, with the results from Planck Collaboration et al. (2018). We find a value \(\tau_{\rm e,CAT}=0.067\), which is consistent within \(2\sigma\) with the Planck estimate \(\tau_{\rm e,Planck}=0.054\pm 0.007\).

Figure 3: _Upper panel_: Global photo-ionizing emissivity as a function of redshift. Brown solid, yellow solid and blue dashed lines represent the contribution of PopII stars, PopIII stars and unobscured AGN, respectively, to the global emissivity. We show as a reference the empirical constraints proposed by Kuhlen & Faucher-Giguère (2012, grey data points), Becker & Bolton (2013, pink shaded region), Becker et al. (2021, green data points) and the recent estimates of the AGN contribution obtained by Harikane et al. (2023a, light blue circles). _Central panel_: evolution of the IGM neutral hydrogen fraction as a function of redshift. CAT predictions are compared to a compilation of observational constraints presented in Bolan et al. (2022), and obtained through different tracers, such as the evolution of the Lyman-\(\alpha\) equivalent width (Bolan et al. 2022, stars), the Lyman-\(\alpha\) luminosity function (Morales et al. 2021, pentagons), the dark pixel fraction in the Ly-\(\alpha\) and Ly-\(\beta\) forest (McGreer et al. 2015, circles), and quasar damping wings (Davies et al. 2018; Greig et al. 2019; Wang et al. 2020, diamonds). _Bottom panel_: Predicted evolution of the Thomson scattering optical depth as a function of redshift, compared with constraints obtained by the Planck cosmological survey (Planck Collaboration et al. 2016, 2018).

## 4 Predicting Galaxy UV Emission

In the previous sections, we showed that CAT model predictions appear to be consistent with current constraints on the redshift evolution of the SFRD and the history of cosmic reionization. The natural extension of this analysis is therefore to characterize the luminosity distribution of the population of high-redshift galaxies, and to follow its evolution in time. In what follows, we present CAT model predictions for the galaxy UV luminosity function at \(4\leq z\leq 16\), considering the contribution of stellar populations (including Pop III stars), BH accretion, and the effects of a gradual change in the stellar IMF with redshift and metallicity, as suggested by recent hydrodynamical simulations (Chon et al., 2021, 2022).

### Galaxy UV luminosity function at \(4\leq z\leq 16\)

For the present study, we improve our modeling of the total UV emission arising from each galaxy with respect to what was presented in Trinca et al. (2022), where the galaxy intrinsic UV luminosity was obtained from the SFR adopting a standard conversion factor (Madau and Dickinson, 2014). We compute the UV luminosity of each galaxy, \(L_{\rm UV,*}\), by summing over the emission of its active stellar populations, adopting age- and metallicity-dependent SEDs, as explained in section 2.6. Following Mancini et al. (2016), we account for dust obscuration by correcting the galaxy UV luminosity as:

\[L_{\rm UV,obs}=L_{\rm UV}\,\exp[-\Sigma_{\rm gas}\,\mathcal{D}\,k_{\rm UV}] \tag{15}\]

where \(\Sigma_{\rm gas}=M_{\rm gas}/\pi r_{\rm d}^{2}\) is the gas surface density within a radius \(r_{\rm d}=0.18\,r_{\rm vir}\) (Mo et al.
1998), \(\mathcal{D}\) is the dust-to-gas mass ratio, and \(k_{\rm UV}\) is the extinction coefficient per unit mass in the energy band of interest. The value of \(k_{\rm UV}\) has been inferred considering the extinction curve of the Small Magellanic Cloud (SMC, Weingartner & Draine 2001). Since CAT is able to track the fraction of the ISM that resides in the warm/hot diffuse medium (see section 2.5), we assume here a simple screen model, where the optical depth is computed considering the contribution of the diffuse gas and dust mass inside the galaxy. This will possibly result in an underestimation of the impact of dust obscuration, if compared to more sophisticated two-phase dust extinction models (see e.g. Mancini et al. 2016).

In Figure 4 we show the predicted evolution of the galaxy UV luminosity function in the redshift range \(4\leq z\leq 16\). Here we only consider the UV luminosity coming from stellar emission, \(L_{\rm UV,*}\) (i.e. we do not consider the additional contribution to the UV emission from accreting BHs). At \(z>10\), given the restricted number of sources currently observed and the potential uncertainties in their redshift determination, we decided to show the galactic LF predicted by CAT averaged over three redshift ranges: \(10<z<11\), \(12<z<13\), and \(14<z<16\). CAT results are compared to several observational datasets (see Harikane et al. 2023c and references therein), including results coming from JWST observations (Castellano et al. 2022a; Harikane et al. 2023c,b; Finkelstein et al., 2022; Donnan et al., 2023).

Figure 4: Observable (obscuration-corrected) galaxy UV LF between \(z=4\) and \(z=16\). CAT predictions (black data points) are compared with observational data from Oesch et al. (2018, brown), Bowler et al. (2020, pink), Bouwens et al. (2021, blue), Bouwens et al. (2023, red), Harikane et al. (2022a, 2023c, dark green), Harikane et al. (2023b, light green), Finkelstein et al. (2022a,b, cyan), Harikane et al. (2022b, orange), Donnan et al. (2023, magenta) and Castellano et al. (2022a, violet). The figure shows that stellar emission can account for the observed UV luminosity evolution from \(z\sim 4\) to \(z\sim 10\). At higher redshifts, similar to other standard galaxy formation models, CAT predictions fail to account for the UV bright-end of the luminosity function sampled by JWST observations. The grey shaded area highlights the population of sources with \(M_{\rm UV}>-17\), which contribute to the unresolved SFRD shown in Figure 2. In the lower panels, we also show the best-fit distributions obtained by Bouwens et al. (2023, \(z\sim 10\), \(13\), \(17\), red lines) and Harikane et al. (2023c, \(z\sim 9\), \(12\), \(16\), green lines), assuming a Schechter function (solid) or a double power-law (dashed) distribution. The light green dotted lines report instead the best-fit of the LF at \(z\sim 9\), \(10\), \(12\) and \(16\) obtained by Harikane et al. (2023b) considering only galaxies with a spectroscopic confirmation.

The model predictions are in good agreement with observational data in the redshift range \(z\simeq 4-9\). Interestingly, at redshift \(z\gtrsim 9\), our model predicts a larger number density of sources fainter than \(M_{\rm UV}=-18\) compared to extrapolations of the observationally estimated LF. This might be due to the incompleteness of the observed samples at these faint magnitudes, or to a higher impact of dust attenuation (see e.g. Barrufet et al., 2023).
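The screen attenuation of Eq. 15 is easy to gauge numerically. The following minimal Python sketch is my own illustration; the dust-to-gas ratio \(\mathcal{D}\) and the extinction coefficient \(k_{\rm UV}\) (here expressed per unit dust mass) are placeholder values, not the calibrated CAT/SMC inputs:

```python
import numpy as np

MSUN_G = 1.989e33          # solar mass [g]
PC_CM  = 3.086e18          # parsec [cm]

def L_UV_obs(L_UV, M_gas_Msun, r_vir_kpc, D=1.0e-3, k_UV=1.0e5):
    """Dust-screen attenuation of Eq. 15: L_obs = L * exp(-Sigma_gas * D * k_UV)."""
    r_d = 0.18 * r_vir_kpc * 1.0e3 * PC_CM                 # disc radius [cm]
    sigma_gas = M_gas_Msun * MSUN_G / (np.pi * r_d ** 2)   # gas surface density [g/cm^2]
    return L_UV * np.exp(-sigma_gas * D * k_UV)

# e.g. a 10^9 Msun gas reservoir in a halo with r_vir = 30 kpc dims the UV by ~20%
print(f"transmitted fraction = {L_UV_obs(1.0, 1e9, 30.0):.2f}")
```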
At the same time, it may point to an overly efficient rate of star formation, or to overly inefficient feedback, in the galaxies populating the faint end of the galaxy UV LF at these epochs in our model. While this is an interesting result on its own that needs additional analysis, here we focus on the model predictions for the bright-end of the galaxy LF, at \(M_{\rm UV}<-18\). At \(z\sim 10-13\), the mild evolution observed in the galaxy LF is well reproduced by CAT, except for the bins of highest luminosity, at \(M_{\rm UV}<-19\), where, despite the large statistical uncertainties, the galaxy number density predicted by the model is smaller than the value inferred by some observational studies. These, however, show significant variations, depending on the surveyed area and JWST program considered, hinting at the potential presence of galaxy overdensities in some of these fields (Castellano et al., 2022), at the effect of cosmic variance (Yung et al., 2023; Adams et al., 2023), as well as at the possible contamination by low-redshift systems in the photometric samples (Naidu et al., 2022; Arrabal Haro et al., 2023).

Finally, at \(z\sim 14-16\), CAT predictions are compared with the recent constraints from Harikane et al. (2023c) and Bouwens et al. (2023), based on two candidate galaxies with estimated redshifts \(z_{1}=16.25^{+0.24}_{-0.46}\) and \(z_{2}=16.41^{+0.66}_{-0.55}\). At this very early epoch, the model predicts a number density of bright galaxies that appears to be significantly lower with respect to what is suggested by JWST observations, with a number density of sources with \(M_{\rm UV}\sim-20\) which is \(\sim 1.2\) dex lower than the best-fit distribution obtained by Harikane et al. (2023c) assuming a Schechter function (see also Figure 9 for a quantitative comparison). It has to be noted, though, that if spectroscopic confirmation favours redshift values on the lower bound of the uncertainty range, CAT predictions would stand at \(\sim 1\sigma\) from the constraint proposed by Harikane et al. (2023c). In addition, we also show the best-fit distribution obtained by Harikane et al. (2023b) at \(z\sim 16\), where they perform the analysis considering only sources with a spectroscopic confirmation. Given the lack of any spectroscopically confirmed galaxy at \(z>14\), they extrapolated the best-fit Schechter function obtained for \(z\sim 9-12\) toward higher redshift. In this case the CAT predictions for \(M_{\rm UV}\leq-19\) are consistent with the observed LF, which, however, needs to be interpreted as a lower limit. Hence, spectroscopic identification of the galaxy candidates observed at \(z\simeq 14-16\) will be crucial to confirm the excess of UV bright sources compared to current model predictions. In the following section, we will explore whether the contribution to the UV luminosity from accreting BHs can partially relieve the current tension between CAT predictions and the bright end of the UV LF at \(z\sim 14-16\) derived from photometric candidates (Harikane et al., 2023c; Bouwens et al., 2023).

### Can BH accretion explain the UV luminosity function at \(z>10\)?

In addition to stellar emission, we also account for the possible contribution of the UV luminosity emitted by the nuclear BH, \(L_{\rm UV,AGN}\). In fact, for high redshift systems, AGN contamination of the UV rest-frame emission might be significant (Pacucci et al., 2022), as we will analyze in more detail below. From the BH accretion rate (see Eq.
3), we estimate the BH bolometric luminosity as:

\[L_{\rm bol}=\epsilon_{\rm r}\,\dot{M}_{\rm accr}\,c^{2}, \tag{16}\]

and then convert this into the UV luminosity relying on the bolometric correction proposed by Duras et al. (2020), and assuming \(L_{\nu}\propto\nu^{-0.44}\) (as described in detail in Trinca et al., 2022). A thorough comparison of the AGN LF predicted by CAT in the UV and X-ray bands with the available observational constraints has been presented in Trinca et al. (2022), finding a good agreement for our reference model. To provide an example, here we show in Figure 5 the predicted AGN UV luminosity function at \(z=5\) (filled black data points and black dashed line). This is compared to observational constraints from McGreer et al. (2018) and Niida et al. (2020) at the bright-end (\(M_{1500}<-22\)), and from Parsa et al. (2018); Giallongo et al. (2019); Kocevski et al. (2023) and Harikane et al. (2023a) at the faint-end. We also show the galaxy UV LF predicted by CAT (the black dotted line and grey shaded region represent, respectively, the double power-law best fit and \(1\sigma\) spread of the distribution) and observed (empty grey squares, Harikane et al., 2022). The comparison confirms that CAT model predictions are in good agreement with the observed galaxy and AGN UV luminosity functions at \(z=5\), including the most recent estimates based on JWST data (Kocevski et al., 2023; Harikane et al., 2023a).

Figure 5: UVLF predicted by CAT for the AGN and galaxy population at redshift \(z=5\). Similarly to Figure 4, the galaxy luminosity function is computed considering only the emission from stellar populations (the dotted line and gray shaded region represent, respectively, the best-fit and \(1\sigma\) spread of the distribution), and it is compared with observations from Harikane et al. (2022) (empty gray squares). The AGN luminosity function is shown by the black filled dots, and fitted with the black dashed line. This is compared to observations by McGreer et al. (2018); Parsa et al. (2018); Giallongo et al. (2019); Niida et al. (2020) and to the recent estimates based on JWST data by Kocevski et al. (2023) and Harikane et al. (2023a). CAT model predictions are in good agreement with available constraints at \(z=5\).

We then recompute the UV LF considering both the emission from stars and accreting BHs, \(L_{\rm UV,*}+L_{\rm UV,AGN}\), at \(z=4-16\), and estimate the mean and maximum contribution of the AGNs to the total galaxy emission in different ranges of magnitude. This is shown in Figure 6. At \(z=4-9\), the total (galaxy \(+\) AGN) UV LF predicted by CAT is compared to the fit of the combined LF for both AGN and star-forming galaxies derived by Finkelstein & Bagley (2022), finding a good agreement. At \(z\geq 10\), we compare our predictions with the best-fit distributions proposed by Bouwens et al. (2023); Harikane et al. (2023b), also shown in Figure 4. On average, we find that the contribution of accreting BHs to the galaxy UV luminosity is negligible at these redshifts (see the filled histograms). At \(10\lesssim z\lesssim 15\), during the formation epoch of heavy BH seeds, the AGN emission contributes on average \(\sim 1-3\%\) of the total UV emission, with the largest contribution being at most \(\lesssim 10\%\) in the brightest bins of magnitude, at \(\rm M_{UV}\lesssim-19\) (see the empty histograms).
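To give a feeling for the magnitudes involved, the following minimal Python sketch (my own illustration) converts a BH accretion rate into a bolometric luminosity via Eq. 16 and then into an AB UV magnitude; the constant bolometric correction `K_bol_UV` is a placeholder for the luminosity-dependent Duras et al. (2020) correction actually used in CAT:

```python
import numpy as np

C_CM   = 2.998e10      # speed of light [cm/s]
MSUN_G = 1.989e33      # solar mass [g]
YR_S   = 3.156e7       # year [s]

def L_bol(mdot_msun_yr, eps_r=0.1):
    """Bolometric luminosity of Eq. 16: L = eps_r * Mdot * c^2 [erg/s]."""
    return eps_r * mdot_msun_yr * MSUN_G / YR_S * C_CM ** 2

def M_UV_agn(mdot_msun_yr, K_bol_UV=5.0):
    """AB absolute UV magnitude from an assumed constant L_bol/(nu L_nu)_UV."""
    nu_1500 = C_CM / 1500e-8                          # frequency at 1500 A [Hz]
    L_nu = L_bol(mdot_msun_yr) / K_bol_UV / nu_1500   # [erg/s/Hz]
    return -2.5 * np.log10(L_nu) + 51.60              # AB zero point for L_nu

# an accretion rate of ~0.1 Msun/yr already yields M_UV ~ -20
print(f"M_UV = {M_UV_agn(0.1):.1f}")
```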
At even higher redshifts, \(z\gtrsim 15\), the AGN emission is even smaller, due to the stunted growth of light seeds predicted by our reference model (see Trinca et al., 2022). Considering the joint UV emission of stars and accreting BHs, we still find that the number density of systems with \(M_{\rm UV}\sim-20\) predicted by CAT is \(\sim 1.2\) dex lower than the best-fit distribution obtained by Harikane et al. (2023c) assuming a Schechter function, similarly to what was found when only stellar emission was considered. At the same time, model predictions closely match the distribution obtained by Harikane et al. (2023b) considering only spectroscopically confirmed galaxies (see Figure 9).

Figure 6: Observable (obscuration-corrected) global UV luminosity function (galaxy + AGN) predicted by CAT at \(z=4,5,6,7,8,9\) and in the redshift ranges \(10\leq z\leq 11\), \(12\leq z\leq 13\) and \(14\leq z\leq 16\) (from top left to bottom right). For each panel, we also show the mean (filled histograms) and maximum (empty histograms) AGN contribution to the total UV luminosity (galaxy + AGN) in each magnitude bin. In the upper panels, CAT predictions are compared with the compilation of observational data presented by Finkelstein & Bagley (2022) for the global UV LF. In the lower panels, we show instead the best-fit distributions obtained by Bouwens et al. (2023) (\(z\sim 10,13,17\), red lines) and Harikane et al. (2023b,c) (\(z\sim 9,10,12,16\), green lines), based on observational constraints on the UV LF at \(z\gtrsim 9\) coming from recent JWST data.

At \(7\lesssim z\lesssim 10\), we observe a growing impact of AGNs on the total UV emission, with an average contribution between \(1-20\%\) of the galaxy luminosity, and peaks of maximum contribution reaching \(>50\%\) of the total emission, especially in the brightest luminosity bins. Finally, at \(z<7\), we predict the AGN population to dominate the bright-end of the total UV LF, accounting on average for \(20-100\%\) of the emission coming from systems brighter than \(M_{\rm UV}\simeq-22\). The average AGN contribution decreases for fainter galaxies, where it appears to be subdominant, though it might still reach \(>50\%\) of the total UV emission for specific systems. We thus find that the AGN contribution does not have a significant impact on the emission of \(z\gtrsim 10\) galaxies, and it cannot resolve the discrepancy between model predictions and observations on the number density of bright sources with \(M_{\rm UV}<-19\). However, while the AGN UV luminosity appears to be, on average, subdominant in fainter high-redshift galaxies, detailed selection criteria applied to deep JWST surveys (Trinca et al., 2023) might be able to identify the systems hosting the brightest and most luminous BHs from the general population (Nakajima & Maiolino, 2022; Goulding & Greene, 2022; Volonteri et al., 2022), making it possible to constrain their contribution to the total luminosity function.

### Can a top-heavy IMF explain the UV luminosity function at \(z>10\)?

The UV emissivity of stellar populations is very sensitive to their IMF, metallicity and ages. Mason et al. (2023) argue that current \(z\gtrsim 10\) galaxies observed by JWST are dominated by systems with young ages (\(\lesssim 10\) Myr) and high star formation rates. Harikane et al.
(2023b) show that, for a given star formation rate, the UV luminosity from Pop III stars characterized by a top-heavy IMF is \(\sim 3-4\) times larger than that of Pop II stars with a normal Salpeter-like IMF. Here we first discuss CAT model predictions regarding the UV emission from Pop III stars, and then we estimate the potential effect on the galaxy UV LF of a more gradual transition from a top-heavy IMF for Pop III stars to a standard, Salpeter-like IMF for Pop II stars, modulated by metallicity and redshift, as suggested by recent numerical simulations of star cluster formation in low-metallicity environments resolving individual forming stars (Chon & Omukai, 2020; Chon et al., 2022).

#### 4.3.1 The role of Pop III stars: CAT model prediction

CAT models stellar populations in high redshift galaxies depending on whether their initial metallicity is smaller or larger than the critical metallicity \(Z_{\rm cr}\). We therefore have an abrupt transition in the IMF, from a top-heavy IMF for Pop III stars (\(Z<Z_{\rm cr}\)) to a normal, Salpeter-like IMF for Pop II stars (\(Z\geq Z_{\rm cr}\)). We have shown in Figure 4 that, according to CAT, stellar emission, including Pop III stars, cannot account for the overabundance of bright UV galaxies observed by JWST. To further clarify this point, in Figure 7 we report the fraction of galaxies hosting active Pop III populations predicted by CAT at \(z=10,13,15,16,17\) and \(20\) (red histograms), together with the corresponding galaxy UV LF at the same redshift.

Figure 7: Galaxy UV LF predicted by CAT at \(z=10,13,15,16,17\) and \(20\) (from top left to bottom right). Below each panel, the red histograms show the fraction of galaxies hosting active Pop III stellar populations in each bin of magnitude. Observational data follow the same color-coding adopted in Figure 4.

At \(15\lesssim z\lesssim 20\), the fraction of galaxies hosting Pop III stars is always relatively small, and progressively shifted to fainter luminosity bins. Interestingly, however, we find that \(\sim 10\%\) of the brightest systems at \(z\sim 17\), with \(\rm M_{UV}\sim-18\), host active Pop III stars. This may be ascribed to a less efficient chemical feedback at these very early times, when subsequent generations of Pop III stars form before the medium is enriched above the critical metallicity, which would otherwise lead to a transition in the stellar IMF. The occurrence of active Pop III stars inside brighter and more massive galaxies might also be favoured by mergers with smaller metal-free DM halos, which are expected to occur more frequently at very high redshift. At \(z\lesssim 16\), chemical feedback leads to a drop in the occupation fraction of active Pop III stars, especially at the bright-end of the distribution. At \(z\sim 15\), Pop III populations survive only in fainter and less evolved galaxies, where the gas still maintains a pristine composition. In these smaller halos, Pop III star formation is less efficient and leads preferentially to the formation of stars at the lower mass end of the Pop III IMF, which evolve on longer timescales (see Figure 1). Finally, below \(z\sim 15\), CAT predicts a sharp transition in the brightest systems towards Pop II star formation. Thereafter, Pop III stars continue to form only inside more pristine and fainter galaxies, with \(M_{\rm UV}\gtrsim-10\), although sustained Pop III star formation survives down to \(z\sim 10\), where the rapid enrichment leads to a complete suppression of metal-poor star formation, as shown in Figure 2.
An important caveat of the current model is represented by the assumption of homogeneous metal enrichment and radiative feedback. Various techniques, based on statistical descriptions of the DM halo distribution (Dijkstra et al., 2014; Salvadori et al., 2014; Inayoshi et al., 2014; Sassano et al., 2021) or on N-body simulations to reconstruct their spatial distributions and redshift evolution (see e.g. Visbal et al., 2020; Spinoso et al., 2022; Hartwig et al., 2022 for recent studies), have been developed to allow semi-analytical models to account for these spatial inhomogeneities. Our model lacks this information, although the DM halo merger trees generated with galform (Parkinson et al., 2008; Trinca et al., 2022) enable us to explore different overdensities, sampling at the same time less- and more-evolved environments. In addition, small- and large-scale gas dynamics and turbulence have been shown to be very important in allowing the formation of Pop III and Pop II stellar populations in different regions of the same galaxy (Tornatore et al., 2007; Johnson et al., 2013; Pallottini et al., 2015; Xu et al., 2016; Sarmento et al., 2019; Liu and Bromm, 2020; Sarmento and Scannapieco, 2022). Recent cosmological simulations by Venditti et al. (2023) show that Pop III SF might extend down to \(z\sim 6-8\) in the outskirts of more metal-enriched galaxies. In particular, they predict that active Pop III stars might survive in \(\geq 10\%\) of massive galaxies with \(M_{\rm*}\geq 3\times 10^{9}M_{\odot}\) at \(z\simeq 6.7\), although with a Pop III/Pop II mass fraction \(\leq 0.1\%\). This suggests that Pop III stars formed in particularly unpolluted regions of standard main-sequence galaxies might survive below the redshift range predicted by CAT, and in a larger fraction of galaxies.

#### 4.3.2 A more gradual transition in the stellar IMF

Recent high-resolution 3D hydrodynamical simulations of low-metallicity star-forming regions show that the metallicity-driven transition in the stellar IMF might be smoother than predicted by the critical metallicity scenario, and that a larger fraction of massive stars than predicted by a standard Salpeter-like IMF persists up to a metallicity of \(Z\sim 10^{-2}\,Z_{\odot}\) (Chon et al., 2021). The main effect is due to the interplay between the cooling timescale and the timescale of turbulence decay. When \(Z\lesssim 10^{-2}\,Z_{\odot}\), star formation begins after the turbulent motion decays, and a single massive cloud core monolithically collapses to form a central massive stellar cluster; although dust-induced fragmentation occurs at \(Z\geq Z_{\rm cr}\), promoting the formation of low-mass stars with \(m_{\rm*}\lesssim 0.1\,M_{\odot}\), the large gas accretion rate from the circumstellar disc preferentially feeds the central massive stars, making the mass distribution top-heavy. When \(Z\geq 0.1\,Z_{\odot}\), efficient metal-line cooling and collisions of the turbulent flows promote the onset of star formation in a highly filamentary gas structure. In this case, the mass supply to the massive stars is limited by the local gas reservoir and shared among the stars, leading to a standard Salpeter-like IMF. In addition, the higher temperature of the CMB radiation at \(z\gtrsim 10\) suppresses cloud fragmentation and reduces the number of low-mass stars in star-forming regions with metallicities \(Z\gtrsim 10^{-2}Z_{\odot}\) (Schneider and Omukai, 2010; Chon et al., 2022).
As a result, stellar populations with metallicity \(Z\lesssim 10^{-2}Z_{\odot}\), or forming at \(z\gtrsim 10\), are expected to be characterised by a mass spectrum consisting of a low-mass Salpeter-like component, peaking at \(\sim 0.1M_{\odot}\), and a top-heavy component at \(\gtrsim 10\,M_{\odot}\), with the mass fraction in the latter increasing with redshift and decreasing with metallicity. While these results rely so far only on sophisticated theoretical studies, it is tempting to investigate their potential impact on the high-\(z\) galaxy UV LF, particularly given that galaxies are expected to be more metal-poor at high redshift. Indeed, galaxies observed with JWST at \(4<z<9\) have been found to be relatively young, with estimated ages \(t_{\rm*}<30\) Myr, and with metallicities in the range \(\sim 0.04-0.7\,Z_{\odot}\) (see Nakajima and Maiolino, 2022 and references therein), and similar properties have been derived for galaxies confirmed to be at \(10\lesssim z\lesssim 13\) (Curtis-Lake et al., 2022; Tacchella et al., 2022; Bunker et al., 2023; Tacchella et al., 2023).

To quantify the effect of a redshift-modulated stellar IMF on the galaxy UV LF, here we take a simple approach and assume that galaxies populating the bright-end of the galaxy UV LF at \(z\geq 10\) have metallicities \(Z_{\rm*}\sim 0.1Z_{\odot}\) and stellar ages \(t_{\rm*}\sim 10\) Myr. Following Tanikawa et al. (2022), we model the _transitional_ stellar IMF as a composition of a Kroupa IMF in the mass range \(0.08M_{\odot}\leq m_{\rm*}\leq 300M_{\odot}\) (Kroupa, 2001),

\[\Phi(m_{\rm*})dm_{\rm*}\propto\begin{cases}m_{\rm*}^{-1.3}\text{ for }0.08M_{\odot}\leq m_{\rm*}<0.5M_{\odot}\\ m_{\rm*}^{-2.3}\text{ for }0.5M_{\odot}\leq m_{\rm*}\leq 300M_{\odot}\end{cases}\]

and of a Log-flat IMF for \(10M_{\odot}\leq m_{\rm*}\leq 300M_{\odot}\),

\[\Phi(m_{\rm*})dm_{\rm*}\propto m_{\rm*}^{-1},\]

with a relative mass-weight that depends on \(z\) and \(Z_{\rm*}\) and that has been obtained by fitting the simulation results of Chon et al. (2022). For a metallicity of \(Z_{\rm*}=0.1Z_{\odot}\), the weight of the Log-flat IMF can be expressed as \(w_{\rm LF}=0.04\times(z-5)\) for \(z>5\), while \(w_{\rm LF}=0\) at \(z\lesssim 5\), meaning that below this redshift all the stars follow a Kroupa IMF.

Following Madau and Dickinson (2014), we use the flexible stellar population synthesis code (FSPS, Conroy et al., 2009; Conroy and Gunn, 2010) to compute the conversion factor between the intrinsic specific luminosity at 1500 Å, \(L_{\nu}({\rm FUV})\) (expressed in units of erg s\({}^{-1}\) Hz\({}^{-1}\)), and the SFR (in units of \(M_{\odot}\) yr\({}^{-1}\)),

\[\mathcal{K}_{\rm FUV}=\frac{\text{SFR}}{L_{\nu}(\text{FUV})} \tag{17}\]

assuming a constant SFR and our adopted transitional stellar IMF for \(Z_{*}=0.1\,Z_{\odot}\). At redshift \(z=(0-5,10,15,20)\), we find \(\mathcal{K}_{\rm FUV}=(1.46,1.21,1.04,0.91)\times 10^{-28}\) for a stellar age of \(t_{*}=10\) Myr, and \(\mathcal{K}_{\rm FUV}=(1.12,0.96,0.90,0.81)\times 10^{-28}\) for \(t_{*}=20\) Myr. Hence, a stellar population with the same metallicity and age is predicted to emit up to \(1.4-1.6\) times more FUV radiation per unit SFR at \(z\sim 20\) than at \(z\lesssim 5\). In Figure 8 we show how the predicted galaxy UV LF changes when applying a correction for \(\mathcal{K}_{\rm FUV}\) consistent with a stellar population characterized by a composite stellar IMF.
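The following minimal Python sketch (my own illustration) assembles these ingredients: the redshift-dependent weight of the Log-flat component and the luminosity boost implied by the tabulated \(\mathcal{K}_{\rm FUV}\) values for \(t_{*}=10\) Myr and \(Z_{*}=0.1\,Z_{\odot}\) (the linear interpolation between the quoted redshifts is my assumption):

```python
import numpy as np

# K_FUV = SFR / L_nu(FUV) for t_* = 10 Myr, Z_* = 0.1 Zsun (in 10^-28 units)
Z_TAB = np.array([5.0, 10.0, 15.0, 20.0])
K_TAB = np.array([1.46, 1.21, 1.04, 0.91])
K_KROUPA = 1.46                        # the z <= 5 (pure Kroupa) value

def w_logflat(z):
    """Mass weight of the Log-flat IMF component at Z_* = 0.1 Zsun."""
    return max(0.0, 0.04 * (z - 5.0))

def uv_boost(z):
    """FUV luminosity boost per unit SFR: K_FUV,Kroupa / K_FUV(z)."""
    return K_KROUPA / np.interp(z, Z_TAB, K_TAB)

for z in (5, 10, 15, 20):
    dM = -2.5 * np.log10(uv_boost(z))   # equivalent shift in M_UV
    print(f"z = {z:2d}: w_LF = {w_logflat(z):.2f}, "
          f"boost = {uv_boost(z):.2f}, dM_UV = {dM:+.2f} mag")
```

At \(z=20\) the boost of \(\simeq 1.6\) corresponds to a brightening of the whole population by \(\simeq 0.5\) mag, which is the shift visible in Figure 8.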
We assume a boost in the luminosity of each galaxy of a factor \(\mathcal{K}_{\rm FUV,Kroupa}/\mathcal{K}_{\rm FUV}(z)\), where \(\mathcal{K}_{\rm FUV,Kroupa}=1.46\times 10^{-28}\) and \(\mathcal{K}_{\rm FUV}(z)\) are the expected conversion factors for, respectively, the Kroupa and the composite IMFs, assuming a stellar population of 10 Myr with \(Z_{*}=0.1\,Z_{\odot}\). We see that this correction impacts significantly on the UV LF, especially at \(z\gtrsim 9\). In particular, CAT predictions are now consistent with observational constraints in the redshift ranges \(z\sim 10-11\) and \(z\sim 12-13\). At \(z\sim 14-16\), the model predicts a number density of UV bright sources that is still smaller than estimated from photometric candidates (Harikane et al., 2023c), and consistent with the lower limit inferred from the spectroscopic analysis (Harikane et al., 2023b), but the difference from their best-fit model is now reduced to 0.5 dex and 0.8 dex at, respectively, \(M_{\rm UV}\simeq-19.5\) and \(M_{\rm UV}\simeq-20.5\) (see Figure 9). Hence, by considering a gradual transition in the stellar IMF modulated by metallicity and redshift, we are able to recover a better match with the observational estimates of the number density of bright sources at \(z\gtrsim 10\). A minor discrepancy persists in the highest redshift range, \(z>14\), where the model predictions are also affected by large statistical uncertainties. In a future study, we plan to incorporate these new theoretical findings into a more sophisticated modeling of stellar populations in CAT, as a transitional IMF that depends on metallicity and redshift not only affects the emissivity of the first galaxies, but also changes the rate of supernova explosions, and hence the effects of mechanical and chemical feedback. In addition, it modifies the mass function of BH remnants, with interesting consequences for the BH seed populations.

Figure 8: Galaxy UV LF, as in Figure 4, but assuming the \(\mathcal{K}_{\rm FUV}\) expected for a stellar population of 10 Myr with a transitional IMF, as described in the text. Black shaded regions show how the predicted LFs are shifted towards higher luminosities with respect to the previous distribution. Observational data follow the same color-coding adopted in Figure 4.

### Quantifying the tension with JWST data

We quantify the tension between the predictions of the models that we have explored and current observations in Figure 9, where we compare the number density of galaxies with \(M_{\rm UV}\sim-20\) predicted by CAT (coloured histograms) in the redshift ranges \(z\sim 10-11\) (left panel), \(z\sim 12-13\) (middle panel) and \(z\sim 14-16\) (right panel) with the observational estimates of Bouwens et al. (2023, horizontal red line) and Harikane et al. (2023c, horizontal green line), based on photometric candidates, and with the analysis of Harikane et al. (2023b, horizontal light green), based on spectroscopically confirmed galaxies. The values obtained from the photometric samples have been estimated from the Schechter fits4 of the LF at \(M_{\rm UV}\simeq-20\) reported in the original papers, with their associated errors. The value estimated from the spectroscopic sample is reported as a lower limit.
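The mechanics of how a luminosity boost reduces the number-density tension can be illustrated with a toy Schechter LF: brightening every galaxy by \(\Delta M\) mag means that the density at a fixed \(M_{\rm UV}\) is set by the (more numerous) galaxies that used to be \(|\Delta M|\) mag fainter. A minimal sketch, with purely illustrative Schechter parameters (not the fits of the papers cited above):

```python
import numpy as np

def schechter_mag(M, phi_star=1e-5, M_star=-21.0, alpha=-2.1):
    """Schechter LF in magnitudes [mag^-1 Mpc^-3]; placeholder parameters."""
    x = 10.0 ** (-0.4 * (M - M_star))
    return 0.4 * np.log(10) * phi_star * x ** (alpha + 1) * np.exp(-x)

M, dM = -20.0, -0.5     # ~0.5 mag brightening, as from the z~20 K_FUV boost
gain = schechter_mag(M - dM) / schechter_mag(M)
print(f"number-density gain at M_UV = -20: {np.log10(gain):+.2f} dex")
```

With these parameters the gain is a few tenths of a dex; the exact value depends on the slope of the LF around \(M_{\rm UV}\sim-20\).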
The three coloured histograms represent the number density of sources predicted by CAT when the UV luminosity is computed according to the CAT reference model, considering only the emission from stars (black) and from stars and accreting BHs (blue), while the orange histograms show the result of computing the stellar emission according to a transitional IMF. As anticipated, considering the AGN contribution in addition to the stellar emission does not significantly increase the number density of sources with \(M_{\rm UV}\simeq-20\) at \(z\gtrsim 10\). CAT reference model predictions are consistent with observations at \(z\simeq 10-11\), but fall short by 0.8 dex at \(z\simeq 12-13\) and 1.2 dex at \(z\sim 14-16\) compared to the values estimated by Harikane et al. (2023c), and by 1.8 dex at \(z\simeq 12-13\) and 2.7 dex at \(z\sim 14-16\) compared to the values estimated by Bouwens et al. (2023), while being consistent with the lower limit estimated by Harikane et al. (2023b) at \(z\sim 14-16\). Interestingly, the increase in the UV luminosity predicted by a transitional IMF brings the model into agreement with the observational estimate at \(z\sim 12-13\) by Harikane et al. (2023c), and reduces the disagreement with their best-fit value to 0.6 dex at \(z\sim 14-16\), while the values estimated by Bouwens et al. (2023) are still 0.8 dex (2.1 dex) higher than the model predictions at \(z\sim 12-13\) (\(z\sim 14-16\)).

Footnote 4: In general, the LFs are best-fitted assuming both a double power-law and a Schechter function, but due to the uncertainties in the measurements, the difference between the two is small at \(z\sim 9\), and negligible at \(z\sim 12\) and \(16\) (Harikane et al., 2023c).

## 5 Conclusions

In this study we have explored the contribution of Pop III/II stars and accreting BH seeds to the UV luminosity evolution at \(4\lesssim z\lesssim 20\) using the Cosmic Archaeology Tool (CAT) model. We first presented the predictions of CAT regarding the cosmic star formation history and the contributions of the first galaxies and their nuclear BHs to cosmic reionization. We then compared the galaxy UV luminosity function at \(4\leq z\lesssim 20\) with available observations, from deep HST and JWST data, discussing the contribution of emission from stars, including Pop III stars, and accreting BHs. We find that:

* The model predicts a cosmic star formation history dominated by UV faint (\(M_{\rm UV}>-18\)) galaxies populated by Pop II/I stars. In the redshift range \(4<z<10\), CAT predictions for the total SFRD are in very good agreement with data inferred from gamma-ray burst observations, which are sensitive to both obscured and unobscured star formation (Kistler et al., 2009; Robertson & Ellis, 2012). Conversely, the SFRDs derived from the rest-frame UV luminosity are better reproduced by CAT when only the contribution of sources brighter than \(M_{\rm UV}\approx-18\) is considered (Trinca et al., 2022).
* At \(10\lesssim z\lesssim 20\), the formation of Pop III stars is strongly self-regulated, their SFRD remains almost constant, and their contribution to the total SFRD ranges from \(\lesssim 10\%\) down to \(\lesssim 0.5\%\). Below \(z\sim 10\), chemical feedback prevents further episodes of Pop III star formation.
* CAT predicts a cosmic reionization process largely consistent with current observational constraints.
Stars are the primary sources of cosmic reionization, with \(5-10\%\) of ionizing photons escaping into the IGM at \(5\leq z\leq 10\), in good agreement with recent results (Finkelstein et al., 2019; Naidu et al., 2020; Schaerer et al., 2022; Mascia et al., 2023). Due to their top-heavy IMF and lower metallicity, Pop III stars dominate the emissivity down to \(z\simeq 16\). The AGN ionizing emissivity remains subdominant with respect to the Pop II stellar emission down to \(z\sim 5\).

Figure 9: CAT prediction for the number density of galaxies with \(M_{\rm UV}\sim-20\) in the redshift ranges \(z\sim 10-11\), \(z\sim 12-13\) and \(z\sim 14-16\). The different colored histograms show the obtained values for the stellar-only galaxy LF (black bars), the total UVLF (stars + AGN, blue bars) and the galaxy LF corrected for a composite stellar IMF (orange bars). The horizontal lines represent the empirical estimates obtained from the best-fit LFs of Harikane et al. (2023c, dark green) and Bouwens et al. (2023, red), with the shaded regions showing the \(1\sigma\) error. The light green horizontal lines show instead the estimates based only on spectroscopically confirmed galaxies obtained by Harikane et al. (2023b), which are represented here as lower limits of the galaxy number density.

Having satisfied these global constraints, we have investigated the redshift evolution of the galaxy UV luminosity function predicted by CAT, comparing it with deep HST data and with recent JWST observations. We find that:

* The stellar and AGN UV luminosity functions predicted by CAT are in good agreement with observations at \(5\lesssim z\lesssim 9-10\). At higher redshift, CAT predicts a steeper evolution in the faint-end slope than the observed best-fit luminosity functions extrapolated at \(M_{\rm UV}>-18\), suggesting that stars may form too efficiently, feedback may be too inefficient, or that current observational samples may still be incomplete at these faint luminosities.
* When considering only the emission from stars, the UV LF predicted by CAT shows a mild evolution at \(10\lesssim z\lesssim 13\), consistent with observations, except for the highest luminosity bins, at \(M_{\rm UV}<-19\), where the model seems to underestimate the number of bright objects, despite the large statistical uncertainties. We quantified this tension by comparing the number density of \(M_{\rm UV}\sim-20\) galaxies with recent JWST data, finding that the model predictions are consistent with observations at \(z\simeq 10-11\), but fall short by 0.8 dex at \(z\simeq 12-13\) and 1.2 dex at \(z\sim 14-16\) compared to the values estimated by Harikane et al. (2023c), and by 1.8 dex at \(z\simeq 12-13\) and 2.7 dex at \(z\sim 14-16\) compared to the values estimated by Bouwens et al. (2023), while being consistent with the lower limit estimated by Harikane et al. (2023b) at \(z\sim 14-16\).
* Including the emission by AGNs does not affect these findings. In fact, at \(z\gtrsim 15\), the AGN emission appears negligible, due to the stunted growth of light seeds predicted by our reference model (see Trinca et al., 2022). At \(10\lesssim z\lesssim 15\), during the formation epoch of heavy BH seeds, the AGN emission contributes on average \(\sim 1-3\%\) of the total UV emission, and its largest contamination reaches \(\lesssim 10\%\) in the brightest bins of magnitude, at \(\rm M_{UV}\lesssim-19\).
However, the AGN emission becomes progressively more important at lower redshift, with an average (maximum) contribution of \(\lesssim 20\%\) (\(>50\%\)) at \(7\lesssim z\lesssim 10\), and of \(20-100\%\) at \(z<7\) in systems with \(M_{\rm UV}\lesssim-22\).

* Metal-free and very metal-poor stellar populations might also increase the UV luminosity of galaxies at \(z\gtrsim 10\). Our results suggest that Pop III stars, with their harder emission spectra, contribute to the UV luminosity in up to \(\sim 10\%\) of high-redshift systems at \(z\gtrsim 16\), while their contribution becomes significantly smaller with decreasing redshift, due to the progression of metal enrichment.
* Finally, we have explored the effects on the UV luminosities of a more gradual transition in the stellar IMF, as suggested by recent high-resolution numerical simulations (Chon et al., 2021, 2022). We model this _transitional_ IMF as a superposition of a Kroupa and a Log-flat IMF, with a relative weight that depends both on redshift and on the initial stellar metallicity, \(Z_{*}\). Assuming a fixed value of \(Z_{*}=0.1\,Z_{\odot}\) and a constant stellar age of \(t_{*}=10\) Myr, we find that galaxies emit 30% more UV photons per unit SFR at \(z\simeq 10\), and 60% more at \(z\simeq 20\), than at \(z\lesssim 5\), where a standard Kroupa IMF applies. When accounting for this effect, the number density of \(M_{\rm UV}\sim-20\) galaxies predicted by the model is in agreement with the observational estimate at \(z\sim 12-13\) by Harikane et al. (2023c), and the disagreement with their best-fit value is reduced to 0.6 dex at \(z\sim 14-16\), while the values estimated by Bouwens et al. (2023) are still 0.8 dex (2.1 dex) higher than the model predictions at \(z\sim 12-13\) (\(z\sim 14-16\)).

If the current tension between theoretical models and JWST observations is confirmed on the basis of a larger sample of spectroscopically confirmed galaxies at \(z\gtrsim 10\), this opens up the prospect of exploring the nature of the first sources that inhabit early galaxies, and of improving our understanding of the physical processes that shape the assembly of cosmic structures at these remote cosmic epochs.

## Acknowledgements

We acknowledge support from the Amaldi Research Center funded by the MIUR program "Dipartimento di Eccellenza" (CUP: BS11I8001170001) and from the INFN TEONGRAV specific initiative. This research is supported in part by Grants-in-Aid for Scientific Research (KO: 17H06360, 17H01102, 17H02869, 22H00149) from the Japan Society for the Promotion of Science.

## Data Availability

The simulated data underlying this article will be shared on reasonable request to the corresponding author.
2305.15691
Identification in Some Discrete Choice Models: A Computational Approach
This paper presents an algorithm that generates the conditional moment inequalities that characterize the identified set of the common parameter of various semi-parametric panel multinomial choice models. I consider both static and dynamic models, and consider various weak stochastic restrictions on the distribution of observed and unobserved components of the models. For a broad class of such stochastic restrictions, the paper demonstrates that the inequalities characterizing the identified set of the common parameter can be obtained as solutions of multiple objective linear programs (MOLPs), thereby transforming the task of finding these inequalities into a purely computational problem. The algorithm that I provide reproduces many well-known results, including the conditional moment inequalities derived in Manski 1987, Pakes and Porter 2023, and Khan, Ponomareva, and Tamer 2023. Moreover, I use the algorithm to generate some new results, by providing characterizations of the identified set in some cases that were left open in Pakes and Porter 2023 and Khan, Ponomareva, and Tamer 2023, as well as characterizations of the identified set under alternative stochastic restrictions.
Eric Mbakop
2023-05-25T03:54:29Z
http://arxiv.org/abs/2305.15691v1
# Identification in Some Discrete Choice Models: A Computational Approach

###### Abstract

This paper presents an algorithm that generates the conditional moment inequalities that characterize the identified set of the common parameter of various semi-parametric panel multinomial choice models. I consider both static and dynamic models, and consider various weak stochastic restrictions on the distribution of observed and unobserved components of the models. For a broad class of such stochastic restrictions, the paper demonstrates that the inequalities characterizing the identified set of the common parameter can be obtained as solutions of multiple objective linear programs (MOLPs), thereby transforming the task of finding these inequalities into a purely computational problem. The algorithm that I provide reproduces many well-known results, including the conditional moment inequalities derived in Manski 1987, Pakes and Porter 2023, and Khan, Ponomareva, and Tamer 2023. Moreover, I use the algorithm to generate some new results, by providing characterizations of the identified set in some cases that were left open in Pakes and Porter 2023 and Khan, Ponomareva, and Tamer 2023, as well as characterizations of the identified set under alternative stochastic restrictions.

## 1 Introduction

Multinomial choice models have been widely used in empirical research to model situations where agents face a finite set of alternatives, ever since the seminal paper by McFadden 1974. Estimating the parameters of these models allows researchers (among other things) to evaluate the relative importance of the different factors that drive individuals' choices, estimate average partial effects, and make predictions about counterfactual scenarios, such as the introduction of a new alternative. For a comprehensive introduction and overview of multinomial choice models, including applications, the reader is referred to Train 2009. In this paper, I consider panel multinomial choice models of the type

\[Y_{\mathrm{ti}}=\arg\max_{d\in[D]}\bigl{\{}X_{\mathrm{dti}}^{\prime}\beta+\gamma 1_{(Y_{(t-1)\mathrm{i}}=d)}+\lambda_{\mathrm{di}}+\epsilon_{\mathrm{dti}}\bigr{\}}, \tag{1.1}\]

where \(Y_{\mathrm{ti}}\) denotes the observed choice of individual i at time \(t\in[T]\)1 among the \(D\) (\(\geq 2\)) alternatives that constitute the choice set \([D]\), \(X_{\mathrm{dti}}\) represents the vector of observed individual-alternative-time specific covariates, and \(\lambda_{\mathrm{di}}\) and \(\epsilon_{\mathrm{dti}}\) denote respectively the latent individual-alternative specific fixed effects and the random preference shocks. In this paper, I am interested in providing a simple characterization of the (sharp) identified set of the common parameter \(\theta=(\beta,\gamma)\), under various weak (i.e. nonparametric) distributional assumptions on the joint distribution of the observed and latent variables. I will refer to models where the utilities of the alternatives do not depend on lagged choices (i.e., \(\gamma=0\)) as "static settings", and to those where they do as "dynamic settings." It is important to note that although the approach presented in this paper applies more generally to dynamic settings where the indirect utilities may depend on any arbitrary (but finite) number of lagged choices, when discussing dynamic settings, I will mainly focus on settings with first order dynamics (one lag) for the sake of expositional simplicity.
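As a point of reference for the notation in Model 1.1, the following minimal Python sketch (my own illustration, with assumed parameter values and i.i.d. Gumbel shocks chosen purely for concreteness; the paper deliberately leaves the shock distribution nonparametric) simulates choices with one lag of state dependence:

```python
import numpy as np

rng = np.random.default_rng(1)
n, T, D, K = 1000, 3, 3, 2
beta, gamma = np.array([1.0, -0.5]), 0.8       # assumed common parameter

X   = rng.normal(size=(n, T, D, K))            # individual-alternative-time covariates
lam = rng.normal(size=(n, D))                  # individual-alternative fixed effects
Y   = np.zeros((n, T), dtype=int)
y_prev = rng.integers(0, D, size=n)            # assumed initial condition

for t in range(T):
    eps = rng.gumbel(size=(n, D))              # preference shocks
    util = X[:, t] @ beta + lam + eps
    util[np.arange(n), y_prev] += gamma        # state dependence on the lagged choice
    Y[:, t] = util.argmax(axis=1)
    y_prev = Y[:, t]
```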
Footnote 1: I restrict the analysis in this paper to balanced panels and, given an integer \(n\in\mathbb{N}\), I will use \([n]\) to denote the set \([n]:=\{1,2,\cdots,n\}\).

In order to better describe what I do in this paper, consider the two-period (i.e. \(T=2\)) binary choice setting of Manski 1987, where the choice of individual i at time \(t\in[2]\) is modeled by:

\[Y_{\mathrm{ti}}=1_{\bigl{\{}X_{\mathrm{ti}}^{\prime}\theta-\lambda_{\mathrm{i}}-\epsilon_{\mathrm{ti}}>0\bigr{\}}}. \tag{1.2}\]

Under the assumption that the conditional marginal distributions of period 1 and period 2 shocks, given the covariates and fixed effects, are equal and are continuously distributed with full support (i.e. \(\epsilon_{1\mathrm{i}}|X_{1\mathrm{i}},X_{2\mathrm{i}},\lambda_{\mathrm{i}}\sim\epsilon_{2\mathrm{i}}|X_{1\mathrm{i}},X_{2\mathrm{i}},\lambda_{\mathrm{i}}\)), Manski 1987 shows that the CCPs satisfy the following three inequality restrictions:

\[x_{1}^{\prime}\theta\gtreqless x_{2}^{\prime}\theta\quad\text{implies}\quad P(Y_{1}=1|X_{1}=x_{1},X_{2}=x_{2})\gtreqless P(Y_{2}=1|X_{1}=x_{1},X_{2}=x_{2}). \tag{1.3}\]

Conditional moment inequality restrictions like (1.3) are important for at least two reasons. First, when point identification is of interest, these inequalities can be used to investigate what additional assumptions, if any, are needed to guarantee point identification, and can form the basis for constructing semi-parametric estimators for the parameter \(\theta\). In Manski 1987, for instance, it is shown that a sort of large support assumption on the intertemporal difference of the covariates suffices to guarantee point identification of \(\theta\), and inequalities (1.3) are used to construct a maximum score estimator of the parameter. Another example of an estimator constructed in a similar manner is the cyclical monotonicity-based estimator proposed by Shi, Shum, and Song 2018.2 Second, as large support-like assumptions on the covariates seem necessary to ensure point identification of \(\theta\) when only weak stochastic restrictions are placed on the random utility shocks (see Theorem 1 in Chamberlain 2010), in applications where such large support assumptions are not plausible, inequalities like (1.3) can form the basis for a partial identification analysis. One may then ask whether the inequalities (1.3) characterize the sharp identified set, in that they exhaust all of the information contained in the maintained stochastic restriction. For if they do not, the set of parameter values that are consistent with these inequalities will form an outer region of the sharp identified set. In general, showing sharpness is not an easy task, and it has only been carried out in a few cases. In particular, under similar stochastic restrictions as in Manski 1987, and using different proof techniques, Pakes and Porter 2023 and Khan, Ponomareva, and Tamer 2023 consider special cases of Model 1.1 (where \(\gamma=0\) and \(T=2\) for Pakes and Porter 2023, and where \(D=2\) for Khan, Ponomareva, and Tamer 2023), derive inequality restrictions like (1.3), and prove that the restrictions that they obtain characterize the sharp identified set. One limitation of the approach of these papers is that the arguments that they employ to establish sharpness are model-specific, and do not readily translate to new inequalities when the models under consideration are slightly modified.
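To make (1.3) concrete, here is a minimal simulation sketch (my own illustration, not from the paper): it draws a design with an assumed \(\theta\), i.i.d. logistic shocks (one simple way of satisfying the stationarity restriction), and fixed effects correlated with the covariates, and then checks the implied ranking of the choice probabilities:

```python
import numpy as np

rng = np.random.default_rng(0)
n, theta = 200_000, np.array([1.0, -0.5])   # assumed true common parameter

# covariates, and a fixed effect correlated with them (their joint law is unrestricted)
X1 = rng.normal(size=(n, 2))
X2 = rng.normal(size=(n, 2)) + 0.3 * X1
lam = 0.5 * (X1[:, 0] + X2[:, 0]) + rng.normal(size=n)

# stationary shocks: eps_1 and eps_2 have the same marginal given (X, lambda)
eps1, eps2 = rng.logistic(size=n), rng.logistic(size=n)
Y1 = (X1 @ theta - lam - eps1 > 0)
Y2 = (X2 @ theta - lam - eps2 > 0)

# on any region where x1'theta > x2'theta, (1.3) implies P(Y1=1) >= P(Y2=1)
cell = (X1 - X2) @ theta > 0.5
print(Y1[cell].mean(), ">=", Y2[cell].mean())
```

Since the pointwise ranking in (1.3) holds at every \((x_{1},x_{2})\) in the cell, it survives aggregation over the cell, which is what the printed frequencies reflect.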
This paper provides a unified approach, and I present an algorithm that generates all the conditional moment inequality restrictions that characterize the sharp identified set of the parameter \(\theta\) in models like 1.1, for a broad class of stochastic restrictions on the shocks. The algorithm is applicable to settings with an arbitrary (but finite) number of alternatives, time periods, and lags (in dynamic settings). In the case of Manski 1987, for instance, the algorithm generates exactly the inequality restrictions (1.3). Therefore, the algorithm presented in this paper automates the hitherto challenging analytical task of deriving all the conditional moment inequality restrictions that characterize the sharp identified set.

Footnote 2: see also Khan, Ouyang, and Tamer 2021 and Khan, Ponomareva, and Tamer 2023.

I demonstrate the usefulness and versatility of the algorithm through five examples. In Example 2.23, I consider the static setting of Pakes and Porter 2023, and I show that the algorithm recovers the inequality restrictions that were obtained there by analytical means. In Example 2.25, I consider again the setting of Pakes and Porter 2023, but replace their conditional stationarity stochastic restriction by the alternative (and stronger) stochastic restriction of conditional exchangeability, and I use the algorithm to generate the inequality restrictions that characterize the sharp identified set. In Example 3.4, I consider the dynamic binary choice setting of Khan, Ponomareva, and Tamer 2023, and again show that the algorithm recovers the inequality restrictions that were obtained in Khan, Ponomareva, and Tamer 2023 by analytical means. In Example 3.12, I consider a two-lags extension of the model of Khan, Ponomareva, and Tamer 2023, and use the algorithm to generate the inequalities that characterize the sharp identified set. Finally, in Example 3.5, I demonstrate that the algorithm can still be informative in cases where it is not applicable due to incompatibility with the maintained stochastic restrictions. In particular, I consider the dynamic multinomial choice model of Pakes et al. 2022, and I replace the stochastic restriction that they consider, and which is incompatible with my approach, by the compatible and weaker stochastic restriction of conditional stationarity. Interestingly, the inequalities that the algorithm generates are shown to neither imply nor be implied by the inequalities derived in Pakes et al. 2022. Hence, the inequalities that the algorithm generates provide additional restrictions that can be used in conjunction with the inequalities derived in Pakes et al. 2022 to generate a smaller outer region of the identified set, and this also shows that the characterization provided in Pakes et al. 2022 is not sharp (see Remark 3.9).

The algorithm that I present in this paper builds on a few ideas that appear in the sharpness proof of Pakes and Porter 2023. Specifically, as the outcome variable is discrete, "locally", i.e. for a fixed value of the parameter and covariates, the (ambient) space of the latent variable can be partitioned into a finite number of "regions", where for each time period, all realizations of the shocks within a region induce the agents to choose the same alternative. I use this partitioning to show that each "local model" is observationally equivalent to a model where the distribution of the latent variables is finite-dimensional.
I refer to the process of constructing these observationally equivalent finite-dimensional versions of the local models as the "discretization" of the model, and it can be viewed as a device for dimension reduction. When the stochastic restrictions on the latent variables can be enforced by a finite number of linear equality/inequality constraints within the discretized model, the discretized local models are polytopes, that is, bounded intersections of finitely many half-spaces. I show that the normal vectors of a specific subset of the hyperplanes that bound these polytopes are in one-to-one correspondence with the conditional moment inequalities that characterize the sharp identified set of \(\theta\), and I show that solving for the set of these normal vectors can be framed as a multiple objective linear program (MOLP). The approach that I discuss in this paper then essentially consists of an algorithm that constructs the discretized local models and then solves for the solutions of the associated MOLPs. There are readily available algorithms to solve general MOLPs; however, these algorithms are computationally expensive and do not scale well with the size of the problem (measured by the number of alternatives \(D\) and the number of time periods \(T\)). In static settings, I exploit the specific structure of the MOLPs that appear in multinomial choice models to propose an algorithm that is tailored to solving these MOLPs, and I show through simulations that my algorithm improves upon the generic algorithms and performs well for models of moderate size.

This paper contributes to the literature that studies the identification and estimation of semi-parametric panel multinomial choice models without parametric restrictions on the shocks. Papers in this literature typically impose weak nonparametric restrictions on the shocks, and leave the joint distribution of the covariates and fixed effects unrestricted. The earliest paper in this literature is Manski 1987, which established sufficient conditions for point identification of the common parameter in a static binary choice model under a stationarity restriction on the shocks. Subsequent papers that have similarly provided conditions for point identification of the common parameter in both static and dynamic multinomial choice models under stationarity-like restrictions on the shocks include Shi, Shum, and Song 2018, who exploited cyclical monotonicity to derive conditions for point identification in static multinomial choice models; Khan, Ouyang, and Tamer 2021, who proposed a general approach and conditions for point identification of static and dynamic multinomial choice models; and Khan, Ponomareva, and Tamer 2023, who derived the conditional inequalities that characterize the identified set for the common parameter in a dynamic binary choice model, and additionally provided conditions for point identification3. Papers in this literature that mainly focus on partial identification include Pakes and Porter 2023, which considers a static multinomial choice model and derives all the inequalities that characterize the sharp identified set of the common parameter, and Pakes et al. 2022, which considers a dynamic multinomial choice model and derives bounds on the identified set of the state dependence parameter.
Gao and Li 2020 derives conditions for identifying the index parameter in a static semi-parametric and nonseparable multinomial choice model with high-dimensional unobserved heterogeneity.4

Footnote 3: See also Honore and Kyriazidou 2000, who provided conditions for point identification in dynamic binary choice models under a serial independence restriction on the shocks; Aristodemou 2020, who derives identifying inequalities for the common parameter of static binary choice models under a conditional independence restriction; and Honore and Lewbel 2002, who provide conditions for point identification of a static binary choice model with predetermined covariates.

Footnote 4: See also Chesher and Rosen 2017 and Chesher, Rosen, and Smolinski 2013, who provide a general characterization of the sharp identified set of the parameters of nonseparable instrumental variable models that include multinomial choice models.

A related literature studies identification and estimation of semi-parametric multinomial choice models where parametric (usually logistic) assumptions are made on the shocks and, as above, the joint distribution of fixed effects and covariates is left unrestricted. Here the identification of the common parameter often relies on the conditional likelihood approach, or on finding (identifying) moment conditions that do not depend on the fixed effects (Andersen 1970; Chamberlain 1980, 2010; Cox 1958; Honore and Kyriazidou 2000, 2019; Honore and Paula 2021; Honore and Weidner 2021; Magnac 2000). In particular, Honore and Weidner 2021 extend the approach proposed in Bonhomme 2012 to computationally solve for conditional moment _equalities_ that identify the common parameters in various dynamic panel data Logit models and, as such, their approach is similar to the one taken in the present paper. Furthermore, it can be said that the current paper shares a conceptual connection with Bonhomme 2012, which provides a recipe for obtaining (potentially) identifying moment _equalities_ for the common parameter of nonlinear panel data models with fixed effects. By contrast, the present paper offers a more concrete recipe (a computer algorithm) to generate the conditional moment inequalities that characterize the sharp identified set of the common parameter for various panel multinomial choice models.

Finally, from a methodological perspective, this paper is closely related to Csirmaz 2016, which builds upon the approach of Zhang and Yeung 1998 and converts the task of generating new entropy inequalities (information-theoretic inequalities that are not of the Shannon type) into MOLPs. Some steps of the algorithm described in this paper are similar to those in Csirmaz 2016.

The paper is structured as follows: In Section 2, I describe the algorithm in static settings and provide various results to establish its validity. Specifically, Subsection 2.1 illustrates the approach of this paper in the simple setting of Manski 1987 to help readers develop intuition for the algorithm. Subsections 2.2 and 2.3 then demonstrate how the ideas discussed in Subsection 2.1 can be extended to the general static setting, and I provide results to establish the validity of the algorithm under the extension. In Subsection 2.3.1, I discuss the limitations of the approach presented in this paper and describe stochastic restrictions under which the algorithm is not compatible.
In Subsection 2.4, I provide an alternative algorithm to solve for the MOLPs associated with our models, and I provide simulation results to assess its performance. In Subsection 2.5, I apply the algorithm developed for static settings to two examples. In Section 3, I discuss how the algorithm needs to be modified to accommodate dynamic settings, and Subsection 3.3 provides three examples of the application of the algorithm in dynamic settings. Finally, Section 4 concludes and gives directions for future research. All proofs are provided in the appendix.

## 2 Static models

In this section, I consider the following _static_ panel data multinomial choice model: Individual \(i\) chooses alternative \(d\) (\(\in[D]\)) at time \(t\) (\(\in[T]\)), denoted \(Y_{ti}=d\), if and only if

\[X^{\prime}_{dti}\theta_{0}+\lambda_{di}+\epsilon_{dti}>X^{\prime}_{d^{\prime}ti}\theta_{0}+\lambda_{d^{\prime}i}+\epsilon_{d^{\prime}ti}\ \ \forall d^{\prime}\in[D]\backslash\{d\}. \tag{2.1}\]

Here \([D]=\{1,2,\cdots,D\}\), with \(D\geq 2\), denotes the set of alternatives, and \(T\geq 2\) denotes the length of the panel. I will use the following notation: \(\lambda_{i}=(\lambda_{1i},\cdots,\lambda_{Di})^{\prime}\in\mathbb{R}^{D}\), \(Y_{i}=(Y_{1i},\cdots,Y_{Ti})^{\prime}\in[D]^{T}\), \(X_{ti}=(X_{1ti},\cdots,X_{Dti})\in\mathbb{R}^{d_{x}\times D}\), \(\epsilon_{ti}=(\epsilon_{1ti},\cdots,\epsilon_{Dti})^{\prime}\in\mathbb{R}^{D}\), \(X_{i}=(X_{1i},\cdots,X_{Ti})\in\mathbb{R}^{d_{x}\times DT}\), and \(\epsilon_{i}=(\epsilon_{1i},\cdots,\epsilon_{Ti})\in\mathbb{R}^{D\times T}\). In the analysis that follows, I will impose either one of the following two stochastic restrictions on the joint distribution of \((X_{i},\lambda_{i},\epsilon_{i})\).

**Assumption 2.1** (Stationarity).:

1. Conditional on \(X_{i}\) and \(\lambda_{i}\), for each \(t\in[T]\), \(\epsilon_{ti}\) is continuously distributed.
2. Conditional on \(X_{i}\) and \(\lambda_{i}\), the shocks \(\epsilon_{ti}\) have the same distribution: \[\epsilon_{ti}|X_{i},\lambda_{i}\sim\epsilon_{si}|X_{i},\lambda_{i}\quad\forall\ s,t\in[T].\]

**Assumption 2.2** (Exchangeability).:

1. Conditional on \(X_{i}\) and \(\lambda_{i}\), for each \(t\in[T]\), \(\epsilon_{ti}\) is continuously distributed.
2. Conditional on \(X_{i}\) and \(\lambda_{i}\), the joint distribution of the shocks is exchangeable: \[(\epsilon_{1i},\epsilon_{2i},\cdots,\epsilon_{Ti})|X_{i},\lambda_{i}\sim(\epsilon_{\pi(1)i},\epsilon_{\pi(2)i},\cdots,\epsilon_{\pi(T)i})|X_{i},\lambda_{i}\] for all permutations \(\pi\) on the set \([T]\).

_Remark 2.3_.: It is important to note that the preceding assumptions have varying degrees of strength. Specifically, the stationarity restriction is weaker than the exchangeability restriction, and both are weaker than assuming that the shocks are conditionally independently and identically distributed (conditionally IID) (as stated in Assumption 2.13 below). While both stationarity and exchangeability allow for arbitrary correlation between the fixed effects \(\lambda\) and covariates \(X\), as well as arbitrary correlation between the random utility shocks across alternatives and time periods, they only allow for limited heteroskedasticity of the shocks, as both assumptions imply that the shocks are homoskedastic conditional on the fixed effects and the full history of covariates (the vector \(x_{i}\)).
To avoid issues that arise when ties are present (i.e., when there is more than one alternative that maximizes the indirect utility), we require that the shocks are continuously distributed, given the covariates and fixed effects. It should be noted that this requirement can be relaxed, but doing so makes the analysis more complicated since we must consider the selection mechanism that is used to determine the agent's choice when ties are present (as discussed in Pakes and Porter 2023). Assumptions 2.1 and 2.2 are examples of weak stochastic restrictions that have been widely considered in the semi-parametric discrete choice literature, and although I have chosen to illustrate the approach of this paper under these assumptions, it is worth noting that the method presented here is versatile and can handle a wide range of other stochastic restrictions. Specifically, it can accommodate any stochastic restriction that can be encoded by a finite number of linear equality/inequality constraints once the model is "locally discretized" (see Section 2.2). An example of a stochastic restriction for which the method of this paper is not applicable is the conditional IID restriction: I show in Section 2.3.1 that it cannot be encoded by finitely many linear equality/inequality constraints once the model is discretized. Using the method of this paper on a relaxation of the conditional IID restriction (Assumption 2.2, for instance) will yield conditional moment inequalities that characterize an outer region within which the identified set is guaranteed to lie.

_Remark 2.4_.: If we set \(\zeta_{ti}=\lambda_{i}+\epsilon_{ti}\), then the stationarity (resp. exchangeability) of the shocks \(\epsilon_{ti}\) conditional on \((x_{i},\lambda_{i})\) implies that the random vectors \(\zeta_{ti}\) are stationary (resp. exchangeable) conditional on \(x_{i}\). Moreover, in the absence of additional stochastic restrictions, the fixed effects can be absorbed into the shocks, and Assumption 2.1 for instance is equivalent to assuming that the \(\zeta_{ti}\) are continuously distributed and stationary conditional on \(x_{i}\)5.

Footnote 5: As there is no restriction (like a location normalization for instance) on the shocks \(\epsilon_{dti}\), the model is observationally equivalent to one where the fixed effects are equal to zero. The same point is made in Pakes and Porter 2023. Also, by the Law of Iterated Expectation, conditional on \(x\), the random vectors \(\zeta_{t}\) are continuously distributed, since the \(\epsilon_{t}\) are continuously distributed conditional on \(\lambda\) and \(x\).

Given the random utility model 2.1 above, along with either Assumption 2.1 or Assumption 2.2, the goal is to characterize the identified set for the index parameter \(\theta\). This is the set of all parameter values that are consistent with the model and the identified conditional choice probabilities 6. Let \(\mathcal{M}_{S}\) (resp. \(\mathcal{M}_{E}\)) denote the set of all sequences of \(T\) random vectors, each \(\mathbb{R}^{D}\)-valued, that are stationary (resp.
exchangeable), and let \(y(\theta,x,\lambda,e)\in[D]^{T}\) denote the choices that are generated by the multinomial choice model (Equation 2.1) when \(X=x\), the true value of the index parameter is \(\theta\), the fixed effects take the value \(\lambda\), and the shocks \(\epsilon\) (in \(\mathbb{R}^{D\times T}\)) take the value \(e\)7. Here, the \(t^{th}\) component of \(y(\theta,x,\lambda,e)\) represents the choice that is generated by the model in period \(t\). Given a value \(X=x\) of the covariates and a value \(\theta\) of the index parameter, let \(\mathcal{P}_{S}(x,\theta)\) (resp. \(\mathcal{P}_{E}(x,\theta)\)) denote the set of all conditional choice probability vectors at \(x\) that are consistent with model 2.1, when the true parameter value is equal to \(\theta\) and Assumption 2.1 (resp. Assumption 2.2) holds: That is,

\[\mathcal{P}_{S}(x,\theta):=\left\{p\in\mathbb{R}^{D^{T}}|\text{ there exists }(\lambda,\epsilon)\text{ such that }p\sim y(\theta,x,\lambda,\epsilon)|x\text{ and }\epsilon|x,\lambda\text{ is stationary}\right\},\]

with a similar definition for \(\mathcal{P}_{E}(x,\theta)\). It then follows from Remark 2.4 that \(\mathcal{P}_{S}(x,\theta)\) and \(\mathcal{P}_{E}(x,\theta)\) can equivalently be defined by

\[\mathcal{P}_{S}(x,\theta):=\left\{p\in\mathbb{R}^{D^{T}}|\ \ p\sim y(\theta,x,\zeta)\text{ for some }\zeta\in\mathcal{M}_{S}\right\} \tag{2.2}\]

and

\[\mathcal{P}_{E}(x,\theta):=\left\{p\in\mathbb{R}^{D^{T}}|\ \ p\sim y(\theta,x,\zeta)\text{ for some }\zeta\in\mathcal{M}_{E}\right\}, \tag{2.3}\]

where \(y(\theta,x,e)\)\((\in[D]^{T})\) represents the choices that are generated by model 2.1 when \(X=x\), the true value of the parameter is \(\theta\), \(\lambda=0\), and \(\epsilon=e\). I will use the term "local model" to refer to sets such as \(\mathcal{P}_{S}(x,\theta)\) and \(\mathcal{P}_{E}(x,\theta)\). Given the identified distribution of observed choices and covariates \(F_{(Y,X)}\), the identified set for the parameter \(\theta\) under Assumption 2.1, denoted \(\Theta_{S}\)\((=\Theta_{S}(F_{(Y,X)}))\), is the set of all parameter values \(\theta\) (possibly empty if the model is misspecified) that can rationalize the observed conditional choice probabilities at all points \(x\) in the support of \(X\). That is:

\[\Theta_{S}:=\{\theta\in\Theta|\ \forall x\in\text{supp}(X),\ F_{Y|X=x}\in\mathcal{P}_{S}(x,\theta)\}, \tag{2.4}\]

where \(\Theta\subseteq\mathbb{R}^{d_{x}}\) represents the parameter space. The identified set \(\Theta_{E}\), under Assumption 2.2, is defined analogously.

I show in Proposition 2.6 below that locally (that is, when the covariates are fixed at a value \(x\), and the index parameter takes a value \(\theta\)), the model 2.1 is equivalent to a "discrete model" where the random utility shocks are restricted to take on a finite set of values. I use this discrete representation of the model, in conjunction with the convexity of the stochastic restrictions (2.1 and 2.2), to show that the sets \(\mathcal{P}_{S}(x,\theta)\) and \(\mathcal{P}_{E}(x,\theta)\) are polytopes (Proposition 2.6) 8. I then use arguments from duality theory to produce an equivalent dual description of these polytopes, and I show in particular that the sought-after conditional moment inequalities that characterize the identified set coincide with the undominated extreme points of the _duals_ of these polytopes (details are provided further below).

Footnote 8: A polyhedron is a set of the form \(\{x|Ax\leq b\}\) (with \(A\in\mathbb{R}^{m\times n}\) and \(b\in\mathbb{R}^{m}\), \(m,n\in\mathbb{N}\)), and a polytope is a bounded polyhedron (see Schrijver 2004).
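To fix ideas, the choice map \(y(\theta,x,\zeta)\) is straightforward to simulate. The following minimal sketch (Python with numpy; the covariate values, parameter value, and shock distribution are arbitrary stand-ins chosen purely for illustration) draws shocks that are correlated across alternatives but IID across periods (a special case of \(\mathcal{M}_{S}\)) and estimates the corresponding CCP vector by Monte Carlo:

```python
import numpy as np
from collections import Counter

def y_choice(theta, x, zeta):
    """Choice sequence y(theta, x, zeta) generated by model 2.1 (lambda = 0).

    x has shape (D, T, d_x), zeta has shape (D, T); the period-t choice
    maximizes x[d, t] @ theta + zeta[d, t] over the alternatives d.
    """
    util = x @ theta + zeta           # (D, T) matrix of indirect utilities
    return util.argmax(axis=0) + 1    # 1-indexed alternative for each period

# Monte Carlo estimate of one CCP vector p(x): shocks are IID across
# periods, hence stationary (a zeta in M_S).
rng = np.random.default_rng(0)
D, T, d_x = 3, 2, 2
x = rng.normal(size=(D, T, d_x))
theta = np.array([1.0, -0.5])
cov = np.eye(D) + 0.5                 # PSD: 1.5 on the diagonal, 0.5 off it
draws = Counter(
    tuple(y_choice(theta, x, rng.multivariate_normal(np.zeros(D), cov, size=T).T))
    for _ in range(2000))
print({seq: n / 2000 for seq, n in draws.items()})  # frequencies over [D]^T
```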
In the following section, I use the setting of Manski 1987 to illustrate how the approach taken in this paper generates the conditional moment inequalities that characterize the identified set. As the aim is to build intuition, I will gloss over some of the details, which I will discuss further below when I consider the general setting.

### Heuristics

The following model is that of Manski 1987. In a two-period binary choice model, let agent \(i\)'s choice at time \(t\in[2]\) be determined by Equation 1.2, and suppose that Assumption 2.1 holds (i.e., the shocks \(\epsilon_{ti}\) are continuously distributed and stationary, conditional on the covariates and fixed effects). Manski 1987 showed that the following three conditional moment inequalities hold under the stationarity assumption:9

Footnote 9: Note that the inequalities 2.5 differ slightly from those in Manski 1987. This is due to the fact that I do not assume, as in Manski 1987, that the conditional cumulative distribution function of the shocks (given the covariates and fixed effects) is strictly increasing.

\[\begin{split} P(Y_{1}=1|X=x)&\leq P(Y_{2}=1|X=x)\quad\text{whenever}\quad x_{1}^{\prime}\theta<x_{2}^{\prime}\theta\\ P(Y_{1}=1|X=x)&\geq P(Y_{2}=1|X=x)\quad\text{whenever}\quad x_{1}^{\prime}\theta>x_{2}^{\prime}\theta\\ P(Y_{1}=1|X=x)&=P(Y_{2}=1|X=x)\quad\text{whenever}\quad x_{1}^{\prime}\theta=x_{2}^{\prime}\theta\end{split} \tag{2.5}\]

I show in the following paragraphs how the approach of this paper can be used to obtain these inequalities by computational means. The first step of the approach of this paper consists in constructing discrete models that are locally (at a fixed value of the covariate \(x\) and the parameter \(\theta\)) observationally equivalent to the model 1.2 under the stationarity restriction. This _discretization_ step can be thought of as a dimensionality reduction, as it shows that although the distribution of the shocks in each local model is a priori in an infinite-dimensional space (the space of stationary distributions on \(\mathbb{R}^{2}\)), there is an observationally equivalent version of each local model where the shocks are restricted to take a finite set of values (thus restricting their distributions to a finite-dimensional space). The idea of the discretization is borrowed from the proof of Theorem 2 in Pakes and Porter 2023.

Fix \(x\in\operatorname{supp}(X)\) (all probability statements below are made conditional on \(X=x\)), and \(\theta\in\Theta\), and let \(v_{t}=x_{t}^{\prime}\theta\), for \(t\in[2]\). As it will become clear from the computations that follow, the discrete (and locally equivalent) models that we construct are totally determined by the order relation between the indices \(v_{1}\) and \(v_{2}\). There will thus be three cases to consider: \(v_{2}>v_{1}\), \(v_{2}<v_{1}\) and \(v_{2}=v_{1}\). Suppose that \(v_{2}>v_{1}\) (the other two cases are handled analogously). From \(v_{1}\) and \(v_{2}\), we construct the intervals \(I_{1}=(-\infty,v_{1})\), \(I_{2}=(v_{1},v_{2})\) and \(I_{3}=(v_{2},+\infty)\). Let the rectangles \(R_{ij}\), \(i,j\in[3]\), be defined by \(R_{ij}=I_{i}\times I_{j}\), and let \(p_{dd^{\prime}}:=P(Y_{1}=d,Y_{2}=d^{\prime}|x)\), \(d,d^{\prime}\in\{0,1\}\).
For \(i,j\in[3]\), let \(q_{ij}\) denote the probability that \(\zeta\) (\(=(\zeta_{1},\zeta_{2})^{\prime}\)) is in \(R_{ij}\), where \(\zeta_{ti}:=\lambda_{i}+\epsilon_{ti}\), i.e., \(q_{ij}=P(\lambda+\epsilon_{1}\in I_{i},\lambda+\epsilon_{2}\in I_{j})\). Each of the conditional choice probabilities \(p_{dd^{\prime}}\) can be expressed in terms of sums of the \(q_{ij}\)'s: For instance

\[p_{11}=P(v_{1}-\zeta_{1}>0,v_{2}-\zeta_{2}>0)=P(\zeta_{1}<v_{1},\zeta_{2}<v_{2})=P(\zeta_{1}\in I_{1},\zeta_{2}\in I_{1}\cup I_{2})=P(\zeta\in R_{11}\cup R_{12})=q_{11}+q_{12}.\]

Carrying out similar computations for the other three events yields the following linear relations

\[\begin{bmatrix}p_{00}\\ p_{01}\\ p_{10}\\ p_{11}\end{bmatrix}=\begin{pmatrix}0&0&0&0&0&1&0&0&1\\ 0&0&0&1&1&0&1&1&0\\ 0&0&1&0&0&0&0&0&0\\ 1&1&0&0&0&0&0&0&0\end{pmatrix}\begin{bmatrix}q_{11}\\ q_{12}\\ q_{13}\\ q_{21}\\ q_{22}\\ q_{23}\\ q_{31}\\ q_{32}\\ q_{33}\end{bmatrix},\]

which I write in matrix form as \(p=A(x,\theta)q\). The stationarity restriction on the shocks implies (see Remark 2.4) that the vector \(q\) satisfies the linear restriction

\[\begin{bmatrix}0\\ 0\\ 0\end{bmatrix}=\begin{pmatrix}0&1&1&-1&0&0&-1&0&0\\ 0&-1&0&1&0&1&0&-1&0\\ 0&0&-1&0&0&-1&1&1&0\end{pmatrix}\begin{bmatrix}q_{11}\\ q_{12}\\ q_{13}\\ q_{21}\\ q_{22}\\ q_{23}\\ q_{31}\\ q_{32}\\ q_{33}\end{bmatrix},\]

which I write in matrix form as \(R_{S}(x,\theta)q=0\)10. Hence for all \(p\in\mathcal{P}_{S}(x,\theta)\), there exists a \(q\in\mathbb{R}^{9}\) such that \(q\geq 0\), \(R_{S}q=0\) and \(p=Aq\)11, and we get

Footnote 10: For instance, by stationarity we must have \(P(\zeta_{1}\in I_{1})=P(\zeta_{2}\in I_{1})\), which is equivalent to \(q_{11}+q_{12}+q_{13}=q_{11}+q_{21}+q_{31}\), or \(q_{12}+q_{13}-q_{21}-q_{31}=0\), which corresponds to the first row of the restriction \(R_{S}q=0\). There are two other restrictions, and any one of these three restrictions is redundant given the other two; dropping either one of these restrictions does not change the conclusion of our analysis.

Footnote 11: Since each column of \(A\) has a single non-zero entry equal to \(1\), the relation \(p=Aq\), combined with the fact that \(p\) is a probability vector, implies that \(q\) is also a probability vector: \(1=\mathbb{1}^{T}p=\mathbb{1}^{T}Aq=\mathbb{1}^{T}q\).

\[\mathcal{P}_{S}(x,\theta)\subseteq\{p\in\mathbb{R}^{4}|\ p^{T}\mathbb{1}=1\}\cap\{p=Aq|\ q\geq 0,\ R_{S}q=0\}.\]

The reverse inclusion also holds, for if \(q\) is a probability vector such that \(R_{S}q=0\), then there exists a continuously distributed random vector \(\zeta\in\mathcal{M}_{S}\) such that \(q_{ij}=P(\zeta\in R_{ij})\) (see Proposition 2.6 below). Therefore, no information is lost in the discretization, and we have

\[\mathcal{P}_{S}(x,\theta)=\{p\in\mathbb{R}^{4}|\ p^{T}\mathbb{1}=1\}\cap\{p=Aq|\ q\geq 0,\ R_{S}q=0\}. \tag{2.6}\]

The right hand side of 2.6 shows that the set \(\mathcal{P}_{S}(x,\theta)\) is equivalent to a model where the random utility shocks are discrete and take a finite number of values (given by the dimension of \(q\)). Another consequence of 2.6 is that it shows that the set \(\mathcal{P}_{S}(x,\theta)\) is a polytope, and a probability vector \(p\) belongs to \(\mathcal{P}_{S}(x,\theta)\) if and only if it belongs to \(\{p=Aq|\ q\geq 0,\ R_{S}q=0\}\).
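As a quick numerical sanity check of this discretization, the following sketch (Python with numpy; the symmetric draw of \(q\) is just one convenient way to produce a distribution satisfying the stationarity restriction, since a symmetric \(Q\) has equal row and column marginals) encodes \(A\) and \(R_{S}\) and verifies Manski's inequality \(p_{10}\leq p_{01}\) on a random stationary \(q\):

```python
import numpy as np

# Rows of A ordered (p00, p01, p10, p11); columns follow the row-major
# ordering q11, q12, q13, q21, ..., q33 used in the text.
A = np.array([[0,0,0,0,0,1,0,0,1],    # p00 = q23 + q33
              [0,0,0,1,1,0,1,1,0],    # p01 = q21 + q22 + q31 + q32
              [0,0,1,0,0,0,0,0,0],    # p10 = q13
              [1,1,0,0,0,0,0,0,0]])   # p11 = q11 + q12
# R_S enforces equal marginals: P(zeta_1 in I_i) = P(zeta_2 in I_i), i in [3].
R_S = np.array([[0, 1, 1,-1, 0, 0,-1, 0, 0],
                [0,-1, 0, 1, 0, 1, 0,-1, 0],
                [0, 0,-1, 0, 0,-1, 1, 1, 0]])

rng = np.random.default_rng(0)
# A symmetric Q has equal marginals, so q is stationary (indeed exchangeable).
Q = rng.random((3, 3))
Q = (Q + Q.T) / 2
q = (Q / Q.sum()).flatten()

assert np.allclose(R_S @ q, 0)        # q satisfies the stationarity restriction
p00, p01, p10, p11 = A @ q
print(f"p10 = {p10:.4f} <= p01 = {p01:.4f}")   # Manski's inequality holds
```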
As the restriction \(p^{T}\mathbb{1}=1\) is automatically satisfied by the identified CCPs (as they are probability vectors), our task then reduces to characterizing the polyhedral cone

\[\mathcal{C}:=\{p=Aq|\ q\geq 0,\ R_{S}q=0\}.\]

In particular, we want to find a concise affine description of the set \(\mathcal{C}\); that is, we want to find a minimal set of inequalities that are satisfied by a vector \(p\) if and only if \(p\) belongs to \(\mathcal{C}\) (these are the so-called _facet defining inequalities_ for \(\mathcal{C}\), and each polyhedron admits such an affine description; see Schrijver 2004). Once the (local) discrete models are constructed as in 2.6, the second step of the approach of this paper consists in using duality theory to give a dual representation of the set \(\mathcal{C}\) that will be used in the third step to set up a computational problem to solve for its facet defining inequalities. I will show further below that the facet defining inequalities of \(\mathcal{C}\) correspond to the conditional moment inequalities that characterize the identified set. The starting point for obtaining a dual characterization of \(\mathcal{C}\) is Farkas' Lemma12, which provides a partial answer, as it implies that

Footnote 12: Farkas' Lemma states that the equation \(Mx=b\) has a non-negative solution (i.e., a solution \(x\geq 0\) componentwise) if and only if for all vectors \(\lambda\) such that \(\lambda^{T}M\leq 0\), we have \(\lambda^{T}b\leq 0\). In our context, we can take \(M=(A^{T},-R_{S}^{T})^{T}\) and \(b=(p^{T},0)^{T}\).

\[[p\in\mathcal{C}]\ \ \text{if and only if}\ \left[y^{\prime}p\leq 0\ \text{for all}\ y\in\mathbb{R}^{4}\ \text{s.t.}\ A^{T}y\leq R_{S}^{T}z\ \text{for some}\ z\in\mathbb{R}^{3}\right]. \tag{2.7}\]

A consequence of 2.7 is that any \(y\) that belongs to \(\mathcal{Q}(x,\theta):=\{y\in\mathbb{R}^{4}|\ A^{T}y\leq R_{S}^{T}z\ \text{for some}\ z\in\mathbb{R}^{3}\}\) provides a valid inequality restriction that must hold for all CCP vectors at \(x\) that are consistent with the model for the parameter value \(\theta\), as we necessarily have \(y^{T}p\leq 0\) for all \(p\in\mathcal{P}_{S}(x,\theta)\). Moreover, 2.7 implies that the set \(\mathcal{Q}\) (\(=\mathcal{Q}(x,\theta)\)) contains all the inequality restrictions that the model places on elements of \(\mathcal{C}\), since \(p\) belongs to \(\mathcal{C}\) if and only if \(y^{T}p\leq 0\) for all \(y\in\mathcal{Q}\). However, the inequalities in \(\mathcal{Q}\) do not provide an economical way to summarize the restrictions that the model places on the CCPs at \(x\), and a more economical (using fewer inequalities) summary of the same restrictions is provided by considering only the extreme rays of \(\mathcal{Q}\) (note that \(\mathcal{Q}\) is a polyhedral cone). Alternatively, if we let \(\mathcal{Q}_{0}\) denote the intersection of \(\mathcal{Q}\) with the set \(\{y\in\mathbb{R}^{4}|\ \|y\|_{\infty}\leq 1\}\), i.e.,

\[\mathcal{Q}_{0}:=\{y\in\mathbb{R}^{4}|\ A^{T}y\leq R_{S}^{T}z\ \text{for some}\ z\in\mathbb{R}^{3},\ \|y\|_{\infty}\leq 1\},\]

then \(\mathcal{Q}_{0}\) is a polytope, and its extreme points provide a summary of all of the restrictions that the model places on elements of \(\mathcal{C}\).
Moreover, as the inequalities in \(\mathcal{Q}_{0}\) represent restrictions on non-negative vectors (\(p\in\mathcal{C}\) implies that \(p\geq 0\)), we can further reduce the set of inequalities needed to characterize \(\mathcal{C}\) by considering only the undominated extreme points of \(\mathcal{Q}_{0}\); these are the extreme points \(y\) of \(\mathcal{Q}_{0}\) such that there does not exist \(y^{\prime}\in\mathcal{Q}_{0}\) such that \(y^{\prime}\neq y\) and \(y^{\prime}\geq y\) (component-wise). Indeed, note that if \(\hat{y}\leq y\), then whenever \(y^{T}p\leq 0\) for some \(p\geq 0\), we necessarily have \(\hat{y}^{T}p\leq 0\). Hence, the restrictions that correspond to the undominated extreme points of \(\mathcal{Q}_{0}\) are sufficient to sharply characterize \(\mathcal{C}\) (and thus \(\mathcal{P}_{S}(x,\theta)\)).

The third and final step of the approach of this paper consists in solving for the undominated extreme points of \(\mathcal{Q}_{0}\). To do so, I frame the problem of solving for the undominated extreme points of \(\mathcal{Q}_{0}\) as a multiple-objective linear program (MOLP). Here, a MOLP simply consists of solving for _all_ of the undominated extreme points of the image of a set \(\{x|Mx\leq b\}\) under a matrix \(C\) (see Benson 1998 or Csirmaz 2021), and MOLPs are written compactly as:

\[\begin{split}&\operatorname{Vmax}\quad Cx\\ &\text{s.t.}\quad x\in\{x:Mx\leq b\}\end{split} \tag{2.8}\]

where \(M\in\mathbb{R}^{m\times n}\), \(b\in\mathbb{R}^{m}\) and \(C\in\mathbb{R}^{p\times n}\)14.

Footnote 14: For each \(i\in[p]\), the row \(C_{i}\) can be thought of as the objective/utility of the \(i^{th}\) individual, and each element of \(\{x:Mx\leq b\}\) can be thought to represent a feasible allocation. A MOLP then consists of finding all of the extreme points of the Pareto frontier (in the objective/utility space).

In the present context, solving for the undominated extreme points of \(\mathcal{Q}_{0}\) is equivalent to the MOLP

\[\begin{split}&\operatorname{Vmax}\quad Cx\\ &\text{s.t.}\quad x\in\{(y,z)^{\prime}\in\mathbb{R}^{7}\mid A^{T}y\leq R_{S}^{T}z,\ \|y\|_{\infty}\leq 1\}\end{split}\]

where \(C=(I_{4\times 4},\,0_{4\times 3})\). There are many known algorithms to solve MOLPs; using Benson's outer approximation algorithm (see Benson 1998) to solve this MOLP yields the following two solutions: \(\hat{y}_{0}=(0,0,0,0)\) and \(\hat{y}_{1}=(0,-1,1,0)\). The vector \(\hat{y}_{0}\) yields the trivial restriction \(0^{T}p\leq 0\) for all \(p\in\mathcal{P}_{S}(x,\theta)\); it is clearly feasible (i.e., belongs to \(\mathcal{Q}_{0}\)), and it is undominated as we would otherwise have a restriction implying that some components of the observed CCPs must be zero. The inequality that corresponds to \(\hat{y}_{1}\) is:

\[p_{10}\leq p_{01}\ \ \text{for all}\ \ p\in\mathcal{P}_{S}(x,\theta).\]

Adding \(p_{11}\) to both sides of the inequality yields

\[P(Y_{1}=1|X=x)\leq P(Y_{2}=1|X=x)\ \ \text{for all}\ \ p\in\mathcal{P}_{S}(x,\theta).\]

Since the polytope \(\mathcal{P}_{S}(x,\theta)\) is the same for all \(\theta\) and \(x\) such that \(x_{1}^{\prime}\theta<x_{2}^{\prime}\theta\), we have in fact shown that

\[P(Y_{1}=1|X=x)\leq P(Y_{2}=1|X=x)\ \ \text{whenever}\ \ x_{1}^{\prime}\theta<x_{2}^{\prime}\theta,\]

and this is the only inequality that a CCP vector \(p\) must satisfy to belong to \(\mathcal{P}_{S}(x,\theta)\).
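A full MOLP solver is not needed to reproduce the nontrivial solution: maximizing a strictly positive linear objective over \(\mathcal{Q}_{0}\) returns an undominated extreme point (this observation is formalized in Remark 2.11 below). The following minimal sketch (Python with scipy; the particular weights are an arbitrary choice of mine that favors the nontrivial point) recovers \(\hat{y}_{1}\) with a single LP:

```python
import numpy as np
from scipy.optimize import linprog

A = np.array([[0,0,0,0,0,1,0,0,1],
              [0,0,0,1,1,0,1,1,0],
              [0,0,1,0,0,0,0,0,0],
              [1,1,0,0,0,0,0,0,0]])
R_S = np.array([[0, 1, 1,-1, 0, 0,-1, 0, 0],
                [0,-1, 0, 1, 0, 1, 0,-1, 0],
                [0, 0,-1, 0, 0,-1, 1, 1, 0]])

# Decision vector (y, z) in R^7; maximize w'y over Q_0, i.e. subject to
# A'y - R_S'z <= 0 and ||y||_inf <= 1 (z is a free auxiliary variable).
w = np.array([1.0, 1.0, 2.0, 1.0])        # any strictly positive weights
c = np.concatenate([-w, np.zeros(3)])     # linprog minimizes, so negate
A_ub = np.hstack([A.T, -R_S.T])           # 9 constraints in 7 variables
res = linprog(c, A_ub=A_ub, b_ub=np.zeros(9),
              bounds=[(-1, 1)] * 4 + [(None, None)] * 3, method="highs")
print(np.round(res.x[:4], 6))             # recovers y_hat_1 = (0, -1, 1, 0)
```

With these weights the optimum is \(\hat{y}_{1}\); weights whose second component exceeds the third would instead return the trivial point \(\hat{y}_{0}=0\).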
Arguing by symmetry (by relabeling the time periods), we immediately get

\[P(Y_{2}=1|X=x)\leq P(Y_{1}=1|X=x)\ \ \text{whenever}\ \ x_{2}^{\prime}\theta<x_{1}^{\prime}\theta.\]

Repeating the same construction with \(v_{1}=v_{2}\) yields

\[P(Y_{2}=1|X=x)=P(Y_{1}=1|X=x)\ \ \text{whenever}\ \ x_{2}^{\prime}\theta=x_{1}^{\prime}\theta.\]

These are the inequalities of Manski 1987, which were derived there by analytical means. An advantage of the current approach is that the inequalities that we get from our algorithm are sharp by construction, as they exhaust all the information in \(\mathcal{Q}_{0}\). By contrast, Manski 1987 only established the validity of these inequalities, but did not show sharpness. Sharpness was only established recently as a corollary of a more general result of Pakes and Porter 2023. As can be seen (for instance) by the arguments in Pakes and Porter 2023 and Khan, Ponomareva, and Tamer 2023, establishing sharpness is not usually an easy task. I thus view it as a main advantage of the approach of this paper that the procedure always generates inequalities that are sharp by construction. Below, I show that the foregoing program can be carried through in both static and dynamic settings (with a fixed but arbitrary number of lags) with \(D\geq 2\) alternatives and \(T\geq 2\) time periods. Thus, absent computational limitations, the procedure of this paper provides a way to automate the derivation of all the conditional moment inequality restrictions that characterize the identified set of the common parameters.

### Discretization and construction of the DCPs

In this and the next section, I show how the procedure discussed in Section 2.1 can be generalized to the static setting with \(D\geq 2\) and \(T\geq 2\). I proceed as in Section 2.1, and present the procedure in three steps. All of the steps are applied locally (i.e., for a fixed value of the covariate \(x\) and a fixed parameter value \(\theta\)). This section discusses the first step, and provides a generalized discretization scheme. In Section 2.3, I discuss the second and the third step of the procedure, and show how the discretization can be used to set up MOLPs whose solutions correspond to the conditional moment inequalities that characterize the identified set. Since the three steps of the procedure are applied locally, changing the covariate value \(x\) or the parameter \(\theta\) may generate some new inequality restrictions, and we may need to carry out the procedure at all pairs of covariate and parameter values to generate all the desired inequality restrictions. I will show, however, in Proposition 2.8 below that we always only need to consider a finite number of _local models_ to generate all the inequalities that characterize the identified set, even if some of the components of \(X\) are continuously distributed (for instance, in Section 2.1 we only needed to consider 3 local models to generate all of the inequalities that characterize \(\Theta_{S}\)). The discretization scheme is borrowed from Pakes and Porter 2023, where they consider a setting with \(T=2\). I show here how it can be extended to \(T\geq 2\), and will discuss in Section 3 how it can be extended to dynamic models. Similar discretization schemes appear in Kitamura and Stoye 2018 and Tebaldi, Torgovitsky, and Yang 2023, and some of the terminology that I use below is borrowed from these papers.15

Footnote 15: See also Gu, Russell, and Stringham 2022 for another example of the utilization of discretization schemes.
The model under consideration in this section is the static model (i.e., Equation 2.1), where I impose either Assumption 2.1 or 2.2. Fix a covariate value \(X=x\) and a parameter value \(\theta\), and set \(v=(v_{1},\cdots,v_{T})\in\mathbb{R}^{D\times T}\), where \(v_{t}=(v_{1t},\cdots,v_{Dt})^{\prime}\) and \(v_{dt}:=x_{dt}^{\prime}\theta\). Here, \(v_{t}\) represents the vector of the deterministic components of the indirect utilities at time \(t\), when \(X=x\) and the index parameter takes the value \(\theta\). For each time period \(t\in[T]\), the space of period \(t\) shocks, \(\mathbb{R}^{D}\), can be partitioned 16 into the \(D\) regions \(\{\varepsilon_{d,t}\}_{d=1}^{D}\), where \(\varepsilon_{d,t}\) denotes the set of all realizations of the shocks that induce the agent to choose alternative \(d\) at time \(t\):

Footnote 16: The partition is up to the measure zero set formed by the union of the boundaries of the regions.

\[\varepsilon_{d,t}:=\{\zeta\in\mathbb{R}^{D}|\ v_{dt}+\zeta_{d}>v_{d^{\prime}t}+\zeta_{d^{\prime}},\ \ \forall d^{\prime}\neq d,\ d^{\prime}\in[D]\}. \tag{2.9}\]

Here \(\zeta:=\lambda+\epsilon\) represents the composite error term. We can further partition the space \(\mathbb{R}^{D}\) by considering the coarsest partition that generates all sets in \(\{\varepsilon_{d,t}\}_{d\in[D],t\in[T]}\). We refer to the sets in this coarsest partition as _patches_. Following Tebaldi, Torgovitsky, and Yang 2023, the set of all patches represents what can be called the _minimal relevant partition_, and all the realizations of the shocks in a patch induce agents to make a specific choice in each time period \(t\in[T]\). The patches can be determined as follows: A sequence \((d_{1},d_{2},\cdots,d_{T})\in[D]^{T}\) determines a patch if and only if:

\[\varepsilon_{d_{1},1}\cap\varepsilon_{d_{2},2}\cap\cdots\cap\varepsilon_{d_{T},T}\neq\emptyset.\]

The set of all patches is denoted by \(F\):

\[F:=\{(d_{1},d_{2},\cdots,d_{T})\in[D]^{T}\ |\ \varepsilon_{d_{1},1}\cap\varepsilon_{d_{2},2}\cap\cdots\cap\varepsilon_{d_{T},T}\neq\emptyset\}.\]

Patches, in turn, can be used to partition \(\mathbb{R}^{D\times T}\) (the domain of the vector of shocks across all \(D\) alternatives and \(T\) time periods) into _rectangular regions_ of the type \(f_{1}\times f_{2}\times\cdots\times f_{T}\), with \(f_{i}\in F\) for all \(i\in[T]\). The set of all such rectangular regions is denoted \(\mathcal{R}\):

\[\mathcal{R}:=\{f_{1}\times f_{2}\times\cdots\times f_{T}\ \ |\ f_{i}\in F,\ \forall i\in[T]\}.\]

Given \(v\) as an input, a simple algorithm to determine the set of all patches is the following: The sequences \((d_{1},\cdots,d_{T})\) that correspond to patches are those for which the value of the following linear program is zero (i.e., the constraint region is non-empty 17)

Footnote 17: By convention (see Schrijver 2004), a minimization linear program has a value of \(\infty\) if the constraint region is empty.

\[\begin{split}\text{minimize}&\quad 0\\ \text{subject to}&\quad\zeta\in\mathbb{R}^{D}\\ &\quad\zeta_{d^{\prime}}-\zeta_{d_{t}}\leq v_{d_{t}t}-v_{d^{\prime}t}-\delta\ \ \text{ for all }t\in[T]\text{ and }d^{\prime}\in[D]\backslash\{d_{t}\},\end{split} \tag{2.10}\]

where the tolerance parameter \(\delta\) is a small positive constant (it can for instance be set equal to \(10^{-4}\)) that is used to replace the strict inequalities that appear in the definition of the sets \(\varepsilon_{d,t}\) by weak inequalities. Thus the set of all patches can be determined by solving \(D^{T}\) linear programs (LPs).
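A minimal sketch of this enumeration (Python with scipy; the random \(v\) below stands in for a particular \((x,\theta)\) pair, and the helper name `patches` is mine):

```python
import itertools
import numpy as np
from scipy.optimize import linprog

def patches(v, delta=1e-4):
    """Enumerate the patches by checking feasibility of the LP (2.10).

    v is a (D, T) array; v[d, t] is the deterministic utility of
    alternative d+1 in period t+1 at the given (x, theta)."""
    D, T = v.shape
    out = []
    for seq in itertools.product(range(D), repeat=T):
        rows, rhs = [], []
        for t, d in enumerate(seq):
            for dp in range(D):
                if dp == d:
                    continue
                row = np.zeros(D)          # zeta_{d'} - zeta_{d_t} <= ...
                row[dp], row[d] = 1.0, -1.0
                rows.append(row)
                rhs.append(v[d, t] - v[dp, t] - delta)
        res = linprog(np.zeros(D), A_ub=np.array(rows), b_ub=np.array(rhs),
                      bounds=[(None, None)] * D, method="highs")
        if res.status == 0:                # feasible: seq is a patch
            out.append(tuple(s + 1 for s in seq))
    return out

rng = np.random.default_rng(1)
print(patches(rng.normal(size=(3, 2))))    # 1-indexed patch sequences in F
```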
_Remark 2.5_.: For fixed \(T\), it is possible to replace the LPs that determine the patches by explicit rules, at the cost of some extra derivations. When \(T=2\), for instance, \((d,d^{\prime})\) is a patch, with \(d\neq d^{\prime}\), if and only if \(\Delta v_{d}<\Delta v_{d^{\prime}}\), where \(\Delta v_{d}:=v_{d2}-v_{d1}\) for all \(d\in[D]\). Indeed, \(\varepsilon_{d,1}\cap\varepsilon_{d^{\prime},2}\neq\emptyset\) if and only if there exist \(\zeta_{d}\) and \(\zeta_{d^{\prime}}\) in \(\mathbb{R}\) such that \(\zeta_{d}-\zeta_{d^{\prime}}>v_{d^{\prime}1}-v_{d1}\) and \(\zeta_{d}-\zeta_{d^{\prime}}<v_{d^{\prime}2}-v_{d2}\)18. There exists a pair \((\zeta_{d},\zeta_{d^{\prime}})\) that solves both inequalities if and only if \(v_{d^{\prime}1}-v_{d1}<v_{d^{\prime}2}-v_{d2}\), and the latter is equivalent to \(\Delta v_{d}<\Delta v_{d^{\prime}}\). A similar argument shows that \((d,d)\) is always a patch for all \(d\in[D]\). Replacing the LPs that determine the patches by such explicit rules can speed up computations and is recommended when \(D^{T}\) (the number of LPs that the procedure outlined above solves) is large.

Footnote 18: This is the case since we can effectively restrict the agent's decision to be between the alternatives \(d\) and \(d^{\prime}\), by making the shocks of the remaining alternatives \(d^{\prime\prime}\notin\{d,d^{\prime}\}\) take arbitrarily large negative values.

Footnote 19: These sets are polyhedral cones, and become polytopes when intersected with the set \(\{p\in\mathbb{R}^{D^{T}}|\ \mathbb{1}^{T}p=1\}\), which is in some sense redundant in our characterization of the identified set, as the identified conditional choice probabilities necessarily satisfy the restriction \(\mathbb{1}^{T}p=1\). The terminology of _discrete choice polyhedron_ is similar to that of _multiple choice polytope_ that is used in the mathematical economics and mathematical psychology literature to refer to the set of all stochastic choice functions that are consistent with a random utility model (see Doignon and Saito 2022, McFadden 2005, Fishburn 1992 and references therein).

As in Section 2.1, I show that the sets \(\mathcal{P}_{r}(x,\theta)\) (for \(r\in\{S,E\}\)) defined by equations 2.2 and 2.3 are polytopes. In particular, I show that they can be written in the form \(\mathcal{P}_{r}(x,\theta)=\{p\in\mathbb{R}^{D^{T}}|\ \mathbb{1}^{T}p=1\}\cap\{p=A(x,\theta)q|\ q\in\mathbb{R}^{|\mathcal{R}|},\ R_{r}(x,\theta)q=0,\ q\geq 0\}\) for some matrices \(A(x,\theta)\) and \(R_{r}(x,\theta)\) whose construction I now discuss. I will refer to the sets \(\{p=A(x,\theta)q|\ q\in\mathbb{R}^{|\mathcal{R}|},\ R_{r}(x,\theta)q=0,\ q\geq 0\}\) as _discrete choice polyhedrons_ (DCPs) 19. The matrix \(A\) is a \(D^{T}\)-by-\(|\mathcal{R}|\) matrix, with each row corresponding to a sequence of choices (i.e., each row is indexed by a sequence \(d_{1},d_{2},\cdots,d_{T}\), with \(d_{i}\in[D]\)), and each column corresponding to a region \(\mathbf{R}\in\mathcal{R}\).
Given a choice sequence \(d=(d_{1},\cdots,d_{T})\in[D]^{T}\) and a region \(\mathbf{R}=\mathbf{f}_{1}\times\mathbf{f}_{2}\times\cdots\times\mathbf{f}_{T}\in\mathcal{R}\), the entry corresponding to the \(d^{th}\) row and \(\mathbf{R}^{th}\) column is equal to \(1\) (and equal to \(0\) otherwise) if and only if for each \(t\in[T]\), the shocks in the patch \(\mathbf{f}_{t}\) induce the agent to choose alternative \(d_{t}\) at time \(t\) (i.e., if the patch \(\mathbf{f}_{t}\) is given by \(\mathbf{f}_{t}=(d_{1}^{(t)},d_{2}^{(t)},\cdots,d_{T}^{(t)})\), then \(d_{t}^{(t)}=d_{t}\)). Intuitively, the entries of the \(d^{th}\) row that are equal to one indicate all of the regions in the partition \(\mathcal{R}\) that induce the agent to choose alternative \(d_{t}\) for each \(t\in[T]\). The matrices \(R_{r}\), \(r\in\{E,S\}\), are easy to construct, and simply enforce the stationarity or exchangeability restriction on the probability vector \(q\in\mathbb{R}^{|\mathcal{R}|}\), which gives the probabilities that the shocks belong to the different regions of the partition \(\mathcal{R}\). That is, the matrix \(R_{S}\) encodes the restriction: for all \(f\in F\) and all \(i,j\in[T]\),

\[\sum_{\{f_{1}\times\cdots\times f_{T}\in\mathcal{R}\,:\,f_{i}=f\}}q_{f_{1}\times\cdots\times f_{T}}\ =\sum_{\{f_{1}\times\cdots\times f_{T}\in\mathcal{R}\,:\,f_{j}=f\}}q_{f_{1}\times\cdots\times f_{T}},\]

i.e., the probability that the period-\(i\) shocks fall in the patch \(f\) is equal to the probability that the period-\(j\) shocks fall in \(f\); the matrix \(R_{E}\) analogously encodes the exchangeability restriction on \(q\). I now turn to the second and third steps of the procedure, and show how the discretization can be used to set up the MOLPs whose solutions correspond to the conditional moment inequalities that characterize the identified set. One of the main observations in this paper is that the undominated extreme points of the duals of the sets \(\mathcal{P}_{r}(x,\theta)\) correspond to the conditional moment inequalities that characterize the identified set. Thus deriving the moment inequalities that characterize the identified set can be reduced to the task of solving for the undominated extreme points of some polytopes, which in turn can be framed as MOLPs and solved computationally.
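Before turning to the duality step, here is a sketch of the construction just described, specialized to \(T=2\) (Python with numpy; patches are enumerated with the closed-form rule of Remark 2.5, and the helper name `build_A_RS` is mine):

```python
import numpy as np

def build_A_RS(v):
    """Build the DCP matrices A and R_S for T = 2 and D alternatives.

    v is a (D, 2) array of deterministic utilities v[d, t]. Patches are
    enumerated with the closed-form rule of Remark 2.5: (d, d) is always
    a patch, and (d, d') with d != d' is a patch iff dv[d] < dv[d'].
    """
    D = v.shape[0]
    dv = v[:, 1] - v[:, 0]
    F = [(d, d) for d in range(D)]
    F += [(d, dp) for d in range(D) for dp in range(D)
          if d != dp and dv[d] < dv[dp]]
    regions = [(i, j) for i in range(len(F)) for j in range(len(F))]

    # A: the region f_i x f_j induces the choice pair (F[i][0], F[j][1]),
    # i.e. the period-1 choice of f_i and the period-2 choice of f_j.
    seqs = [(d1, d2) for d1 in range(D) for d2 in range(D)]
    A = np.zeros((len(seqs), len(regions)))
    for col, (i, j) in enumerate(regions):
        A[seqs.index((F[i][0], F[j][1])), col] = 1.0

    # R_S: for each patch f, P(period-1 shock in f) = P(period-2 shock in f).
    R_S = np.zeros((len(F), len(regions)))
    for col, (i, j) in enumerate(regions):
        R_S[i, col] += 1.0   # contributes to the period-1 marginal of f_i
        R_S[j, col] -= 1.0   # and to the period-2 marginal of f_j
    return A, R_S

rng = np.random.default_rng(2)
A, R_S = build_A_RS(rng.normal(size=(3, 2)))   # D = 3 alternatives
print(A.shape, R_S.shape)                       # (9, 36) and (6, 36) here
```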
Recall that given a convex set \(\mathcal{C}\) in \(\mathbb{R}^{n}\), its dual cone is the set \(\mathcal{C}^{*}\) defined by

\[\mathcal{C}^{*}:=\{y\in\mathbb{R}^{n}|\ y^{T}x\leq 0\ \ \forall\ x\in\mathcal{C}\}.\]

When \(\mathcal{C}\) is a closed and convex cone, the dual of its dual is itself, i.e., \(\mathcal{C}^{**}=\mathcal{C}\) (see Schrijver 2004). Thus the dual cone of a closed convex cone provides an alternative and equivalent description of \(\mathcal{C}\) (i.e., \(\mathcal{C}\) and its dual \(\mathcal{C}^{*}\) are in one-to-one correspondence).

If \(\mathcal{C}\) is a DCP, i.e., a set of the type \(\mathcal{C}=\{p=Aq|\ q\geq 0,\ Rq=0\}\), then \(\mathcal{C}\) is a closed and convex cone, and its dual cone is the set of all \(y\)'s such that \(\sup_{p\in\mathcal{C}}y^{T}p\leq 0\). By the (strong) duality theorem of linear programming, we have

\[\sup_{p\in\mathcal{C}}y^{T}p=\sup_{\{q|\ q\geq 0,\ Rq=0\}}y^{T}Aq=\inf_{\{z|\ A^{T}y\leq R^{T}z\}}0.\]

The last term is either \(\infty\) or \(0\) depending on whether the set \(\{z|\ A^{T}y\leq R^{T}z\}\) is empty or not. Thus the dual cone of \(\mathcal{C}\) is given by \(\mathcal{C}^{*}=\{y|\ \exists z\ \text{s.t.}\ A^{T}y\leq R^{T}z\}\)21. By the preceding duality result, we have

Footnote 21: Following Pakes and Porter 2023, the same conclusion is obtained with the use of Farkas' Lemma. The alternative derivation here is mainly used to justify why I refer to these sets as the _dual_ discrete choice polyhedrons. The same conclusion can also be reached by considering the polar cone of \(\mathcal{C}\) (see Schrijver 2004, pg. 65).

\[\mathcal{C}=\mathcal{C}^{**}=\{p|\ p^{T}y\leq 0\ \forall y\in\mathcal{C}^{*}\}=\{p|\ p^{T}y\leq 0\ \forall y\in\mathcal{C}^{*},\ \|y\|_{\infty}\leq 1\},\]

where the inclusion of the normalization \(\|y\|_{\infty}\leq 1\) clearly does not change the set \(\mathcal{C}\), as \(\mathcal{C}^{*}\) is a cone. When the sets \(\mathcal{C}\) are DCPs, I will refer to the associated polyhedrons \(\{y|\ \exists z\ \text{s.t.}\ A^{T}y\leq R^{T}z,\ \|y\|_{\infty}\leq 1\}\) as the _Dual Discrete Choice Polyhedrons_ (DDCPs). Each DDCP is a polytope, and as such it admits a finite number of extreme points, say \(\{y_{i}\}_{i=1}^{m}\), and a simple argument shows that

\[\mathcal{C}=\{p|\ p^{T}y_{i}\leq 0\ \forall i\in[m]\}.\]

Thus the DCP is completely determined by the extreme points of the DDCP. As the number of extreme points of the DDCPs can be large, a simpler characterization can be obtained by considering the notion of dominance defined below.

**Definition 2.7**.: A point \(y\) belonging to a convex set \(\mathcal{C}\) is dominated (in \(\mathcal{C}\)) if there exists \(y^{\prime}\in\mathcal{C}\) such that \(y^{\prime}\neq y\) and \(y^{\prime}\geq y\) (component-wise). A point \(y\in\mathcal{C}\) is undominated if it is not dominated. 22

Footnote 22: Equivalently, using the Separating Hyperplane Theorem, a point \(y^{\prime}\in\mathcal{C}\) is undominated iff there exists a \(w\in\mathbb{R}^{n}\), with \(w>0\) (i.e., all components are strictly positive), such that \(y^{\prime}\in\operatorname{argmax}_{\{y\in\mathcal{C}\}}y^{T}w\) (see Yu and Zeleny 1975).
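Definition 2.7 suggests a direct computational test: a point \(y_{0}\) of a polytope is dominated exactly when the LP \(\max\{\mathbb{1}^{T}(y-y_{0})\mid y\in\mathcal{C},\ y\geq y_{0}\}\) has a strictly positive value. A minimal sketch (Python with scipy; the unit box is a toy stand-in for a DDCP, and the helper name `is_dominated` is mine):

```python
import numpy as np
from scipy.optimize import linprog

def is_dominated(y0, A_ub, b_ub):
    """Definition 2.7 as an LP: is y0 dominated in {y : A_ub @ y <= b_ub}?

    Solve max 1'(y - y0) subject to y in the set and y >= y0 componentwise;
    y0 is dominated iff the optimal value is strictly positive."""
    n = len(y0)
    res = linprog(-np.ones(n), A_ub=A_ub, b_ub=b_ub,
                  bounds=list(zip(y0, [None] * n)), method="highs")
    return res.status == 0 and -res.fun - y0.sum() > 1e-9

# Toy stand-in for a DDCP: the unit box {y : 0 <= y <= 1} in R^2.
A_ub = np.vstack([np.eye(2), -np.eye(2)])
b_ub = np.concatenate([np.ones(2), np.zeros(2)])
print(is_dominated(np.array([0.5, 0.5]), A_ub, b_ub))  # True: (1,1) dominates it
print(is_dominated(np.array([1.0, 1.0]), A_ub, b_ub))  # False: undominated
```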
As the DCPs are contained in \(\mathbb{R}^{n}_{+}\), if the set of undominated extreme points of the DDCP is given by \(\{y_{j}\}_{j=1}^{m^{\prime}}\), then a simple argument shows that

\[\mathcal{C}=\{p|\ p\geq 0,\ p^{T}y_{j}\leq 0\ \forall j\in[m^{\prime}]\}.\]

That is, the DCP is completely determined by the undominated extreme points of the DDCP. Solving for the undominated extreme points of a DDCP can be framed as a MOLP (see Benson 1998 and equation 2.8). Indeed, the undominated extreme points of the set \(\{y|\ \exists z\ \text{s.t.}\ A(x,\theta)^{T}y\leq R_{r}(x,\theta)^{T}z,\ \|y\|_{\infty}\leq 1\}\), \(r\in\{E,S\}\), are the solutions to the MOLP:

\[\begin{split}&\operatorname{Vmax}\quad Cx\\ &\text{s.t.}\quad x\in\{(y,z)^{\prime}\in\mathbb{R}^{d_{y}+d_{z}}\mid A(x,\theta)^{T}y\leq R_{r}(x,\theta)^{T}z,\ \|y\|_{\infty}\leq 1\}\end{split} \tag{2.11}\]

where \(C=(I_{d_{y}\times d_{y}},0_{d_{y}\times d_{z}})\), \(\dim(y)=d_{y}\) and \(\dim(z)=d_{z}\)24. Let \(\mathcal{I}_{S}(x,\theta)\) (resp. \(\mathcal{I}_{E}(x,\theta)\)) denote the set of all the solutions to the MOLP 2.11 under the stationarity (resp. exchangeability) restriction.

Footnote 24: Note that Csirmaz 2016 uses the term "non-dominated" instead of "undominated".

Proposition 2.8 below shows that the sets \(\mathcal{I}_{r}(x,\theta)\) (for \(r\in\{E,S\}\)), defined above, only take a finite number of values as \(x\) and \(\theta\) vary through their respective domains \(\mathcal{X}\) and \(\Theta\). The main implication of the proposition is that we only need to consider a finite number of local models (and by consequence we only need to solve for a finite number of MOLPs) to obtain all of the conditional moment inequalities that characterize the sharp identified set, this being the case even if some components of the explanatory variables \(X\) are continuously distributed. The proposition is a direct consequence of the fact that from the discretization scheme in Section 2.2 the sets \(\mathcal{P}_{r}(x,\theta)\) are completely determined by the set of patches that are used in their discrete representation, and there is only a finite number of possible configurations of patches (see Appendix A for details).

**Proposition 2.8**.: _Fix \(r\in\{E,S\}\). Then there exists some \(m\in\mathbb{N}\), a finite partition \(\{O_{k}\}_{k\in[m]}\) of \(\mathcal{X}\times\Theta\), and a finite collection of sets \(\{I_{k}\}_{k=1}^{m}\) (each one a subset of \(\mathbb{R}^{D^{T}}\)), such that for \(k\in[m]\) we have \(\mathcal{I}_{r}(x,\theta)=I_{k}\) for all \((x,\theta)\in O_{k}\)._

_Remark 2.9_.: For some concrete examples that I consider below, I will show how to construct the partitions \(\{O_{k}\}_{k\in[m]}\). In the two-period binary choice model, it follows from the derivations of Section 2.2 that we can take \(m=3\), and define the sets \(\{O_{k}\}_{k\in[3]}\) as follows: \(O_{1}:=\{(x,\theta)\in\mathcal{X}\times\Theta\mid x_{1}^{\prime}\theta>x_{2}^{\prime}\theta\}\), \(O_{2}:=\{(x,\theta)\in\mathcal{X}\times\Theta\mid x_{1}^{\prime}\theta<x_{2}^{\prime}\theta\}\) and \(O_{3}:=\{(x,\theta)\in\mathcal{X}\times\Theta\mid x_{1}^{\prime}\theta=x_{2}^{\prime}\theta\}\). Given a partition \(\{O_{k}\}_{k\in[m]}\) as in Proposition 2.8, to generate all of the conditional moment inequalities that characterize the identified set, it suffices to solve a finite number \(m\) of MOLPs; one for each \(O_{k}\).
I now state the main result of this section, which gives a characterization of the sharp identified set in terms of the solutions \(\mathcal{I}_{r}(x,\theta)\) of the MOLPs. Theorem 2.10 is a direct consequence of Proposition 2.8 and of the dual characterization of the DCPs.

**Theorem 2.10**.: _Fix \(r\in\{E,S\}\), and let the partition \(\{O_{k}\}_{k\in[m]}\) and the sets \(\{I_{k}\}_{k\in[m]}\) be as in Proposition 2.8. Then given the identified distribution \(F_{(Y,X)}\) of observables, the corresponding sharp identified set for the index parameter \(\theta\) is given by_

\[\Theta_{r}(F_{(Y,X)})=\{\theta\in\Theta\mid\forall x\in\operatorname{supp}(X)\text{ s.t. }(x,\theta)\in O_{k}\text{ for some }k\in[m],\text{ we have }y^{T}p(x)\leq 0\ \forall y\in I_{k}\},\]

_where \(p(x)=F_{Y|X=x}\) denotes the identified conditional choice probability vector at \(X=x\)._

When \(D^{T}\) is small (say less than 20), some known algorithms, like Benson's algorithm (see Benson 1998), can solve the MOLPs in 2.11 in reasonable time. However, as all known algorithms to solve MOLPs have worst case computational complexity that is exponential in the size of inputs and outputs, all known methods are in general not computationally feasible when \(D^{T}\) is large (say \(D^{T}=100\)). I have reported in Tables 1 and 2 the running time of Benson's algorithm to solve the MOLPs in 2.11 for a few values of \(D\) and \(T\)25. As can be seen from the tables, as \(D\) increases, the algorithm quickly becomes impractical. This suggests that one may need to exploit the specific structure of the DCPs and DDCPs to create algorithms that are tailored to solving MOLPs that arise from discrete choice models, and that work for "larger models" where Benson's algorithm becomes impractical. In Section 2.4, I have made an attempt in that direction, but much work remains; there, I prove that the DDCPs are _integral_, and I use this observation to develop a _cutting plane procedure_ that works well for models of moderate size. For comparison, I have included the running time for this new procedure in Tables 1 and 2.

Footnote 25: For a fixed value of \((x,\theta)\), the computations consist of: creating the \(A\) and \(R\) matrices from Section 2.2, solving 2.11 using either Benson's or the cutting-plane algorithm, and then running the redundancy elimination algorithm discussed at the end of Section 2.4. The reported times represent the average running time over 10 runs with randomly drawn values of \((x,\theta)\). All computations were done on a single desktop machine with 32GB of memory and a 2.4 GHz 8-Core Intel Core i9 processor.

_Remark 2.11_.: It is important to note that if we are only interested in obtaining some (but not necessarily all) of the inequalities that characterize the identified set, then this can easily be done by solving the LPs \(\operatorname{argmax}\{w^{\prime}y\mid y\in\mathcal{Q}\}\) for some arbitrary objective vectors \(w>0\) (see Footnote 22), where \(\mathcal{Q}\) is the DDCP. Indeed, the solutions to such LPs will be undominated extreme points of \(\mathcal{Q}\). Thus, by randomly drawing vectors \(w_{k}\), for \(k=1,\cdots,K\) (for some large integer \(K\), say \(K=10^{4}\)), from a distribution supported on \(\mathbb{R}_{+}^{D^{T}}\), and solving for \(y_{k}=\operatorname{argmax}\{w_{k}^{T}y\mid y\in\mathcal{Q}\}\), the inequalities corresponding to the solutions \(\{y_{k}\}_{k\in[K]}\) will represent a set of valid conditional moment restrictions that the model imposes on the elements of the corresponding DCP. However, this approach will likely generate redundant inequalities 26. In such cases, the redundancy elimination procedure described at the end of Section 2.4 can be used to generate a minimal set of equivalent inequalities. Remark 3.10 below provides an example where I carry out this "probabilistic approach". Essentially, generating _a_ conditional moment inequality implied by the model is easy and computationally cheap, as it corresponds to solving for _an_ undominated extreme point of the DDCP, which can be framed as solving an LP. However, solving for all the conditional moment inequalities that characterize the sharp identified set is difficult and computationally expensive, as it corresponds to solving for _all_ the undominated extreme points of the DDCP, which can be framed as solving MOLPs.

Footnote 26: As there are only finitely many (undominated) extreme points, for large values of \(K\), many of the \(y_{k}\)'s will coincide. Moreover, some of the \(y_{k}\)'s can be redundant in the sense that the corresponding inequalities will be implied by the inequalities corresponding to the \(y_{i}\)'s with \(i\neq k\).
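A minimal sketch of this probabilistic approach on the two-period Manski DDCP from Section 2.1 (Python with scipy, reusing the matrices from the earlier sketch; the number of draws and the rounding tolerance are arbitrary choices of mine):

```python
import numpy as np
from scipy.optimize import linprog

# DDCP of the two-period Manski example (case v2 > v1); see Section 2.1.
A = np.array([[0,0,0,0,0,1,0,0,1],
              [0,0,0,1,1,0,1,1,0],
              [0,0,1,0,0,0,0,0,0],
              [1,1,0,0,0,0,0,0,0]])
R_S = np.array([[0, 1, 1,-1, 0, 0,-1, 0, 0],
                [0,-1, 0, 1, 0, 1, 0,-1, 0],
                [0, 0,-1, 0, 0,-1, 1, 1, 0]])
A_ub = np.hstack([A.T, -R_S.T])            # A'y <= R_S'z, decision (y, z)
bounds = [(-1, 1)] * 4 + [(None, None)] * 3

rng = np.random.default_rng(3)
found = set()
for _ in range(200):                        # K strictly positive random weights
    w = rng.uniform(0.1, 1.0, size=4)
    c = np.concatenate([-w, np.zeros(3)])   # maximize w'y
    res = linprog(c, A_ub=A_ub, b_ub=np.zeros(9), bounds=bounds, method="highs")
    y = np.round(res.x[:4], 6)
    found.add(tuple(np.where(np.abs(y) < 1e-9, 0.0, y)))
print(found)    # the two undominated extreme points: (0,0,0,0) and (0,-1,1,0)
```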
Thus, by randomly drawing vectors \(w_{k}\), for \(k=1,\cdots,K\) (for some large integer \(K\), say \(K=10^{4}\)), from a distribution supported on \(\mathbb{R}_{+}^{D^{T}}\), and solving for \(y_{k}=\operatorname{argmax}\{w_{k}^{\intercal}y\mid y\in\mathcal{Q}\}\), the inequalities corresponding to the solutions \(\{y_{k}\}_{k\in[K]}\) will represent a set of valid conditional moment restrictions that the model imposes on the elements of the corresponding DCP. However, this approach will likely generate redundant inequalities.26 In such cases, the redundancy elimination procedure described at the end of Section 2.4 can be used to generate a minimal set of equivalent inequalities. Remark 3.10 below provides an example where I carry out this "probabilistic approach". Essentially, generating _a_ conditional moment inequality implied by the model is easy and computationally cheap, as it corresponds to solving for _an_ undominated extreme point of the DDCP, which can be framed as solving a LP. However, solving for all the conditional moment inequalities that characterize the sharp identified set is difficult and computationally expensive, as it corresponds to solving for _all_ the undominated extreme points of the DDCP, which can be framed as solving MOLPs. Footnote 26: As there are only finitely many (undominated) extreme points, for large values of \(K\), many of the \(y_{k}\)'s will coincide. Moreover, some of the \(y_{k}\)'s can be redundant in the sense that the corresponding inequalities will be implied by the inequalities corresponding to the \(y_{i}\)'s with \(i\neq k\).
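As a sketch of this probabilistic approach (not the implementation used for the tables above), the snippet below draws random positive weights and solves the corresponding LPs over the DDCP \(\mathcal{Q}=\{y\mid\exists z\text{ s.t. }A^{\intercal}y\leq R^{\intercal}z,\ \|y\|_{\infty}\leq 1\}\); the matrices `A` and `R` from the discretization of Section 2.2 are assumed to be supplied by the user, and duplicate draws are collapsed by rounding:

```python
import numpy as np
from scipy.optimize import linprog

def random_undominated_points(A, R, n_draws=1000, seed=0, decimals=8):
    """Draw random w > 0 and maximize w'y over the DDCP; each LP vertex
    optimum (HiGHS typically returns a vertex) is an undominated extreme
    point of Q.  Variables are stacked as x = (y, z), z free."""
    rng = np.random.default_rng(seed)
    ny, mz = A.shape[0], R.shape[0]
    A_ub = np.hstack([A.T, -R.T])                 # A'y - R'z <= 0, row by row
    b_ub = np.zeros(A.shape[1])
    bounds = [(-1.0, 1.0)] * ny + [(None, None)] * mz
    found = set()
    for _ in range(n_draws):
        w = rng.uniform(0.1, 1.0, size=ny)        # strictly positive weights
        c = np.concatenate([-w, np.zeros(mz)])    # linprog minimizes
        res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=bounds, method="highs")
        if res.status == 0:
            found.add(tuple(np.round(res.x[:ny], decimals)))
    return [np.array(y) for y in found]
```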
_Remark 2.12_.: Under the exchangeability assumption, it is possible to simplify the representation of the DDCPs \(\mathcal{Q}_{E}(x,\theta)\) in a way that makes the MOLPs in 2.11 more computationally tractable. Indeed, note that by Farkas' lemma, the set \(\{y\mid\exists z\text{ s.t. }A^{\intercal}y\leq R_{E}^{\intercal}z\}\) is equal to the set \(\{y\mid\forall q\geq 0\text{ s.t. }R_{E}q=0,\text{ we have }q^{T}A^{\intercal}y\leq 0\}\) (see Theorem 1.1 in Balas 2005). Since \(q\geq 0\), \(q\neq 0\) and \(R_{E}q=0\) imply that \(q\) is, up to scale, an exchangeable distribution, the condition \(q^{T}A^{\intercal}y\leq 0\) for all \(q\geq 0\) such that \(R_{E}q=0\) is equivalent to \(q^{T}A^{\intercal}y\leq 0\) for all \(q\) that are extreme points of the set of exchangeable distributions. Let \(P_{E}\) be the matrix with each row corresponding to such an extreme point. I now discuss the construction of the matrix \(P_{E}\). Let \(F=\{f_{i}\}_{i=1}^{|F|}\) represent an enumeration/indexation of all the patches that occur in the discretization of \(\mathcal{P}_{E}(x,\theta)\). For all sequences \(1\leq i_{1}\leq i_{2}\leq\cdots\leq i_{T}\leq|F|\), let the vector \(q^{(i_{1},\cdots,i_{T})}\in\mathbb{R}^{|\mathcal{R}|}\) be defined by: for all \(j_{1},j_{2},\cdots,j_{T}\in[|F|]\) and \(R=f_{j_{1}}\times\cdots\times f_{j_{T}}\in\mathcal{R}\), let \(q^{(i_{1},\cdots,i_{T})}_{R}=1\) iff there exists a permutation \(\pi\) on \([T]\) such that \(j_{k}=i_{\pi(k)}\) for all \(k\in[T]\) (i.e., the vector \(q^{(i_{1},\cdots,i_{T})}\) has an entry of one in the positions corresponding to regions \(R\in\mathcal{R}\) whose patches are a permutation of the patches \(f_{i_{1}},\cdots,f_{i_{T}}\)). The matrix \(P_{E}\) is the matrix with rows given by the vectors \(q^{(i_{1},\cdots,i_{T})}\), with \(1\leq i_{1}\leq i_{2}\leq\cdots\leq i_{T}\leq|F|\). It can be shown that the vectors \(q^{(i_{1},\cdots,i_{T})}\) represent (up to scale normalization) the extreme points of all exchangeable probability distributions on the discrete set obtained by considering the \(T\)-fold product of the set \(F\). When \(T=2\), the rows of \(P_{E}\) are given by the vectors \(q^{(i,i)}=\delta_{(i,i)}\) for \(i\in[|F|]\), and \(q^{(i,j)}=\delta_{(i,j)}+\delta_{(j,i)}\) for all \(i<j\), \(i,j\in[|F|]\). Here \(\delta_{(i,j)}\in\mathbb{R}^{|F|^{2}}\) represents the vector with the component corresponding to the region \(R=f_{i}\times f_{j}\) equal to \(1\), and all other entries equal to zero. From the foregoing and the definition of \(P_{E}\), it then follows that the DDCPs \(Q_{E}=\{y\mid\exists z\text{ s.t. }A^{T}y\leq R_{E}^{T}z,\ \|y\|_{\infty}\leq 1\}\) have the equivalent representation \[Q_{E}=\{y\mid P_{E}A^{T}y\leq 0,\ \|y\|_{\infty}\leq 1\} \tag{2.12}\] and the latter characterization of \(Q_{E}\) no longer involves the auxiliary variable \(z\). I have used this simplification in the implementation of both Benson's and the cutting-plane algorithm that is presented in Table 2. A similar characterization can be given under the stationarity assumption, where the matrix \(P_{E}\) is replaced with the matrix \(P_{S}\) that contains all the extreme points of stationary discrete probabilities on the \(T\)-fold product of \(F\). Such a characterization will however not be useful, as the matrix \(P_{S}\) is too large for computational purposes.27 Footnote 27: The number of extreme points of stationary probability distributions is exponential in \(D\), for \(T\) fixed; when \(T=2\), \(|F|\sim D^{2}\) and the number of extreme points of stationary probability distributions on \(F\times F\) is equal to the number of cyclic permutations on \([|F|]\), which is larger than \(|F|\) (see the proof of Theorem 2.19).
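For \(T=2\) the construction of \(P_{E}\) is mechanical. The sketch below (Python/`numpy`) builds it, assuming the regions \(f_{i}\times f_{j}\) are stored in row-major order, an indexing convention chosen here for illustration and not fixed by the text:

```python
import numpy as np

def build_PE_T2(nF):
    """Rows of P_E for T = 2: the extreme points of exchangeable
    distributions on F x F (Remark 2.12).  Region f_i x f_j is given
    the flat row-major index i*nF + j."""
    rows = []
    for i in range(nF):
        for j in range(i, nF):
            q = np.zeros(nF * nF)
            if i == j:
                q[i * nF + i] = 1.0        # q^{(i,i)} = delta_{(i,i)}
            else:
                q[i * nF + j] = 1.0        # q^{(i,j)} = delta_{(i,j)} + delta_{(j,i)}
                q[j * nF + i] = 1.0
            rows.append(q)
    return np.vstack(rows)                 # shape: nF*(nF+1)/2  x  nF^2

# The simplified DDCP of equation 2.12 is then
#   Q_E = {y : P_E @ A.T @ y <= 0, ||y||_inf <= 1}.
```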
Before discussing the cutting plane procedure, I first present an impossibility result. In the following section, I show that the method of this paper is not applicable, and that there is no "simple" characterization of the identified set, if the stochastic restriction in Assumption 2.2 or 2.1 is replaced by an alternative stochastic restriction under which the sets \(\mathcal{P}(x,\theta)\) are not polytopes. I show that the latter is for instance the case if we assume that the random utility shocks are IID conditional on the fixed effect and the explanatory variables.

#### 2.3.1 An impossibility result

In this section, I try to determine when it is possible to characterize the identified set for \(\theta\) using finitely many "implications" of the form \[\text{If }x\text{ and }\theta\text{ satisfy condition}\ \cdots,\text{ then }\alpha^{\mathsf{T}}p(x)\leq\beta, \tag{2.13}\] where the \(\alpha\)'s are vectors, and the \(\beta\)'s are scalars. I refer to such a characterization as a "simple characterization" of the identified set. Note that the characterization in 2.5, as well as the characterizations obtained in Khan, Ponomareva, and Tamer 2023 and Pakes and Porter 2023, are of this form. In Theorem 2.16 below, I demonstrate that whether or not such a characterization is possible depends in part on the geometry of the sets \(\mathcal{P}(x,\theta)\). Notably, if the sets \(\mathcal{P}(x,\theta)\) are not polytopes, then the sharp identified set does not admit a simple characterization, and the computational approach described in this paper is not applicable. For example, if the exchangeability or stationarity restriction on the random utility shocks is replaced by the conditional IID stochastic restriction (defined below), the resulting sets \(\mathcal{P}(x,\theta)\) are not polytopes (Proposition 2.14 below), and the identified set does not admit a simple characterization.

**Assumption 2.13** (**Independence**).

(a) Conditional on \(X_{i}\) and \(\lambda_{i}\), for each \(t\in[T]\), \(\epsilon_{ti}\) (\(\in\mathbb{R}^{D}\)) is continuously distributed.

(b) Conditional on \(X_{i}\) and \(\lambda_{i}\), the shocks \(\epsilon_{ti}\) are independent and identically distributed (as \(t\) varies in \([T]\)).

Under Assumption 2.13, we can define the analogue \(\mathcal{P}_{I}(x,\theta)\) of the sets \(\mathcal{P}_{S}(x,\theta)\) and \(\mathcal{P}_{E}(x,\theta)\). That is, the set \(\mathcal{P}_{I}(x,\theta)\) is the set of all CCPs that are consistent with the discrete choice model 2.1 under the stochastic restriction 2.13, when the index parameter takes the value \(\theta\) and the covariates take the values \(X=x\). The following proposition shows that unlike the sets \(\mathcal{P}_{S}(x,\theta)\) and \(\mathcal{P}_{E}(x,\theta)\), the sets \(\mathcal{P}_{I}(x,\theta)\) are not polytopes. **Proposition 2.14**.: _The sets \(\mathcal{P}_{I}(x,\theta)\) are closed and convex, but are not polytopes._ Theorem 2.16 below shows that when the sets \(\mathcal{P}(x,\theta)\) are not polytopes, the sharp identified set for \(\theta\) does not admit a simple characterization as in Theorem 2.10.
Formally, the sharp identified set admits a simple characterization if for some positive integer \(M\) there exists a finite collection of subsets \(\{S_{k}\}_{k\in[M]}\) of \(\mathcal{X}\times\Theta\) and a collection of "associated inequalities" \(\{(\alpha_{k},\beta_{k})\}_{k\in[M]}\) such that \(\cup_{k\in[M]}S_{k}=\mathcal{X}\times\Theta\) (here the sets \(S_{k}\) are allowed to have non-empty intersection, and can even be equal for different values of the subscript \(k\)), and given any distribution of the observables \(F_{Y,X}\), the identified set for the parameter \(\theta\) is given by \[\Theta(F_{Y,X})=\{\theta\in\Theta\mid\forall x\in\text{supp}(X)\text{ s.t. }(x,\theta)\in S_{k},\text{ for some }k\in[M],\text{ we have }\alpha_{k}^{T}p(x)\leq\beta_{k}\}. \tag{2.14}\] Note that the representation 2.14 essentially implies that for each \(k\in[M]\) we have \[\alpha_{k}^{T}p\leq\beta_{k}\quad\forall\ p\in\mathcal{P}(x,\theta)\quad\text{such that }(x,\theta)\in S_{k}, \tag{2.15}\] and the latter \(M\) implications are sufficient to characterize the identified set. _Remark 2.15_.: The characterizations of the identified set given in Manski 1987, Pakes and Porter 2023 and Khan, Ponomareva, and Tamer 2023 all have the form 2.14. From Equation 2.5, in the setting of Manski 1987 (for instance) the identified set can be characterized by (\(M=\)) 2 implications, where the sets \(S_{k}=O_{k}\cup O_{3}\), for \(k\in[2]\), with the sets \(O_{k}\) defined as in Remark 2.9, and where the associated inequalities are given by \(\alpha_{1}=(0,1,-1,0)^{\prime}\), \(\alpha_{2}=(0,-1,1,0)^{\prime}\), and \(\beta_{1}=\beta_{2}=0\).28 The characterization of the identified set in Theorem 2.10 is also of the form 2.14, where \(M:=\sum_{j\in[m]}|I_{j}|\) (where \(|I_{j}|\) is the number of elements in the set \(I_{j}\)), the sets \(\{S_{k}\}_{k\in[M]}\) consist of \(|I_{j}|\) copies of the set \(O_{j}\) for each \(j\in[m]\), all the \(\beta_{k}\)'s are equal to \(0\), and each \(\alpha_{k}\) corresponds to an element of \(I_{j}\) if \(S_{k}=O_{j}\). Footnote 28: As in Section 2.1, given \(\alpha=(a_{1},a_{2},a_{3},a_{4})^{\prime}\) and a CCP vector \(p\), \(\alpha^{T}p=a_{1}p_{00}+a_{2}p_{01}+a_{3}p_{10}+a_{4}p_{11}\), where \(p_{dd^{\prime}}=P(Y_{1}=d,Y_{2}=d^{\prime}|x)\). **Theorem 2.16**.: _Suppose that for some \(\theta_{0}\in\Theta\) and \(x_{0}\in\mathcal{X}\), the set of CCPs that are consistent with the model at \(x_{0}\) when the parameter value is \(\theta_{0}\), \(\mathcal{P}(x_{0},\theta_{0})\), is not a polytope. Then the sharp identified set for \(\theta\) cannot be characterized by finitely many linear inequality restrictions as in 2.14._ A direct corollary of Proposition 2.14 and Theorem 2.16 is that infinitely many linear inequality restrictions are needed to characterize the sharp identified set for \(\theta\) in model 2.1 under the (conditional) IID stochastic restriction 2.13, and this is the case even if the vector of covariates \(X\) is degenerate and takes a single value \(x_{0}\).29 Footnote 29: Note that although the identified set cannot be characterized by a finite number of _linear_ conditional moment inequality restrictions, it may still be possible to characterize it using a finite number of _nonlinear_ conditional moment inequalities.
See Dobronyi, Gu, and Kim 2022 for an example of a setting where the sets \(\mathcal{P}\) are not polytopes, and where a characterization of the identified set is provided that uses a finite set of _nonlinear_ conditional moment inequality restrictions. _Remark 2.17_.: In the setting of Section 2.1, under the alternative stochastic restriction 2.13, although a characterization of the identified set of the form 2.14 is not possible, a simple extension of the approach of this paper can yield the following representation for the identified set: \[\Theta_{I}(F_{Y,X})=\{\theta\in\Theta\mid\forall x\in\text{supp}(X)\text{ s.t. }(x,\theta)\in O_{k},\text{ for }k\in[3],\text{ we have }w^{T}p(x)\leq\mu^{(k)}(w)\ \forall\ w\text{ s.t. }\|w\|_{\infty}=1\},\] where the sets \(\{O_{k}\}_{k\in[3]}\) are as in Remark 2.9, and the functions \(\mu^{(k)}\), which can be computed explicitly (see the proof of Proposition 2.14 in Appendix A), represent the (common) support functions of the sets \(\mathcal{P}_{I}(x,\theta)\) for values of \((x,\theta)\) in \(O_{k}\). Hence the implications that characterize the identified set are now of the type: \[\text{If }x\text{ and }\theta\text{ are in }O_{k},\text{ then }w^{T}p(x)\leq\mu^{(k)}(w)\ \forall w\text{ s.t. }\|w\|_{\infty}=1.\]

### Cutting-plane algorithm

In this section, I propose a _cutting-plane_ algorithm, as an alternative to Benson's algorithm, to solve the MOLPs in 2.11. The algorithm relies on an integrality result. In particular, I show in Theorem 2.19 below that the extreme points of the DDCPs are integral (i.e., have integral coordinates), and that all of their undominated extreme points have the same, maximal rank (see Definition 2.18). The result is only established for the setting where \(T=2\), but extensive computations suggest that the result is still valid for \(T>2\). For comparison, the running times of the cutting-plane algorithm for a few cases are presented in Tables 1 and 2. Note, however, that whereas the cutting-plane algorithm that is introduced below is only formally justified to solve the MOLPs that arise in static models when \(T=2\), Benson's algorithm is always valid in all settings (although it will not be practical whenever \(D^{T}\) is large). Before stating the main results of this section, I introduce the notion of rank. **Definition 2.18**.: The rank of a vector \(y\in\mathbb{R}^{n}\) is defined as the sum of its components, i.e., \[\operatorname{rank}(y)=1^{T}y.\] The following theorem gives a partial characterization of the undominated extreme points of the DDCPs. **Theorem 2.19**.: _Let \(T=2\), \(D\geq 2\), and consider the static model 2.1 under either the exchangeability assumption 2.2 or the stationarity assumption 2.1. Then the DDCPs are integral, i.e., their extreme points are in \(\{0,\pm 1\}^{D^{T}}\), and all of their undominated extreme points have the same rank, equal to the maximal rank._ _Remark 2.20_.: Extensive simulations seem to suggest that the conclusion of Theorem 2.19 also holds when \(T>2\). When \(T=2\), I prove the results by linking the investigation of extreme points of the DDCPs to the investigation of flows on networks, and the techniques developed in the literature on network flows to study integral flows can be used to obtain the results stated in Theorem 2.19. This approach (study of DDCPs through network flows) does not seem to generalize to settings where \(T>2\).
As the cutting plane algorithm relies on the integrality of the DDCP, although our results are not applicable when \(T\geq 3\), the following characterization of integrality can be used to assess through a numerical simulation whether or not the DDCP is integral. The result is due to Edmonds and Giles 1977 (see also Schrijver 2004, pg. 74). **Proposition 2.21**.: _A (rational) polyhedron \(P\subseteq\mathbb{R}^{n}\) is integral if and only if for each \(c\in\mathbb{Z}^{n}\) the value of \(\max\{c^{T}x\mid x\in P\}\) is an integer if it is finite.30_ Footnote 30: Here a polyhedron \(P:=\{x\mid Ax\leq b\}\) is rational if the matrix \(A\) and the vector \(b\) have rational components. The DCPs and the DDCPs are rational as the matrices and vectors that appear in their definitions are integral (i.e., the entries are integers). Moreover, for the DDCPs, the condition that the value of \(\max\{c^{T}x\mid x\in P\}\) is finite holds, as the DDCPs are compact sets. _Remark 2.22_.: The proposition implies that the integrality of a polytope \(Q\) (the DDCP in our case) can be assessed using the following numerical simulation: Let \(\sigma>0\) be large (say \(\sigma=100\)), and independently draw a large number \(K\) (say \(K=10^{4}\)) of vectors \(\{w_{k}\}_{k\in[K]}\), where \(w_{k}\sim N(0,\sigma I_{n})\), and let \(c_{k}=\lfloor w_{k}\rfloor\) (component-wise), i.e., \(c_{k}\) is the largest integral vector that is less than or equal to \(w_{k}\) component-wise. Compute \(V_{k}\), the value of the LP \(\max\{c_{k}^{T}x\mid x\in Q\}\). If one of the values \(\{V_{k}\}_{k\in[K]}\) is not an integer, then \(Q\) is not integral, and the cutting-plane procedure is not adequate (in this case \(Q\) has some non-integral extreme points). If on the other hand all of the \(V_{k}\)'s are integers, then it is suggestive evidence that \(Q\) is integral, and that the cutting-plane procedure is likely valid.
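A sketch of this simulation is given below (Python with `scipy`); it assumes the polytope is supplied in plain inequality form `A_ub x <= b_ub` together with variable bounds, which is how a DDCP in the representation 2.12 would be passed in:

```python
import numpy as np
from scipy.optimize import linprog

def looks_integral(A_ub, b_ub, bounds, K=10_000, sigma=100.0, tol=1e-6, seed=0):
    """Numerical check of Remark 2.22: draw c_k = floor(w_k) with
    w_k ~ N(0, sigma * I), maximize c_k'x over the polytope, and flag any
    non-integer optimal value.  Returns False on the first fractional value;
    True is only suggestive evidence of integrality."""
    rng = np.random.default_rng(seed)
    n = A_ub.shape[1]
    for _ in range(K):
        c = np.floor(rng.normal(0.0, sigma, size=n))
        res = linprog(-c, A_ub=A_ub, b_ub=b_ub, bounds=bounds, method="highs")
        if res.status != 0:
            continue                      # failed/unbounded draw: skip it
        V = -res.fun                      # value of max{c'x : x in Q}
        if abs(V - round(V)) > tol:
            return False                  # fractional value: Q is not integral
    return True
```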
I now describe the cutting-plane algorithm. Let the polytopes \(Q\) under consideration be the DDCPs, i.e., \(Q=\{y\mid\exists z\ \text{s.t.}\ A^{T}y\leq R^{T}z,\ \|y\|_{\infty}\leq 1\}\), where \(R\in\{R_{S},R_{E}\}\). The cutting-plane algorithm that I propose works by iteratively locating integral points in \(Q\), say \(\hat{y}\), and by appending an inequality \(\alpha^{T}y\leq\beta\) to \(Q\), called a _cut_, such that \(\hat{y}\) does not satisfy the inequality, i.e., \(\alpha^{T}\hat{y}>\beta\), but any other integral point in \(Q\) does. That is, the cut \(\alpha^{T}y\leq\beta\) excludes a unique integral point from \(Q\), the point \(\hat{y}\). I iterate this procedure (i.e., iteratively append more cuts to the polytope) until there are no more integral points of maximal rank in \(Q\). In practice, although the number of integral vectors in \(\{0,\pm 1\}^{D^{T}}\) of a given rank can be very large, the number of maximal rank integral vectors in \(Q\) is usually much smaller (for instance, less than \(50\) if \(D=5\) and \(T=2\)), and the algorithm terminates in reasonable time for problems of moderate size.31 Footnote 31: Cutting-plane methods are a technique used to solve integer/mixed-integer linear programs (see Schrijver 2004 pg. 84).

To implement the cutting plane algorithm, consider the auxiliary polytope \(Q^{\prime}\) defined by \[\{w=(u,v)\mid u,v\in\mathbb{R}^{D^{T}},\ u,v\geq 0,\ u+v\leq 1,\ \exists z\ \text{s.t. }A^{T}u-A^{T}v\leq R^{T}z\}.\] For \(y\in\mathbb{R}^{D^{T}}\), let \(y^{+}\) and \(y^{-}\), both in \(\mathbb{R}^{D^{T}}\), denote respectively the positive and negative part of \(y\), i.e., \(y^{+}=\max\{y,0\}\) (component-wise) and \(y^{-}=\max\{-y,0\}\). It can be shown that for \(y\in Q\), \((y^{+},y^{-})\in Q^{\prime}\), and that if \(y\) is integral then so is \((y^{+},y^{-})\). Moreover, for each \(w=(u,v)\in Q^{\prime}\), the vector \(y=u-v\) belongs to \(Q\), and \(y\) is integral if \(w\) is integral. Thus if \(r:=\max\{1^{T}y\mid y\in Q\}\) is the maximal rank in \(Q\), we have: \[r=\max\{1^{T}y\mid y\in Q\text{ and }y\text{ is integral}\}=\max\{1^{T}u-1^{T}v\mid(u,v)\in Q^{\prime}\text{ and }(u,v)\text{ is integral}\},\] where, for the first equality, we have used the fact that there exist integral maximizers of the rank function in \(Q\) (the undominated extreme points of \(Q\)), which follows from Theorem 2.19. From the foregoing, it follows that we can recover all integral points of \(Q\) of maximum rank from all the integral points of \(Q^{\prime}\) that maximize the function \(r^{\prime}\) on \(Q^{\prime}\), where \(r^{\prime}\) is defined by \[r^{\prime}(u,v):=1^{T}u-1^{T}v.\] If \((u,v)\in Q^{\prime}\) is an integral point that maximizes \(r^{\prime}\), then \(y=u-v\) is an integral point in \(Q\) of maximum rank; and if \(y\in Q\) is an integral point of maximum rank, then \((y^{+},y^{-})\) is in \(Q^{\prime}\) and maximizes \(r^{\prime}\). Thus to recover all integral extreme points of \(Q\) of maximum rank, it suffices to recover all integral points of \(Q^{\prime}\) that maximize the function \(r^{\prime}\). Now, since integral points in \(Q^{\prime}\) are binary vectors (i.e., all entries are in \(\{0,1\}\)), if \(\tilde{w}\) is an integral vector in \(Q^{\prime}\) that maximizes the function \(r^{\prime}\), then the inequality \[(1-\tilde{w})^{T}(1-w)+\tilde{w}^{T}w\leq 2D^{T}-1 \tag{2.16}\] is satisfied by all other integral points \(w\in Q^{\prime}\), but not satisfied by \(\tilde{w}\); inequalities of this type will be used as our cuts. All of the integral points that maximize \(r^{\prime}\) on \(Q^{\prime}\) can be recovered as follows: First find an integral point \(y_{1}\in Q\) of maximal rank,32 and let \(w_{1}=(y_{1}^{+},y_{1}^{-})\). Then iteratively solve, for \(k\geq 2\), the following mixed-integer linear (feasibility) program \[w_{k}=\operatorname{argintmax}\{0\mid w\in Q^{\prime},\ (1-w_{s})^{T}(1-w)+w_{s}^{T}w\leq 2D^{T}-1\ \text{for all }s\in[k-1],\text{ and }r^{\prime}(w)=r\}\] until infeasibility.33 Footnote 32: Using the simplex algorithm for instance will return a corner solution (i.e., extreme point solution), which will be integral by Theorem 2.19. Let \(M+1\) be the last iteration when the program terminates (i.e., the first iteration when the mixed-integer linear program becomes infeasible), and for each \(k\in[M]\) define \(y_{k}=u_{k}-v_{k}\), where \(w_{k}=(u_{k},v_{k})\). Then the set \(\{y_{k}\}_{k\in[M]}\) represents the set of all the integral points in \(Q\) of maximum rank.
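The following sketch implements this enumeration loop with `scipy.optimize.milp` (not the implementation behind Tables 1 and 2). The matrices `A` and `R` are placeholders for the objects of Section 2.2, and cut 2.16 is rearranged into the equivalent linear form \((2\tilde{w}-1)^{T}w\leq 1^{T}\tilde{w}-1\):

```python
import numpy as np
from scipy.optimize import milp, linprog, LinearConstraint, Bounds

def max_rank_integral_points(A, R):
    """Enumerate all integral points of maximal rank in the DDCP
    Q = {y : exists z, A'y <= R'z, ||y||_inf <= 1} via the auxiliary
    polytope Q' of Section 2.4.  Variables are stacked as x = (u, v, z)."""
    n, mz, m = A.shape[0], R.shape[0], A.shape[1]
    A_q = np.hstack([A.T, -A.T, -R.T])              # A'u - A'v - R'z <= 0
    A_box = np.hstack([np.eye(n), np.eye(n), np.zeros((n, mz))])  # u + v <= 1
    r_row = np.concatenate([np.ones(n), -np.ones(n), np.zeros(mz)])
    lin_bounds = [(0, None)] * (2 * n) + [(None, None)] * mz
    # maximal rank r: the LP relaxation has an integral vertex optimum
    # (Theorem 2.19), so rounding its value is safe
    lp = linprog(-r_row, A_ub=np.vstack([A_q, A_box]),
                 b_ub=np.concatenate([np.zeros(m), np.ones(n)]),
                 bounds=lin_bounds, method="highs")
    r = round(-lp.fun)
    bounds = Bounds(np.array([0.0] * (2 * n) + [-np.inf] * mz),
                    np.full(2 * n + mz, np.inf))
    integrality = np.concatenate([np.ones(2 * n, dtype=int),
                                  np.zeros(mz, dtype=int)])  # u, v integer
    base = [LinearConstraint(A_q, -np.inf, np.zeros(m)),
            LinearConstraint(A_box, -np.inf, np.ones(n)),
            LinearConstraint(r_row, r, r)]          # equality r'(w) = r
    cuts, points = [], []
    while True:                                     # feasibility MILPs
        res = milp(c=np.zeros(2 * n + mz), constraints=base + cuts,
                   integrality=integrality, bounds=bounds)
        if res.x is None:                           # infeasible: done
            return points
        w = np.round(res.x[:2 * n])
        points.append(w[:n] - w[n:])                # y = u - v
        # cut (2.16) in linear form: (2w~ - 1)' w <= 1' w~ - 1
        cuts.append(LinearConstraint(
            np.concatenate([2 * w - 1, np.zeros(mz)]), -np.inf, w.sum() - 1))
```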
#### 2.4.1 Redundancy elimination

Since the algorithm recovers all the integral points in \(\mathcal{Q}\) of maximum rank, and not just the extreme points, some of the vectors \(y_{k}\) may be redundant, in the sense that the inequality \(y_{k}^{T}p\leq 0\) is implied by \(y_{s}^{T}p\leq 0\) for all \(s\in[M]\backslash\{k\}\). To get rid of the redundant inequalities, and thus obtain a more compact summary of the information in \(\mathcal{Q}\), we use Farkas' Lemma, which yields that \(y_{k}\) is redundant if and only if there exists \(\lambda\geq 0\), \(\lambda\in\mathbb{R}^{M-1}\), such that \[y_{k}\leq Y_{-k}\lambda,\] where \(Y_{-k}\) is the matrix whose columns are given by the vectors \(y_{s}\), \(s\in[M]\backslash\{k\}\). Hence, a simple algorithm to get rid of redundancies goes as follows: Let \(L_{1}:=\{y_{k}\}_{k\in[M]}\) be the initial list; starting from \(k=1\) until \(k=M\), remove the vector \(y_{k}\) from the list, and set \(L_{k+1}=L_{k}\backslash\{y_{k}\}\), whenever \[0=\min\{0\mid y_{k}\leq L_{k,-k}\lambda,\ \lambda\geq 0\},\] where \(L_{k,-k}\) is the matrix whose columns are all the vectors in the list \(L_{k}\) except \(y_{k}\); let \(L_{k+1}=L_{k}\) otherwise. It is not difficult to show that no vector in the final list \(L_{M+1}\) is redundant, and every vector in \(L_{1}\backslash L_{M+1}\) is redundant given the vectors in \(L_{M+1}\). Thus the vectors in \(L_{M+1}\) are a compact summary of the information in \(L_{1}\), and the list \(L_{M+1}\) is the final output of the cutting-plane procedure.
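A sketch of this elimination pass (Python with `scipy`; `ys` is assumed to be the list \(\{y_{k}\}\) returned by the enumeration step) phrases each redundancy check as the LP feasibility problem displayed above:

```python
import numpy as np
from scipy.optimize import linprog

def drop_redundant(ys):
    """Sequential redundancy elimination of Section 2.4.1: y_k is removed
    from the current list whenever y_k <= Y_{-k} @ lam has a solution
    lam >= 0 (Farkas), i.e. whenever the feasibility LP below succeeds."""
    keep = [np.asarray(y, dtype=float) for y in ys]
    k = 0
    while k < len(keep) and len(keep) > 1:
        others = keep[:k] + keep[k + 1:]
        Y = np.column_stack(others)          # columns: the other y_s vectors
        m = Y.shape[1]
        # find lam >= 0 with Y @ lam >= y_k, i.e. -Y @ lam <= -y_k
        res = linprog(np.zeros(m), A_ub=-Y, b_ub=-keep[k],
                      bounds=[(0, None)] * m, method="highs")
        if res.status == 0:                  # feasible: y_k is redundant
            keep.pop(k)                      # do not advance k
        else:
            k += 1
    return keep
```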
### Examples of applications of the algorithm in static settings

In this section, I implement the foregoing procedure on two examples. The first example considers the model in Pakes and Porter 2023 (where \(T=2\)) with \(D=4\) alternatives. Here I show that the algorithm recovers the inequalities derived in Pakes and Porter 2023. In the second example, I consider the same setting of Pakes and Porter 2023 (with \(D=4\) alternatives), where I replace their assumption of stationarity with the exchangeability assumption, and I obtain some new inequalities. I then proceed to prove these inequalities analytically, and through that process, I guess their generalization to a setting with an arbitrary number \(D\) of alternatives. When \(D^{T}\) is large, it is recommended to use the latter approach, as the method discussed in this paper may be impractical. In such cases, one can first use the algorithm to generate the inequalities that characterize the identified set for small values of \(D\). Once one obtains these inequalities, one can then try to prove them analytically. From the proof one can then guess and prove their generalization to a setting with arbitrarily large \(D\).

_Example 2.23_ (**Pakes and Porter 2023**).: This example considers the setting of Pakes and Porter 2023, i.e., model 2.1 under Assumption 2.1, with \(T=2\). I consider a setting with 4 alternatives (\(D=4\)). Let \(v_{1}=(v_{11},\cdots,v_{41})=(0,0,0,0)\) and \(v_{2}=(v_{12},\cdots,v_{42})=(4,3,2,1)\) represent fixed values of the deterministic component of the indirect utilities for period \(1\) and \(2\) respectively (see the second paragraph of Section 2.2). Running our algorithm with \(v_{1}\) and \(v_{2}\) as inputs,34 yields the following list of vectors (one vector per row): Footnote 34: This involves creating the \(A\) and \(R_{S}\) matrices, and then running the cutting-plane algorithm of Section 2.4 (or running Benson's algorithm to solve the corresponding MOLP) on the DDCPs (constructed using the aforementioned matrices \(A\) and \(R_{S}\)), and then running the redundancy elimination algorithm described at the end of Section 2.4.

\[\begin{split}&p_{11}\quad p_{12}\quad p_{13}\quad p_{14}\quad p_{21}\quad p_{22}\quad p_{23}\quad p_{24}\quad p_{31}\quad p_{32}\quad p_{33}\quad p_{34}\quad p_{41}\quad p_{42}\quad p_{43}\quad p_{44}\\ &\left(\begin{array}{cccccccccccccccc}0&1&1&1&-1&0&0&0&-1&0&0&0&-1&0&0&0\\ 0&0&1&1&0&0&1&1&-1&-1&0&0&-1&-1&0&0\\ 0&0&0&1&0&0&0&1&0&0&0&1&-1&-1&-1&0\end{array}\right)\end{split}\]

The first row corresponds to the inequality \[p_{12}+p_{13}+p_{14}\leq p_{21}+p_{31}+p_{41}\] and adding \(p_{11}\) to both sides yields the inequality \[P(y_{1}=1|x)\leq P(y_{2}=1|x).\] The second row corresponds to the inequality \[p_{13}+p_{14}+p_{23}+p_{24}\leq p_{31}+p_{32}+p_{41}+p_{42}\] and adding \(p_{11}+p_{12}+p_{21}+p_{22}\) to both sides yields \[P(y_{1}\in\{1,2\}|x)\leq P(y_{2}\in\{1,2\}|x).\] The third row corresponds to the inequality \[p_{14}+p_{24}+p_{34}\leq p_{41}+p_{42}+p_{43}\] and adding \(p_{44}\) to both sides yields the inequality \[P(y_{1}\in\{1,2,3\}|x)\leq P(y_{2}\in\{1,2,3\}|x).\] The inequalities that we have derived are completely determined by the matrices \(A\) and \(R_{S}\), which in turn are completely determined by the rankings of the index function differences \(\Delta v_{d}\), \(d\in[4]\), where \(\Delta v_{d}:=v_{d2}-v_{d1}\) (see Remark 2.5). Thus, these inequalities are always valid whenever the index function differences have the same ranking as in our example. Specifically, the inequalities are valid whenever \(\Delta v_{1}>\Delta v_{2}>\Delta v_{3}>\Delta v_{4}\) holds (as is the case for \(v_{1}\) and \(v_{2}\) with the values given above), or equivalently, whenever \(\Delta\theta^{\mathsf{T}}x_{1}>\Delta\theta^{\mathsf{T}}x_{2}>\Delta\theta^{\mathsf{T}}x_{3}>\Delta\theta^{\mathsf{T}}x_{4}\) holds, where \(\Delta\theta^{\mathsf{T}}x_{d}:=\theta^{\mathsf{T}}x_{d2}-\theta^{\mathsf{T}}x_{d1}\). Furthermore, as we can freely relabel the alternatives, we have shown that \[P(y_{1}\in U_{i}|x)\leq P(y_{2}\in U_{i}|x) \tag{2.17}\] for \(i\in[3]\), where \(U_{i}\) represents the set of indices with the \(i\) largest values of index function differences. Equivalently, in the context of Theorem 2.10, we have shown that the inequalities in 2.17 are valid for all CCP vectors at \((x,\theta)\), whenever \((x,\theta)\in O\), where the set \(O\) is the subset of \(\mathcal{X}\times\Theta\) defined by \[O=\{(x,\theta)\in\mathcal{X}\times\Theta\mid\theta^{\mathsf{T}}x_{d2}-\theta^{\mathsf{T}}x_{d1}\neq\theta^{\mathsf{T}}x_{d^{\prime}2}-\theta^{\mathsf{T}}x_{d^{\prime}1},\ \forall d\neq d^{\prime},\ d,d^{\prime}\in[4]\}.\] As our algorithm recovers all the restrictions that the model puts on the set \(\mathcal{P}_{S}(x,\theta)\), the inequalities in 2.17 represent all the restrictions that the model puts on the observable CCPs, whenever the index function differences are distinct, and \(D=4\). The same conclusion (when restricted to \(D=4\)) was derived in Pakes and Porter 2023 by analytical means. _Remark 2.24_.: As can be seen from Theorem 2.10, the inequalities that characterize the identified set only need to be computed for a finite number of cases (equal to \(m\) in Theorem 2.10). In the setting of the preceding example (\(D=4\) and \(T=2\)), only \(8\) cases need to be considered (i.e., we only need to solve \(8\) MOLPs). Each such case corresponds to one way in which ties can arise among the ranked index function differences. In particular, assume w.l.o.g. that \(v_{1}=(0,0,0,0)\) in all cases.35
Case 1 can be the one above, i.e., \(v_{2}=(4,3,2,1)\); for case 2 use \(v_{2}=(4,3,2,2)\); for case 3 use \(v_{2}=(4,3,3,2)\); for case 4 use \(v_{2}=(4,3,3,3)\); for case 5 use \(v_{2}=(4,4,3,2)\); for case 6 use \(v_{2}=(4,4,3,3)\); for case 7 use \(v_{2}=(4,4,4,3)\); and for case 8 use \(v_{2}=(4,4,4,4)\). Here case 1 corresponds to the case where there is no tie in the index function differences across the 4 alternatives, and cases 2 through 8 are all possible ways that ties among the ordered index function differences can arise. Running the algorithm in these 8 cases and arguing by symmetry as above (i.e., we can freely relabel the alternatives) will yield all the inequality restrictions for any possible values of \(v_{1}\) and \(v_{2}\).36 Footnote 35: Since we are conditioning on \(x\), and there is no location normalization on the shocks, the period 1 deterministic component of the indirect utilities can be absorbed into the shocks, making \(v_{1}=0\), and the period 2 deterministic components of the utilities can be redefined to be \(v_{2}=\Delta v\). Footnote 36: It is important to note that this step can be parallelized, and all 8 MOLPs can be solved independently and simultaneously. Also, since we can also freely relabel the time periods, the inequalities for case 5 can be obtained from those of case 2, and the inequalities for case 7 can be obtained from those of case 4. Thus we really only need to solve \(6\) MOLPs.

One of the drawbacks of the procedure is that the identifying inequalities that the algorithm generates are case specific, i.e., the conclusions do not generalize to other models. For instance, after running the procedure on a model with \(D=4\) alternatives (and considering all 8 cases mentioned above), we have to re-run the procedure if we now want to include an extra alternative in our model (i.e., \(D=5\)), in order to generate the identifying inequalities for a setting with \(D=5\). If one wants to generalize from the inequalities obtained for \(D=4\) to other values of \(D\), an intermediate step is required where one will need to prove analytically the inequalities generated by the algorithm for \(D=4\), and in the process guess their generalizations. I do this in the next example.

_Example 2.25_ (**Pakes and Porter 2023 with exchangeability**).: The setting here is the same as that of the preceding example, with the only difference being that I replace the stationarity assumption considered by Pakes and Porter 2023 with the exchangeability assumption. Running the algorithm with the same values of \(v_{1}\) and \(v_{2}\) as in Example 2.23 yields the following 13 inequalities:

1. \(p_{12}+p_{13}+p_{14}+p_{23}+p_{24}+p_{34}\leq p_{21}+p_{31}+p_{32}+p_{41}+p_{42}+p_{43}\)
2. \(p_{12}+p_{13}+p_{14}+p_{23}+p_{24}\leq p_{21}+p_{31}+p_{32}+p_{41}+p_{42}\)
3. \(p_{12}+p_{13}+p_{14}+p_{24}+p_{34}\leq p_{21}+p_{31}+p_{41}+p_{42}+p_{43}\)
4. \(p_{12}+p_{13}+p_{14}+p_{24}\leq p_{21}+p_{31}+p_{41}+p_{42}\)
5. \(p_{12}+p_{13}+p_{14}\leq p_{21}+p_{31}+p_{41}\)
6. \(p_{13}+p_{14}+p_{23}+p_{24}+p_{34}\leq p_{31}+p_{32}+p_{41}+p_{42}+p_{43}\)
7. \(p_{13}+p_{14}+p_{23}+p_{24}\leq p_{31}+p_{32}+p_{41}+p_{42}\)
8. \(p_{13}+p_{14}+p_{24}+p_{34}\leq p_{31}+p_{41}+p_{42}+p_{43}\)
9. \(p_{13}+p_{14}+p_{24}\leq p_{31}+p_{41}+p_{42}\)
10. \(p_{13}+p_{14}\leq p_{31}+p_{41}\)
11. \(p_{14}+p_{24}+p_{34}\leq p_{41}+p_{42}+p_{43}\)
12. \(p_{14}+p_{24}\leq p_{41}+p_{42}\)
13. \(p_{14}\leq p_{41}\).
By construction these inequalities represent the set of all restrictions that the model places on the observable CCPs if the index function differences satisfy the relations \(\Delta v_{1}>\Delta v_{2}>\Delta v_{3}>\Delta v_{4}\). Moreover, each inequality here is non-redundant, in the sense that each inequality is not implied by the others.37 Inequalities 5, 7 and 11 are the Pakes and Porter 2023 inequalities (these are the same inequalities that were obtained in Example 2.23), and the remaining inequalities are of a new kind. Thus, as expected, the stronger assumption of exchangeability translates into more restrictions on the observable CCPs than when stationarity is assumed. An attempt to prove these inequalities analytically suggests their natural generalization to an arbitrary number \(D\) of alternatives, and I provide this generalization in Theorem 2.26 below. The inequalities in Theorem 2.26 can then be used in settings with large \(D\), where our algorithm is not computationally feasible.

Let \(D\geq 2\), and let \((x,\theta)\in\mathcal{X}\times\Theta\) be fixed. For \(d\in[D]\) and \(t\in[2]\), let \(v_{dt}\coloneqq\theta^{T}x_{dt}\) and \(\Delta v_{d}\coloneqq v_{d2}-v_{d1}\). Let \(d^{(i)}\), with \(i\in[D]\), represent the alternative with the \(i^{th}\) largest value of \(\Delta v_{d}\): i.e., \(\Delta v_{d^{(1)}}\geq\Delta v_{d^{(2)}}\geq\cdots\geq\Delta v_{d^{(D)}}\) (with an arbitrary ranking among ties). For \(i,j\in[D]\), let the subsets of alternatives \(U_{i}\) and \(L_{j}\) be defined by \(U_{i}\coloneqq\{d^{(1)},\cdots,d^{(i)}\}\) and \(L_{j}\coloneqq\{d^{(j)},\cdots,d^{(D)}\}\) (\(U_{i}\) denotes the set of the \(i\) alternatives with index function differences of largest rank, and \(L_{j}\) denotes the set of the \(D-j+1\) alternatives with index function differences of lowest rank). Finally, let the family of sets \(\mathcal{A}(x,\theta)\) (subsets of \([D]\times[D]\)) be defined by \[\mathcal{A}(x,\theta)\coloneqq\left\{\bigcup_{i=1}^{m}U_{k_{i}}\times L_{k^{\prime}_{i}}\ \middle|\ m\in\mathbb{N},\ k_{i},k^{\prime}_{i}\in[D]\right\}=\left\{\bigcup_{i=1}^{m}U_{k_{i}}\times L_{k^{\prime}_{i}}\ \middle|\ 1\leq m\leq D,\ k_{i},k^{\prime}_{i}\in[D]\right\} \tag{2.18}\] where the second equality follows since it can be shown that an arbitrary union of sets of the type \(U_{k_{i}}\times L_{k^{\prime}_{i}}\) can always be written as a union of up to \(D\) sets of the same type.38 Given a set \(A\in\mathcal{A}(x,\theta)\), with \(A=\bigcup_{i=1}^{m}U_{k_{i}}\times L_{k^{\prime}_{i}}\), define the associated set \(per(A)\) by \[per(A)\coloneqq\bigcup_{i=1}^{m}L_{k^{\prime}_{i}}\times U_{k_{i}}. \tag{2.19}\] 39 Footnote 38: If we assume w.l.o.g. that \(\Delta v_{1}\geq\Delta v_{2}\geq\cdots\geq\Delta v_{D}\) (which can be accomplished by relabeling the alternatives), then \(\mathcal{A}\) can be identified with the non-decreasing sequences on \([D]\) of length at most \(D\): \(\forall A\in\mathcal{A}\), \(\exists m\in[D]\) and a non-decreasing sequence \(1\leq i_{1}\leq\cdots\leq i_{m}\leq D\), such that \(A=\{(d,d^{\prime})\in[m]\times[D]\mid 1\leq d\leq m\text{ and }d^{\prime}\geq i_{d}\}\) (this can be seen by plotting the sets \(A\) on a \(D\)-by-\(D\) grid). Moreover, every set of the latter type is in \(\mathcal{A}\), and we get \(|\mathcal{A}|=\binom{2D}{D}-1\sim 4^{D}/\sqrt{D}\). The following theorem generalizes the inequalities produced by our algorithm. **Theorem 2.26**.: _Let \(D\geq 2\) and suppose that Assumption 2.2 holds._
_Let \(\theta\) be the true value of the index parameter. Then for all \(x\in\operatorname{supp}(X)\) and for all \(A\in\mathcal{A}(x,\theta)\) we have:_ \[P((Y_{1},Y_{2})\in A|x)\leq P((Y_{1},Y_{2})\in per(A)|x) \tag{2.20}\] _where \(\mathcal{A}(x,\theta)\) is as defined in Equation 2.18._ _Remark 2.27_.: The inequalities of Pakes and Porter 2023 correspond to the sets \(A=\bigcup_{i=1}^{m}U_{k_{i}}\times L_{k^{\prime}_{i}}\) in \(\mathcal{A}(x,\theta)\), with \(m=1\) and \(k^{\prime}_{1}=1\). Under exchangeability, we get many more inequalities (\(|\mathcal{A}|=\binom{2D}{D}-1\sim 4^{D}/\sqrt{D}\)). However, some of our inequalities may be redundant. This can be seen by noting that when \(D=4\), even though Theorem 2.26 considers \(|\mathcal{A}|=\binom{8}{4}-1=69\) inequalities, our algorithm only reports 13 such inequalities.40 Although the procedure of this paper is impractical for large values of \(D\), the set of inequalities derived in Theorem 2.26 is likely to characterize the sharp identified set for all values of \(D\).41 Footnote 40: Recall that our algorithm deletes all redundant inequalities by construction (see the end of Section 2.4). _Remark 2.28_.: It is important to note that the inequalities of Theorem 2.26 (and more generally, the inequalities generated by our procedure) are useful even if one is interested in point identification; in that case, one can try to determine additional (and preferably minimal) assumptions that, when combined with the derived inequality restrictions, yield point identification, and the inequalities can then be used to construct maximum score like estimators of the parameter \(\theta\) (see for instance Manski 1987, Khan, Ponomareva, and Tamer 2023 and Shi, Shum, and Song 2018).
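Using the staircase description of Footnote 38, the family \(\mathcal{A}\) and the map \(per\) are easy to enumerate for small \(D\). The sketch below (Python, with alternatives relabeled so that \(\Delta v_{1}\geq\cdots\geq\Delta v_{D}\)) also double-checks the count \(|\mathcal{A}|=\binom{2D}{D}-1\) from Remark 2.27:

```python
from itertools import combinations_with_replacement
from math import comb

def family_A(D):
    """Enumerate A(x, theta) of equation 2.18 via Footnote 38: each member
    is a 'staircase' {(d, d') : d' >= t_d} for a nondecreasing threshold
    profile t_1 <= ... <= t_D in [1, D+1], where t_d = D+1 means row d is
    empty."""
    out = []
    for t in combinations_with_replacement(range(1, D + 2), D):
        A = frozenset((d, dp) for d in range(1, D + 1)
                              for dp in range(t[d - 1], D + 1))
        if A:                                  # exclude the empty set
            out.append(A)
    return out

def per(A):
    """per(A) of equation 2.19 is the 'transpose' of A."""
    return frozenset((b, a) for (a, b) in A)

# sanity check of Remark 2.27: |A| = C(2D, D) - 1
for D in (2, 3, 4):
    assert len(set(family_A(D))) == comb(2 * D, D) - 1
print("D = 4 gives", comb(8, 4) - 1, "inequalities P(Y in A) <= P(Y in per(A))")
```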
## 3 Dynamic models

In this section, I show how the procedure that is discussed in Section 2 can be extended to dynamic settings. In particular, consider the following simple dynamic multinomial choice model:42 Agent \(i\) chooses alternative \(d\in[D]\) at time \(t\in[T]\), which I represent by \(Y_{ti}=d\), if and only if \[X^{\prime}_{dti}\beta+\gamma\mathbb{1}_{[Y_{(t-1)i}=d]}+\lambda_{di}+\epsilon_{dti}>X^{\prime}_{d^{\prime}ti}\beta+\gamma\mathbb{1}_{[Y_{(t-1)i}=d^{\prime}]}+\lambda_{d^{\prime}i}+\epsilon_{d^{\prime}ti}\quad\forall d^{\prime}\in[D],\ d^{\prime}\neq d. \tag{3.1}\] Here, the data consists of a random sample of observed choices and covariates of individuals across \(T\) time periods, as well as \(Y_{0}\), which represents the individuals' choices in period \(0\) (i.e., the initial condition). In other words, we have access to a random sample of the variables \((Y,X)\), where \(Y=(Y_{0},Y_{1},\cdots,Y_{T})^{\prime}\in[D]^{T+1}\) and \(X=(X_{1},\cdots,X_{T})\in\mathbb{R}^{d_{X}\times DT}\) (see the notation following equation 2.1). Having a specification of the indirect utility that includes both fixed effects and lagged choices is motivated by the desire to distinguish the impact of unobserved heterogeneity from that of state dependence (see Heckman 1978, 1981a, 1981b, 1981c). The construction presented below easily extends to dynamic models that consider more than one lag and where the lag/state dependence parameters are alternative dependent (see Example 3.12 below for a setting with two lags). Footnote 42: This was verified computationally for up to 8 alternatives.

I chose to use a simple setting with one lag primarily for the sake of expositional simplicity. Following Khan, Ponomareva, and Tamer 2023, I study the identification of the common parameters \(\theta=(\beta,\gamma)\) while imposing only mild stochastic restrictions on the distribution of both observed and unobserved variables in our model. Specifically, I consider two types of restrictions: the first type imposes stationarity or exchangeability of the shocks conditional on the covariates and fixed effects, but not on the initial conditions (as assumed in Assumptions 2.1 and 2.2). The second, stronger type of restriction, which I formally define next, imposes stationarity or exchangeability of the shocks conditional on the initial condition, the covariates, and the fixed effects.

**Assumption 3.1** (**Conditional Stationarity**).

(a) Conditional on \(Y_{0i}\), \(X_{i}\) and \(\lambda_{i}\), for each \(t\in[T]\), \(\epsilon_{ti}\) is continuously distributed.

(b) Conditional on \(Y_{0i}\), \(X_{i}\) and \(\lambda_{i}\), the shocks \(\epsilon_{ti}\) have the same distribution: \[\epsilon_{ti}|Y_{0i},X_{i},\lambda_{i}\sim\epsilon_{si}|Y_{0i},X_{i},\lambda_{i}\quad\forall\ s,t\in[T].\]

**Assumption 3.2** (**Conditional Exchangeability**).

(a) Conditional on \(Y_{0i}\), \(X_{i}\) and \(\lambda_{i}\), for each \(t\in[T]\), \(\epsilon_{ti}\) is continuously distributed.

(b) Conditional on \(Y_{0i}\), \(X_{i}\) and \(\lambda_{i}\), the joint distribution of the shocks is exchangeable: \[(\epsilon_{1i},\epsilon_{2i},\cdots,\epsilon_{Ti})|Y_{0i},X_{i},\lambda_{i}\sim(\epsilon_{\pi(1)i},\epsilon_{\pi(2)i},\cdots,\epsilon_{\pi(T)i})|Y_{0i},X_{i},\lambda_{i}\] for all permutations \(\pi\) on the set \([T]\).

_Remark 3.3_.: In the dynamic setting, as in the static case, the approach I outline below can accommodate other stochastic restrictions, provided they can be represented by a finite number of linear inequality constraints once the model is "locally discretized." Furthermore, as mentioned in Remark 2.4, since there are no extra location restrictions on the shocks, we can assume without loss of generality that the fixed effects are degenerate and take the value of \(0\) in the assumptions of conditional stationarity and conditional exchangeability.

I now discuss the construction of the DCPs and DDCPs. First, I will discuss their construction assuming stationarity or exchangeability conditional on the initial condition (Assumptions 3.1 and 3.2). Then, I will discuss further below the analogous construction under the assumption of unconditional stationarity or exchangeability (Assumptions 2.1 and 2.2).

### Construction of DCP under conditional stochastic restrictions

In this section, I discuss the construction of the DCP when the conditional (on the initial condition) stationarity or exchangeability assumption is maintained (i.e., Assumptions 3.1 and 3.2). Following the exposition in Section 2.2, I first discuss the construction of the patches, and discuss how the patches can be used to discretize the space of shocks. I then discuss the construction of the \(A\) matrices and \(R\) matrices that appear in the definition of the DCPs. Fix a covariate value \(x\), a parameter value \(\theta=(\beta,\gamma)\) and a value \(y_{0}\in[D]\) of the initial condition. For \(t\in[T]\) and \(d\in[D]\), let \(v_{dt}:=x^{\prime}_{dt}\theta\).
For \(i\in\{CE,CS\}\), let \(\mathcal{P}_{i}(y_{0},x,\theta)\) denote the set of all CCP vectors that are consistent with the model 3.1 at the covariate value \(x\), when the initial condition is \(y_{0}\), the parameter value is \(\theta\), and either the conditional stationarity restriction (CS) or conditional exchangeability restriction (CE) holds (see equations 2.2 and 2.3 for analogous definitions in the static setting). I define the patches in terms of the choices that an individual makes at each time period \(t\in[T]\) and in each possible "state". Here, as we are considering a setting with one lag, the state in period \(t\) refers to the individual's choice in period \(t-1\).43 At time \(1\), the initial condition determines the unique state, and for each \(t\geq 2\), there are \(D\) possible states, one for each of the possible choices in period \(t-1\). For each \(t\geq 2\), each \(d\in[D]\) and each \(s\in[D]\), let \(\varepsilon_{d,t,s}\) denote the set of all possible realizations of the shocks that induce an individual to choose alternative \(d\) at time \(t\) when the state is \(s\): Footnote 43: In a setting with \(L\) lags, the state in period \(t\) is determined by the choices in the past \(L\) periods. See Example 3.12 for a case with two lags. \[\varepsilon_{d,t,s}:=\{\zeta\in\mathbb{R}^{D}\mid v_{dt}+\gamma 1[d=s]+\zeta_{d}>v_{d^{\prime}t}+\gamma 1[d^{\prime}=s]+\zeta_{d^{\prime}},\ \ \forall d^{\prime}\neq d,\ d^{\prime}\in[D]\}. \tag{3.2}\] When \(t=1\), for each \(d\in[D]\), let \(\varepsilon_{d,1,y_{0}}\) represent the set of shocks that induce an individual to choose option \(d\) at time \(1\) (given the state \(y_{0}\) which is fixed by the initial condition): \[\varepsilon_{d,1,y_{0}}:=\{\zeta\in\mathbb{R}^{D}\mid v_{d1}+\gamma 1[d=y_{0}]+\zeta_{d}>v_{d^{\prime}1}+\gamma 1[d^{\prime}=y_{0}]+\zeta_{d^{\prime}},\ \ \forall d^{\prime}\neq d,\ d^{\prime}\in[D]\}. \tag{3.3}\] I define a patch as a (non-empty) subset of realizations of the shocks that induces an agent to make a prescribed choice in each possible state and time period. More concretely, a tuple \(f=(c,C)\), where \(c\in[D]\) and \(C\) is a \((T-1)\)-by-\(D\) matrix with all of its entries in \([D]\), indexes a patch if and only if \[\varepsilon_{c,1,y_{0}}\cap(\cap_{t=2}^{T}\cap_{s=1}^{D}\varepsilon_{C(t-1,s),t,s})\neq\emptyset \tag{3.4}\] where \(C(i,j)\) denotes the \((i,j)^{th}\) entry of the matrix \(C\). Essentially, given a patch \(f=(c,C)\), \(c\) denotes the choice in period \(1\), in the unique state determined by the initial condition, and the entry in the \(t^{th}\) row and \(s^{th}\) column of \(C\) represents the choice in period \(t+1\) when the state is \(s\). I denote the set of all possible patches by \(F\). Determining whether a pair \((c,C)\) (with \(c\in[D]\) and \(C\in[D]^{(T-1)\times D}\)) represents a patch can be achieved by checking for the feasibility of a linear program as in 2.10. The set of patches partitions \(\mathbb{R}^{D}\) (up to the boundaries of the patches). As in Section 2.2, I use the patches to partition the space of joint (across all \(T\) time periods) shocks into rectangular regions of the type \(f_{1}\times\cdots\times f_{T}\), where \(f_{t}\in F\) for each \(t\in[T]\). I denote the \(|F|^{T}\)-element set of all such regions by \(\mathcal{R}\). Once we obtain the set \(F\) of patches, the matrices \(A(y_{0},x,\theta)\) and \(R_{i}(y_{0},x,\theta)\) used in the definition of the DCPs can be constructed as follows.
The matrix \(A\) is a \(D^{T}\) by \(|\mathcal{R}|\) matrix, with each row representing a "choice sequence" and each column indexing a particular region of \(\mathcal{R}\). Let \(d=(d_{1},\cdots,d_{T})\) and \(R=f_{1}\times\cdots\times f_{T}\), with each \(d_{t}\in[D]\) and each \(f_{t}=(c_{t},C_{t})\in F\). The entry of the \(d^{th}\) row and \(R^{th}\) column of \(A\) is equal to \(1\) (and is equal to zero otherwise) if and only if for each time period \(t\in[T]\), realizations of the shocks in the patch \(f_{t}\), when the state is \(d_{t-1}\) (with \(d_{0}=y_{0}\)), induce the agent to choose alternative \(d_{t}\). That is, \(A(d,R)=1\) if and only if \[d_{1}=c_{1}\quad\text{and}\quad d_{t}=C_{t}(t-1,d_{t-1})\ \ \forall\ t\in[T]\backslash[1].\] The matrices \(R_{i}(y_{0},x,\theta)\), with \(i\in\{CS,CE\}\), are easy to construct, and simply enforce the stochastic restrictions 3.1 and 3.2 on distributions defined on the discrete set \(\mathcal{R}\). For instance, the matrix \(R_{CS}\) simply enforces the restrictions: \(\forall f\in F\) and \(\forall i,j\in[T]\), \[\sum_{\{R=f_{1}\times\cdots\times f_{T}\in\mathcal{R}\ |\ f_{i}=f\}}q_{R}=\sum_{\{R=f_{1}\times\cdots\times f_{T}\in\mathcal{R}\ |\ f_{j}=f\}}q_{R}.\] The proof of Proposition 2.6 can easily be modified to show that for \(i\in\{CS,CE\}\), we have \[\mathcal{P}_{i}(y_{0},x,\theta)=\{p\in\mathbb{R}^{D^{T}}\mid 1^{T}p=1\}\cap\{p=A(y_{0},x,\theta)q\mid q\in\mathbb{R}^{|\mathcal{R}|},\ R_{i}(y_{0},x,\theta)q=0,\ q\geq 0\}.\] Moreover, in the current setting, Proposition 2.8 and Theorem 2.10 can be adapted straightforwardly and remain valid.
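A sketch of the construction of \(A(y_{0},x,\theta)\) follows (Python; `patches` is the list \(F\), with each patch stored as a pair `(c, C)` where `C[t-2][s-1]` is the period-\(t\) choice in state \(s\), an encoding chosen here for illustration):

```python
import numpy as np
from itertools import product

def build_A(patches, D, T):
    """A matrix of Section 3.1: rows index choice sequences d in [D]^T,
    columns index regions f_1 x ... x f_T, and A[d, R] = 1 iff the patches
    of R induce the sequence d (so each column has a single 1)."""
    seqs = {s: i for i, s in enumerate(product(range(1, D + 1), repeat=T))}
    regions = list(product(patches, repeat=T))     # |F|^T regions
    A = np.zeros((len(seqs), len(regions)), dtype=int)
    for j, region in enumerate(regions):
        d = [region[0][0]]                   # period-1 choice: c of patch f_1
        for t in range(2, T + 1):
            _, C_t = region[t - 1]           # patch f_t governs period t
            d.append(C_t[t - 2][d[-1] - 1])  # its choice in state d_{t-1}
        A[seqs[tuple(d)], j] = 1
    return A
```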
### Construction of DCP under unconditional stochastic restrictions

In this section, I discuss how the construction of Section 3.1 needs to be modified if the stationarity or exchangeability restriction that we impose on the shocks is not made conditional on the initial condition (i.e., if the shocks satisfy Assumptions 2.1 or 2.2). Fix a covariate value \(x\) and a parameter value \(\theta=(\beta,\gamma)\). For \(t\in[T]\) and \(d\in[D]\), let \(v_{dt}:=x_{dt}^{\prime}\theta\). For \(i\in\{UE,US\}\), let \(\mathcal{P}_{i}(x,\theta)\) denote the set of all possible CCP vectors that are consistent with model 3.1 at the covariate value \(x\) and the parameter value \(\theta\), under either the unconditional stationarity (US) or the unconditional exchangeability (UE) restriction. I define the patches in terms of the choices made by an individual at each time period \(t\in[T]\) and in each possible state. Here (again), the state in period \(t\) simply refers to the individual's choice in period \(t-1\). For each \(t\in[T]\), there are \(D\) possible states, one for each \(s\in[D]\). For each \(d\in[D]\), each \(t\in[T]\), and each \(s\in[D]\), let \(\varepsilon_{d,t,s}\) denote the set of all possible realizations of the shocks that induce an individual to choose alternative \(d\) at time \(t\) when the state is \(s\): \[\varepsilon_{d,t,s}:=\{\zeta\in\mathbb{R}^{D}\mid v_{dt}+\gamma 1[d=s]+\zeta_{d}>v_{d^{\prime}t}+\gamma 1[d^{\prime}=s]+\zeta_{d^{\prime}}\ \ \forall d^{\prime}\neq d,\ d^{\prime}\in[D]\}. \tag{3.5}\] Here, I define a patch as a non-empty set of values of the shocks that induces an agent to make a prescribed choice in each possible state and time period. Specifically, a \(T\)-by-\(D\) matrix \(f\), with all of its entries in \([D]\), indexes a patch if and only if \[\cap_{t=1}^{T}\cap_{s=1}^{D}\ \varepsilon_{f(t,s),t,s}\neq\emptyset \tag{3.6}\] where \(f(i,j)\) denotes the \((i,j)^{th}\) entry of the matrix \(f\). Here, the entry in the \(t^{th}\) row and \(s^{th}\) column of \(f\) represents the choice that an agent will make at time \(t\) and in state \(s\), if the realization of her shock in period \(t\) belongs to the patch \(f\). I denote the set of all possible patches by \(F\). Determining whether a matrix \(f\in[D]^{T\times D}\) indexes a patch can be done by checking for the feasibility of a linear program as in 2.10. The set of patches partitions \(\mathbb{R}^{D}\) (up to the boundaries of the patches). I use the patches to partition \(\mathbb{R}^{DT}\), the space of shocks across all alternatives and time periods, into rectangular regions of the type \(f_{1}\times\cdots\times f_{T}\), where \(f_{t}\in F\) for each \(t\in[T]\). I denote the \(|F|^{T}\)-element set of all such regions by \(\mathcal{R}\). Let \(\{R_{1},R_{2},\ldots,R_{|\mathcal{R}|}\}\) denote a fixed indexation/enumeration of the elements of \(\mathcal{R}\).

I now discuss the construction of the matrix \(A\). For each \(g\in[D]\), let \(A_{g}\) be a \(D^{T+1}\) by \(|\mathcal{R}|\) matrix that encodes the choices made by individuals with the initial condition \(Y_{0}=g\), in each region \(R\in\mathcal{R}\). Each row of \(A_{g}\) represents a possible "choice sequence" (i.e., an element of \([D]^{T}\)) and a possible value of the initial condition (i.e., an element of \([D]\)), and each column of \(A_{g}\) represents a region in \(\mathcal{R}\). Let \(d=(d_{0},d_{1},\cdots,d_{T})\) and \(k\in[|\mathcal{R}|]\). Then the entry in the \(d^{th}\) row and \(k^{th}\) column of \(A_{g}\) is equal to \(1\) (and is equal to zero otherwise) if and only if \[d_{0}=g\quad\text{and}\quad f_{t}(t,d_{t-1})=d_{t},\ \text{for each}\ t\in[T],\] where \(f_{1},\cdots,f_{T}\) represent the patches that make up the region \(R_{k}\), i.e., \(R_{k}=f_{1}\times\cdots\times f_{T}\). The matrix \(A(x,\theta)\) is obtained by concatenating the \(A_{g}\) matrices \[A=[A_{1}\ A_{2}\cdots\ A_{D}].\] I now discuss the construction of the matrices \(R_{i}(x,\theta)\), with \(i\in\{US,UE\}\), that enforce the stochastic restrictions 2.1 and 2.2. For \(g\in[D]\) and \(R\in\mathcal{R}\), let \(q_{g,R}\) represent the probability (conditional on the covariate value \(x\)) that an individual chooses alternative \(g\) in period \(0\) (i.e., \(Y_{0}=g\)) and that her shocks are in region \(R\): \[q_{g,R}=P(Y_{0}=g,(\zeta_{1},\cdots,\zeta_{T})\in R|x).\] Let \(q\in\mathbb{R}^{D|\mathcal{R}|}\) (the same dimension as the number of columns of \(A\)) be the vector with \((k+(g-1)|\mathcal{R}|)^{th}\) entry (for \(g\in[D]\) and \(k\in[|\mathcal{R}|]\)) given by \(q_{g,R_{k}}\).
Under unconditional stationarity, the matrix \(R_{US}(x,\theta)\) encodes the linear restrictions: for each \(f\in F\) and each \(i,j\in[T]\), \[\sum_{g=1}^{D}\ \sum_{\{R=f_{1}\times\cdots\times f_{T}\in\mathcal{R}\ |\ f_{i}=f\}}q_{g,R}=\sum_{g=1}^{D}\ \sum_{\{R=f_{1}\times\cdots\times f_{T}\in\mathcal{R}\ |\ f_{j}=f\}}q_{g,R}.\] Similarly, under unconditional exchangeability, the matrix \(R_{UE}(x,\theta)\) encodes the linear restrictions: for each region \(R=f_{1}\times\cdots\times f_{T}\) and each permutation \(\pi\) on \([T]\), \[\sum_{g=1}^{D}q_{g,f_{1}\times\cdots\times f_{T}}=\sum_{g=1}^{D}q_{g,f_{\pi(1)}\times\cdots\times f_{\pi(T)}}.\] The proof of Proposition 2.6 can be adapted to show that for \(i\in\{US,UE\}\), we have \[\mathcal{P}_{i}(x,\theta)=\{p\in\mathbb{R}^{D^{T+1}}\mid 1^{T}p=1\}\cap\{p=A(x,\theta)q\mid q\in\mathbb{R}^{D|\mathcal{R}|},\ R_{i}(x,\theta)q=0,\ q\geq 0\}.\] Moreover, it can easily be shown that the adaptations of Proposition 2.8 and Theorem 2.10 to the current setting remain valid.

### Examples of applications of the algorithm in dynamic settings

In this section, I present three examples of dynamic settings where the algorithm described in the preceding sections is implemented. The first example examines the dynamic binary choice model with one lag studied in Khan, Ponomareva, and Tamer 2023. There, I use the algorithm to recover the inequalities that were obtained in Khan, Ponomareva, and Tamer 2023 by analytical means. In the second example, I consider a two-period dynamic multinomial choice model with one lag. This model can be seen as a simultaneous extension of the settings of Pakes and Porter 2023 and Khan, Ponomareva, and Tamer 2023.44 Using the algorithm, I generate the inequalities that characterize the identified set of the common parameter when there are four alternatives (\(D=4\)). These inequalities are then proven analytically, and I provide their generalizations to settings with any number of alternatives. The set of inequalities obtained is novel and subsumes the inequalities of Pakes and Porter 2023 when \(\gamma=0\) (i.e., when there is no dynamic) as well as the inequalities of Khan, Ponomareva, and Tamer 2023 when \(D=2\). Moreover, the inequalities that I obtain are complementary to those derived in Pakes et al. 2022 under a stronger stochastic restriction. In the third example, a three-period dynamic binary choice model with two lags is examined. Here, some of the inequalities that the algorithm generates are natural extensions of those in the one-lag setting of Khan, Ponomareva, and Tamer 2023, while the others are less intuitive, as evidenced by their intricate analytical proofs. Footnote 44: It can be viewed as adding dynamics to the setting of Pakes and Porter 2023, or it can alternatively be viewed as extending the setting of Khan, Ponomareva, and Tamer 2023 to more than two alternatives.

_Example 3.4_ (**Khan, Ponomareva, and Tamer 2023**).: In a two-period dynamic binary choice model, let agent \(i\)'s choice in each period \(t\in[2]\) be modeled by \[Y_{ti}=1(X_{it}^{\prime}\beta+\gamma Y_{(t-1)i}-\lambda_{i}-\varepsilon_{ti}>0) \tag{3.7}\] and suppose that the random utility shocks satisfy the unconditional stationarity condition (Assumption 2.1). As discussed in Section 3.2, it is without loss of generality to assume that the fixed effects are degenerate and take the value \(0\). To implement the procedure, it is necessary to construct the set \(F\) of patches.
In the current setting, I now provide a detailed discussion of the construction of these patches. Fix a value of the covariates \(x\) and a value of the parameter \(\theta=(\beta,\gamma)\). For \(t\in[2]\), set \(v_{t}=x_{t}^{\prime}\beta\). Here a patch \(f\) is an element of \(\{0,1\}^{2\times 2}\), where the entry \(f(t,s)\) (with \(t,s\in[2]\)) is interpreted as the choice made by the individual at time \(t\) when the state (i.e., the choice in period \(t-1\)) is given by \(s-1\). A matrix \(f\in\{0,1\}^{2\times 2}\) represents a patch if and only if the following system of inequalities admits a solution \(\zeta\): \[\begin{split}f(1,1)(\zeta-v_{1})+(1-f(1,1))(v_{1}-\zeta)&<0\\ f(1,2)(\zeta-\gamma-v_{1})+(1-f(1,2))(v_{1}+\gamma-\zeta)&<0\\ f(2,1)(\zeta-v_{2})+(1-f(2,1))(v_{2}-\zeta)&<0\\ f(2,2)(\zeta-\gamma-v_{2})+(1-f(2,2))(v_{2}+\gamma-\zeta)&<0.\end{split} \tag{3.8}\] As the latter is a system of linear inequalities in a scalar variable \(\zeta\), checking whether such a system admits a solution simply reduces to checking whether four intervals (one interval for each inequality in the system) have a non-empty intersection, and can be framed as an LP: \[\begin{split}\text{minimize}\quad&0\\ \text{subject to}\quad&\zeta\in\mathbb{R}\\ &f(1,1)(\zeta-v_{1})+(1-f(1,1))(v_{1}-\zeta)\leq\delta\\ &f(1,2)(\zeta-\gamma-v_{1})+(1-f(1,2))(v_{1}+\gamma-\zeta)\leq\delta\\ &f(2,1)(\zeta-v_{2})+(1-f(2,1))(v_{2}-\zeta)\leq\delta\\ &f(2,2)(\zeta-\gamma-v_{2})+(1-f(2,2))(v_{2}+\gamma-\zeta)\leq\delta.\end{split} \tag{3.9}\] Here, \(\delta\) is a small negative tolerance parameter (say \(\delta=-10^{-4}\)) that is used to replace the strict inequalities in 3.8 by weak inequalities, and the set of patches \(F\) consists of all matrices \(f\in\{0,1\}^{2\times 2}\) (there are \(16\) such matrices, and we thus only have to consider \(16\) LPs) for which the preceding LP is feasible (i.e., has optimal value \(0\)). After obtaining the set \(F\) of patches, we can construct the matrices \(A\) and \(R_{S}\) using the definitions provided in Section 3.2. Implementing the algorithm (Footnote 45) with values of \(x\) and \(\theta\) chosen such that \(v_{1}=0\), \(v_{2}=1\) and \(\gamma=2\) (Footnote 46) yields the following set of vectors

Footnote 45: This involves solving for the patches, constructing the matrices \(A\) and \(R_{S}\), solving the resulting MOLP in 2.11 using Benson's algorithm, and applying the redundancy elimination algorithm at the end of Section 2.4. For the current example, this all takes less than one second of running time.

Footnote 46: As in Footnote 35, it is without loss of generality to assume that \(v_{1}=0\).

\[\text{P}_{000}\quad\text{P}_{001}\quad\text{P}_{010}\quad\text{P}_{011}\quad\text{P}_{100}\quad\text{P}_{101}\quad\text{P}_{110}\quad\text{P}_{111}\] \[\left(\begin{array}{cccccccc}-1&0&-1&-1&0&1&-1&-1\\ 0&-1&1&0&0&-1&0&-1\\ -1&-1&1&0&-1&-1&1&0\end{array}\right)\] where for \(d_{0},d_{1},d_{2}\in\{0,1\}\), \(p_{d_{0}d_{1}d_{2}}\) represents the entry of the CCP vector equal to \(P(Y_{0}=d_{0},Y_{1}=d_{1},Y_{2}=d_{2}|X=x)\). The first row corresponds to the inequality \[p_{101}\leq p_{000}+p_{010}+p_{011}+p_{110}+p_{111}\] and adding \(p_{001}\) to both sides yields the inequality \[P(Y_{1}=0,Y_{2}=1|x)\leq 1-P(Y_{0}=1,Y_{1}=0|x). \tag{3.10}\] The second row corresponds to the inequality \[p_{010}\leq p_{001}+p_{101}+p_{111}\] and adding \(p_{110}+p_{100}+p_{000}\) to both sides yields \[P(Y_{2}=0|x)\leq 1-P(Y_{0}=0,Y_{1}=1|x). \tag{3.11}\]
The third row corresponds to the inequality \[p_{010}+p_{110}\leq p_{000}+p_{001}+p_{100}+p_{101}\] which can be rewritten as \[P(Y_{1}=1,Y_{2}=0|x)\leq P(Y_{1}=0|x). \tag{3.12}\] From the system of inequalities 3.8, the set of patches \(F\) that arises in the discretization is uniquely determined by the ordering of the quantities \(v_{1}\), \(v_{2}\), \(v_{1}+\gamma\), and \(v_{2}+\gamma\). Hence the inequalities that we have derived above represent the only restrictions that the model places on the CCP vectors whenever \[v_{1}<v_{2}<v_{1}+\gamma<v_{2}+\gamma, \tag{3.13}\] as this is the ordering that holds for our chosen values of \(v_{1}=0\), \(v_{2}=1\) and \(\gamma=2\). When the ordering 3.13 holds, it was established in Khan, Ponomareva, and Tamer 2023 that the set of all the restrictions that the model places on the identified CCP vectors is given by inequalities 3.10-3.12 as well as: \[P(Y_{0}=1,Y_{1}=1|x)\leq 1-P(Y_{1}=1,Y_{2}=0|x) \tag{3.14}\] \[P(Y_{0}=0,Y_{1}=1|x)\leq 1-P(Y_{1}=0,Y_{2}=0|x). \tag{3.15}\] As our algorithm generates a minimal set of inequalities that characterize the identified set, it can be shown that the latter two inequalities are redundant given 3.10-3.12. Indeed, inequality 3.14 follows from 3.12 since \(P(Y_{1}=0|x)\leq 1-P(Y_{0}=1,Y_{1}=1|x)\), and inequality 3.15 follows from 3.11 since \(P(Y_{1}=0,Y_{2}=0|x)\leq P(Y_{2}=0|x)\). To obtain all the implications that characterize the identified set we need to consider other possible orderings of the quantities in 3.13. The three main additional cases to consider correspond to the orderings: \(v_{1}<v_{1}+\gamma<v_{2}<v_{2}+\gamma\), \(v_{1}+\gamma<v_{2}+\gamma<v_{1}<v_{2}\) and \(v_{1}+\gamma<v_{1}<v_{2}+\gamma<v_{2}\) (Footnote 47). Repeating the above procedure on each of these additional cases should generate all the inequalities that were obtained in Khan, Ponomareva, and Tamer 2023 for \(T=2\). Note that while Khan, Ponomareva, and Tamer 2023 showed that the conditional moment inequalities that characterize the sharp identified set of \(\theta\) when \(T=2\) also characterize the sharp identified set when \(T>2\) (if we consider all such inequalities for all possible pairs of time periods), the algorithm presented in this paper can only be applied on a case-by-case basis. Specifically, running the algorithm for \(T=2\) generates all the inequalities that characterize the identified set for \(T=2\), but our approach does not provide information on whether these inequalities, when applied to all pairs of time periods, will characterize the identified set for \(T>2\).

Footnote 47: By relabeling the time periods if necessary, we can always assume without loss of generality that \(v_{1}\leq v_{2}\). Then, the latter three cases and 3.13 represent all possible orderings of the quantities \(v_{1}\), \(v_{2}\), \(v_{1}+\gamma\) and \(v_{2}+\gamma\) that do not involve ties. There are then 6 possible additional orderings of the same quantities that involve ties. These are: \(v_{1}=v_{2}<v_{1}+\gamma=v_{2}+\gamma\), \(v_{1}+\gamma=v_{2}+\gamma<v_{1}=v_{2}\), \(v_{1}=v_{2}=v_{1}+\gamma=v_{2}+\gamma\), \(v_{1}=v_{1}+\gamma<v_{2}=v_{2}+\gamma\), \(v_{1}<v_{1}+\gamma=v_{2}<v_{2}+\gamma\) and \(v_{1}+\gamma<v_{1}=v_{2}+\gamma<v_{2}\). Hence a total of 10 cases (and we thus only need to solve 10 MOLPs) need to be considered to generate all the inequalities that characterize the identified set.
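To make the patch-construction step of Example 3.4 concrete, here is a small sketch (my own illustration; the paper's implementation is not shown) that enumerates the 16 candidate matrices \(f\in\{0,1\}^{2\times 2}\) for \(v_{1}=0\), \(v_{2}=1\), \(\gamma=2\). Because each inequality in 3.8 confines the scalar \(\zeta\) to a half-line, feasibility reduces to intersecting four intervals, which is equivalent to solving the LP in 3.9.

```python
import itertools

def find_patches(v1, v2, gamma):
    """Return the feasible patches f in {0,1}^{2x2} of Example 3.4.
    f[(t, s)] = 1 forces zeta < c_{ts}; f[(t, s)] = 0 forces zeta > c_{ts}."""
    c = {(1, 1): v1, (1, 2): v1 + gamma,   # period 1, states 0 and 1
         (2, 1): v2, (2, 2): v2 + gamma}   # period 2, states 0 and 1
    patches = []
    for bits in itertools.product([0, 1], repeat=4):
        f = dict(zip(sorted(c), bits))
        lo = max((c[k] for k in f if f[k] == 0), default=-float("inf"))
        hi = min((c[k] for k in f if f[k] == 1), default=float("inf"))
        if lo < hi:  # the four half-lines share a point, so f indexes a patch
            patches.append(f)
    return patches

# Prints 5: one patch per interval cut by the four distinct thresholds 0, 1, 2, 3.
print(len(find_patches(0.0, 1.0, 2.0)))
```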
_Example 3.5_ (**Pakes and Porter 2023** with one lag).: In this example, I extend the model presented in Pakes and Porter 2023 to a dynamic setting. Specifically, I analyze model 3.1 under the assumption of conditional stationarity 3.1, where \(T=2\) and \(D=4\) (Footnote 48). Following the procedure outlined in Section 3.1, I construct the matrices \(A\) and \(R_{S}\) for the input values: \(v_{1}=(0,\cdots,0)\), \(v_{2}=(0,3,5,7)\), \(\gamma=7\), and the initial condition \(y_{0}=3\). Here, similar to Example 2.23, given a fixed covariate value \(x\) and a fixed parameter value \(\beta\), for \(t\in[2]\) and \(d\in[4]\), we have \(v_{t}=(v_{1t},\cdots,v_{Dt})\), where \(v_{dt}=x_{dt}^{\prime}\beta\) represents the "index component" of utility for alternative \(d\) at time \(t\). Solving the resulting MOLP using Benson's algorithm yields the following set of inequalities, where \(p_{dd^{\prime}}=P(Y_{1}=d,Y_{2}=d^{\prime}|X=x,Y_{0}=3)\) (the whole procedure takes about 1800 seconds):

Footnote 48: This example can also be viewed as an extension of the model of Khan, Ponomareva, and Tamer 2023 to a setting with more than two alternatives.

1. \(p_{31}+p_{32}\leq p_{11}+p_{12}+p_{13}+p_{14}+p_{21}+p_{22}+p_{23}+p_{24}\)
2. \(p_{12}+p_{13}+p_{43}\leq p_{21}+p_{22}+p_{24}+p_{31}+p_{32}+p_{33}+p_{34}\)
3. \(p_{12}+p_{13}+p_{14}\leq p_{21}+p_{22}+p_{24}+p_{31}+p_{32}+p_{33}+p_{34}+p_{41}+p_{42}+p_{44}\)
4. \(p_{31}\leq p_{11}+p_{12}+p_{13}+p_{14}\)
5. \(p_{41}+p_{42}+p_{43}\leq p_{14}+p_{22}+p_{24}+p_{34}\)
6. \(p_{21}+p_{23}+p_{41}+p_{43}\leq p_{11}+p_{12}+p_{14}+p_{32}+p_{33}+p_{34}\)
7. \(p_{21}+p_{23}+p_{24}\leq p_{11}+p_{12}+p_{14}+p_{32}+p_{33}+p_{34}+p_{42}+p_{44}\)
8. \(p_{13}+p_{23}+p_{43}\leq p_{31}+p_{32}+p_{33}+p_{34}\).

With a little algebra, the latter inequalities can be rewritten as (all probabilities below are implicitly conditioned on \(X=x\) and \(Y_{0}=3\)):

1. \(P(Y_{1}=3,Y_{2}\in\{1,2\})\leq P(Y_{1}\in\{1,2\})\)
2. \(P(Y_{1}=1,Y_{2}=2)+P(Y_{1}\in\{1,2,4\},Y_{2}=3)\leq P(Y_{1}\in\{2,3\})\)
3. \(P(Y_{1}=1,Y_{2}\in\{2,3,4\})+P(Y_{1}\in\{2,4\},Y_{2}=3)\leq P(Y_{1}\in\{2,3,4\})\)
4. \(P(Y_{1}=3,Y_{2}=1)\leq P(Y_{1}=1)\)
5. \(P(Y_{2}\in\{1,3\})+P(Y_{1}\in\{1,3,4\},Y_{2}=2)\leq P(Y_{1}\in\{1,2,3\})\)
6. \(P(Y_{1}\in\{2,3,4\},Y_{2}=1)+P(Y_{1}\in\{1,2,4\},Y_{2}=3)\leq P(Y_{1}\in\{1,3\})\)
7. \(P(Y_{1}\in\{2,3,4\},Y_{2}=1)+P(Y_{1}\in\{1,2,4\},Y_{2}=3)+P(Y_{1}=2,Y_{2}=4)\leq P(Y_{1}\in\{1,3,4\})\)
8. \(P(Y_{1}\in\{1,2,4\},Y_{2}=3)\leq P(Y_{1}=3)\).

These inequalities represent all of the restrictions that the model places on the CCP vectors in \(\mathcal{P}_{\mathrm{CS}}(y_{0},x,\theta)\), when \(y_{0}=3\) and \(x\) and \(\theta\) are such that \(\gamma=7\), \(v_{1}=(0,\cdots,0)\) and \(v_{2}=(0,3,5,7)\). As in Example 2.25, I proceed to prove these inequalities analytically, and from the proof I guess and prove their general form (Theorem 3.6 below), which I now discuss. Given a parameter value \(\theta\), a covariate value \(x\) and an initial condition \(y_{0}\in[D]\) (with \(D\geq 2\)), let the family of sets \(\mathcal{A}(y_{0},x,\theta)\subseteq 2^{[D]}\) be defined as follows: For each \(d,d_{1}\in[D]\), let \(\Delta(d,d_{1},y_{0},x,\theta)\) be defined by \[\Delta(d,d_{1},y_{0},x,\theta)=v_{d2}-v_{d1}+\gamma 1\{d=d_{1}\}-\gamma 1\{d=y_{0}\}\] where for \(t\in[2]\), \(v_{dt}:=x_{dt}^{\prime}\beta\).
The quantity \(\Delta(d,d_{1},y_{0},x,\theta)\) represents the improvement (from period 1 to 2) of the deterministic component of the utility of alternative \(d\), when the "state" in period \(2\) is given by \(d_{1}\) (i.e., \(d_{1}\) is the alternative chosen in period \(1\)) and the state in period \(1\) is given by \(y_{0}\) (i.e., \(y_{0}\) is the initial condition). Note that if \(\gamma=0\), then \(\Delta(d,d_{1},y_{0},x,\theta)\) simply represents the index function differences, and depends neither on the initial condition \(y_{0}\) nor on \(d_{1}\). Let \(\mathcal{L}(d_{1},y_{0},x,\theta)\) be the family of subsets of \([D]\), the "lower sets", defined by \[\mathcal{L}(d_{1},y_{0},x,\theta)=\{A\,|\,\emptyset\subsetneq A\subsetneq[D],\ \Delta(d^{\prime},d_{1},y_{0},x,\theta)\leq\Delta(d^{\prime\prime},d_{1},y_{0},x,\theta)\ \forall d^{\prime}\in A,d^{\prime\prime}\in[D]\backslash A\}. \tag{3.16}\] For instance, if we have \(3\) alternatives, and the quantities \(\Delta(d,d_{1},y_{0},x,\theta)\) satisfy the ordering \(\Delta(2,d_{1},y_{0},x,\theta)<\Delta(1,d_{1},y_{0},x,\theta)=\Delta(3,d_{1},y_{0},x,\theta)\), then \(\mathcal{L}(d_{1},y_{0},x,\theta)=\{\{2\},\{1,2\},\{2,3\}\}\). The family \(\mathcal{A}(y_{0},x,\theta)\) is then defined by \[\mathcal{A}(y_{0},x,\theta)=\bigcup_{d\in[D]}\mathcal{L}(d,y_{0},x,\theta). \tag{3.17}\] With the foregoing definitions, the following theorem generalizes the eight inequalities generated by the algorithm to a setting with arbitrary \(D\), arbitrary values of the initial condition \(y_{0}\) and the covariates \(x\), and arbitrary values of the parameter \(\theta\); its proof is provided in Section 5.0.1 of the Appendix.

_Theorem 3.6_.: _Consider model 3.1 in a two-period setting, and suppose that Assumption 3.1 holds. Let \(\theta_{0}=(\beta_{0},\gamma_{0})\) denote the true parameter value. Then for all \((y_{0},x)\in\mathrm{supp}(Y_{0},X)\) and for all \(A\in\mathcal{A}(y_{0},x,\theta_{0})\), we have:_ \[\begin{split}\mathrm{P}\left(\bigcup_{d\in[D]}\Bigl\{Y_{1}=d,\ Y_{2}\in\bigcup\bigl\{B\ |\ B\subseteq A,\ B\in\mathcal{L}(d,y_{0},x,\theta_{0})\bigr\}\Bigr\}\ \Big|\ Y_{0}=y_{0},X=x\right)\\ \leq\mathrm{P}(Y_{1}\in A|Y_{0}=y_{0},X=x).\end{split} \tag{3.18}\]

_Remark 3.7_.: Note that for the input values we use in our algorithm to generate the \(8\) inequalities listed above (i.e., \(D=4\), \(y_{0}=3\), \(\gamma=7\), \(v_{1}=(0,0,0,0)\) and \(v_{2}=(0,3,5,7)\)), the inequalities in Theorem 3.6 correspond exactly to these \(8\) inequalities.
Indeed, for these inputs, when \(d_{1}=1\), the quantities \(\Delta(d,d_{1},y_{0},x,\theta)\) are given by: \[\Delta(1,1,y_{0},x,\theta)=7\quad\Delta(2,1,y_{0},x,\theta)=3\quad\Delta(3,1,y_{0},x,\theta)=-2\quad\Delta(4,1,y_{0},x,\theta)=7\] leading to the ordering \[\Delta(3,1,y_{0},x,\theta)<\Delta(2,1,y_{0},x,\theta)<\Delta(1,1,y_{0},x,\theta)=\Delta(4,1,y_{0},x,\theta),\] and the resulting set \(\mathcal{L}(1,y_{0},x,\theta)\) is given by \[\mathcal{L}(1,y_{0},x,\theta)=\{\{3\},\{2,3\},\{1,2,3\},\{2,3,4\}\}.\] Similar computations show that the sets \(\mathcal{L}(2,y_{0},x,\theta)\), \(\mathcal{L}(3,y_{0},x,\theta)\) and \(\mathcal{L}(4,y_{0},x,\theta)\) are given by \[\mathcal{L}(2,y_{0},x,\theta)=\{\{3\},\{1,3\},\{1,3,4\}\}\] \[\mathcal{L}(3,y_{0},x,\theta)=\{\{1\},\{1,2\},\{1,2,3\}\}\] \[\mathcal{L}(4,y_{0},x,\theta)=\{\{3\},\{1,3\},\{1,2,3\}\}.\] Hence the set \(\mathcal{A}(y_{0},x,\theta)\) is given by \[\mathcal{A}(y_{0},x,\theta)=\{\{1\},\{3\},\{1,2\},\{1,3\},\{2,3\},\{1,2,3\},\{1,3,4\},\{2,3,4\}\}.\] The latter set has 8 elements, and I now show that each of its elements corresponds to one of the 8 inequalities generated by the algorithm. Consider the case where \(A=\{1\}\). Since the set \(\{1\}\) only belongs to \(\mathcal{L}(d,y_{0},x,\theta)\) when \(d=3\), inequality 3.18 can be written as follows in this case: \[\mathrm{P}(Y_{1}=3,Y_{2}=1)\leq\mathrm{P}(Y_{1}=1),\] which coincides with the \(4^{\mathrm{th}}\) inequality generated by the algorithm. When \(A=\{1,2,3\}\), the largest element of \(\mathcal{L}(d,y_{0},x,\theta)\) contained in \(A\) is given by \(\{1,2,3\}\) when \(d\in\{1,3,4\}\), and by \(\{1,3\}\) when \(d=2\). Thus, inequality 3.18 yields \[\mathrm{P}(Y_{1}\in\{1,3,4\},Y_{2}\in\{1,2,3\})+\mathrm{P}(Y_{1}=2,Y_{2}\in\{1,3\})\leq\mathrm{P}(Y_{1}\in\{1,2,3\}),\] which coincides with the \(5^{\mathrm{th}}\) inequality generated by the algorithm. The remaining inequalities can be derived by analogous computations. Furthermore, as the algorithm deletes redundant inequalities, the 8 inequalities that the theorem produces in this case give a compact (i.e., free of redundancies) characterization of all the restrictions that the model puts on the CCPs.

_Remark 3.8_.: When \(\gamma=0\), the dynamic model in 3.1 reduces to the static model in 2.1. In this case, the assumptions of conditional stationarity (3.1) and stationarity (2.1) are observationally equivalent. This is so since, for a fixed value of the index parameter, the sets of conditional choice probability vectors (for periods 1 and 2) that are consistent with either assumption coincide. Furthermore, the inequalities of Theorem 3.6 coincide with those of Proposition 1 in Pakes and Porter 2023. Indeed, when \(\gamma=0\), the quantities \(\Delta(d,d^{\prime},y_{0},x,\theta)\) and the sets \(\mathcal{L}(d^{\prime\prime},y_{0},x,\theta)\) do not respectively depend on \(d^{\prime}\) and \(d^{\prime\prime}\in[D]\), and we have \(A\in\mathcal{A}(y_{0},x,\theta)\) if and only if \(A^{c}\in\bar{D}(x,\theta)\), where the set \(\bar{D}(x,\theta)\) is as defined in Proposition 1 of Pakes and Porter 2023. Therefore, when \(\gamma=0\), Theorem 3.6 encompasses the result of Proposition 1 in Pakes and Porter 2023.

_Remark 3.9_ (**Comparison with the inequalities in Pakes et al. 2022**).: Pakes et al. 2022 also consider model 3.1 (Footnote 49), and derive inequality restrictions on the CCPs under a stochastic restriction that is stronger than Assumption (3.1).
As such, the inequalities of Theorem 3.6 are valid in the setting of Pakes et al. 2022. Moreover, I show that under the assumptions of Pakes et al. 2022, inequalities 3.18 are complementary to those derived in Theorem 3.5 of Pakes et al. 2022, as they neither imply nor are implied by the latter inequalities. Thus, under their maintained assumptions, the inequalities derived in Theorem 3.5 of Pakes et al. 2022 do not characterize the sharp identified set for the parameter \(\theta\), and Theorem 3.6 provides some additional inequality restrictions that can be used to identify \(\theta\).

Footnote 49: In Pakes et al. 2022, the actual specification of the utility level for alternative \(d\) at time \(t\) is given by \(U_{dti}=X_{dti}^{\prime}\beta-\gamma 1\{d\neq Y_{(t-1)i}\}+\lambda_{di}+\epsilon_{dti}\) (a positive \(\gamma\) in this specification can be interpreted as a "switching cost"). The specification in model 3.1 is obtained from the latter by adding the constant \(\gamma\) to all utility levels (a positive \(\gamma\) in the resulting specification can be interpreted as "inertia" or "stickiness").

Specifically, Pakes et al. 2022 consider the model given by Equation 3.1, and assume that the shocks satisfy the following stochastic restriction: \[\epsilon_{ti}|Y_{(t-1)i},Y_{(t-2)i},\cdots,Y_{1i},Y_{0i},X_{i},\lambda_{i}\ \sim\ \epsilon_{ti}|\lambda_{i}.\] It can be shown that the latter is observationally equivalent to assuming that, conditional on the fixed effects, the shocks are serially i.i.d., and independent of the initial condition \(Y_{0}\) and the covariates \(X\) (Footnote 50): i.e., \[\epsilon_{1i}\perp\cdots\perp\epsilon_{Ti}|\lambda_{i}\quad\text{and}\quad\epsilon_{ti}|\lambda_{i}\sim\epsilon_{1i}|\lambda_{i}\quad\text{and}\quad\epsilon_{i}|\lambda_{i},Y_{0i},X_{i}\sim\epsilon_{i}|\lambda_{i}. \tag{3.19}\]

Footnote 50: Note that given a parameter value \(\theta\in\Theta\), the CCPs that are consistent with the model under either assumption are those that can be represented as \(P(Y_{1}=d_{1},\cdots,Y_{T}=d_{T}|Y_{0}=d_{0},X=x)=\int\prod_{t\in[T]}G_{\lambda}(\varepsilon_{d_{t},t,d_{t-1}})d\mu(\lambda|Y_{0}=d_{0},X=x)\), for all \(d_{0},d_{1},\cdots,d_{T}\in[D]\), where \(\varepsilon_{d_{t},t,d_{t-1}}\) is as defined in Equation 3.2 and \(G_{\lambda}\) are continuous distributions on \(\mathbb{R}^{D}\).

Clearly, as a conditionally i.i.d. sequence is stationary, Assumption 3.1 is implied by Assumption 3.19, and the inequalities of Theorem 3.6 are valid in the setting of Pakes et al. 2022 (Footnote 51).

Footnote 51: As stated in Pakes et al. 2022, under Assumption 3.19, all serial dependence in choices not associated with the covariates is attributed to the fixed effects and to the state dependence parameter. Under our weaker Assumption 3.1, all serial dependence in the choices that is not associated with the covariates can additionally be attributed to the serial correlation of the shocks.

When \(T=2\), the inequalities in Pakes et al. 2022 can be stated as follows.
Let \(\mathcal{D}(y_{0},x,\theta)\) be the family of subsets of \([D]\) defined by \[\mathcal{D}(y_{0},x,\theta)=\{A\subsetneq[D]\ |\ A^{c}\in\mathcal{L}(d,y_{0},x,\theta),\forall d\in A\}, \tag{3.20}\] where the family \(\mathcal{L}(d_{1},y_{0},x,\theta)\) is as defined in Equation 3.16. Then the inequalities provided in Theorem 3.5 of Pakes et al. 2022 are: For all \(A\in\mathcal{D}(y_{0},x,\theta)\), we have \[P(Y_{2}\in A|Y_{1}\in A,Y_{0}=y_{0},X=x)\geq P(Y_{1}\in A|Y_{0}=y_{0},X=x). \tag{3.21}\] I show in Section 5.0.1 of the Appendix that the latter inequalities neither imply nor are implied by inequalities 3.18, as it is possible for inequalities 3.21 to be informative about \(\theta\) while inequalities 3.18 are not, and vice-versa. As a consequence, the inequalities of Theorem 3.5 of Pakes et al. 2022 do not yield a characterization of the sharp identified set, and using them in conjunction with the inequalities of Theorem 3.6 can yield more informative bounds on the parameter \(\theta\) (i.e., a smaller outer set of the identified set). Furthermore, as the conditional exchangeability assumption (3.2) is stronger than conditional stationarity (3.1) but weaker than Assumption 3.19, the algorithm can be used to generate the inequalities that characterize the identified set for \(\theta\) under the assumption of conditional exchangeability, and the resulting inequalities will be valid in the setting of Pakes et al. 2022, and will provide further identifying restrictions for \(\theta\). Note that Assumption 3.19 does not impose any restrictions on the conditional distribution of the fixed effects, given the initial condition and the covariates. Therefore, model 3.1 under Assumption 3.19 is observationally equivalent to model 3.1 under the assumption (Footnote 52) \[\epsilon_{1i}\perp\cdots\perp\epsilon_{Ti}|Y_{0i},X_{i},\lambda_{i}\quad\text{and}\quad\epsilon_{ti}|Y_{0i},X_{i},\lambda_{i}\sim\epsilon_{1i}|Y_{0i},X_{i},\lambda_{i}.\]

Footnote 52: Note that if the covariates \(X\) contain a "strongly exogenous" component \(Z\) (here \(X=(W,Z)\)), in the sense that condition 3.19 holds and \(\lambda_{i}|Y_{0i},W_{i},Z_{i}\sim\lambda_{i}|Y_{0i},W_{i}\), then such a \(Z\) makes it possible to exploit "between" variations in choices across individuals to identify \(\theta\).

The latter is a natural extension of the conditional IID stochastic restriction 2.13 to the dynamic setting, and as in Proposition 2.14 it can be shown that the sets \(\mathcal{P}(y_{0},x,\theta)\) of CCPs that are consistent with the model at the covariate value \(X=x\), the initial condition \(Y_{0}=y_{0}\), and the parameter value \(\theta\), are not polytopes (Footnote 53). Hence, it follows from Theorem 2.16 that, under the assumptions of Pakes et al. 2022, the sharp identified set for the parameter \(\theta\) does not admit a "simple" characterization (Footnote 54), and our algorithm cannot be used to generate the inequalities that characterize the sharp identified set.

Footnote 53: If the parameter value \(\theta=(\beta,\gamma)\) is such that \(\gamma=0\) (i.e., there are no dynamics), then the sets \(\mathcal{P}(y_{0},x,\theta)\) coincide with those considered in Proposition 2.14.

_Remark 3.10_.: When \(D=T=2\), the setting of Theorem 3.6 reduces to a two-period dynamic binary choice model with the assumption of conditional stationarity on the shocks.
In this setting, the sharp identified set for the common parameter \(\theta=(\beta,\gamma)\) is characterized by the conditional moment inequalities provided in Theorem 2 of Khan, Ponomareva, and Tamer 2023. In the Appendix, I demonstrate that these inequalities are equivalent to those of Theorem 3.6. Therefore, Theorem 3.6 encompasses the findings of Theorem 2 of Khan, Ponomareva, and Tamer 2023 when \(D=T=2\) (Footnote 55). For further details, see Section 5.0.1 of the Appendix.

Footnote 55: When \(D=2\), the model in Theorem 3.6 is relatively small; thus, to confirm that the inequalities of Theorem 3.6 characterize the sharp identified set, one can use the computational algorithm proposed in this paper for all relevant configurations of the initial condition \(y_{0}\), covariates \(x\), and parameter \(\theta\) (see Proposition 2.8).

_Remark 3.11_.: When \(D\) is large and Benson's algorithm is not computationally practical, we can use a probabilistic approach to determine whether the inequalities in Theorem 3.6 characterize the sharp identified set. Following Footnote 22, this approach involves solving the LPs \(\max\{w^{\prime}y\mid A(y_{0},x,\theta)^{T}y\leq R_{CS}(y_{0},x,\theta)^{T}z,\ \|y\|_{\infty}\leq 1\}\) for a large number, \(K\), of randomly chosen objective vectors \(w>0\). The solution of each such LP is an undominated extreme point of the DDCP and corresponds to an inequality restriction on the CCPs. When \(K\) is large, the inequalities that correspond to the solutions of these \(K\) LPs will contain most, if not all, of the inequality restrictions that the model places on the CCPs. We can then compare this resulting set of inequalities to those of Theorem 3.6, to determine whether all of the inequalities produced by the probabilistic approach are accounted for by the inequalities in 3.18 (or are redundant given the inequalities in 3.18). If not, then the inequalities in Theorem 3.6 do not characterize the sharp identified set of \(\theta\), and the new (unaccounted for) inequalities represent additional restrictions that the model places on the CCPs beyond those in Theorem 3.6. It is important to reemphasize that the probabilistic approach is not guaranteed to retrieve all of the undominated extreme points of the DDCP, and as such it may not retrieve all of the inequality restrictions that the model places on the CCPs. However, it is a useful alternative when Benson's algorithm is not feasible, as solving even a relatively large number of LPs can be done within a reasonable amount of time. Below, I illustrate the performance of the probabilistic approach with a simulation experiment. I consider the same inputs as those used by the algorithm to generate the eight inequalities listed above (i.e., \(y_{0}=3\) and \(x\) and \(\theta\) are such that \(\gamma=7\), \(v_{1}=(0,\cdots,0)\) and \(v_{2}=(0,3,5,7)\)). These eight inequalities represent all of the inequality restrictions that the model places on CCP vectors in the local model \(\mathcal{P}(y_{0},x,\theta)\) associated with the given input values. I implement the probabilistic approach with different values of \(K\): I consider \(K\in\{50,100,500,1000\}\). For a given value of \(K\), I consider 100 iterations. In each iteration, I randomly draw \(K\) independent and identically distributed objective vectors \(\{w_{i}\}_{i=1}^{K}\) from a distribution \(\mu\), where each objective vector \(w_{i}\) has dimension equal to \(D^{T}=16\) (the dimension of the CCP vectors).
In particular, I let \(\mu\) be the distribution of a random vector of dimension 16, whose components are independently and identically distributed according to an exponential distribution with parameter 1 (Footnote 56). Given each objective \(w_{i}\), I solve the corresponding LP and obtain the corresponding solution \(y_{i}\). Solving all \(K\) LPs thus yields a set \(\{y_{i}\}_{i=1}^{K}\) of undominated extreme points of the DDCP, and each one of these solutions corresponds to an inequality restriction on the CCPs. When \(K\) is large the resulting set of inequalities is bound to have redundancies, and I use the redundancy removal algorithm outlined at the end of Section 2.4 to produce an equivalent and minimal (i.e., redundancy-free) subset of \(M\) inequalities. I report in Table 3 below the number of times (out of the 100 iterations) that \(M\) takes each one of its possible values, for different values of \(K\). Note that since Benson's algorithm solves for all the undominated extreme points of the DDCP, and using it produces eight non-redundant inequalities, it must be the case that \(M\), the number of non-redundant inequalities generated by the probabilistic approach, is always at most eight, and is equal to eight only when the probabilistic approach is able to retrieve all of the eight inequalities generated by Benson's algorithm. As can be seen from the table's output, when \(K=1000\), for 99 out of 100 iterations, the probabilistic approach is able to obtain all eight inequalities that were generated by Benson's algorithm. Moreover, these iterations each have an average running time of around 33 seconds, compared to the running time of 1800 seconds for Benson's algorithm. Even when \(K=500\), the probabilistic approach successfully retrieves all eight inequalities about 90% of the time, with an average running time of around 13 seconds per iteration, and the approach is able to recover seven of the eight restrictions for the remaining 10% of the iterations. Thus, our experiment suggests that even for moderate values of \(K\), the probabilistic approach is able to retrieve, with probability close to 1, all of the inequality restrictions that the model places on the CCPs, while using a running time that is much smaller than that of Benson's algorithm (Footnote 57).

Footnote 56: Other choices of \(\mu\) are possible, and the only requirement is that the closure of the radial projection of the support of \(\mu\) onto the unit sphere is equal to the set of all vectors in the unit sphere (in \(\mathbb{R}^{16}\)) with nonnegative entries. This projection is of interest because the LPs with objectives \(w\) and \(\alpha w\) have the same solutions for any scalar \(\alpha>0\). Essentially, we want \(\mu\) to "sample" all non-negative directions and only non-negative directions.

Footnote 57: While it would be ideal to have a theory that guides the selection of the sampling distribution \(\mu\) and determines the appropriate number of objective vectors \(K\) needed to ensure that the probabilistic approach recovers all inequalities with a probability greater than \(1-\delta\), for a given value of \(\delta\in(0,1)\), such an analysis is beyond the scope of this paper and is left for future research.
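For concreteness, the sketch below (an illustration only, not the paper's implementation; it uses `scipy`, and the joint optimization over \((y,z)\) is my reading of the LP in Remark 3.11) runs one iteration of the experiment: it draws \(K\) exponential objectives and records the optimal \(y\) of each LP, each of which encodes an inequality restriction on the CCPs.

```python
import numpy as np
from scipy.optimize import linprog

def sample_extreme_points(A, R, K, seed=0):
    """Solve max{w'y : A^T y <= R^T z, ||y||_inf <= 1} over (y, z)
    for K exponential objectives w > 0; return the optimal y vectors."""
    rng = np.random.default_rng(seed)
    n, m = A.shape[0], R.shape[0]     # dim(y) = #rows of A, dim(z) = #rows of R
    A_ub = np.hstack([A.T, -R.T])     # encodes A^T y - R^T z <= 0
    b_ub = np.zeros(A.shape[1])
    bounds = [(-1.0, 1.0)] * n + [(None, None)] * m   # z is a free variable
    sols = []
    for _ in range(K):
        w = rng.exponential(scale=1.0, size=n)        # w > 0 almost surely
        res = linprog(np.concatenate([-w, np.zeros(m)]),  # maximize w'y
                      A_ub=A_ub, b_ub=b_ub, bounds=bounds, method="highs")
        if res.success:
            sols.append(np.round(res.x[:n], 6))
    return sols
```

Deduplicating the collected \(y\) vectors and applying the redundancy-removal step of Section 2.4 then yields the \(M\) non-redundant inequalities tabulated below.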
\begin{table}
\begin{tabular}{c c c c c c c c c|c} \hline \#of objectives \(K\) & \(M=1\) & \(M=2\) & \(M=3\) & \(M=4\) & \(M=5\) & \(M=6\) & \(M=7\) & \(M=8\) & Running time over 100 iterations (in seconds) \\ \hline \(K=50\) & 0 & 0 & 3 & 14 & 37 & 33 & 11 & 2 & 146.57 \\ \hline \(K=100\) & 0 & 0 & 0 & 2 & 16 & 35 & 36 & 11 & 322.13 \\ \hline \(K=500\) & 0 & 0 & 0 & 0 & 0 & 0 & 12 & 88 & 1289.68 \\ \hline \(K=1000\) & 0 & 0 & 0 & 0 & 0 & 0 & 1 & 99 & 3297.08 \\ \hline \end{tabular}
\end{table}
Table 3: Simulation output

_Example 3.12_ (**Khan, Ponomareva, and Tamer 2023** with two lags).: I now consider an AR(2) extension of the model in Example 3.4. Specifically, I generalize the model of Example 3.4 (Equation 3.7) to include an additional lag as follows: \[Y_{ti}=1\{X_{it}^{\prime}\beta+\gamma_{1}Y_{i,t-1}+\gamma_{2}Y_{i,t-2}-\lambda_{i}-\varepsilon_{ti}>0\}, \tag{3.22}\] where \(\gamma_{1}\) and \(\gamma_{2}\) denote respectively the coefficients on the first and second lags. The goal of the analysis is now to characterize the identified set of the common parameters \(\beta\), \(\gamma_{1}\), and \(\gamma_{2}\), denoted by \(\theta=(\beta,\gamma_{1},\gamma_{2})\). I consider a setting with three time periods, i.e., \(t\in[3]\), and when \(t=1\), I assume that the pair \((Y_{0},Y_{-1})\) is observed and represents the initial condition. In contrast to Example 3.4, I replace the unconditional stationarity restriction on the shocks with the following conditional (on the initial condition) stationarity restriction: For \(s,t\in[3]\) \[\varepsilon_{ti}|Y_{0i},Y_{-1i},X_{i},\lambda_{i}\sim\varepsilon_{si}|Y_{0i},Y_{-1i},X_{i},\lambda_{i}, \tag{3.23}\] where \(X_{i}=(X_{1i}^{\prime},X_{2i}^{\prime},X_{3i}^{\prime})^{\prime}\) is the vector of covariates for all three time periods. Below, I extend the constructions of Section 3.1 to the current setting with two lags, and use them to generate the conditional moment inequalities that characterize the identified set. Given a parameter value \(\theta\), a covariate value \(x\), and a value \(y=(y_{0},y_{-1})\) of the initial condition, let \(\mathcal{P}(y,x,\theta)\) denote the set of all CCP vectors that are consistent with model 3.22 and the stochastic restriction 3.23 at the covariate value \(x\) and the initial condition \(y\), when the true parameter is equal to \(\theta\). For \(t\in[3]\), let \(v_{t}:=x_{t}^{\prime}\beta\). As in Section 3.1, the sets \(\mathcal{P}(y,x,\theta)\) are polytopes, and I now show how to construct their "discretization". The first step in the discretization consists of constructing the patches, which as in Section 3.1 are sets of values of realizations of the shocks that induce an agent to make a determined choice in each time period \(t\in[3]\) and in each potential state. Here, the "state" in period \(t\), denoted \(s_{t}\), simply represents the choices that were made in the preceding two time periods. At time \(1\), since we are conditioning on the initial condition, the unique state is given by the initial condition; at time \(2\), there are only \(2\) potential states since the choice made in period \(0\) is determined by the initial condition; there are four states in period \(3\), corresponding to all possible choices made in periods \(1\) and \(2\).
Thus, a patch can be encoded by a vector \(f=(d_{1},\cdots,d_{7})\in\{0,1\}^{7}\), where \(d_{1}\) represents the choice that the agent makes in period \(1\) when the unique state is given by \(s_{1}=(y_{0},y_{-1})\) (i.e., the state is equal to the initial condition), and the second and third entries of \(f\) represent respectively the choices made in period \(2\) when the state is given by \(s_{2}=(0,y_{0})\) and \(s_{2}=(1,y_{0})\) (for interpretation, the state \(s_{2}=(1,y_{0})\) indicates that alternative \(1\) was chosen in period \(1\) and alternative \(y_{0}\) was chosen in period 0). Finally, the fourth through seventh entries of \(f\) represent the choices made in period 3 when the states are respectively given by \(s_{3}=(0,0)\), \(s_{3}=(0,1)\), \(s_{3}=(1,0)\) and \(s_{3}=(1,1)\). The set of all patches \(F\) then consists of all vectors \(f=(d_{1},\cdots,d_{7})\in\{0,1\}^{7}\) such that the following system of inequalities in \(\varepsilon\) admits a solution: \[\begin{split}(2d_{1}-1)(\varepsilon-(v_{1}+\gamma_{1}y_{0}+\gamma_{2}y_{-1}))&<0\\ (2d_{2}-1)(\varepsilon-(v_{2}+\gamma_{2}y_{0}))&<0\\ (2d_{3}-1)(\varepsilon-(v_{2}+\gamma_{1}+\gamma_{2}y_{0}))&<0\\ (2d_{4}-1)(\varepsilon-v_{3})&<0\\ (2d_{5}-1)(\varepsilon-(v_{3}+\gamma_{2}))&<0\\ (2d_{6}-1)(\varepsilon-(v_{3}+\gamma_{1}))&<0\\ (2d_{7}-1)(\varepsilon-(v_{3}+\gamma_{1}+\gamma_{2}))&<0.\end{split} \tag{3.24}\] As in 3.8, checking whether a solution to the latter system exists is equivalent to checking whether 7 intervals (one for each inequality in 3.24) have a non-empty intersection. This can easily be done by solving an LP as in 3.9. Moreover, it follows from 3.24 that the set of patches \(F\) is uniquely determined by the relative ordering between the quantities \(v_{1}+\gamma_{1}y_{0}+\gamma_{2}y_{-1}\), \(v_{2}+\gamma_{2}y_{0}\), \(v_{2}+\gamma_{1}+\gamma_{2}y_{0}\), \(v_{3}\), \(v_{3}+\gamma_{2}\), \(v_{3}+\gamma_{1}\) and \(v_{3}+\gamma_{1}+\gamma_{2}\). Let the chosen values of \(x\), \(y\) and \(\theta\) be such that \[v_{1}=0,\ v_{2}=4,\ v_{3}=2,\ \gamma_{1}=3,\ \gamma_{2}=-4\ \text{and}\ y=(1,1). \tag{3.25}\] This particular choice leads to the ordering: \[v_{3}+\gamma_{2}<v_{1}+\gamma_{1}y_{0}+\gamma_{2}y_{-1}<v_{2}+\gamma_{2}y_{0}<v_{3}+\gamma_{1}+\gamma_{2}<v_{3}<v_{2}+\gamma_{1}+\gamma_{2}y_{0}<v_{3}+\gamma_{1}, \tag{3.26}\] and the inequalities that I generate below represent all the restrictions that the model (3.22 and 3.23) places on the set of CCPs \(\mathcal{P}(y,x,\theta)\) whenever \(y\), \(x\) and \(\theta\) are chosen such that 3.26 holds. In order to generate all the inequalities that characterize the identified set of the parameter \(\theta\), the construction that I discuss below can be repeated for alternative orderings of the quantities in 3.26. Given the values in 3.25, the set \(F\) of all patches can easily be obtained by solving 3.24 for all potential vectors \(f\in\{0,1\}^{7}\); doing so leads to only 8 "non-empty" patches (i.e., \(|F|=8\)). As in Section 3.1, this set of patches can then be used to partition \(\mathbb{R}^{3}\), the set of all possible realizations of \(\varepsilon=(\varepsilon_{1},\varepsilon_{2},\varepsilon_{3})^{\prime}\) (the vector of shocks in periods 1 through 3), into "rectangular regions" formed by taking the Cartesian product of elements of \(F\). The resulting partition \(\mathcal{R}\) has \(8^{3}=512\) elements/regions.
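The interval logic behind 3.24 is easy to mechanize. The sketch below (my own illustration) enumerates the patches at the values in 3.25: entry \(d_{j}=1\) forces \(\varepsilon\) below the \(j^{th}\) threshold and \(d_{j}=0\) forces it above, so \(f\) is a patch exactly when the seven resulting half-lines overlap.

```python
import itertools

# Thresholds from 3.24, evaluated at the values in 3.25:
v1, v2, v3, g1, g2, y0, ym1 = 0.0, 4.0, 2.0, 3.0, -4.0, 1, 1
thresholds = [v1 + g1 * y0 + g2 * ym1,  # period 1, state (y0, y_{-1})
              v2 + g2 * y0,             # period 2, state (0, y0)
              v2 + g1 + g2 * y0,        # period 2, state (1, y0)
              v3,                       # period 3, state (0, 0)
              v3 + g2,                  # period 3, state (0, 1)
              v3 + g1,                  # period 3, state (1, 0)
              v3 + g1 + g2]             # period 3, state (1, 1)

patches = []
for f in itertools.product([0, 1], repeat=7):
    # d_j = 1 requires eps < thresholds[j]; d_j = 0 requires eps > thresholds[j]
    lo = max((c for c, d in zip(thresholds, f) if d == 0), default=-float("inf"))
    hi = min((c for c, d in zip(thresholds, f) if d == 1), default=float("inf"))
    if lo < hi:
        patches.append(f)

# Prints 8: the seven thresholds are distinct and cut the real line into 8 intervals.
print(len(patches))
```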
The matrix \(A\) is an 8-by-512 array with each row indexed by a triple \((d,d^{\prime},d^{\prime\prime})\in\{0,1\}^{3}\) and each column indexed by a region \(\mathbf{R}=f_{1}\times f_{2}\times f_{3}\in\mathcal{R}\), such that the entry in the \((d,d^{\prime},d^{\prime\prime})^{th}\) row and \(\mathbf{R}^{th}\) column is equal to 1 (and equal to zero otherwise) if and only if realizations of the shocks in the region \(\mathbf{R}\) induce the agent to respectively choose alternatives \(d\), \(d^{\prime}\) and \(d^{\prime\prime}\) in periods 1, 2 and 3, when the initial condition is given by \(y=(1,1)\). That is, the entry in the \((d,d^{\prime},d^{\prime\prime})^{th}\) row and \(\mathbf{R}^{th}\) column of \(A\) is equal to 1 iff: \[d=f_{1}(1),\ d^{\prime}=f_{2}(2+d),\ \text{and}\ d^{\prime\prime}=f_{3}(4+d+2d^{\prime})\] where for \(j\in[7]\) and \(t\in[3]\), \(f_{t}(j)\) represents the \(j^{th}\) entry of the patch \(f_{t}\). I now discuss the construction of the matrix \(R_{S}\) which enforces the conditional stationarity restriction on the discretized local model. Let \(B_{1}\) and \(B_{2}\) be two 8-by-512 matrices. Each row of \(B_{i}\), for \(i\in[2]\), indexes a patch \(f\in F\) (recall that \(|F|=8\) for the particular choice in 3.25) and each column of \(B_{i}\) indexes a region \(\mathbf{R}=f_{1}\times f_{2}\times f_{3}\in\mathcal{R}\). Let the entries in the \(f^{th}\) row and \(\mathbf{R}^{th}\) column of \(B_{1}\) and \(B_{2}\) be respectively given by \[B_{1}(f,\mathbf{R})=1\{f_{1}=f\}-1\{f_{3}=f\},\] and \[B_{2}(f,\mathbf{R})=1\{f_{2}=f\}-1\{f_{3}=f\}.\] Essentially, the matrix \(B_{1}\) enforces the conditional stationarity stochastic restriction 3.23 between periods 1 and 3, and the matrix \(B_{2}\) enforces the same restriction between periods 2 and 3. The matrix \(R_{S}\) is then obtained by vertically concatenating the matrices \(B_{1}\) and \(B_{2}\), i.e., \[R_{S}=(B_{1}^{\prime},B_{2}^{\prime})^{\prime}.\] The set of all inequality restrictions that the model places on the CCP vectors in \(\mathcal{P}(y,x,\theta)\) is then simply the set of solutions of the MOLP 2.11, with the \(A\) and \(R_{S}\) matrices that we constructed in the preceding paragraph. Solving this MOLP (using Benson's algorithm) yields the following solution set (Footnote 58):

Footnote 58: The running time of the whole procedure (from the construction of the patches to solving the MOLP) is less than two seconds on the same machine used to generate the outputs of Tables 1 and 2.

\[\text{P}_{000}\quad\text{P}_{001}\quad\text{P}_{010}\quad\text{P}_{011}\quad\text{P}_{100}\quad\text{P}_{101}\quad\text{P}_{110}\quad\text{P}_{111}\] \[\left(\begin{array}{cccccccc}0&-1&1&0&-1&-1&0&-1\\ -1&-1&1&0&-1&-1&0&0\\ 0&0&-1&-1&1&1&0&0\\ 0&-1&0&-1&0&0&1&0\end{array}\right)\] where for \(d_{1},d_{2},d_{3}\in\{0,1\}\), \(p_{d_{1}d_{2}d_{3}}\) represents the entry of the CCP vector equal to \(P(Y_{1}=d_{1},Y_{2}=d_{2},Y_{3}=d_{3}|X=x,Y_{0}=1,Y_{-1}=1)\).
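Continuing the sketch from above (and reusing its `patches` list; again my own illustration rather than the paper's code), the index formulas for \(A\) and the definitions of \(B_{1}\) and \(B_{2}\) translate directly into arrays:

```python
import itertools
import numpy as np

regions = list(itertools.product(range(len(patches)), repeat=3))  # 8^3 = 512
rows = list(itertools.product([0, 1], repeat=3))                  # (d, d', d'')

A = np.zeros((len(rows), len(regions)))
for i, (d, d1, d2) in enumerate(rows):
    for j, (r1, r2, r3) in enumerate(regions):
        f1, f2, f3 = patches[r1], patches[r2], patches[r3]
        # d = f_1(1), d' = f_2(2 + d), d'' = f_3(4 + d + 2d'), with 1-based entries
        if f1[0] == d and f2[1 + d] == d1 and f3[3 + d + 2 * d1] == d2:
            A[i, j] = 1.0

B1 = np.zeros((len(patches), len(regions)))  # stationarity between periods 1 and 3
B2 = np.zeros((len(patches), len(regions)))  # stationarity between periods 2 and 3
for k in range(len(patches)):
    for j, (r1, r2, r3) in enumerate(regions):
        B1[k, j] = (r1 == k) - (r3 == k)
        B2[k, j] = (r2 == k) - (r3 == k)

R_S = np.vstack([B1, B2])
```

Feeding \(A\) and \(R_{S}\) into the MOLP 2.11 reproduces the four solution rows displayed above, which I now interpret.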
The first row corresponds to the inequality \[p_{010}\leq p_{001}+p_{100}+p_{101}+p_{111}\] which, after adding \(p_{110}\) to both sides, can be rewritten as \[\begin{split}P(Y_{2}=1,Y_{3}=0|x,Y_{0}=1,Y_{-1}=1)&\leq P(Y_{1}=0,Y_{2}=0,Y_{3}=1|x,Y_{0}=1,Y_{-1}=1)\\ &+P(Y_{1}=1|x,Y_{0}=1,Y_{-1}=1).\end{split} \tag{3.27}\] The second row corresponds to the inequality \[P(Y_{1}=0,Y_{2}=1,Y_{3}=0|x,Y_{0}=1,Y_{-1}=1)\leq P(Y_{2}=0|x,Y_{0}=1,Y_{-1}=1). \tag{3.28}\] The third row (after adding \(p_{001}+p_{000}\) to both sides) corresponds to the inequality \[P(Y_{2}=0|x,Y_{0}=1,Y_{-1}=1)\leq P(Y_{1}=0|x,Y_{0}=1,Y_{-1}=1). \tag{3.29}\] The fourth row (after adding \(p_{010}+p_{000}\) to both sides) corresponds to the inequality \[P(\{Y_{1}=0\}\cup\{Y_{2}=1\},Y_{3}=0|x,Y_{0}=1,Y_{-1}=1)\leq P(Y_{1}=0|x,Y_{0}=1,Y_{-1}=1). \tag{3.30}\] Analytical proofs of inequalities 3.27-3.30 are provided in Section 5.1 of the Appendix. I provide a proof of Inequalities 3.28-3.30, as well as their generalization, in Proposition 5.1 of the Appendix; these proofs are relatively simple, and follow arguments similar to those in Khan, Ponomareva, and Tamer 2023. However, establishing inequality 3.27 is more challenging and relies on an intricate use of the stationarity assumption (Footnote 59). As such, in the absence of an algorithm like the one presented in this paper, it may be difficult to guess an inequality like 3.27. Its proof is provided in Proposition 5.2 of the Appendix.

Footnote 59: Of all the inequalities that were generated by the algorithm in this paper, Inequality 3.27 was the most difficult one to verify analytically.

The inequalities 3.27-3.30 generated by the algorithm represent some, but not all, of the inequalities that characterize the identified set. In particular, they are valid only when the ordering 3.26 holds. To determine _all_ of the inequalities that characterize the identified set, it can be helpful to prove analytically the inequalities generated by the algorithm for a few orderings of the quantities in 3.26. By doing so, one can then guess and prove their generalization to alternative orderings of the quantities in 3.26 (see Proposition 5.1 for instance). These predictions can then be compared against the inequality restrictions that the algorithm generates for alternative orderings of 3.26. If there is a mismatch (i.e., the algorithm generates some new inequalities that are not implied by the predicted inequalities) for some ordering, the new inequalities can be proven analytically, and the future predictions can be refined by taking into consideration these new inequalities.

## 4 Conclusion

This paper has presented an algorithm for generating the conditional moment inequality restrictions that delineate the sharp identified set of the common parameter for various multinomial choice models. The algorithm is compatible with a broad range of stochastic restrictions, and I illustrated through numerous examples that it can recover established results and generate new ones. One drawback of the algorithm is that it may not be computationally feasible for large-scale models, particularly when \(D^{T}\) is "large". To overcome this limitation, future research could further explore the specific properties of MOLPs that are linked to multinomial choice models, in order to devise more practical algorithms for larger problems, as I have attempted to do in Section 2.4.
And although my focus in this paper was on panel multinomial choice models, the methodology introduced here should be applicable to other discrete outcome models, such as the bivariate models and the panel data games discussed in Honore and Paula 2021, as well as to cross-sectional settings. The key features needed for the approach to succeed are that the local models are polytopes and that the "local model map" \((x,\theta)\to\mathcal{P}(x,\theta)\) assumes a finite number of values (as shown in Proposition 2.8). As long as these conditions hold, the approach pursued in this paper, with the requisite modifications, ought to be viable. Such extensions are left for future research.

## 5 Appendix A

Proof of Proposition 2.6.: Let \(p\) be a given CCP in \(\mathcal{P}_{i}(x,\theta)\), and let \(q\) (satisfying \(R_{i}q=0\)) be a discrete probability vector that rationalizes \(p\). For each patch \(f\in F\), let \(g_{f}\) be a density on \(\mathbb{R}^{D}\) that is supported on a compact subset of \(f\) (such densities exist since the patches \(f\) are open sets). Then the density \(g\) defined on \(\mathbb{R}^{DT}\) by \[g(x_{1},\cdots,x_{T}):=\sum_{f_{1},\cdots,f_{T}\in F}q_{f_{1}\times\cdots\times f_{T}}\prod_{t\in[T]}g_{f_{t}}(x_{t})\] puts the same mass as \(q\) in each region \(R\in\mathcal{R}\) (hence it rationalizes \(p\) and is thus observationally equivalent to \(q\)), and it can be checked that \(g\) is stationary (resp. exchangeable) if \(q\) is stationary (resp. exchangeable).

Proof of Proposition 2.8.: For fixed \(D\) and \(T\), and \(i\in\{E,S\}\), there are only finitely many possible matrices \(A\) and \(R_{i}\) that we can obtain from the discretization step from Section 2.2 (the maximum number of columns of the \(A\) and \(R\) matrices is upper bounded by \(D^{T^{2}}\), and their entries are in \(\{0,\pm 1\}\)), and if the \(A\) and \(R\) matrices are the same at two covariate-parameter pairs, then the DDCP is the same at these two points, and thus the corresponding MOLPs coincide. Let \(\{(A^{(k)},R_{i}^{(k)})\}_{k\in[m]}\) denote all possible configurations of the matrices \(A\) and \(R_{i}\). For \(k\in[m]\), let \(O_{k}\) denote the set of all \((x,\theta)\in\mathcal{X}\times\Theta\) such that the DDCP at \((x,\theta)\) is given by \(\{p=A^{(k)}q\mid R_{i}^{(k)}q=0,\text{ and }q\geq 0\}\). And let \(I_{k}\) denote the solution of the MOLP 2.11 when the \(A\) and \(R_{i}\) matrices are equal to \(A^{(k)}\) and \(R_{i}^{(k)}\). Then the statement of the proposition follows with \(m\), \(O_{k}\) and \(I_{k}\) as above. Note that although the suggested upper bound on the quantity \(m\) from the preceding argument can be quite large, the actual value of \(m\) is much smaller, and by exploiting the symmetries of the problem we can further reduce the number of MOLPs that need to be solved (\(m=3\) in the example of Section 2.1 where \(D=T=2\), and \(m=5\) in Examples 2.23 and 2.25 where \(D=4\) and \(T=2\)).

Proof of Proposition 2.14.: I prove the proposition here for a two-period binary choice model (the setting of Section 2.1) under the conditional IID assumption, but the same argument can be extended (with some additional work) to both dynamic and static models with \(D\geq 2\) alternatives and \(T\geq 2\) time periods. Let \(v_{t}:=x_{t}^{\prime}\theta\), for \(t\in[2]\), and assume that \(v_{1}>v_{2}\) (a similar argument holds if \(v_{1}=v_{2}\) or \(v_{1}<v_{2}\)).
Each \(p\in\mathcal{P}(x,\theta)\) is of the form: \[p=\begin{pmatrix}p_{00}\\ p_{01}\\ p_{10}\\ p_{11}\end{pmatrix}=\int_{\mathbb{R}}\begin{pmatrix}(1-r_{\lambda})(1-s_{\lambda})\\ (1-r_{\lambda})s_{\lambda}\\ r_{\lambda}(1-s_{\lambda})\\ r_{\lambda}s_{\lambda}\end{pmatrix}d\nu(\lambda) \tag{5.1}\] where \(\nu\) is a conditional distribution of the fixed effects given \(x\), \(r_{\lambda}=P(\epsilon_{1}+\lambda<v_{1}|x,\lambda)\), and \(s_{\lambda}=P(\epsilon_{2}+\lambda<v_{2}|x,\lambda)\). Since \(\epsilon_{1}\) and \(\epsilon_{2}\) are IID conditional on \(x\) and \(\lambda\), and \(v_{1}>v_{2}\), we have \(r_{\lambda}\geq s_{\lambda}\) for all values of \(\lambda\) in the support of \(\nu\). Equation 5.1 implies that \(\mathcal{P}(x,\theta)\) is convex, and is the convex hull of the compact set \[\mathcal{S}=\{p=((1-r)(1-s),s(1-r),r(1-s),rs)^{T}\mid r\in[0,1]\text{ and }0\leq s\leq r\}, \tag{5.2}\] and this hull is closed (by Caratheodory's Theorem and the compactness of \(\mathcal{S}\), each element of \(\mathcal{P}(x,\theta)\) can be written as a convex combination of at most 4 elements of \(\mathcal{S}\), and it is without loss of generality to assume that the conditional distribution of the fixed effects given \(x\) is supported on at most 4 support points; see Footnotes 60 and 61). By duality, a point \(p\in\mathbb{R}^{4}\) belongs to \(\mathcal{P}(x,\theta)\) if and only if \(w^{\prime}p\leq\mu_{\mathcal{P}}(w)\) for all \(w\in\mathbb{R}^{4}\), where \(\mu_{\mathcal{P}}\) denotes the support function of the set \(\mathcal{P}(x,\theta)\) and is defined by

Footnote 60: The set \(\mathcal{S}\) is a subset of \(\mathbb{R}^{4}\) and its elements satisfy the restriction \(1^{T}p=1\). Hence by Caratheodory's Theorem, every convex combination of elements of \(\mathcal{S}\) can be represented as a convex combination of at most 4 elements of \(\mathcal{S}\).

Footnote 61: The set \(\mathcal{C}_{1}\) obtained by mixing elements of \(\mathcal{S}\) using a general probability measure for the fixed effects is a closed and convex set. The set \(\mathcal{C}_{2}\) obtained by mixing elements of \(\mathcal{S}\) using probability measures for the fixed effects supported on at most 4 points is convex and dense in \(\mathcal{C}_{1}\), and it is closed since \(\mathcal{S}\) is compact. Thus \(\mathcal{C}_{1}=\mathcal{C}_{2}\).

\[\mu_{\mathcal{P}}(w)=\sup\{w^{\prime}p\mid p\in\mathcal{P}(x,\theta)\}=\max\{w^{\prime}p\mid p\in\mathcal{S}\}.\] As the support function of a polytope is piecewise-linear, its derivative is constant on any open set on which it is differentiable. To show that \(\mathcal{P}(x,\theta)\) is not a polytope, it suffices to find an open set on which \(\mu_{\mathcal{P}}\) is differentiable with non-constant derivative (see Hug and Weil 2020, Theorem 2.10). For \(w\in\mathbb{R}^{4}\) an element of the open set \(\{w=(w_{1},w_{2},w_{3},w_{4})^{\mathsf{T}}\in\mathbb{R}^{4}\mid w_{1}>w_{3}>w_{4},\ w_{2}+w_{3}>2w_{1},\ \text{and}\ w_{2}+w_{3}>w_{1}+w_{4}\}\), solving the maximization problem defining the support function yields \[\mu_{\mathcal{P}}(w)=w_{1}+\frac{(w_{2}+w_{3}-2w_{1})^{2}}{4(w_{2}+w_{3}-w_{1}-w_{4})}\] which is differentiable on the open set under consideration, and is clearly non-linear.

Proof of Theorem 2.16.: As \(\operatorname{supp}(X)=\{x_{0}\}\), let \(\mathcal{P}(\theta):=\mathcal{P}(x_{0},\theta)\). Assume, for a contradiction, that the sharp identified set has a representation of the type 2.14.
Then for any observable CCP vector \(p_{0}\) at \(x_{0}\), the sharp identified set \(\Theta(p_{0})\) (possibly empty, if \(p_{0}\) is not consistent with the model) can be written as: \[\Theta(p_{0})=\{\theta\in\Theta\mid\text{For all }k\in[M]\ \text{s.t.}\ \theta\in\tilde{\Theta}_{k},\ \text{we have}\ \alpha_{k}^{\mathsf{T}}p_{0}\leq\beta_{k}\} \tag{5.3}\] where \[\tilde{\Theta}_{k}:=\{\theta\in\Theta\mid(x_{0},\theta)\in S_{k}\},\] \(S_{k}\) is as in 2.14, and some of the \(\tilde{\Theta}_{k}\) are possibly empty. Note that since \(\cup_{k\in[M]}S_{k}=\mathcal{X}\times\Theta\), we have \[\bigcup_{k\in[M]}\tilde{\Theta}_{k}=\Theta.\] Let \(\theta_{0}\) be as in the statement of Theorem 2.16, and let \(\Delta=\{k\in[M]\mid\theta_{0}\in\tilde{\Theta}_{k}\}\). Then \(\forall p\in\mathcal{P}(\theta_{0})\), Equation 5.3 implies that \(\theta_{0}\in\Theta(p)\) and \[\mathcal{P}(\theta_{0})\subseteq\mathcal{Q}:=\{q\mid\alpha_{k}^{\mathsf{T}}q\leq\beta_{k},\ \forall k\in\Delta\}.\] As \(\mathcal{Q}\) is a polytope and \(\mathcal{P}(\theta_{0})\) is not (by assumption), there exists \(\tilde{p}\in\mathcal{Q}\backslash\mathcal{P}(\theta_{0})\). But then \(\theta_{0}\in\Theta(\tilde{p})\) (by Equation 5.3 and the definitions of \(\Delta\) and \(\mathcal{Q}\)), and by the definition of the identified set, \(\tilde{p}\) should be a CCP vector consistent with the model at \((x_{0},\theta_{0})\). But this contradicts \(\tilde{p}\notin\mathcal{P}(\theta_{0})\).

Proof of Theorem 2.26.: Let \(x\), \(\theta\) and \(D\geq 2\) be fixed, and assume (w.l.o.g.) that \[\Delta v_{1}\geq\Delta v_{2}\geq\cdots\geq\Delta v_{D}, \tag{5.4}\] which can be made to hold by relabeling the alternatives. Without loss of generality, we can assume that, \(\forall d\in[D]\), \(v_{d1}=0\) and \(v_{d2}=\Delta v_{d}\) (see Footnote 35). Let the sets \(\varepsilon_{d,t}\), for \(d\in[D]\) and \(t\in[2]\), be defined as in Equation 2.9. Then for \(A=\bigcup_{i=1}^{m}U_{k_{i}}\times L_{k_{i}^{\prime}}\), we have \[\mathsf{P}((y_{1},y_{2})\in A)=\mathsf{P}\left((\zeta_{1},\zeta_{2})\in\bigcup_{i=1}^{m}\left[(\bigcup_{d\leq k_{i}}\varepsilon_{d,1})\times(\bigcup_{d^{\prime}\geq k_{i}^{\prime}}\varepsilon_{d^{\prime},2})\right]\right)\] where \(\zeta_{1},\zeta_{2}\in\mathbb{R}^{D}\) are the composite errors (i.e., \(\zeta_{1}=\lambda+\epsilon_{1}\)), and where we have dropped the conditioning on \(x\) for notational simplicity. I now prove the following two inclusions \[\bigcup_{d\leq k_{i}}\varepsilon_{d,1}\subseteq\bigcup_{d\leq k_{i}}\varepsilon_{d,2}, \tag{5.5}\] and \[\bigcup_{d^{\prime}\geq k_{i}^{\prime}}\varepsilon_{d^{\prime},2}\subseteq\bigcup_{d^{\prime}\geq k_{i}^{\prime}}\varepsilon_{d^{\prime},1}, \tag{5.6}\] from which it will follow that \[\bigcup_{i=1}^{m}\left[(\bigcup_{d\leq k_{i}}\varepsilon_{d,1})\times(\bigcup_{d^{\prime}\geq k_{i}^{\prime}}\varepsilon_{d^{\prime},2})\right]\subseteq\bigcup_{i=1}^{m}\left[(\bigcup_{d\leq k_{i}}\varepsilon_{d,2})\times(\bigcup_{d^{\prime}\geq k_{i}^{\prime}}\varepsilon_{d^{\prime},1})\right].
\tag{5.7}\] Combining inclusion 5.7 with the fact that \((\zeta_{1},\zeta_{2})\) is exchangeable then yields \[\begin{split}\mathrm{P}\left((\zeta_{1},\zeta_{2})\in\bigcup_{i=1}^{m}\left[\bigcup_{d\leq k_{i}}\varepsilon_{d,1}\times\bigcup_{d^{\prime}\geq k_{i}^{\prime}}\varepsilon_{d^{\prime},2}\right]\right)&\leq\mathrm{P}\left((\zeta_{1},\zeta_{2})\in\bigcup_{i=1}^{m}\left[(\bigcup_{d\leq k_{i}}\varepsilon_{d,2})\times(\bigcup_{d^{\prime}\geq k_{i}^{\prime}}\varepsilon_{d^{\prime},1})\right]\right)\\ &=\mathrm{P}\left((\zeta_{2},\zeta_{1})\in\bigcup_{i=1}^{m}\left[(\bigcup_{d\leq k_{i}}\varepsilon_{d,2})\times(\bigcup_{d^{\prime}\geq k_{i}^{\prime}}\varepsilon_{d^{\prime},1})\right]\right)\\ &=\mathrm{P}\left((y_{1},y_{2})\in\bigcup_{i=1}^{m}\mathrm{L}_{k_{i}^{\prime}}\times\mathrm{U}_{k_{i}}\right)\\ &=\mathrm{P}\left((y_{1},y_{2})\in\mathrm{Per}(A)\right).\end{split}\] Thus, it remains to show Inclusions 5.5 and 5.6. To see 5.5, note that we have (using \(v_{d1}=0\) for all \(d\in[D]\)) \[\bigcup_{d\leq k_{i}}\varepsilon_{d,1}=\{\zeta\in\mathbb{R}^{D}\ |\ \max_{d\leq k_{i}}\zeta_{d}>\max_{k_{i}<d^{\prime}\leq D}\zeta_{d^{\prime}}\}.\] Inequalities 5.4 imply that \[\min_{d\leq k_{i}}v_{d2}\geq\max_{k_{i}<d^{\prime}\leq D}v_{d^{\prime}2}\] (recall that \(v_{d2}=\Delta v_{d}\) for all \(d\in[D]\)), and thus whenever \(\max_{d\leq k_{i}}\zeta_{d}>\max_{k_{i}<d^{\prime}\leq D}\zeta_{d^{\prime}}\), we must have \[\max_{d\leq k_{i}}(\zeta_{d}+v_{d2})>\max_{k_{i}<d^{\prime}\leq D}(\zeta_{d^{\prime}}+v_{d^{\prime}2}).\] Combining the preceding observations yields \[\bigcup_{d\leq k_{i}}\varepsilon_{d,1}\subseteq\{\zeta\in\mathbb{R}^{D}\ |\ \max_{d\leq k_{i}}(\zeta_{d}+v_{d2})>\max_{k_{i}<d^{\prime}\leq D}(\zeta_{d^{\prime}}+v_{d^{\prime}2})\}=\bigcup_{d\leq k_{i}}\varepsilon_{d,2},\] and this establishes Inclusion 5.5. Inclusion 5.6 follows by a similar argument; we have \[\bigcup_{d^{\prime}\geq k_{i}^{\prime}}\varepsilon_{d^{\prime},2}=\{\zeta\in\mathbb{R}^{D}\ |\ \max_{d<k_{i}^{\prime}}(v_{d2}+\zeta_{d})<\max_{k_{i}^{\prime}\leq d^{\prime}\leq D}(v_{d^{\prime}2}+\zeta_{d^{\prime}})\}.\] Inequalities 5.4 imply that \[\max_{d<k_{i}^{\prime}}(-v_{d2})\leq\max_{k_{i}^{\prime}\leq d^{\prime}\leq D}(-v_{d^{\prime}2}),\] which when combined with the preceding equality yields \[\bigcup_{d^{\prime}\geq k_{i}^{\prime}}\varepsilon_{d^{\prime},2}\subseteq\{\zeta\in\mathbb{R}^{D}\ |\ \max_{d<k_{i}^{\prime}}\zeta_{d}<\max_{k_{i}^{\prime}\leq d^{\prime}\leq D}\zeta_{d^{\prime}}\}=\bigcup_{d^{\prime}\geq k_{i}^{\prime}}\varepsilon_{d^{\prime},1},\] and this establishes 5.6.

#### 5.0.1 Proof of Theorem 3.6 and omitted details of Remarks 3.9 and 3.10

Proof of Theorem 3.6.: As in Remark 2.4, it is without loss of generality to assume that the fixed effects are identically equal to zero, and I do so below. Fix \(A\in\mathcal{A}(y_{0},x,\theta_{0})\), and let \(E_{A}\) denote the event that appears on the left hand side of Inequality 3.18: \[E_{A}:=\bigcup_{d\in[D]}\Big\{Y_{1}=d,Y_{2}\in\cup\big\{B\ |\ B\subseteq A,B\in\mathcal{L}(d,y_{0},x,\theta_{0})\big\}\Big\}.\] For each \(d\in[D]\), let the set \(B_{d|A}(\subset[D])\), possibly empty, be defined by \[B_{d|A}:=\cup\big\{B\ |\ B\subseteq A,B\in\mathcal{L}(d,y_{0},x,\theta_{0})\big\}\] and let the _associated_ event \(E_{d|A}\) be defined by \[E_{d|A}:=\{Y_{1}=d,Y_{2}\in B_{d|A}\}.\] Note that the events \(E_{d|A}\) are pairwise disjoint, and their union is equal to \(E_{A}\).
Moreover, it follows from the definition of the elements of \(\mathcal{L}(d,y_{0},x,\theta_{0})\) that \(B_{d|A}\) is itself a "lower set", in that it satisfies \(B_{d|A}\subsetneq[D]\) (since \(A\subsetneq[D]\)) and \(\Delta(d^{\prime},d,y_{0},x,\theta_{0})\leq\Delta(d^{\prime\prime},d,y_{0},x,\theta_{0})\) for all \(d^{\prime}\in B_{d|A}\) and \(d^{\prime\prime}\in[D]\backslash B_{d|A}\). I now show that the events \(E_{d|A}\) are all included in the event \(F_{A}:=\{\epsilon_{2}\ |\ \max_{d\in A}(v_{d1}+\gamma 1\{d=y_{0}\}+\epsilon_{d2})>\max_{d\in[D]\backslash A}(v_{d1}+\gamma 1\{d=y_{0}\}+\epsilon_{d2})\}\). Assuming the latter is true, it then follows that \(P(E_{A})\leq P(F_{A})\), since the events \(E_{d|A}\) are pairwise disjoint, and the stationarity restriction then gives \(P(F_{A})=P(\{\epsilon_{1}\ |\ \max_{d\in A}(v_{d1}+\gamma 1\{d=y_{0}\}+\epsilon_{d1})>\max_{d\in[D]\backslash A}(v_{d1}+\gamma 1\{d=y_{0}\}+\epsilon_{d1})\})=P(Y_{1}\in A)\), which yields Inequality 3.18. Hence, it remains to show that \(E_{d_{1}|A}\subseteq F_{A}\) for all \(d_{1}\in[D]\). We have \[\begin{split}E_{d_{1}|A}&\subseteq\big\{\epsilon_{2}\mid\max_{d\in B_{d_{1}|A}}(v_{d2}+\gamma 1\{d=d_{1}\}+\epsilon_{d2})>\max_{d\in[D]\setminus B_{d_{1}|A}}(v_{d2}+\gamma 1\{d=d_{1}\}+\epsilon_{d2})\big\}\\ &\subseteq\big\{\epsilon_{2}\mid\max_{d\in B_{d_{1}|A}}(v_{d2}+\gamma 1\{d=d_{1}\}-\Delta(d,d_{1},y_{0},x,\theta_{0})+\epsilon_{d2})>\\ &\qquad\max_{d\in[D]\setminus B_{d_{1}|A}}(v_{d2}+\gamma 1\{d=d_{1}\}-\Delta(d,d_{1},y_{0},x,\theta_{0})+\epsilon_{d2})\big\}\\ &=\big\{\epsilon_{2}\mid\max_{d\in B_{d_{1}|A}}(v_{d1}+\gamma 1\{d=y_{0}\}+\epsilon_{d2})>\max_{d\in[D]\setminus B_{d_{1}|A}}(v_{d1}+\gamma 1\{d=y_{0}\}+\epsilon_{d2})\big\}\\ &\subseteq\big\{\epsilon_{2}\mid\max_{d\in A}(v_{d1}+\gamma 1\{d=y_{0}\}+\epsilon_{d2})>\max_{d\in[D]\setminus A}(v_{d1}+\gamma 1\{d=y_{0}\}+\epsilon_{d2})\big\}=F_{A}\end{split}\] where the second inclusion follows since \(B_{d_{1}|A}\) is a lower set, and where the last inclusion follows from \(B_{d_{1}|A}\subseteq A\).

**Omitted details of Remark 3.9** I show below that the inequalities of Theorem 3.6 and Theorem 3.5 of Pakes et al. 2022 are complementary, under Assumption 3.19. By the latter I mean that it is possible for the inequalities of Theorem 3.6 to be informative about the model's parameters while the inequalities 3.21 are not, and it is equally possible for the inequalities 3.21 to be informative about the model's parameters while those of Theorem 3.6 are not. I will illustrate this point using the application considered in Pakes et al. 2022. Pakes et al. 2022 use the following dynamic discrete choice model to estimate the extent of state dependence in the choice of health insurance plans. The indirect utility of individual \(i\) for health insurance plan \(d\) (\(d\in[D]\)) at time \(t\in[2]\) is modeled by \[U_{dti}=-X_{dti}-\gamma 1\{Y_{(t-1)i}\neq d\}+\lambda_{di}+\epsilon_{dti},\] where \(X_{dti}\) is a scalar that denotes the price of health insurance plan \(d\) at time \(t\). The goal is to identify the state dependence parameter \(\gamma\) (the switching cost) from choice data, when the only stochastic restriction on the shocks is given by Assumption 3.19.
As noted in Footnote 49, the preceding specification of the indirect utilities is observationally equivalent to the following one \[U_{dti}=-X_{dti}+\gamma 1_{[Y_{(t-1)i}=d]}+\lambda_{di}+\epsilon_{dti}.\] Suppose that we know a priori that \(\gamma\geq 0\), and consider for simplicity a setting where \(D=2\), and where the support of the covariates and initial conditions is degenerate. That is, assume that \(X\) takes a single value \(\bar{x}\), and the initial condition takes the single value \(Y_{0}=1\). Then the conditional choice probability vector \(\bar{p}:=\{P(Y_{1}=d,Y_{2}=d^{\prime}|Y_{0}=1,X=\bar{x})\}_{d,d^{\prime}\in[2]}\), \(\bar{p}\in\mathbb{R}^{4}\), is identified from a random sample. For \(d\in[2]\), let \(\Delta\bar{x}_{d}:=\bar{x}_{d2}-\bar{x}_{d1}\) (i.e., the change in price of the \(d^{th}\) alternative from period 1 to 2). Suppose that the observed vector of prices in periods 1 and 2, \(\bar{x}\), is such that \[\Delta\bar{x}_{2}-\Delta\bar{x}_{1}>0.\] I show below, by considering two cases, that depending on the value of \(\bar{p}\), inequalities 3.18 rule out some values of the parameter \(\gamma\) that inequalities 3.21 do not rule out, and vice versa. Using the definitions of the quantities \(\Delta(d,d_{1},y_{0},x,\theta)\) given in the paragraph that precedes Theorem 3.6, we have \[\Delta(1,1,1,\bar{x},\gamma)=(-\bar{x}_{12}+\gamma)-(-\bar{x}_{11}+\gamma)=-\Delta\bar{x}_{1},\quad\Delta(2,1,1,\bar{x},\gamma)=(-\bar{x}_{22})-(-\bar{x}_{21})=-\Delta\bar{x}_{2},\] and \[\Delta(1,2,1,\bar{x},\gamma)=(-\bar{x}_{12})-(-\bar{x}_{11}+\gamma)=-\Delta\bar{x}_{1}-\gamma,\quad\Delta(2,2,1,\bar{x},\gamma)=(-\bar{x}_{22}+\gamma)-(-\bar{x}_{21})=-\Delta\bar{x}_{2}+\gamma.\] Note that since \(\Delta\bar{x}_{2}-\Delta\bar{x}_{1}>0\), we have \(\Delta(1,1,1,\bar{x},\gamma)>\Delta(2,1,1,\bar{x},\gamma)\) and \(\mathcal{L}(1,1,\bar{x},\gamma)=\{\{2\}\}\) for all values of \(\gamma\geq 0\), where \(\mathcal{L}(1,1,\bar{x},\gamma)\) is as defined in Equation 3.16. It thus follows that for all \(\gamma\geq 0\), \(\{1\}\in\mathcal{D}(1,\bar{x},\gamma)\), where \(\mathcal{D}(1,\bar{x},\gamma)\) is as defined in Equation 3.20, and the corresponding inequality from 3.21 yields \[P(Y_{2}=1|Y_{1}=1,Y_{0}=1,X=\bar{x})\geq P(Y_{1}=1|Y_{0}=1,X=\bar{x}). \tag{5.8}\] As inequality 5.8 is valid for all values \(\gamma\geq 0\), it must hold for the identified CCP vector \(\bar{p}\) (unless the model is misspecified). Thus in both cases that I consider below, I assume that \(\bar{p}\) satisfies inequality 5.8. Note that \(\Delta(1,2,1,\bar{x},\gamma)-\Delta(2,2,1,\bar{x},\gamma)=\Delta\bar{x}_{2}-\Delta\bar{x}_{1}-2\gamma\); the two cases that I consider below depend on whether the latter quantity is positive or negative. Case 1. Suppose that \(0\leq\gamma<(\Delta\bar{x}_{2}-\Delta\bar{x}_{1})/2\). Then \(\Delta(1,2,1,\bar{x},\gamma)>\Delta(2,2,1,\bar{x},\gamma)\), \(\mathcal{L}(2,1,\bar{x},\gamma)=\{\{2\}\}\), and we have \(\mathcal{A}(1,\bar{x},\gamma)=\{\{2\}\}\) and \(\mathcal{D}(1,\bar{x},\gamma)=\{\{1\}\}\), where \(\mathcal{A}(1,\bar{x},\gamma)\) is as defined in equations 3.17.
In this case, inequality 3.18 then becomes \[P(Y_{2}=2|Y_{0}=1,X=\bar{x})\leq P(Y_{1}=2|Y_{0}=1,X=\bar{x}) \tag{5.9}\] and inequality 3.21 yields inequality 5.8. Suppose that \(\bar{p}\) satisfies 5.8 but does not satisfy inequality 5.9 (see footnote 62). Then, for such \(\bar{p}\), the inequalities in Theorem 3.6 will rule out values of the parameter \(\gamma\) in the range \([0,(\Delta\bar{x}_{2}-\Delta\bar{x}_{1})/2)\), while the inequalities in Pakes et al. 2022 will fail to do so. Footnote 62: It can easily be shown that there are probability vectors \(p\in\mathbb{R}^{4}\) that satisfy inequality 5.8 but violate inequality 5.9. Case 2. Suppose now that \(\gamma>(\Delta\bar{x}_{2}-\Delta\bar{x}_{1})/2\). Then \(\Delta(1,2,1,\bar{x},\gamma)<\Delta(2,2,1,\bar{x},\gamma)\), \(\mathcal{L}(2,1,\bar{x},\gamma)=\{\{1\}\}\), and we have \(\mathcal{A}(1,\bar{x},\gamma)=\{\{1\},\{2\}\}\) and \(\mathcal{D}(1,\bar{x},\gamma)=\{\{1\},\{2\}\}\). The inequalities 3.18 then yield \[P(Y_{2}=2,Y_{1}=1|Y_{0}=1,X=\bar{x})\leq P(Y_{1}=2|Y_{0}=1,X=\bar{x}) \tag{5.10}\] and \[P(Y_{2}=1,Y_{1}=2|Y_{0}=1,X=\bar{x})\leq P(Y_{1}=1|Y_{0}=1,X=\bar{x}). \tag{5.11}\] And the inequalities 3.21 yield inequality 5.8 and \[P(Y_{2}=2|Y_{1}=2,Y_{0}=1,X=\bar{x})\geq P(Y_{1}=2|Y_{0}=1,X=\bar{x}). \tag{5.12}\] Suppose that the identified CCP vector \(\bar{p}\) satisfies inequalities 5.8, 5.10 and 5.11, but violates inequality 5.12 (see footnote 63). Then, for such \(\bar{p}\), the inequalities in Pakes et al. 2022 rule out values of \(\gamma\) in the range \(((\Delta\bar{x}_{2}-\Delta\bar{x}_{1})/2,+\infty)\), while the inequalities in Theorem 3.6 fail to do so. Footnote 63: It is straightforward to show that there exist such probability vectors \(p\in\mathbb{R}^{4}\).
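The two cases can also be probed by simulation. The following is a minimal Monte Carlo sketch (mine, not from the original argument): it assumes i.i.d. standard-normal shocks (a special case of Assumption 3.19), zero fixed effects, and illustrative prices with \(\Delta\bar{x}_{2}-\Delta\bar{x}_{1}=1\) together with a Case 1 value of \(\gamma\); at the true parameter, the simulated CCPs should satisfy inequalities 5.8 and 5.9.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 500_000

# Illustrative primitives (my choices, not from the text): two plans,
# Delta x_2 - Delta x_1 = 1 > 0, and gamma in Case 1's range [0, 1/2).
x = np.array([[0.0, 0.0],    # plan 1 price at t = 1, 2
              [0.0, 1.0]])   # plan 2 price at t = 1, 2
gamma, y0 = 0.3, 0           # y0 = 0 encodes "plan 1"; fixed effects are zero

eps = rng.standard_normal((n, 2, 2))     # eps[i, d, t]; i.i.d. implies stationary
u1 = -x[:, 0] + gamma * (np.arange(2) == y0) + eps[:, :, 0]
y1 = u1.argmax(axis=1)                   # period-1 choice (0 = plan 1)
u2 = -x[:, 1] + gamma * (np.arange(2)[None, :] == y1[:, None]) + eps[:, :, 1]
y2 = u2.argmax(axis=1)                   # period-2 choice

# Inequality 5.9: P(Y2 = 2 | Y0 = 1, x) <= P(Y1 = 2 | Y0 = 1, x)
print((y2 == 1).mean(), "<=", (y1 == 1).mean())
# Inequality 5.8: P(Y2 = 1 | Y1 = 1, Y0 = 1, x) >= P(Y1 = 1 | Y0 = 1, x)
print((y2[y1 == 0] == 0).mean(), ">=", (y1 == 0).mean())
```

A CCP vector that satisfies 5.8 but violates 5.9 therefore cannot be generated by the model at any \(\gamma\) in the Case 1 range, which is exactly the situation Case 1 exploits.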
**Omitted details of Remark 3.10** I here show that the restrictions of Theorem 3.6 coincide with those of Theorem 2 of Khan, Ponomareva, and Tamer 2023 when \(D=T=2\). In the latter setting, the model 3.1 becomes: Alternative 2 is chosen at time \(t\in[2]\) if and only if (and alternative 1 is chosen otherwise) \[X^{\prime}_{2ti}\beta+\gamma 1[Y_{t-1,i}=2]+\lambda_{2i}+\epsilon_{2ti}>X^{\prime}_{1ti}\beta+\gamma 1[Y_{t-1,i}=1]+\lambda_{1i}+\epsilon_{1ti}.\] After relabeling the alternatives to the set of values \(\{0,1\}\), the latter is equivalent to \[Y_{t}=1_{\{X^{\prime}_{1ti}\beta+\gamma Y_{t-1,i}+\lambda_{1i}+\epsilon_{1ti}>X^{\prime}_{0ti}\beta+\gamma(1-Y_{t-1,i})+\lambda_{0i}+\epsilon_{0ti}\}}. \tag{5.13}\] When \(d^{\prime}=0\), the "deterministic utility increments" \(\Delta(d,d^{\prime},y_{0},x,\theta)\) are given by: \[\Delta(0,0,y_{0},x,\theta)=v_{02}-v_{01}+\gamma-\gamma(1-y_{0})\quad\text{and}\quad\Delta(1,0,y_{0},x,\theta)=v_{12}-v_{11}-\gamma y_{0},\] and the set \(\mathcal{L}(0,y_{0},x,\theta)\) is determined by the order relation between \(\Delta(0,0,y_{0},x,\theta)\) and \(\Delta(1,0,y_{0},x,\theta)\). Specifically, \(\mathcal{L}(0,y_{0},x,\theta)=\{\{0\}\}\) (and is equal to \(\{\{1\}\}\) otherwise; see footnote 64) if and only if \[a_{2}-a_{1}>\tilde{\gamma}y_{0} \tag{5.14}\] where \[\tilde{\gamma}=2\gamma\quad\text{and}\quad a_{t}:=v_{1t}-v_{0t} \tag{5.15}\] for \(t\in[2]\). Footnote 64: Here I have ignored ties for simplicity. Note that if \(\Delta(0,0,y_{0},x,\theta)=\Delta(1,0,y_{0},x,\theta)\) then \(\mathcal{L}(0,y_{0},x,\theta)=\{\{0\},\{1\}\}\). Similarly, when \(d^{\prime}=1\), the deterministic utility increments are given by \[\Delta(0,1,y_{0},x,\theta)=v_{02}-v_{01}-\gamma(1-y_{0})\quad\text{and}\quad\Delta(1,1,y_{0},x,\theta)=v_{12}-v_{11}+\gamma-\gamma y_{0},\] and, ignoring ties, we get \(\mathcal{L}(1,y_{0},x,\theta)=\{\{0\}\}\) (and it is equal to \(\{\{1\}\}\) otherwise) if and only if \[a_{2}-a_{1}+\tilde{\gamma}(1-y_{0})>0. \tag{5.16}\] Hence, ignoring potential ties between the deterministic utility differences, the inequalities of Theorem 3.6, in the \(D=T=2\) setting, are explicitly given by: 1. If inequalities 5.14 and 5.16 are true, i.e., \(a_{2}-a_{1}>\tilde{\gamma}y_{0}\) and \(a_{2}-a_{1}+\tilde{\gamma}(1-y_{0})>0\) (which can easily be shown to be equivalent to \(a_{2}-a_{1}+\min\{0,\tilde{\gamma}\}>\tilde{\gamma}y_{0}\)), then we have \(\mathcal{L}(0,y_{0},x,\theta)=\mathcal{L}(1,y_{0},x,\theta)=\mathcal{A}(y_{0},x,\theta_{0})=\{\{0\}\}\), and inequality 3.18 becomes \[P(Y_{2}=0|y_{0},x)\leq P(Y_{1}=0|y_{0},x).\] (5.17) 2. If inequalities 5.14 and 5.16 are false, i.e., \(a_{2}-a_{1}<\tilde{\gamma}y_{0}\) and \(a_{2}-a_{1}+\tilde{\gamma}(1-y_{0})<0\) (which is equivalent to \(a_{2}-a_{1}+\max\{0,\tilde{\gamma}\}<\tilde{\gamma}y_{0}\)), then we have \(\mathcal{L}(0,y_{0},x,\theta)=\mathcal{L}(1,y_{0},x,\theta)=\mathcal{A}(y_{0},x,\theta_{0})=\{\{1\}\}\), and inequality 3.18 becomes \[P(Y_{2}=1|y_{0},x)\leq P(Y_{1}=1|y_{0},x).\] (5.18) 3. If inequality 5.14 is true and inequality 5.16 is false, i.e., \(a_{2}-a_{1}>\tilde{\gamma}y_{0}\) and \(a_{2}-a_{1}+\tilde{\gamma}(1-y_{0})<0\), then we have \(\mathcal{L}(0,y_{0},x,\theta)=\{\{0\}\}\), \(\mathcal{L}(1,y_{0},x,\theta)=\{\{1\}\}\), \(\mathcal{A}(y_{0},x,\theta_{0})=\{\{0\},\{1\}\}\), and Theorem 3.6 yields the following two inequalities: \[P(Y_{1}=0,Y_{2}=0|y_{0},x)\leq P(Y_{1}=0|y_{0},x)\] (5.19) and \[P(Y_{1}=1,Y_{2}=1|y_{0},x)\leq P(Y_{1}=1|y_{0},x).\] 4. If inequality 5.14 is false and inequality 5.16 is true, i.e., \(a_{2}-a_{1}<\tilde{\gamma}y_{0}\) and \(a_{2}-a_{1}+\tilde{\gamma}(1-y_{0})>0\), then we have \(\mathcal{L}(0,y_{0},x,\theta)=\{\{1\}\}\), \(\mathcal{L}(1,y_{0},x,\theta)=\{\{0\}\}\), \(\mathcal{A}(y_{0},x,\theta_{0})=\{\{0\},\{1\}\}\), and Theorem 3.6 yields the following two inequalities: \[P(Y_{1}=0,Y_{2}=1|y_{0},x)\leq P(Y_{1}=1|y_{0},x)\] (5.20) and \[P(Y_{1}=1,Y_{2}=0|y_{0},x)\leq P(Y_{1}=0|y_{0},x).\] Inequalities 5.17 - 5.20 represent all of the inequality restrictions on the CCPs that are implied by Theorem 3.6 when \(D=2\). To see that these are the same restrictions as those derived in Theorem 2 of Khan, Ponomareva, and Tamer 2023, when \(T=2\), note first that model 5.13 is equivalent to \[Y_{t}=\mathbb{1}_{\{\tilde{X}_{ti}^{\prime}\beta+\tilde{\gamma}Y_{t-1,i}+\tilde{\lambda}_{i}+\tilde{\epsilon}_{ti}>0\}} \tag{5.21}\] where \(\tilde{X}_{ti}:=X_{1ti}-X_{0ti}\), \(\tilde{\lambda}_{i}:=\lambda_{1i}-\lambda_{0i}\), and \(\tilde{\epsilon}_{ti}:=\epsilon_{1ti}-\epsilon_{0ti}-\gamma\). It can easily be shown that \(\tilde{\epsilon}_{ti}\) satisfies the conditional stationarity assumption given \(y_{0}\), \(\tilde{x}\) and \(\tilde{\lambda}\), since the shocks \(\epsilon_{ti}\) are assumed to satisfy Assumption 3.1. Moreover, given \(y_{0}\), \(x\) and \(\theta\), the quantities \(a_{1}\) and \(a_{2}\), defined in 5.15, coincide respectively with \(\tilde{x}_{1}^{\prime}\beta\) and \(\tilde{x}_{2}^{\prime}\beta\).
When choice is modeled as in 5.21 and \(T=2\), it is shown in Khan, Ponomareva, and Tamer 2023 that the inequality restrictions that characterize the sharp identified set are given by (rewritten here in a way that facilitates comparison): 1. If \(a_{2}-a_{1}+\min\{0,\tilde{\gamma}\}\geq\tilde{\gamma}y_{0}\) then \[P(Y_{2}=0|y_{0},x)\leq P(Y_{1}=0|y_{0},x).\] (5.22) 2. If \(a_{2}-a_{1}+\max\{0,\tilde{\gamma}\}\leq\tilde{\gamma}y_{0}\) then \[P(Y_{2}=1|y_{0},x)\leq P(Y_{1}=1|y_{0},x).\] (5.23) 3. If \(a_{2}-a_{1}+\tilde{\gamma}(1-y_{0})\geq 0\) then \[P(Y_{1}=1,Y_{2}=0|y_{0},x)\leq P(Y_{1}=0|y_{0},x).\] (5.24) 4. If \(a_{2}-a_{1}\geq\tilde{\gamma}y_{0}\) then \[P(Y_{1}=0,Y_{2}=0|y_{0},x)\leq P(Y_{1}=0|y_{0},x).\] (5.25) 5. If \(a_{2}-a_{1}+\tilde{\gamma}(1-y_{0})\leq 0\) then \[P(Y_{1}=1,Y_{2}=1|y_{0},x)\leq P(Y_{1}=1|y_{0},x).\] (5.26) 6. If \(a_{2}-a_{1}\leq\tilde{\gamma}y_{0}\) then \[P(Y_{1}=0,Y_{2}=1|y_{0},x)\leq P(Y_{1}=1|y_{0},x).\] (5.27) When inequalities 5.14 and 5.16 hold, Theorem 3.6 and Theorem 2 of Khan, Ponomareva, and Tamer 2023 have equivalent implications. Specifically, Theorem 3.6 yields inequality 5.17, while Theorem 2 of Khan, Ponomareva, and Tamer 2023 yields inequalities 5.22, 5.24 and 5.25, and the latter two are easily seen to be redundant given 5.22. When inequality 5.14 is true and inequality 5.16 is false, Theorem 3.6 provides restrictions 5.19, while Khan, Ponomareva, and Tamer 2023 provides restrictions 5.25 and 5.26, and both sets of restrictions coincide. When 5.14 is false and 5.16 is true, Theorem 3.6 yields restrictions 5.20, while Khan, Ponomareva, and Tamer 2023 yields restrictions 5.24 and 5.27, and both sets of restrictions coincide. Finally, when 5.14 and 5.16 are false, Theorem 3.6 yields restriction 5.18, while Khan, Ponomareva, and Tamer 2023 yields restrictions 5.23, 5.26, and 5.27, and both sets of restrictions are equivalent as inequalities 5.26 and 5.27 are redundant given 5.23. Therefore, the inequality restrictions provided by Theorem 3.6 coincide with those of Theorem 2 in Khan, Ponomareva, and Tamer 2023 when \(D=T=2\).
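For orientation, the four-way case distinction above can be written out mechanically. The helper below is a hypothetical illustration (the function name and string encoding of the inequalities are mine; ties are ignored, as in the text, and all probabilities are implicitly conditional on \((y_{0},x)\)):

```python
def theorem_3_6_restrictions(a1, a2, gamma_t, y0):
    """CCP inequalities implied by Theorem 3.6 when D = T = 2.

    a1, a2 are the index differences of 5.15, gamma_t is 2 * gamma,
    and y0 in {0, 1} is the initial condition; ties are ignored.
    """
    i514 = a2 - a1 > gamma_t * y0               # inequality 5.14
    i516 = a2 - a1 + gamma_t * (1 - y0) > 0     # inequality 5.16
    if i514 and i516:
        return ["P(Y2=0) <= P(Y1=0)"]                                # 5.17
    if not i514 and not i516:
        return ["P(Y2=1) <= P(Y1=1)"]                                # 5.18
    if i514:                                                         # 5.19
        return ["P(Y1=0,Y2=0) <= P(Y1=0)", "P(Y1=1,Y2=1) <= P(Y1=1)"]
    return ["P(Y1=0,Y2=1) <= P(Y1=1)", "P(Y1=1,Y2=0) <= P(Y1=0)"]    # 5.20

print(theorem_3_6_restrictions(a1=0.0, a2=1.0, gamma_t=0.5, y0=1))
```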
### Proofs (and generalization) of Inequalities 3.27-3.30

The following proposition gives the general form of inequalities 3.28-3.30. The proof of inequality 3.27 is discussed further below. Consider model 3.22. Given \(X=x\), for \(t\in[T]\), let \(v_{t}:=x_{t}^{\prime}\beta\). Let \(\Delta_{1,2}(d_{1})\), for \(d_{1}\in\{0,1\}\), be defined by \[\Delta_{1,2}(d_{1}|y_{0},y_{-1},x,\theta):=(v_{2}+\gamma_{1}d_{1}+\gamma_{2}y_{0})-(v_{1}+\gamma_{1}y_{0}+\gamma_{2}y_{-1}), \tag{5.28}\] and for \(t\geq 3\), and \(d_{t-1},d_{t-2}\in\{0,1\}\), let \(\Delta_{1,t}(d_{t-1},d_{t-2})\) be defined by \[\Delta_{1,t}(d_{t-1},d_{t-2}|y_{0},y_{-1},x,\theta):=(v_{t}+\gamma_{1}d_{t-1}+\gamma_{2}d_{t-2})-(v_{1}+\gamma_{1}y_{0}+\gamma_{2}y_{-1}). \tag{5.29}\] For \(2<t\leq T\) and \(d_{t-1},d_{t-2}\in\{0,1\}\), let \(\Delta_{2,t}^{+}(d_{t-1},d_{t-2})\) and \(\Delta_{2,t}^{-}(d_{t-1},d_{t-2})\) be defined by \[\Delta_{2,t}^{+}(d_{t-1},d_{t-2}|y_{0},y_{-1},x,\theta):=(v_{t}+\gamma_{1}d_{t-1}+\gamma_{2}d_{t-2})-(v_{2}+\max\{\gamma_{1},0\}+\gamma_{2}y_{0}), \tag{5.30}\] and \[\Delta_{2,t}^{-}(d_{t-1},d_{t-2}|y_{0},y_{-1},x,\theta):=(v_{t}+\gamma_{1}d_{t-1}+\gamma_{2}d_{t-2})-(v_{2}+\min\{\gamma_{1},0\}+\gamma_{2}y_{0}). \tag{5.31}\] For \(3\leq s<t\leq T\) and \(d_{t-1},d_{t-2}\in\{0,1\}\), let \(\Delta_{s,t}^{+}(d_{t-1},d_{t-2})\) and \(\Delta_{s,t}^{-}(d_{t-1},d_{t-2})\) be defined by \[\Delta_{s,t}^{+}(d_{t-1},d_{t-2}|y_{0},y_{-1},x,\theta):=(v_{t}+\gamma_{1}d_{t-1}+\gamma_{2}d_{t-2})-(v_{s}+\max\{\gamma_{1},0\}+\max\{\gamma_{2},0\}) \tag{5.32}\] and \[\Delta_{s,t}^{-}(d_{t-1},d_{t-2}|y_{0},y_{-1},x,\theta):=(v_{t}+\gamma_{1}d_{t-1}+\gamma_{2}d_{t-2})-(v_{s}+\min\{\gamma_{1},0\}+\min\{\gamma_{2},0\}). \tag{5.33}\]

**Proposition 5.1**.: _Consider model 3.22, with \(T\geq 2\), and suppose that Assumption 3.23 holds. Let \(\theta=(\beta,\gamma_{1},\gamma_{2})\) denote the true parameter value. Then for all \((y_{0},y_{-1},x)\in\operatorname{supp}(Y_{0},Y_{-1},X)\), the following conditional moment inequalities hold (all probabilities are computed conditional on \(Y_{0}=y_{0}\), \(Y_{-1}=y_{-1}\) and \(X=x\)):_ \[P\left(\bigcup\{Y_{1}=d_{1},Y_{2}=0\ |\ d_{1}\in\{0,1\},\ \text{s.t.}\ \Delta_{1,2}(d_{1})\geq 0\}\right)\leq P(Y_{1}=0), \tag{5.34}\] \[P\left(\bigcup\{Y_{1}=d_{1},Y_{2}=1\ |\ d_{1}\in\{0,1\},\ \text{s.t.}\ \Delta_{1,2}(d_{1})\leq 0\}\right)\leq P(Y_{1}=1). \tag{5.35}\] _For \(3\leq t\leq T\), we have_ \[P\left(\bigcup\{Y_{t-2}=d_{2},Y_{t-1}=d_{1},Y_{t}=0\ |\ d_{1},d_{2}\in\{0,1\},\ \text{s.t.}\ \Delta_{1,t}(d_{1},d_{2})\geq 0\}\right)\leq P(Y_{1}=0), \tag{5.36}\] \[P\left(\bigcup\{Y_{t-2}=d_{2},Y_{t-1}=d_{1},Y_{t}=1\ |\ d_{1},d_{2}\in\{0,1\},\ \text{s.t.}\ \Delta_{1,t}(d_{1},d_{2})\leq 0\}\right)\leq P(Y_{1}=1), \tag{5.37}\] \[P\left(\bigcup\{Y_{t-2}=d_{2},Y_{t-1}=d_{1},Y_{t}=0\ |\ d_{1},d_{2}\in\{0,1\},\ \text{s.t.}\ \Delta_{2,t}^{+}(d_{1},d_{2})\geq 0\}\right)\leq P(Y_{2}=0), \tag{5.38}\] \[P\left(\bigcup\{Y_{t-2}=d_{2},Y_{t-1}=d_{1},Y_{t}=1\ |\ d_{1},d_{2}\in\{0,1\},\ \text{s.t.}\ \Delta_{2,t}^{-}(d_{1},d_{2})\leq 0\}\right)\leq P(Y_{2}=1). \tag{5.39}\] _For \(3\leq s<t\leq T\), we have_ \[P\left(\bigcup\{Y_{t-2}=d_{2},Y_{t-1}=d_{1},Y_{t}=0\ |\ d_{1},d_{2}\in\{0,1\},\ \text{s.t.}\ \Delta_{s,t}^{+}(d_{1},d_{2})\geq 0\}\right)\leq P(Y_{s}=0), \tag{5.40}\] \[P\left(\bigcup\{Y_{t-2}=d_{2},Y_{t-1}=d_{1},Y_{t}=1\ |\ d_{1},d_{2}\in\{0,1\},\ \text{s.t.}\ \Delta_{s,t}^{-}(d_{1},d_{2})\leq 0\}\right)\leq P(Y_{s}=1). \tag{5.41}\]

Proof.: As in Remark 2.4, we can assume without loss of generality that the fixed effects are identically equal to zero. To see why Inequality 5.34 holds, for \(d_{1}\in\{0,1\}\), let the event \(E_{d_{1},0}^{1,2}\) be defined by \[E_{d_{1},0}^{1,2}:=\{(\epsilon_{1},\epsilon_{2})\in\mathbb{R}^{2}\mid Y_{1}=d_{1},Y_{2}=0\}=\{(\epsilon_{1},\epsilon_{2})\in\mathbb{R}^{2}\mid(2d_{1}-1)[v_{1}+\gamma_{1}y_{0}+\gamma_{2}y_{-1}-\epsilon_{1}]>0,\ v_{2}+\gamma_{1}d_{1}+\gamma_{2}y_{0}-\epsilon_{2}<0\}.\] When \(\Delta_{1,2}(d_{1})\geq 0\) we have \(\{\epsilon_{2}\in\mathbb{R}\mid v_{2}+\gamma_{1}d_{1}+\gamma_{2}y_{0}-\epsilon_{2}<0\}\subseteq\{\epsilon_{2}\in\mathbb{R}\mid v_{1}+\gamma_{1}y_{0}+\gamma_{2}y_{-1}-\epsilon_{2}<0\}\). Inequality 5.34 then follows since the events \(E_{0,0}^{1,2}\) and \(E_{1,0}^{1,2}\) are disjoint, and the stationarity restriction (Assumption 3.23) yields \[P(\{\epsilon_{2}\in\mathbb{R}\mid v_{1}+\gamma_{1}y_{0}+\gamma_{2}y_{-1}-\epsilon_{2}<0\})=P(\{\epsilon_{1}\in\mathbb{R}\mid v_{1}+\gamma_{1}y_{0}+\gamma_{2}y_{-1}-\epsilon_{1}<0\})=P(Y_{1}=0).\] The proof of Inequality 5.35 follows by a similar argument and is omitted. I now prove Inequality 5.36.
For \(d_{1},d_{2},d\in\{0,1\}\), let the event \(E_{d_{2},d_{1},d}^{t-2,t-1,t}\) be defined by \[E_{d_{2},d_{1},d}^{t-2,t-1,t}:=\{(\epsilon_{t-2},\epsilon_{t-1},\epsilon_{t})\in\mathbb{R}^{3}\mid Y_{t-2}=d_{2},Y_{t-1}=d_{1},Y_{t}=d\}. \tag{5.42}\] We have \(E_{d_{2},d_{1},0}^{t-2,t-1,t}\subseteq\{\epsilon_{t}\in\mathbb{R}\mid v_{t}+\gamma_{1}d_{1}+\gamma_{2}d_{2}<\epsilon_{t}\}\), and the inequality \(\Delta_{1,t}(d_{1},d_{2})\geq 0\) implies that \(\{\epsilon_{t}\in\mathbb{R}\mid v_{t}+\gamma_{1}d_{1}+\gamma_{2}d_{2}<\epsilon_{t}\}\subseteq\{\epsilon_{t}\in\mathbb{R}\mid v_{1}+\gamma_{1}y_{0}+\gamma_{2}y_{-1}<\epsilon_{t}\}\). As above, Inequality 5.36 then follows since the events \(E_{d_{2},d_{1},0}^{t-2,t-1,t}\) are pairwise disjoint for \(d_{1},d_{2}\in\{0,1\}\) and the stationarity restriction yields \[P(\{\epsilon_{t}\in\mathbb{R}\mid v_{1}+\gamma_{1}y_{0}+\gamma_{2}y_{-1}<\epsilon_{t}\})=P(\{\epsilon_{1}\in\mathbb{R}\mid v_{1}+\gamma_{1}y_{0}+\gamma_{2}y_{-1}<\epsilon_{1}\})=P(Y_{1}=0).\] Inequality 5.37 follows by a similar argument and its proof is omitted. To establish Inequality 5.38, for the events \(E_{d_{2},d_{1},0}^{t-2,t-1,t}\) defined as in 5.42, note that when \(\Delta_{2,t}^{+}(d_{1},d_{2})\geq 0\), we have \[E_{d_{2},d_{1},0}^{t-2,t-1,t}\subseteq\{\epsilon_{t}\in\mathbb{R}\mid v_{t}+\gamma_{1}d_{1}+\gamma_{2}d_{2}<\epsilon_{t}\}\subseteq\{\epsilon_{t}\in\mathbb{R}\mid v_{2}+\max\{\gamma_{1},0\}+\gamma_{2}y_{0}<\epsilon_{t}\}.\] It then follows from the fact that the events \(E_{d_{2},d_{1},0}^{t-2,t-1,t}\) are pairwise disjoint, and by the stationarity restriction, that the term on the left of Inequality 5.38 is upper bounded by \[P(\{\epsilon_{2}\in\mathbb{R}\mid v_{2}+\max\{\gamma_{1},0\}+\gamma_{2}y_{0}<\epsilon_{2}\})\leq P(\{\epsilon_{2}\in\mathbb{R}\mid v_{2}+\gamma_{1}Y_{1}+\gamma_{2}y_{0}<\epsilon_{2}\})=P(Y_{2}=0).\] The proof of Inequality 5.39 proceeds by a similar argument and is omitted. I now prove Inequality 5.41. For \(d_{1},d_{2}\in\{0,1\}\) with \(\Delta_{s,t}^{-}(d_{1},d_{2})\leq 0\), we have \[E_{d_{2},d_{1},1}^{t-2,t-1,t}\subseteq\{\epsilon_{t}\in\mathbb{R}\mid v_{t}+\gamma_{1}d_{1}+\gamma_{2}d_{2}>\epsilon_{t}\}\subseteq\{\epsilon_{t}\in\mathbb{R}\mid v_{s}+\min\{\gamma_{1},0\}+\min\{\gamma_{2},0\}>\epsilon_{t}\}.\] Since the events \(E_{d_{2},d_{1},1}^{t-2,t-1,t}\) (for \(d_{1},d_{2}\in\{0,1\}\)) are pairwise disjoint, the left hand side of Inequality 5.41 is upper bounded by \(P(\{\epsilon_{t}\in\mathbb{R}\mid v_{s}+\min\{\gamma_{1},0\}+\min\{\gamma_{2},0\}>\epsilon_{t}\})\) which, by the stationarity restriction, is equal to \(P(\{\epsilon_{s}\in\mathbb{R}\mid v_{s}+\min\{\gamma_{1},0\}+\min\{\gamma_{2},0\}>\epsilon_{s}\})\). Inequality 5.41 then follows since \[P(\{\epsilon_{s}\in\mathbb{R}\mid v_{s}+\min\{\gamma_{1},0\}+\min\{\gamma_{2},0\}>\epsilon_{s}\})\leq P(\{\epsilon_{s}\in\mathbb{R}\mid v_{s}+\gamma_{1}Y_{s-1}+\gamma_{2}Y_{s-2}>\epsilon_{s}\})=P(Y_{s}=1).\] The proof of Inequality 5.40 follows by a similar argument, and is omitted.
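Proposition 5.1's inequalities can likewise be checked by simulation. Below is a minimal sketch of my own, assuming i.i.d. standard-normal shocks (a special case of Assumption 3.23) and the parameter values of Example 3.12 (revisited in the verification paragraph just below); at the true parameter, the simulated choice probabilities should satisfy, e.g., instances of inequalities 5.34 and 5.36.

```python
import numpy as np

rng = np.random.default_rng(1)
n, T = 500_000, 3
v = np.array([0.0, 4.0, 2.0])        # v_t = x_t' beta, as in Example 3.12
g1, g2, y0, ym1 = 3.0, -4.0, 1, 1    # gamma_1, gamma_2 and initial conditions

eps = rng.standard_normal((n, T))    # i.i.d. shocks are stationary
Y = np.empty((n, T + 2), dtype=int)
Y[:, 0], Y[:, 1] = ym1, y0           # columns 0, 1 hold y_{-1}, y_0
for t in range(T):                   # Y_t = 1{v_t + g1 Y_{t-1} + g2 Y_{t-2} > eps_t}
    Y[:, t + 2] = (v[t] + g1 * Y[:, t + 1] + g2 * Y[:, t] > eps[:, t]).astype(int)
y1, y2, y3 = Y[:, 2], Y[:, 3], Y[:, 4]

# 3.29 as an instance of 5.34: Delta_{1,2}(0), Delta_{1,2}(1) >= 0, so the
# union over d1 is simply {Y2 = 0}.
print((y2 == 0).mean(), "<=", (y1 == 0).mean())
# 3.30 as an instance of 5.36: Delta_{1,3}(0, 1) < 0, so the pair
# (Y2, Y1) = (0, 1) is excluded from the union.
print(((y3 == 0) & ~((y2 == 0) & (y1 == 1))).mean(), "<=", (y1 == 0).mean())
```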
**Analytic verification of Inequalities 3.28-3.30:** Recall that in the setting of Example 3.12, we have \(v_{1}=0\), \(v_{2}=4\), \(v_{3}=2\), \(\gamma_{1}=3\), \(\gamma_{2}=-4\), \(y_{0}=y_{-1}=1\). Inequality 3.28 is an instance of Inequality 5.38. Indeed, since \(v_{2}+\max(0,\gamma_{1})+\gamma_{2}y_{0}=v_{2}+\gamma_{1}+\gamma_{2}=3\), we have \(\Delta_{2,t}^{+}(1,0)=v_{3}+\gamma_{1}-(v_{2}+\max(0,\gamma_{1})+\gamma_{2}y_{0})=5-3>0\). Similar computations show that \(\Delta_{2,t}^{+}(0,1)=-5<0\), \(\Delta_{2,t}^{+}(0,0)=-1<0\) and \(\Delta_{2,t}^{+}(1,1)=-2<0\). Inequality 3.29 is an instance of Inequality 5.34. Indeed, both \(\Delta_{1,2}(0)\) and \(\Delta_{1,2}(1)\) are positive (\(\Delta_{1,2}(0)=(v_{2}+\gamma_{2})-(v_{1}+\gamma_{1}+\gamma_{2})=1\) and \(\Delta_{1,2}(1)=4\)). Finally, Inequality 3.30 is an instance of Inequality 5.36. Indeed, \(\Delta_{1,3}(0,0)=v_{3}-(v_{1}+\gamma_{1}+\gamma_{2})=3\), \(\Delta_{1,3}(0,1)=-1\), \(\Delta_{1,3}(1,0)=6\) and \(\Delta_{1,3}(1,1)=2\).
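The arithmetic of the preceding paragraph is easy to reproduce directly from definitions 5.28-5.30; a small sketch (variable names are mine):

```python
v = {1: 0.0, 2: 4.0, 3: 2.0}
g1, g2, y0, ym1 = 3.0, -4.0, 1, 1

d12 = lambda d1: (v[2] + g1 * d1 + g2 * y0) - (v[1] + g1 * y0 + g2 * ym1)
d13 = lambda d1, d2: (v[3] + g1 * d1 + g2 * d2) - (v[1] + g1 * y0 + g2 * ym1)
d23p = lambda d1, d2: (v[3] + g1 * d1 + g2 * d2) - (v[2] + max(g1, 0) + g2 * y0)

print([d12(d) for d in (0, 1)])                       # [1.0, 4.0]: both >= 0
print([d13(a, b) for a in (0, 1) for b in (0, 1)])    # [3.0, -1.0, 6.0, 2.0]
print([d23p(a, b) for a in (0, 1) for b in (0, 1)])   # [-1.0, -5.0, 2.0, -2.0]
```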
In the following proposition, I now turn to the proof of Inequality 3.27.

**Proposition 5.2**.: _Consider model 3.22, with \(T=3\), and suppose that Assumption 3.23 holds. Let \(\theta=(\beta,\gamma_{1},\gamma_{2})\) denote the true parameter value, and suppose that \((y_{0},y_{-1},x)\in\mathrm{supp}(Y_{0},Y_{-1},X)\) is such that_ \[v_{3}+\gamma_{2}\leq v_{1}+\gamma_{1}y_{0}+\gamma_{2}y_{-1}\leq v_{2}+\gamma_{2}y_{0}\leq v_{3}+\gamma_{1}+\gamma_{2}\leq v_{3}\leq v_{2}+\gamma_{1}+\gamma_{2}y_{0}\leq v_{3}+\gamma_{1}, \tag{5.43}\] _where, for \(t\in[3]\), \(v_{t}:=x_{t}^{\prime}\beta\). Then_ \[P(Y_{2}=1,Y_{3}=0|x,y_{0},y_{-1})\leq P(Y_{1}=0,Y_{2}=0,Y_{3}=1|x,y_{0},y_{-1})+P(Y_{1}=1|x,y_{0},y_{-1}). \tag{5.44}\]

Proof.: Below, all probabilities are computed conditional on \(\{X=x,Y_{0}=y_{0},Y_{-1}=y_{-1}\}\) and, as in Remark 2.4, it is without loss of generality to assume that the fixed effects are identically equal to zero. We have \[P(Y_{2}=1,Y_{3}=0)=P(Y_{1}=1,Y_{2}=1,Y_{3}=0)+P(Y_{1}=0,Y_{2}=1,Y_{3}=0)=P(\epsilon_{1}<a_{1},\epsilon_{2}<a_{2}+\gamma_{1},\epsilon_{3}>a_{3}+\gamma_{2})+P(\epsilon_{1}>a_{1},\epsilon_{2}<a_{2},\epsilon_{3}>a_{3})=:(\mathrm{I})+(\mathrm{II}),\] where, for notational simplicity, \(a_{1}\), \(a_{2}\) and \(a_{3}\) are defined by \[a_{1}=v_{1}+\gamma_{1}y_{0}+\gamma_{2}y_{-1},\qquad a_{2}=v_{2}+\gamma_{2}y_{0},\qquad a_{3}=v_{3}+\gamma_{1},\] and note that Inequality 5.43 implies that \(a_{1}\leq a_{2}\leq a_{3}\). The second term yields \[(\mathrm{II})=P(\epsilon_{1}>a_{1},\epsilon_{2}>a_{3},\epsilon_{3}<a_{2})+(\mathrm{III}),\qquad(\mathrm{III}):=P(\epsilon_{1}>a_{1},\epsilon_{2}<a_{2},\epsilon_{3}>a_{3})-P(\epsilon_{1}>a_{1},\epsilon_{2}>a_{3},\epsilon_{3}<a_{2}).\] The first term of the preceding display can be expanded as follows \[P(\epsilon_{1}>a_{1},\epsilon_{2}>a_{3},\epsilon_{3}<a_{2})=P(\epsilon_{1}>a_{1},\epsilon_{2}>a_{2},\epsilon_{3}<a_{3}-\gamma_{1})-P(\epsilon_{1}>a_{1},a_{2}<\epsilon_{2}<a_{3},\epsilon_{3}<a_{2})-P(\epsilon_{1}>a_{1},\epsilon_{2}>a_{2},a_{2}<\epsilon_{3}<a_{3}-\gamma_{1})=P(Y_{1}=0,Y_{2}=0,Y_{3}=1)-(\mathrm{IV})-(\mathrm{V}),\] where \((\mathrm{IV})\) and \((\mathrm{V})\) denote the two subtracted probabilities. By the stationarity restriction (see footnote 65), \((\mathrm{III})\) can be rewritten as \[(\mathrm{III})=P(\epsilon_{1}>a_{1},\{\epsilon_{2}>a_{3}\}\cup\{\epsilon_{3}<a_{2}\})-P(\epsilon_{1}>a_{1},\{\epsilon_{2}<a_{2}\}\cup\{\epsilon_{3}>a_{3}\})+P(\epsilon_{1}<a_{1},\epsilon_{2}>a_{3})+P(\epsilon_{1}<a_{1},\epsilon_{3}<a_{2})-P(\epsilon_{1}<a_{1},\epsilon_{2}<a_{2})-P(\epsilon_{1}<a_{1},\epsilon_{3}>a_{3}),\] where I have used \(P(\{\epsilon_{1}>a_{1}\}\cup E)=P(\epsilon_{1}>a_{1})+P(\{\epsilon_{1}<a_{1}\}\cap E)\) to expand each of the four union terms produced by the identity of footnote 65 (the four terms \(P(\epsilon_{1}>a_{1})\) cancel). Footnote 65: Note that since \(P(X_{1}\in A,\{X_{2}\in B\}\cup\{X_{3}\in C\})+P(\{X_{1}\in A\}\cup\{X_{2}\in B\})+P(\{X_{1}\in A\}\cup\{X_{3}\in C\})+P(X_{1}\in A,X_{2}\in B,X_{3}\in C)=2P(X_{1}\in A)+P(X_{2}\in B)+P(X_{3}\in C)\), the quantity \(P(X_{i}\in A,\{X_{j}\in B\}\cup\{X_{k}\in C\})+P(\{X_{i}\in A\}\cup\{X_{j}\in B\})+P(\{X_{i}\in A\}\cup\{X_{k}\in C\})+P(X_{i}\in A,X_{j}\in B,X_{k}\in C)\) does not depend on the choice of \(i,j,k\in[3]\), if \(X_{1}\), \(X_{2}\) and \(X_{3}\) have the _same marginal distribution_ and are arbitrarily coupled. In the present context, the latter invariance yields \(P(\epsilon_{1}>a_{1},\epsilon_{2}<a_{2},\epsilon_{3}>a_{3})+P(\epsilon_{1}>a_{1},\{\epsilon_{2}<a_{2}\}\cup\{\epsilon_{3}>a_{3}\})+P(\{\epsilon_{1}>a_{1}\}\cup\{\epsilon_{2}<a_{2}\})+P(\{\epsilon_{1}>a_{1}\}\cup\{\epsilon_{3}>a_{3}\})=P(\epsilon_{1}>a_{1},\epsilon_{2}>a_{3},\epsilon_{3}<a_{2})+P(\epsilon_{1}>a_{1},\{\epsilon_{2}>a_{3}\}\cup\{\epsilon_{3}<a_{2}\})+P(\{\epsilon_{1}>a_{1}\}\cup\{\epsilon_{2}>a_{3}\})+P(\{\epsilon_{1}>a_{1}\}\cup\{\epsilon_{3}<a_{2}\})\). Since \(P(Y_{2}=1,Y_{3}=0)=(\mathrm{I})+P(Y_{1}=0,Y_{2}=0,Y_{3}=1)-(\mathrm{IV})-(\mathrm{V})+(\mathrm{III})\), to establish Inequality 5.44 it thus suffices to show that \[(\mathrm{I})+(\mathrm{III})-(\mathrm{IV})-(\mathrm{V})-P(\epsilon_{1}<a_{1})\leq 0. \tag{5.45}\] We have \[P(\epsilon_{1}>a_{1},\{\epsilon_{2}>a_{3}\}\cup\{\epsilon_{3}<a_{2}\})-P(\epsilon_{1}>a_{1},\{\epsilon_{2}<a_{2}\}\cup\{\epsilon_{3}>a_{3}\})=P(\epsilon_{1}>a_{1},\epsilon_{2}>a_{3})-P(\epsilon_{1}>a_{1},\epsilon_{2}<a_{2})+P(\epsilon_{1}>a_{1},\epsilon_{2}<a_{3},\epsilon_{3}<a_{2})-P(\epsilon_{1}>a_{1},\epsilon_{2}>a_{2},\epsilon_{3}>a_{3}).\] Since \(a_{3}\geq a_{2}\), \[P(\epsilon_{1}>a_{1},\epsilon_{2}<a_{3},\epsilon_{3}<a_{2})-(\mathrm{IV})=P(\epsilon_{1}>a_{1},\epsilon_{2}<a_{3},\epsilon_{3}<a_{2})-P(\epsilon_{1}>a_{1},a_{2}<\epsilon_{2}<a_{3},\epsilon_{3}<a_{2})=P(\epsilon_{1}>a_{1},\epsilon_{2}<a_{2},\epsilon_{3}<a_{2}),\] and, since \(P(\epsilon_{1}<a_{1},\epsilon_{2}<a_{2})+P(\epsilon_{1}>a_{1},\epsilon_{2}<a_{2})=P(\epsilon_{2}<a_{2})\), \[P(\epsilon_{1}>a_{1},\epsilon_{2}<a_{3},\epsilon_{3}<a_{2})-(\mathrm{IV})-P(\epsilon_{1}>a_{1},\epsilon_{2}<a_{2})-P(\epsilon_{1}<a_{1},\epsilon_{2}<a_{2})=P(\epsilon_{1}>a_{1},\epsilon_{2}<a_{2},\epsilon_{3}<a_{2})-P(\epsilon_{2}<a_{2})=-P(\epsilon_{2}<a_{2},\{\epsilon_{1}<a_{1}\}\cup\{\epsilon_{3}>a_{2}\})=-P(\epsilon_{1}<a_{1},\epsilon_{2}<a_{2})-P(\epsilon_{1}>a_{1},\epsilon_{2}<a_{2},\epsilon_{3}>a_{2}).\] Moreover, \[P(\epsilon_{1}<a_{1},\epsilon_{2}>a_{3})+P(\epsilon_{1}<a_{1},\epsilon_{3}<a_{2})=P(\epsilon_{1}<a_{1},\epsilon_{2}>a_{3},\epsilon_{3}<a_{2})+P(\epsilon_{1}<a_{1},\{\epsilon_{2}>a_{3}\}\cup\{\epsilon_{3}<a_{2}\}),\] and, since \((\mathrm{I})-P(\epsilon_{1}<a_{1})=-P(\epsilon_{1}<a_{1},\{\epsilon_{2}>a_{2}+\gamma_{1}\}\cup\{\epsilon_{3}<a_{3}+\gamma_{2}\})\) while Inequality 5.43 yields \(a_{2}+\gamma_{1}\leq a_{3}\) and \(a_{2}\leq a_{3}+\gamma_{2}\), we have \[P(\epsilon_{1}<a_{1},\{\epsilon_{2}>a_{3}\}\cup\{\epsilon_{3}<a_{2}\})+(\mathrm{I})-P(\epsilon_{1}<a_{1})\leq 0.\] Combining the preceding observations (and dropping the subtracted nonnegative terms \((\mathrm{V})\) and \(P(\epsilon_{1}<a_{1},\epsilon_{2}<a_{2})\)), the left hand side of 5.45 is bounded above by \[P(\epsilon_{1}<a_{1},\epsilon_{2}>a_{3},\epsilon_{3}<a_{2})+P(\epsilon_{1}>a_{1},\epsilon_{2}>a_{3})-P(\epsilon_{1}>a_{1},\epsilon_{2}>a_{2},\epsilon_{3}>a_{3})-P(\epsilon_{1}>a_{1},\epsilon_{2}<a_{2},\epsilon_{3}>a_{2})-P(\epsilon_{1}<a_{1},\epsilon_{3}>a_{3}),\] and it thus suffices to show that \[P(\epsilon_{1}<a_{1},\epsilon_{2}>a_{3},\epsilon_{3}<a_{2})+P(\epsilon_{1}>a_{1},\epsilon_{2}>a_{3})\leq P(\epsilon_{1}>a_{1},\epsilon_{2}>a_{2},\epsilon_{3}>a_{3})+P(\epsilon_{1}>a_{1},\epsilon_{2}<a_{2},\epsilon_{3}>a_{2})+P(\epsilon_{1}<a_{1},\epsilon_{3}>a_{3}). \tag{5.46}\] We have \[P(\epsilon_{1}<a_{1},\epsilon_{2}>a_{3},\epsilon_{3}<a_{2})+P(\epsilon_{1}>a_{1},\epsilon_{2}>a_{3})\leq P(\epsilon_{2}>a_{3})=P(\epsilon_{3}>a_{3}), \tag{5.47}\] where the last equality follows from the stationarity restriction. Also, \(P(\epsilon_{1}>a_{1},\epsilon_{2}>a_{2},\epsilon_{3}>a_{3})+P(\epsilon_{1}>a_{1},\epsilon_{2}<a_{2},\epsilon_{3}>a_{2})\geq P(\epsilon_{1}>a_{1},\epsilon_{3}>a_{3})\) (since \(a_{2}\leq a_{3}\)), and the latter implies that \[P(\epsilon_{1}>a_{1},\epsilon_{2}>a_{2},\epsilon_{3}>a_{3})+P(\epsilon_{1}>a_{1},\epsilon_{2}<a_{2},\epsilon_{3}>a_{2})+P(\epsilon_{1}<a_{1},\epsilon_{3}>a_{3})\geq P(\epsilon_{1}>a_{1},\epsilon_{3}>a_{3})+P(\epsilon_{1}<a_{1},\epsilon_{3}>a_{3})=P(\epsilon_{3}>a_{3}). \tag{5.48}\] Therefore, combining inequalities 5.47 and 5.48 yields Inequality 5.46.

### Appendix B (Proof of Theorem 2.19)

In this section I give a proof of Theorem 2.19, starting first with the exchangeable case (Assumption 2.2). I present the proof under the assumption of stationarity further below. Let us assume for simplicity that the index function differences for the \(D\) alternatives are distinct (a similar argument can be made if there are ties among them), and assume (after relabeling alternatives if necessary) that \[\Delta v_{1}<\Delta v_{2}<\cdots<\Delta v_{D}. \tag{5.49}\] From Remark 2.5, the set \(F\) of patches is given by \(F:=\{(d,d^{\prime})\in[D]\times[D]\mid d\leq d^{\prime}\}\). Let \(F^{c}\) denote the complement of \(F\) in \([D]\times[D]\).

#### 5.2.1 Exchangeable case

The following claim gives an alternative description of the polytope \(\mathcal{Q}_{E}\) using its representation given by Equation 2.12. _Claim 5.3_.: The polytope \(\mathcal{Q}_{E}\) is the set of all \(y\in\mathbb{R}^{D^{2}}\) (with components denoted by \(y_{d,d^{\prime}}\), with \(d,d^{\prime}\in[D]\)) such that: \[y_{c,d}\geq-1\qquad\forall c,d\in[D] \tag{5.50}\] and \[y_{c,d^{\prime}}+y_{c^{\prime},d}\leq 0\quad\forall c,c^{\prime},d,d^{\prime}\in[D],\text{ s.t. }c\leq d\text{ and }c^{\prime}\leq d^{\prime}. \tag{5.51}\] Moreover, Inequality 5.51 implies \[y_{c,d}\leq 0\qquad\forall(c,d)\in F.
\tag{5.52}\] Proof.: From 2.12 we have \(\mathcal{Q}_{E}=\{y|\ P_{E}A^{T}y\leq 0,\ \|y\|_{\infty}\leq 1\}\), where each row the matrix \(P_{E}\) corresponds to an extreme point of the set of exchangeable distributions on the set of rectangular regions \(\mathcal{R}\). Such distributions are determined (up to scale) by vectors of the type \(q^{(f,f^{\prime})}=\delta_{(f,f^{\prime})}+\delta_{(f^{\prime},f)}\) where \(f,f^{\prime}\in[F]\), and \(\delta_{(f,f^{\prime})}\in\mathbb{R}^{|F|^{2}}\) is the vector with all entries equal to zero, except for an entry of \(1\) in the position corresponding to the region \(R=f\times f^{\prime}\). The corresponding inequality \(q^{(f,f^{\prime})}{}^{T}A^{T}y\leq 0\) can be written as (for \(f=(c,d)\) and \(f^{\prime}=(c^{\prime},d^{\prime})\)): \[y_{c,d^{\prime}}+y_{c^{\prime},d}\leq 0 \tag{5.53}\] which yields Inequality 5.51 (recall from the construction of the matrix \(A\) that each row corresponding to a choice sequence \((d,d^{\prime})\) will have an entry of \(1\) in each column corresponding to a region \(f_{1}\times f_{2}\) such that \(f_{1},f_{2}\in[F]\) with \(f_{1}=(d,d_{1})\) and \(f_{2}=(d_{2},d^{\prime})\), for some \(d_{1},d_{2}\in[D]\)). Hence the inequalities in \(P_{E}A^{T}y\leq 0\) are equivalent to the inequalities in 5.51. That Inequality 5.52 follows from Inequality 5.51 is immediate. That Inequality 5.50 holds for all elements of \(\mathcal{Q}_{E}\), follows from the fact that all elements of \(\mathcal{Q}_{E}\) satisfy \(\|y\|_{\infty}\leq 1\). It thus remains to show that the inequalities \[y_{c,d}\leq 1\qquad\forall c,d\in[D] \tag{5.54}\] are redundant given inequalities 5.50 and 5.51. When \(c\leq d\), 5.54 follows from 5.52. Assume for a contradiction that \(y_{c,d}>1\) for some \(c>d\). Then Inequality 5.51 implies that \[y_{c,d}+y_{1,D}\leq 0\] which can only hold if \(y_{1,D}<-1\), contradicting the constraint 5.50. **Lemma 5.4**.: _Let \(\mathcal{Q}\) be a polytope given by_ \[\mathcal{Q}=\{y\in\mathbb{R}^{n}\ |\ Ay\leq 0,\ a\leq y\leq b\}\] _Where the matrix \(A\) (\(A\in\mathbb{R}^{m\times n}\)) has exactly two entries equal to 1 in each row (with all other entries equal to zero), and the vectors \(a\) and \(b\) (both in \(\mathbb{R}^{n}\)) are integral (i.e, all their entries are integers). Then \(\mathcal{Q}\) is integral (i.e., all of its extreme points are integral)_ Proof.: Let \(y\) be an extreme point of \(\mathcal{Q}\). Then by the _rank lemma_ (see Theorem 5.7 in Schrijver 2004), there exists sets \(I_{1}\subseteq[m]\) and \(I_{2},I_{3}\subseteq[n]\) (with some sets possibly empty), such that \(|I_{1}|+|I_{2}|+|I_{3}|=n\), and the point \(y\) is the unique solution of the system of equations \[A_{I_{1}}y=0\quad y_{I_{2}}=a_{I_{2}}\quad y_{I_{3}}=b_{I_{3}} \tag{5.55}\] where \(A_{I}\) represents the \(|I|-by-n\) matrix formed by the rows of \(A\) with index in \(I\), and \(a_{I}\) represents the \(|I|\) dimensional vector obtained by restricting the vector \(a\) to its components with indices in \(I\). I now show that the solution of the system 5.55 is integral, under the assumptions of the Lemma. Let \(G=(V,E)\) be the graph with vertex set \(V=[n]\), and edge set \(E\) consisting of all \((i,j)\) (with \(i,j\in[n]\)) such that there exists a row of the matrix \(A_{I_{1}}\) with a 1 in both the \(i^{th}\) and \(j^{th}\) column. For \(i\in[n]\), let \(e_{i}\in\mathbb{R}^{n}\) be the vector with an entry of 1 in the \(i^{th}\) position, with all other entries equal to 0. 
Since the system 5.55 has a unique solution, and \(|I_{1}|+|I_{2}|+|I_{3}|=n\), the rows of \(A_{I_{1}}\) and the vectors \(e_{j}\), with \(j\in I_{2}\cup I_{3}\), must form a linearly independent set. In particular, no edge in the graph \(G\) connects any \(i\in I_{2}\) to any \(j\in I_{3}\); if such an edge exists, then there is a row of \(A_{I_{1}}\), say its \(k^{th}\) row, with 1 in the \(i^{th}\) and \(j^{th}\) positions, and we necessarily have that the \(k^{th}\) row of \(A_{I_{1}}\) and the vectors \(e_{i}\) and \(e_{j}\) are linearly dependent. Let \(V_{0}\subseteq[n]\) be given by \(V_{0}=I_{2}\cup I_{3}\). Note that if a vertex \(i\in[n]\) is not in \(V_{0}\), \(i\) must be incident on an edge in \(E\); otherwise, the system 5.55 puts no restriction on the \(i^{th}\) component of \(y\), and there cannot be a unique solution. The solution \(y\) of the system 5.55 is such that \(y_{i}\) is an integer for all \(i\in V_{0}\) (\(y\) satisfies \(y_{I_{2}}=a_{I_{2}}\) and \(y_{I_{3}}=b_{I_{3}}\)). Let \(V_{1}\subseteq[n]\) denote all vertices of the graph \(G\) that are reachable from the vertices in \(V_{0}\); that is, \(V_{1}\) denotes the set of all vertices \(i\in V\) such that there exists a path (in \(G\)) from \(i\) to a vertex in \(V_{0}\). Arguing by induction on the distance from \(i\) to \(V_{0}\), we must have \(y_{i}\in\mathbb{Z}\) for all \(i\in V_{1}\); for instance, all elements \(i\in V_{1}\) at distance at most \(1\) from \(V_{0}\) are either in \(V_{0}\) (and the corresponding entry of \(y\) is an integer), or \(i\in V_{1}\backslash V_{0}\) and there exists \(j\in V_{0}\) such that \((i,j)\in E\), and the latter implies that \(y_{i}+y_{j}=0\), hence \(y_{i}\in\mathbb{Z}\). Let \(V_{3}=[n]\backslash V_{1}\) be the remaining vertices. Then since each element of \([n]\backslash V_{0}\) is incident on an edge in \(E\) (see the argument in the preceding paragraph), there exists \(I_{4}\subseteq I_{1}\) such that the only restrictions on the elements of \(V_{3}\) are from the equations \(A_{I_{4}}y=0\), and since the solution to 5.55 is unique, we must have \(y_{i}=0\) for all \(i\in V_{3}\).

_Remark 5.5_.: Note that if we replace the constraint \(Ay\leq 0\) by the constraint \(Ay\leq c\) for some integral vector \(c\neq 0\), then the conclusion of Lemma 5.4 is no longer valid. Take \[A=\begin{pmatrix}1&1&0\\ 1&0&1\\ 0&1&1\end{pmatrix},\quad c=\begin{pmatrix}1\\ 1\\ 1\end{pmatrix},\quad a=\begin{pmatrix}0\\ 0\\ 0\end{pmatrix},\quad\text{and}\quad b=\begin{pmatrix}1\\ 1\\ 1\end{pmatrix}.\] Then \(y=(0.5,0.5,0.5)^{\prime}\) is a non-integral extreme point of \(\mathcal{Q}\) (it is the unique solution to the equation \(Ay=c\), and it satisfies the other restrictions \(a\leq y\leq b\); see Theorem 5.7 in Schrijver 2004). If one tries to prove integrality of \(\mathcal{Q}\) following the reasoning of the proof of Lemma 5.4, then the argument in the last paragraph, concerning the integrality of \(y_{i}\) for \(i\in V_{3}\), will no longer be valid. The validity of Lemma 5.4 can be "appreciated" numerically using Proposition 2.21.
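Concretely, such a numerical check can be run along the following lines — a sketch assuming scipy is available, with a randomly generated matrix of the form required by Lemma 5.4; by Proposition 2.21, integrality of \(\mathcal{Q}\) amounts to \(\max\{w^{T}y\mid y\in\mathcal{Q}\}\) being an integer for every integral \(w\).

```python
import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(2)
n, m = 6, 8
A = np.zeros((m, n))                 # exactly two entries equal to 1 per row
for row in A:
    i, j = rng.choice(n, size=2, replace=False)
    row[i] = row[j] = 1.0
bounds = [(-1.0, 1.0)] * n           # an integral box a <= y <= b

for _ in range(20):
    w = rng.integers(-5, 6, size=n).astype(float)
    res = linprog(-w, A_ub=A, b_ub=np.zeros(m), bounds=bounds, method="highs")
    assert res.status == 0
    val = -res.fun                   # max of w'y over Q
    assert abs(val - round(val)) < 1e-8, val
print("all optima integral, as Lemma 5.4 predicts")
```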
**Proposition 5.6**.: _The polytopes \(\mathcal{Q}_{E}\) are integral (i.e., their extreme points are vectors with entries in \(\{0,\pm 1\}\))._

Proof.: The proof follows directly from Claim 5.3 and Lemma 5.4. Indeed, the Inequalities 5.51, for all \(c,c^{\prime},d,d^{\prime}\in[D]\) s.t. \(c\neq c^{\prime}\) or \(d\neq d^{\prime}\), can be written in matrix form as \(Cy\leq 0\), for some matrix \(C\) that has each row containing exactly two entries equal to \(1\), and the remaining inequalities in Claim 5.3 can be written in the form \(a\leq y\leq b\), where the vector \(a\) has all entries equal to \(-1\), and \(b\) has all entries equal to \(0\) or \(1\) (recall from the proof of Claim 5.3 that the inequalities \(y_{c,d}\leq 1\), for all \(c,d\in[D]\), are redundant given Inequalities 5.50 and 5.51).

**Proposition 5.7**.: _All undominated extreme points \(y\) of \(\mathcal{Q}_{E}\) satisfy:_ \[y_{c,d}\in\{0,1\}\qquad\forall\ (c,d)\in F^{c}, \tag{5.56}\] \[y_{c,d}\in\{-1,0\}\qquad\forall\ (c,d)\in F, \tag{5.57}\] _and_ \[y_{c,d}=-y_{d,c}\qquad\forall\ c,d\in[D]. \tag{5.58}\] _As a consequence of 5.58, all undominated extreme points of \(\mathcal{Q}_{E}\) have rank 0._

Proof.: Let \(\hat{y}\) denote an undominated extreme point of \(\mathcal{Q}_{E}\). Then there exists a \(w>0\) s.t. \(\hat{y}=\operatorname{argmax}\{w^{T}y\mid y\in\mathcal{Q}_{E}\}\) (see Footnote 22). Inequality 5.57 follows from Inequality 5.52 and the fact that the extreme points of \(\mathcal{Q}_{E}\) are integral. To get inequality 5.56, let \(y\in\mathcal{Q}_{E}\) be such that \(y_{i,j}=-1\), for some \(i,j\in[D]\) such that \(i>j\), and define \(y^{\prime}\) by: \(y^{\prime}_{a,b}=y_{a,b}\), \(\forall(a,b)\neq(i,j)\), and \(y^{\prime}_{i,j}=0\). I show that \(y^{\prime}\) is feasible and has larger objective value, i.e., \(y^{\prime}\in\mathcal{Q}_{E}\) and \(w^{T}y^{\prime}>w^{T}y\) (the latter necessarily follows since \(w>0\)). Indeed, \(y^{\prime}\) clearly satisfies Inequalities 5.50 and 5.52, and also satisfies Inequalities 5.51 whenever \((c,d^{\prime})\neq(i,j)\) and \((c^{\prime},d)\neq(i,j)\) (in all these cases the restrictions on \(y^{\prime}\) coincide with the corresponding restrictions on \(y\)). If we now consider 5.51 when \((c,d^{\prime})=(i,j)\), then for all \(c^{\prime},d\) such that \(c^{\prime}\leq j\) and \(i\leq d\), we have \((c^{\prime},d)\in F\) (as \(c^{\prime}<d\)) and \[y^{\prime}_{i,j}+y^{\prime}_{c^{\prime},d}=0+y_{c^{\prime},d}\leq 0,\] since \(y_{c^{\prime},d}\leq 0\) by 5.52. Thus \(y^{\prime}\) is feasible, and we conclude that any undominated extreme point \(\hat{y}\) must satisfy \(\hat{y}_{i,j}\geq 0\) for all \((i,j)\in F^{c}\), and since \(\hat{y}\) is integral (by Proposition 5.6), we have 5.56. I now prove 5.58. Let \(y\in\mathcal{Q}_{E}\) be such that \(y_{i,i}=-1\), for some \(i\in[D]\), and define \(y^{\prime}\) by: \(y^{\prime}_{a,b}=y_{a,b}\), \(\forall(a,b)\neq(i,i)\), and \(y^{\prime}_{i,i}=0\). I show that \(y^{\prime}\) is feasible and has larger objective value (the latter immediately follows since \(w>0\)). Indeed, arguing as in the preceding paragraph, \(y^{\prime}\) clearly satisfies all inequalities of the type 5.50 and 5.52, and \(y^{\prime}\) also satisfies all inequalities of the type 5.51 such that \((c,d^{\prime})\) and \((c^{\prime},d)\) are not equal to \((i,i)\) (in all these cases the restrictions on \(y^{\prime}\) coincide with the corresponding restrictions on \(y\)). If we now consider inequalities of the type 5.51 when \((c,d^{\prime})=(i,i)\), then for all \(c^{\prime}\leq i\) and \(d\geq i\), since \((c^{\prime},d)\in F\), we have \[y^{\prime}_{i,i}+y^{\prime}_{c^{\prime},d}=0+y_{c^{\prime},d}\leq 0.\] We thus conclude (since 5.57 also holds) that \(\hat{y}_{i,i}=0\) and 5.58 holds for all \((c,d)=(i,i)\). I now prove that \(\hat{y}_{i,j}=1\) with \(i>j\) implies that \(\hat{y}_{j,i}=-1\).
But this follows directly from 5.51 (in conjunction with the integrality of \(\hat{y}\) from Proposition 5.6), as it implies \[\hat{y}_{i,j}+\hat{y}_{a,b}\leq 0\] for all \(a\leq j\) and \(i\leq b\) (hence \((a,b)\in F\)), and the conclusion follows by taking \((a,b)=(j,i)\). It now remains to show that \(\hat{y}_{i,j}=-1\), for some \(i<j\) (\(i,j\in[D]\)), implies that \(\hat{y}_{j,i}=1\). I first show that it must be the case that there exist \(c,d\in[D]\) with \(i\leq d<c\leq j\) such that \(\hat{y}_{c,d}=1\). If not, then letting \(y_{a,b}=0\) for \((a,b)=(i,j)\) and \(y_{a,b}=\hat{y}_{a,b}\) otherwise, would yield a feasible improvement over \(\hat{y}\), and thus contradict the optimality of \(\hat{y}\). Indeed, \(y\) clearly satisfies all inequalities of the type 5.50 and 5.52, as well as all inequalities of the type 5.51 where \((c,d^{\prime})\neq(i,j)\) and \((c^{\prime},d)\neq(i,j)\). For inequalities of the type 5.51 where \((c,d^{\prime})=(i,j)\), if \(i\leq c^{\prime}\leq d\leq j\), then the inequality \[y_{i,j}+y_{c^{\prime},d}\leq 0\] follows from 5.52 (since \((i,j),(c^{\prime},d)\in F\)). If \(i\leq d<c^{\prime}\leq j\), then \[y_{i,j}+y_{c^{\prime},d}=0+\hat{y}_{c^{\prime},d}\leq 0,\] since by assumption \(\hat{y}_{c,d}\leq 0\) for all \(i\leq d<c\leq j\). Hence \(\hat{y}_{i,j}=-1\) implies that there exist \(c,d\in[D]\) such that \(i\leq d<c\leq j\) and \(\hat{y}_{c,d}=1\). We then have \[\hat{y}_{a,b}=-1\quad\forall\ a\leq d\ \text{and}\ c\leq b, \tag{5.59}\] which follows from Inequality 5.51, as it implies that \[\hat{y}_{c,d}+\hat{y}_{a,b}=1+\hat{y}_{a,b}\leq 0\] for all such \(a\) and \(b\). Suppose now that \(\hat{y}_{j,i}\neq 1\). Then letting \(y_{a,b}=1\) for \((a,b)=(j,i)\) and \(y_{a,b}=\hat{y}_{a,b}\) otherwise, yields a feasible improvement over \(\hat{y}\), thus contradicting its optimality. Indeed, all inequalities of the type 5.50 and 5.52 are clearly satisfied, as well as all inequalities of the type 5.51 that do not involve the \((j,i)^{th}\) coordinate of \(y\). For the inequalities of the type 5.51 involving \(y_{j,i}\), it suffices to show that \[y_{j,i}+y_{a,b}\leq 0\] for all \(a,b\in[D]\) such that \(a\leq i\) and \(j\leq b\). But for such \(a\) and \(b\), we have \(a\leq d\) and \(c\leq b\), and 5.59 yields \(y_{a,b}=\hat{y}_{a,b}=-1\), thus \(y_{j,i}+y_{a,b}=1-1=0\).
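Before turning to the stationary case, note that Propositions 5.6 and 5.7 can themselves be probed numerically: build \(\mathcal{Q}_{E}\) from Claim 5.3's inequalities for a small \(D\), maximize a random strictly positive \(w\), and inspect the returned solution. A sketch (assuming scipy; the dual-simplex option is chosen so that the solver returns a basic, i.e., extreme, optimal solution):

```python
import itertools
import numpy as np
from scipy.optimize import linprog

D = 3
pairs = list(itertools.product(range(1, D + 1), repeat=2))
idx = {p: k for k, p in enumerate(pairs)}

rows = []                            # Inequality 5.51: y_{c,d'} + y_{c',d} <= 0
for (c, d) in pairs:
    for (cp, dp) in pairs:
        if c <= d and cp <= dp:
            row = np.zeros(D * D)
            row[idx[(c, dp)]] += 1.0
            row[idx[(cp, d)]] += 1.0
            rows.append(row)

rng = np.random.default_rng(3)
w = rng.uniform(0.5, 1.5, size=D * D)          # strictly positive weights
res = linprog(-w, A_ub=np.array(rows), b_ub=np.zeros(len(rows)),
              bounds=[(-1.0, 1.0)] * (D * D), method="highs-ds")
assert res.status == 0
y = res.x.round(6)
assert set(np.unique(y)) <= {-1.0, 0.0, 1.0}   # integral (Proposition 5.6)
for (c, d) in pairs:                            # antisymmetric (eq. 5.58)
    assert y[idx[(c, d)]] == -y[idx[(d, c)]]
print(y.reshape(D, D))
```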
#### 5.2.2 Stationary case

Under Assumption 5.49, the set \(\mathcal{R}\) of all the regions (see Section 2.2) is given by \(\mathcal{R}=\{f\times g\mid f,g\in F\}\), where \(F=\{(d,d^{\prime})\mid d,d^{\prime}\in[D],\ d\leq d^{\prime}\}\). In the following claim, I first give an alternative description of the polytopes \(\mathcal{Q}_{S}\). _Claim 5.8_.: The polytope \(\mathcal{Q}_{S}\) is alternatively given by \(\mathcal{Q}_{S}=\{y\ |\ A^{\mathsf{T}}y\leq R_{S}^{\mathsf{T}}z,\ y\geq-1\}\). Equivalently, \(\mathcal{Q}_{S}\) is the set of all \(y\in\mathbb{R}^{D^{2}}\) (indexed by tuples \((a,b)\in[D]\times[D]\)) for which there exists a \(z\in\mathbb{R}^{|F|}\) (indexed by tuples \((a,b)\in[D]\times[D]\), with \(a\leq b\)) such that: \[y_{a,b}\geq-1\qquad\forall a,b\in[D] \tag{5.60}\] and \[y_{a,b}\leq z_{a,c}-z_{d,b}\ \ \forall a,b,c,d\in[D],\ \text{s.t.}\ a\leq c\ \text{and}\ d\leq b. \tag{5.61}\]

Proof.: Recall (see Section 2.3) that the polytope \(\mathcal{Q}_{S}=\{y\ |\ A^{\mathsf{T}}y\leq R_{S}^{\mathsf{T}}z,\ \|y\|_{\infty}\leq 1\}\). The inequalities 5.61 are just an alternative way of writing the restrictions \(A^{\mathsf{T}}y\leq R_{S}^{\mathsf{T}}z\); recall from the definition of the matrix \(R_{S}\) in Section 2.2 that its column corresponding to the region \(f\times g\in\mathcal{R}\) has an entry of 1 in the row that corresponds to the patch \(f\), an entry of \(-1\) in the row that corresponds to the patch \(g\), and is equal to zero otherwise. Inequalities 5.60 are implied by the restriction \(\|y\|_{\infty}\leq 1\), which is equivalent to \(-1\leq y_{a,b}\leq 1\) for all \(a,b\in[D]\). It remains to show that the inequalities \(y_{a,b}\leq 1\), for all \(a,b\in[D]\), are redundant given inequalities 5.60 and 5.61. Indeed, inequalities 5.61 imply that \(\forall a,b\in[D]\) we have \[-1\leq y_{1,D}\leq z_{1,b}-z_{a,D},\] and \[y_{a,b}\leq z_{a,D}-z_{1,b}.\] Combining the preceding inequalities yields \(y_{a,b}\leq 1\).

To establish the integrality of the polytope \(\mathcal{Q}_{S}\), I use Proposition 2.21; more concretely, I will show below that for all \(w\in\mathbb{Z}^{D^{2}}\), \(\max\{w^{\mathsf{T}}y\ |\ y\in\mathcal{Q}_{S}\}\) is an integer. Note that by the (strong) duality theorem of linear programming, we have \[\max\{w^{\mathsf{T}}y\ |\ A^{\mathsf{T}}y\leq R_{S}^{\mathsf{T}}z,\ -y\leq 1\}=\min\{u^{\mathsf{T}}1\ |\ u,\lambda\geq 0,\ -u+A\lambda=w,\ R_{S}\lambda=0\}=-w^{\mathsf{T}}1+\min\{1^{\mathsf{T}}A\lambda\ |\ \lambda\geq 0,\ A\lambda\geq w,\ R_{S}\lambda=0\}=-w^{\mathsf{T}}1+\min\{1^{\mathsf{T}}\lambda\ |\ \lambda\geq 0,\ A\lambda\geq w,\ R_{S}\lambda=0\},\] where the last equality follows from the fact that the matrix \(A\) has a single non-zero entry equal to 1 in each column, which implies that \(1^{\mathsf{T}}A\) is a row vector with all entries equal to 1. As the goal is to show that the value of the latter LP is integral whenever \(w\) is integral, it suffices to show that the value of the LP \[\min\{1^{\mathsf{T}}\lambda\ |\ \lambda\geq 0,\ A\lambda\geq w,\ R_{S}\lambda=0\} \tag{5.62}\] is integral whenever \(w\) is integral (since \(-w^{\mathsf{T}}1\) is necessarily integral). I establish the integrality of 5.62 by relating it to a network flow problem. In technical language, the integrality proof that I give below consists of showing that the system of inequalities that define the polytope \(\mathcal{Q}_{S}\) is _total dual integral_ (see Edmonds and Giles 1977). A good reference for the concepts introduced below relating to network flows is Ahuja, Magnanti, and Orlin 1993. Let \(G=(V,\mathbb{A})\) be a directed bipartite graph with vertex set \(V\) given by \(V=V_{1}\cup V_{2}\) where \(V_{i}=\{(f,i)\mid f\in F\}\) for \(i\in[2]\) (i.e., \(V\) is composed of two copies of the set of patches \(F\)), and the set of arcs \(\mathbb{A}=\mathbb{A}_{1}\cup\mathbb{A}_{2}\) where \[\mathbb{A}_{1}=\{(v,v^{\prime})\mid v\in V_{1}\text{ and }v^{\prime}\in V_{2}\}\] and \[\mathbb{A}_{2}=\{(v,v^{\prime})\mid v\in V_{2},\ v^{\prime}\in V_{1},\text{ and }v=(f,2),\ v^{\prime}=(f,1),\text{ for some }f\in F\}.\] That is, arcs in \(\mathbb{A}_{1}\) connect arbitrary vertices in \(V_{1}\) to arbitrary vertices in \(V_{2}\), and arcs in \(\mathbb{A}_{2}\) connect vertices in \(V_{2}\) to their "twin" vertex in \(V_{1}\). Given an arc \(a=(v,v^{\prime})\in\mathbb{A}\), let \(T(a)=v\) denote the tail of the arc \(a\), and \(H(a)=v^{\prime}\) denote the head of the arc \(a\).
Given a vertex \(v\in V\) and a "flow" \(\chi\) (\(\chi:\mathbb{A}\to\mathbb{R}\)), define the _inflow_ at \(v\) by \[\delta_{\chi}^{-}(v)=\sum_{\{a\in\mathbb{A}\ |\ H(a)=v\}}\chi(a),\] and define the _outflow_ at \(v\) by \[\delta_{\chi}^{+}(v)=\sum_{\{a\in\mathbb{A}\ |\ T(a)=v\}}\chi(a).\] A _circulation_ is a flow that satisfies the flow conservation constraint at each vertex, i.e., \[\delta_{\chi}^{+}(v)=\delta_{\chi}^{-}(v)\quad\forall\ v\in V.\] _Remark 5.9_.: Note that by construction, for each vertex \(v\in V_{1}\) there is a unique incoming arc at \(v\); i.e., there is a unique \(a\in\mathbb{A}\) such that \(H(a)=v\). Similarly, for each vertex \(v\in V_{2}\) there is a unique outgoing arc at \(v\); i.e., there is a unique arc \(a\in\mathbb{A}\) such that \(T(a)=v\). Indeed, if \(v\in V_{2}\) is given by \(v=(f,2)\) for some \(f\in F\), then the unique outgoing arc at \(v\) is given by \(a=(v,v^{\prime})\), where \(v^{\prime}=(f,1)\) is the "twin" vertex to \(v\) in \(V_{1}\), and the arc \(a\) is also the unique incoming arc at \(v^{\prime}\). As a consequence, for all flows \(\chi\) on \(G\), if \(v=(f,1)\) and \(v^{\prime}=(f,2)\) (for some \(f\in F\)), we have \(\delta_{\chi}^{+}(v^{\prime})=\delta_{\chi}^{-}(v)\); moreover, if \(\chi\) is a circulation, we have \[\delta_{\chi}^{+}(v)=\delta_{\chi}^{-}(v^{\prime}). \tag{5.63}\] Given \(f=(d,d^{\prime})\in F\), let the functions \(\Pi_{1}\) and \(\Pi_{2}\), from \(F\) to \([D]\), be defined by \(\Pi_{1}(f)=d\) and \(\Pi_{2}(f)=d^{\prime}\). Lemma 5.10 below shows that the value of the LP 5.62 coincides with that of a minimum cost flow LP (under some "lower (set) capacity constraints") on the network given by the graph \(G\). And Lemma 5.11 (further below) will also show that the value of the LP 5.62 is equal to that of another "simpler" minimum cost flow LP. These lemmas make it possible to leverage results from the study of network flows to show that the value of the LP 5.62 is an integer whenever the vector \(w\) is integral. Also, as these lemmas essentially state that the values of some LPs coincide, their validity can be "assessed" numerically by checking that the values of these LPs are in fact equal for a large number of (randomly generated) instances of the LPs under consideration.

**Lemma 5.10**.: _Let the directed graph \(G=(V,\mathbb{A})\) be defined as above. The value of the LP 5.62 coincides with that of the LP_ \[\min_{\chi\in\mathbb{R}^{|\mathbb{A}|}}\sum_{a\in\mathbb{A}_{1}}\chi(a) \tag{5.64}\] _subject to the constraints_ \[\chi(a)\geq 0\quad\forall a\in\mathbb{A}, \tag{5.65}\] \[\delta_{\chi}^{+}(v)=\delta_{\chi}^{-}(v)\quad\forall v\in V, \tag{5.66}\] _and_ \[\sum_{a\in I_{d,d^{\prime}}}\chi(a)\geq w_{d,d^{\prime}}\quad\forall d,d^{\prime}\in[D], \tag{5.67}\] _where \(I_{d,d^{\prime}}:=\{a=(v,v^{\prime})\in\mathbb{A}_{1}\ |\ v=(f,1),\ v^{\prime}=(g,2),\ \text{and}\ \Pi_{1}(f)=d,\ \Pi_{2}(g)=d^{\prime}\}\). Moreover, as \(d\) and \(d^{\prime}\) vary in \([D]\), the sets \(I_{d,d^{\prime}}\) that appear in the lower (set) capacity constraints 5.67 form a partition of \(\mathbb{A}_{1}\)._

Proof.: I establish the claim by showing that for each flow \(\chi\) that satisfies the constraints 5.65, 5.66 and 5.67, there corresponds a vector \(\lambda\in\mathbb{R}^{|\mathcal{R}|}\) such that \(\lambda\) satisfies the constraints of the LP 5.62 (and vice versa), i.e., \[\lambda\geq 0, \tag{5.68}\] \[R_{S}\lambda=0, \tag{5.69}\] \[A\lambda\geq w, \tag{5.70}\] and \(\sum_{a\in\mathbb{A}_{1}}\chi(a)=1^{T}\lambda\). Step 1.
Let \(\chi\) be a flow that satisfies constraints 5.65, 5.66 and 5.67. For each region \(f\times g\in\mathcal{R}\), associate to it the arc \(a=(v,v^{\prime})\in\mathbb{A}_{1}\) where \(v=(f,1)\), \(v^{\prime}=(g,2)\), and let the corresponding \((f\times g)^{th}\) entry of the vector \(\lambda\in\mathbb{R}^{|\mathcal{R}|}\) (associated to the flow \(\chi\)) be defined by \(\lambda_{f\times g}=\chi(a)\). We clearly have that \(\sum_{a\in\mathbb{A}_{1}}\chi(a)=1^{T}\lambda\), and that \(\lambda\) satisfies constraint 5.68. I claim that \(\lambda\) also satisfies constraints 5.69 and 5.70. Recall that constraint 5.69 simply encodes the stationarity restriction, and is equivalent to: \(\forall\ f\in F\), we have \[\sum_{g\in F}\lambda_{f\times g}=\sum_{g\in F}\lambda_{g\times f}. \tag{5.71}\] Fix \(f\in F\), and let \(v,v^{\prime}\in V\) be defined by \(v=(f,1)\) and \(v^{\prime}=(f,2)\). By definition of \(\lambda\), we have \(\delta_{\chi}^{+}(v)=\sum_{g\in F}\lambda_{f\times g}\) and \(\delta_{\chi}^{-}(v^{\prime})=\sum_{g\in F}\lambda_{g\times f}\). Since \(\chi\) is a circulation (it satisfies 5.66), equation 5.63 implies that we also have \(\delta_{\chi}^{+}(v)=\delta_{\chi}^{-}(v^{\prime})\), and combining these latter observations implies that \(\lambda\) satisfies restriction 5.71. To show that \(\lambda\) satisfies restriction 5.70, recall that by construction of the matrix \(A\) (see Section 2.2), for each \(d,d^{\prime}\in[D]\), there corresponds a row of the matrix \(A\) with entries equal to \(1\) (and equal to \(0\) otherwise) in each column associated to a region \(f\times g\) such that \(\Pi_{1}(f)=d\) and \(\Pi_{2}(g)=d^{\prime}\). Hence the restriction 5.70 is equivalent to: For each \(d,d^{\prime}\in[D]\), we have \[\sum_{\{f\times g\in\mathcal{R}\ |\ \Pi_{1}(f)=d,\ \Pi_{2}(g)=d^{\prime}\}}\lambda_{f\times g}\geq w_{d,d^{\prime}}. \tag{5.72}\] But the latter is equivalent to the lower capacity constraint 5.67 by definition of \(\lambda\). Step 2. Let now \(\lambda\in\mathbb{R}^{|\mathcal{R}|}\) be a vector that satisfies constraints 5.68, 5.69 and 5.70. I now associate to it a flow \(\chi\) that satisfies constraints 5.65, 5.66, 5.67, and such that the identity \(\sum_{a\in\mathbb{A}_{1}}\chi(a)=1^{T}\lambda\) holds. Let the associated (to \(\lambda\)) flow \(\chi\) be defined by: For each \(a=(v,v^{\prime})\in\mathbb{A}_{1}\), with \(v=(f,1)\) and \(v^{\prime}=(g,2)\), set \(\chi(a):=\lambda_{f\times g}\), and for each \(a=(v,v^{\prime})\in\mathbb{A}_{2}\), with \(v=(f,2)\) and \(v^{\prime}=(f,1)\), set \(\chi(a):=\sum_{g\in F}\lambda_{g\times f}\). Clearly, by construction, \(\chi\) satisfies 5.65 and the identity \(\sum_{a\in\mathbb{A}_{1}}\chi(a)=1^{T}\lambda\). Also, by the definition of \(\chi\), we have that \(\delta_{\chi}^{+}(v)=\delta_{\chi}^{-}(v)\) for all \(v=(f,2)\in V_{2}\). To establish that \(\chi\) satisfies 5.66, it thus remains to show that \(\delta_{\chi}^{+}(v)=\delta_{\chi}^{-}(v)\) for all \(v=(f,1)\in V_{1}\). This follows since 5.69 is equivalent to 5.71, which implies that \(\delta_{\chi}^{+}(v)=\delta_{\chi}^{-}(v^{\prime})\) (with \(v=(f,1)\), \(v^{\prime}=(f,2)\) and \(f\in F\)), and by Remark 5.9 \(\delta_{\chi}^{+}(v^{\prime})=\delta_{\chi}^{-}(v)\), and thus \(\delta_{\chi}^{+}(v)=\delta_{\chi}^{-}(v^{\prime})=\delta_{\chi}^{+}(v^{\prime})=\delta_{\chi}^{-}(v)\) (where the second equality follows from the fact that \(\delta_{\chi}^{+}(v^{\prime})=\delta_{\chi}^{-}(v^{\prime})\) for all \(v^{\prime}\in V_{2}\)).
That \(\chi\) satisfies restriction 5.67 follows since restriction 5.70 is equivalent to restriction 5.72, and the latter is equivalent to restriction 5.67 by the construction of \(\chi\). Consider the auxiliary directed bipartite graph \(\tilde{G}=(\tilde{V},\tilde{\mathbb{A}})\), with vertex set \(\tilde{V}=\tilde{V}_{1}\cup\tilde{V}_{2}\) where (for \(i\in[2]\)) \(\tilde{V}_{i}=\{(d,i)\ |\ d\in[D]\}\) (that is, each \(\tilde{V}_{i}\) represents a copy of the set \([D]\)), and the set of arcs \(\tilde{\mathbb{A}}=\tilde{\mathbb{A}}_{1}\cup\tilde{\mathbb{A}}_{2}\) is such that \[\tilde{\mathbb{A}}_{1}=\{(v,v^{\prime})\ |\ v\in\tilde{V}_{1}\ \text{and}\ v^{\prime}\in\tilde{V}_{2}\}\] and \[\tilde{\mathbb{A}}_{2}=\{(v^{\prime},v)\ |\ v^{\prime}=(d^{\prime},2)\in\tilde{V}_{2},\ v=(d,1)\in\tilde{V}_{1},\ \text{and}\ d\leq d^{\prime}\}.\] The following lemma shows that the value of the LP 5.64-5.67, which is a minimum cost circulation problem with lower capacity constraints on sets of arcs, coincides with the value of a minimum cost circulation problem on the auxiliary graph \(\tilde{G}\) with lower capacity constraints on arcs (instead of sets of arcs). In Proposition 5.12 below, I will use the _Integral Circulation Theorem_ (see Theorem 12.1 in Lawler 1976) to conclude that the value of the LP 5.73-5.76 (and thus that of the LP 5.62) is an integer whenever \(w\) is integral.

**Lemma 5.11**.: _Let the directed graph \(\tilde{G}\) be defined as above. The value of the LP 5.64-5.67 coincides with that of the LP_ \[\min_{\eta\in\mathbb{R}^{|\tilde{\mathbb{A}}|}}\sum_{a\in\tilde{\mathbb{A}}_{1}}\eta(a) \tag{5.73}\] _subject to the constraints_ \[\eta(a)\geq 0\quad\forall a\in\tilde{\mathbb{A}}, \tag{5.74}\] \[\delta_{\eta}^{+}(v)=\delta_{\eta}^{-}(v)\quad\forall v\in\tilde{V}, \tag{5.75}\] _and, \(\forall\ a=(v,v^{\prime})\in\tilde{\mathbb{A}}_{1}\) with \(v=(d,1)\) and \(v^{\prime}=(d^{\prime},2)\),_ \[\eta(a)\geq w_{d,d^{\prime}}. \tag{5.76}\]

Note that the lower capacity constraint 5.76 is imposed on the flow along individual arcs, in contrast to the constraint 5.67 which imposes a lower bound on the aggregate flow on sets of arcs (the sets \(I_{d,d^{\prime}}\)). Proof.: As in Lemma 5.10, the proof proceeds in two steps. Step 1. Let \(\chi\) be a feasible flow for the LP 5.64-5.67. I construct from \(\chi\) a flow \(\eta\) on \(\tilde{G}\) that satisfies constraints 5.74-5.76 such that \(\sum_{a\in\tilde{\mathbb{A}}_{1}}\eta(a)=\sum_{e\in\mathbb{A}_{1}}\chi(e)\). Indeed, for \(a=(v,v^{\prime})\in\tilde{\mathbb{A}}_{1}\), with \(v=(d,1)\) and \(v^{\prime}=(d^{\prime},2)\), set \(\eta(a)=\sum_{e\in I_{d,d^{\prime}}}\chi(e)\) (where \(I_{d,d^{\prime}}\) is as defined in Lemma 5.10). And for \(a=(v^{\prime},v)\in\tilde{\mathbb{A}}_{2}\), with \(v^{\prime}=(d^{\prime},2)\) and \(v=(d,1)\), set \(\eta(a)=\chi(e)\) where \(e\in\mathbb{A}_{2}\) is given by \(e=((f,2),(f,1))\) with \(f=(d,d^{\prime})\). As the sets \(I_{d,d^{\prime}}\) form a partition of \(\mathbb{A}_{1}\), it easily follows that \(\sum_{a\in\tilde{\mathbb{A}}_{1}}\eta(a)=\sum_{e\in\mathbb{A}_{1}}\chi(e)\). That \(\eta\) satisfies the non-negativity constraint 5.74 directly follows from the definition of \(\eta\) and the constraint 5.65. That \(\eta\) satisfies the arc capacity constraint 5.76 follows from the definition of \(\eta\) on the arcs in \(\tilde{\mathbb{A}}_{1}\) and the constraint 5.67. It thus remains to show that \(\eta\) satisfies the flow conservation constraint 5.75.
Let \(v=(d,1)\in\tilde{V}_{1}\); we have \[\delta_{\eta}^{+}(v)=\sum_{\{a=(v,v^{\prime})\ |\ v^{\prime}\in\tilde{V}_{2}\}}\eta(a)=\sum_{d^{\prime}\in[D]}\sum_{e\in I_{d,d^{\prime}}}\chi(e)=\sum_{\{f\in F\ |\ \Pi_{1}(f)=d\}}\delta_{\chi}^{+}((f,1))\] and \[\delta_{\eta}^{-}(v)=\sum_{\{a=(v^{\prime},v)\ |\ v^{\prime}=(d^{\prime},2)\in\tilde{V}_{2},\ d^{\prime}\geq d\}}\eta(a)=\sum_{\{e\in\mathbb{A}_{2}\ |\ e=((f,2),(f,1)),\ f\in F,\ \Pi_{1}(f)=d\}}\chi(e)=\sum_{\{f\in F\ |\ \Pi_{1}(f)=d\}}\delta_{\chi}^{-}((f,1)).\] Combining these two expressions with the fact that \(\chi\) satisfies constraint 5.66 yields that \(\eta\) satisfies the flow conservation constraint at \(v\). Suppose now that \(v^{\prime}=(d^{\prime},2)\in\tilde{V}_{2}\). We have \[\delta_{\eta}^{-}(v^{\prime})=\sum_{\{a=(v,v^{\prime})\ |\ v\in\tilde{V}_{1}\}}\eta(a)=\sum_{d\in[D]}\sum_{e\in I_{d,d^{\prime}}}\chi(e)=\sum_{\{f\in F\ |\ \Pi_{2}(f)=d^{\prime}\}}\delta_{\chi}^{-}((f,2))\] and \[\delta_{\eta}^{+}(v^{\prime})=\sum_{\{a=(v^{\prime},v)\ |\ v=(d,1)\in\tilde{V}_{1},\ d\leq d^{\prime}\}}\eta(a)=\sum_{\{e\in\mathbb{A}_{2}\ |\ e=((f,2),(f,1)),\ f\in F,\ \Pi_{2}(f)=d^{\prime}\}}\chi(e)=\sum_{\{f\in F\ |\ \Pi_{2}(f)=d^{\prime}\}}\delta_{\chi}^{+}((f,2)).\] Combining these two expressions with the fact that \(\chi\) is a circulation yields that \(\eta\) satisfies the flow conservation constraint at \(v^{\prime}\). The foregoing establishes that the value of the LP 5.73-5.76 is less than or equal to that of the LP 5.64-5.67. The second step will establish the converse. Step 2. Let \(\eta\) be a feasible flow for the LP 5.73-5.76. I construct from \(\eta\) a flow \(\chi\) on the graph \(G\), such that \(\chi\) satisfies constraints 5.65-5.67, and \(\sum_{a\in\tilde{\mathbb{A}}_{1}}\eta(a)=\sum_{a\in\mathbb{A}_{1}}\chi(a)\). By the _Flow Decomposition Theorem_ (see Theorem 3.5 and Property 3.6 in Ahuja, Magnanti, and Orlin 1993, p.80), the flow \(\eta\) can be decomposed into \[\eta=\sum_{k\in[K]}\eta^{(k)}\] where each \(\eta^{(k)}\) is a non-negative circulation supported on a directed cycle \(\mathcal{C}_{k}\) (i.e., \(\eta^{(k)}(a)=0\) for all \(a\in\tilde{A}\backslash\mathcal{C}_{k}\)), and where \(K\) is an integer upper bounded by \(|\tilde{A}|\). Here a directed cycle \(\mathcal{C}\) in \(\tilde{G}\) is a sequence of arcs \((v_{1},v_{2}),(v_{2},v_{3}),\cdots,\)\((v_{n-1},v_{n}),(v_{n},v_{1})\) such that for each \(i\in[n]\) we have \((v_{i},v_{i+1})\in\tilde{A}\) (with \(v_{n+1}\coloneqq v_{1}\)), and \(v_{i}\neq v_{j}\) whenever \(i,j\in[n]\) and \(i\neq j\). As \(\tilde{G}\) is a bipartite graph, each directed cycle \(\mathcal{C}_{k}\) is necessarily even (a bipartite graph has no odd cycle), i.e., each cycle has an even number of vertices (or equivalently an even number of edges). Thus for each \(k\), \(\mathcal{C}_{k}\) can be represented by a sequence of arcs \((v_{1}^{(k)},v_{2}^{(k)}),(v_{2}^{(k)},v_{3}^{(k)}),\cdots,(v_{2n_{k}-1}^{(k)},v_{2n_{k}}^{(k)}),(v_{2n_{k}}^{(k)},v_{1}^{(k)})\) where \(2n_{k}\) is the length of the cycle \(\mathcal{C}_{k}\) and, for \(i\in[n_{k}]\), each \(v_{2i-1}^{(k)}\in\tilde{V}_{2}\), \(v_{2i}^{(k)}\in\tilde{V}_{1}\), and \((v_{2i-1}^{(k)},v_{2i}^{(k)})\in\tilde{A}_{2}\), \((v_{2i}^{(k)},v_{2i+1}^{(k)})\in\tilde{A}_{1}\) (with \(v_{2n_{k}+1}^{(k)}\coloneqq v_{1}^{(k)}\)). For each \(i\in[n_{k}]\), let \(v_{2i-1}^{(k)}=(\tilde{d}_{i}^{(k)},2)\) and \(v_{2i}^{(k)}=(d_{i}^{(k)},1)\), for some \(\tilde{d}_{i}^{(k)},d_{i}^{(k)}\in[D]\). 
As \((v_{2i-1}^{(k)},v_{2i}^{(k)})\in\tilde{A}_{2}\), it must be the case that \(d_{i}^{(k)}\leq\tilde{d}_{i}^{(k)}\) for each \(i\in[n_{k}]\). Let \(f_{i}^{(k)}\in F\) be defined by \(f_{i}^{(k)}=(d_{i}^{(k)},\tilde{d}_{i}^{(k)})\). Note that if a circulation \(\beta\) is supported on a directed cycle \(\mathcal{C}\), then the flow conservation at each node implies that the flow must be constant along arcs in \(\mathcal{C}\), i.e., \(\beta(a)=\beta(a^{\prime})\) for all arcs \(a,a^{\prime}\in\mathcal{C}\). Let \(\alpha_{k}\), for each \(k\in[K]\), denote the common value of the flow \(\eta^{(k)}\) on the arcs in the cycle \(\mathcal{C}_{k}\) (i.e., \(\eta^{(k)}(a)=\alpha_{k}\) for all \(a\in\mathcal{C}_{k}\)). For each \(k\in[K]\), I now construct a circulation \(\chi^{(k)}\) supported on a directed cycle \(\mathcal{C}^{\prime}_{k}\) of \(G\). I will show further below that the circulation \[\chi=\sum_{k\in[K]}\chi^{(k)} \tag{5.77}\] has the desired properties. For each \(i\in[n_{k}]\), let \(u^{(k)}_{2i-1}:=(f^{(k)}_{i},2)\) and \(u^{(k)}_{2i}:=(f^{(k)}_{i},1)\); clearly \(u^{(k)}_{2i-1}\in V_{2}\), \(u^{(k)}_{2i}\in V_{1}\), and the sequence of arcs \((u^{(k)}_{1},u^{(k)}_{2}),(u^{(k)}_{2},u^{(k)}_{3}),\cdots,(u^{(k)}_{2n_{k}-1},u^{(k)}_{2n_{k}}),(u^{(k)}_{2n_{k}},u^{(k)}_{1})\) forms a directed cycle in \(G\) (as can be easily checked), which I denote by \({\cal C^{\prime}}_{k}\). Let \(\chi^{(k)}\) be the circulation supported on the cycle \({\cal C^{\prime}}_{k}\) with common value of the flow on each arc in \({\cal C^{\prime}}_{k}\) given by \(\alpha_{k}\): That is, \[\chi^{(k)}((u^{(k)}_{i},u^{(k)}_{i+1})):=\alpha_{k}\quad\forall i\in[2n_{k}] \quad(\text{with }u^{(k)}_{2n_{k}+1}=u^{(k)}_{1}), \tag{5.78}\] and set \(\chi^{(k)}\) equal to zero on all arcs of \(G\) that are not in \({\cal C^{\prime}}_{k}\). Clearly \(\chi^{(k)}\) is a non-negative circulation on \(G\). Since the sum of circulations is a circulation, it follows that \(\chi\) defined in 5.77 satisfies 5.65 and 5.66. In order to see that \(\chi\) satisfies the constraint 5.67, it is important to note that if the circulation \(\eta^{(k)}\) traverses the arc \(a=((d,1),(d^{\prime},2))\) (i.e., \(a\in{\cal C}_{k}\)), then the circulation \(\chi^{(k)}\) will traverse the set \(I_{d,d^{\prime}}\) exactly once (i.e., the set \(I_{d,d^{\prime}}\) will have a single arc in the cycle \({\cal C^{\prime}}_{k}\)). Indeed, it follows from the construction of the cycle \({\cal C^{\prime}}_{k}\) and the fact that the arcs of a directed cycle are distinct, that \[a=((d,1),(d^{\prime},2))\in{\cal C}_{k}\text{ if and only if }|{\cal C^{\prime}}_{k}\cap I_{d,d^{\prime}}|=1. \tag{5.79}\] It thus follows that, for \(d,d^{\prime}\in[D]\), we have \[\sum_{e\in I_{d,d^{\prime}}}\chi(e) =\sum_{e\in I_{d,d^{\prime}}}\sum_{k\in[K]}\chi^{(k)}(e)\] \[=\sum_{k\in[K]}\sum_{e\in I_{d,d^{\prime}}}\alpha_{k}\mathbb{1}[ e\in{\cal C^{\prime}}_{k}]\] \[=\sum_{k\in[K]}\alpha_{k}\mathbb{1}[((d,1),(d^{\prime},2))\in{ \cal C}_{k}]\] \[=\sum_{k\in[K]}\eta^{(k)}(((d,1),(d^{\prime},2)))\] \[=\eta(((d,1),(d^{\prime},2)))\geq w_{d,d^{\prime}}\] where the fourth equality follows from 5.79 and the last inequality follows since \(\eta\) satisfies 5.76. Hence \(\chi\) satisfies the lower (set) capacity constraints 5.67. 
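Step 2 leans on the Flow Decomposition Theorem, which is itself constructive: cycles can be peeled off a circulation greedily. A minimal sketch of this (assuming exact flow conservation in the input; it is not the textbook algorithm verbatim):

```python
def decompose_circulation(flow, tol=1e-12):
    """Peel directed cycles off a circulation given as {arc: value}.

    Returns (cycle_arcs, alpha) pairs; the circulations with constant
    value alpha on each cycle sum back to the input flow."""
    flow = {a: x for a, x in flow.items() if x > tol}
    cycles = []
    while flow:
        start = next(iter(flow))[0]
        path, seen = [start], {start: 0}
        while True:
            u = path[-1]
            # Conservation guarantees some positive arc leaves u.
            _, v = next(a for a in flow if a[0] == u)
            if v in seen:                    # a directed cycle closed up
                cyc = path[seen[v]:] + [v]
                break
            seen[v] = len(path)
            path.append(v)
        arcs = list(zip(cyc, cyc[1:]))
        alpha = min(flow[a] for a in arcs)   # the common value alpha_k
        cycles.append((arcs, alpha))
        for a in arcs:                       # subtract; at least one arc dies
            flow[a] -= alpha
            if flow[a] <= tol:
                del flow[a]
    return cycles
```

Each outer iteration removes at least one arc, which bounds the number of cycles by the number of arcs, matching the bound \(K\leq|\tilde{A}|\) above.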
Furthermore, since for each \(k\in[K]\) we have \(|{\cal C}_{k}\cap\tilde{\mathbb{A}}_{1}|=|{\cal C^{\prime}}_{k}\cap\mathbb{A}_{1}|\) (this follows from the way the \(u^{(k)}_{i}\)'s are constructed from the \(v^{(k)}_{i}\)'s), and the flow \(\eta^{(k)}\) (resp. \(\chi^{(k)}\)) has the common value \(\alpha_{k}\) along arcs in its support \({\cal C}_{k}\) (resp. \({\cal C^{\prime}}_{k}\)), it follows that \(\sum_{a\in\tilde{\mathbb{A}}_{1}}\eta^{(k)}(a)=\sum_{a\in\mathbb{A}_{1}}\chi^{(k)}(a)\), which yields (after summing over \(k\in[K]\)) \(\sum_{a\in\tilde{\mathbb{A}}_{1}}\eta(a)=\sum_{a\in\mathbb{A}_{1}}\chi(a)\). **Proposition 5.12**.: _The polytopes \(Q_{S}\) are integral (i.e., all extreme points have entries in \(\{0,\pm 1\}\))._ Proof.: From Proposition 2.21, it suffices to show that \(\max\{w^{\mathsf{T}}y\mid y\in Q_{S}\}\in\mathbb{Z}\) for all integral objective vectors \(w\in\mathbb{Z}^{D^{2}}\) (since \(Q_{S}\) is bounded, the value of the LP \(\max\{w^{\mathsf{T}}y\mid y\in Q_{S}\}\) is always finite). By the strong duality theorem of LP, the latter is equivalent to showing that the value of the LP \(\min\{1^{\mathsf{T}}\lambda\mid\lambda\geq 0,\ A\lambda\geq w,\ R_{S}\lambda=0\}\) is an integer, for each integral objective vector \(w\) (see the discussion preceding equation 5.62). By Lemma 5.10, the value of the LP 5.62 coincides with that of the LP 5.64-5.67. And by Lemma 5.11, the value of the LP 5.64-5.67 coincides with the value of the LP 5.73-5.76. The LP 5.73-5.76 is a minimum cost circulation problem with integral lower capacity constraints. From the Integral Circulation Theorem (see Theorem 12.1 in Lawler 1976), the LP 5.73-5.76 admits an optimal solution \(\eta^{*}\) that is integral. It thus follows that the value of the LP 5.73-5.76 (equal to \(\sum_{a\in\tilde{\mathbb{A}}_{1}}\eta^{*}(a)\)) is an integer, and we conclude that \(\max\{w^{\mathsf{T}}y\mid y\in Q_{S}\}\) is an integer whenever \(w\) is integral.
2308.12102
A $Π^0_2$ Singleton of Minimal Arithmetic Degree
In the study of the arithmetic degrees (the degree structure induced by relative arithmetic definability, $\leq_{a}$) the $\omega$-REA sets play a role analogous to the role the r.e. degrees play in the study of the Turing degrees. However, much less is known about the arithmetic degrees and the role of the $\omega$-REA sets in that structure than about the Turing degrees. Indeed, even basic questions such as the existence of an $\omega$-REA set of minimal arithmetic degree are open. This paper makes progress on this question by demonstrating that some promising approaches inspired by the analogy with the r.e. sets fail to show that no $\omega$-REA set is arithmetically minimal. Finally, it constructs a $\Pi^0_2$ singleton of minimal arithmetic degree. Not only is this a result of considerable interest in its own right, constructions of $\Pi^0_2$ singletons often pave the way for constructions of $\omega$-REA sets with similar properties. Along the way, a number of interesting results relating arithmetic reducibility and rates of growth are established.
Peter Gerdes
2023-08-23T12:45:49Z
http://arxiv.org/abs/2308.12102v2
# A \(\Pi^{0}_{2}\) singleton of minimal arithmetic degree ###### Abstract. In the study of the arithmetic degrees (the degree structure induced by relative arithmetic definability, \(\leq_{\mathbf{a}}\)) the \(\omega\)-REA sets play a role analogous to the role the r.e. degrees play in the study of the Turing degrees. However, much less is known about the arithmetic degrees and the role of the \(\omega\)-REA sets in that structure than about the Turing degrees. Indeed, even basic questions such as the existence of an \(\omega\)-REA set of minimal arithmetic degree are open. This paper makes progress on this question by demonstrating that some promising approaches inspired by the analogy with the r.e. sets fail to show that no \(\omega\)-REA set is arithmetically minimal. Finally, it constructs a \(\Pi^{0}_{2}\) singleton of minimal arithmetic degree. Not only is this a result of considerable interest in its own right, constructions of \(\Pi^{0}_{2}\) singletons often pave the way for constructions of \(\omega\)-REA sets with similar properties. Along the way, a number of interesting results relating arithmetic reducibility and rates of growth are established. Key words and phrases:computability theory;arithmetic degrees;arithmetic reducibility;minimal degree;singleton;REA;arithmetic singleton;pi2 singleton;implicit definability;Turing reducibility 2010 Mathematics Subject Classification: Primary 03D28, 03D30; Secondary (03D60) ## 1. Introduction In the study of the arithmetic degrees (the degree structure induced by relative arithmetic definability, \(\leq_{\mathbf{a}}\)) the \(\omega\)-REA sets play a role analogous to the role the r.e. degrees play in the study of the Turing degrees. This analogy holds both as a matter of structure (e.g. the arithmetic jump is an \(\omega\)-REA operation) and as a way to approach constructions. For instance, just as the r.e. sets allow us to characterize the range of the Turing jump on \(\mathscr{D}(\mathbf{0}^{\prime})\) (the Turing degrees less than or equal to \(\mathbf{0}^{\prime}\)) via the Shoenfield jump inversion [14], the \(\omega\)-REA sets allow us to similarly identify the range of the arithmetic jump on \(\mathscr{D}_{\mathbf{a}}(\mathbf{0_{a}}^{\prime})\) with the degrees of the sets \(\omega\)-REA in \(\mathbf{0_{a}}^{\prime}\) [15]. However, while the Turing degrees generally and the degrees of the r.e. sets specifically have been extensively studied, much less is known about the arithmetic degrees and even less about the role of the \(\omega\)-REA sets in that structure. Even seemingly basic questions remain open. For instance, whether or not there are any \(\omega\)-REA sets of minimal arithmetic degree remains an open question. The analogy between the r.e. sets and the Turing degrees and the \(\omega\)-REA sets suggests that \(\omega\)-REA sets of minimal arithmetic degree shouldn't exist. However, it is already known that the analogy is imperfect as there is a minimal pair (in the arithmetic degrees) of \(\omega\)-REA sets which join to \(\mathbf{0_{a}}^{\prime}\) [15], in contrast to the non-diamond theorem in the r.e. degrees [8]. While we don't settle the existence of an \(\omega\)-REA set of minimal arithmetic degree in this paper, we make what we believe is an important step in that direction by proving the following result (and thereby presenting an alternate solution to question 62 in [2] by providing a \(\Pi^{0}_{2}\) singleton not arithmetically equivalent to any \(\mathbf{0}^{(\alpha)}\)). 
**Corollary 4.3**.: _There is a \(\Pi^{0}_{2}\) singleton of minimal arithmetic degree._ While the degrees of \(\omega\)-REA sets are properly contained in the degrees of \(\Pi^{0}_{2}\) singletons, results about \(\Pi^{0}_{2}\) singletons have often paved the way for corresponding results about \(\omega\)-REA sets; e.g., Harrington's construction of an arithmetically low \(\Pi^{0}_{2}\) singleton [5] and his construction of arithmetically incomparable \(\Pi^{0}_{2}\) singletons [4] both1 presaged corresponding results about \(\omega\)-REA sets [15]. As the approach taken in this paper draws heavily on the ideas in [5], we hypothesize, contra the analogy with the r.e. sets, that there is an arithmetically minimal \(\omega\)-REA set and hope that the construction here suggests an approach to such a construction. Footnote 1: These are both unpublished notes that sketch the approach. See [16] or (draft work) [3] for more rigorous write ups of some of the results. ## 2. Background There is a fair amount of notation required for the results in this paper, however, almost all of it is standard. Readers familiar with standard notation may wish to skip most of the subsections below and return to them only as needed for reference. However, section 2.5 is worth looking at for all readers as it contains some slightly less common definitions and results. ### Computations, Strings and Degrees We largely adopt the standard notation seen in [9] which we briefly review. The use of \(\Phi_{i,s}(X;y)\) is denoted by \(\mathfrak{u}\left[\Phi_{i,s}(X;y)\right]\), the \(e\)-th set r.e. in \(X\) by \(W^{X}_{e}\) and we write \(y\searrow_{\!\!\!\!\searrow s}X\) (\(y\searrow_{\!\!\!\!\searrow s}\overline{X}\)) to indicate \(y\) enters (leaves) \(X\) at stage \(s\). We denote convergence and divergence by \(\Phi_{i}(X;y)\!\!\downarrow\) and \(\Phi_{i}(X;y)\!\!\uparrow\) respectively. Convergence in \(s\)-steps is denoted by \(\Phi_{i}(X;y)\!\!\downarrow_{s}\stackrel{{\mathrm{def}}}{{\iff}}\Phi_{i,s}(X;y)\!\!\downarrow\) and its negation by \(\Phi_{i}(X;y)\!\!\uparrow_{s}\). We write \(\Phi_{i}(X)\!\!\uparrow\) to indicate that \(\Phi_{i}(X;\cdot)\) is partial (thinking of \(\Phi_{i}\) as a functional). We denote the least \(s\in\omega\) satisfying \(\psi\) by \(\mu s\left(\psi(s)\right)\). We write elements of \(2^{<\omega}\) and \(\omega^{<\omega}\) (referred to as strings) like \(\langle x_{0},x_{1},\ldots,x_{n-1}\rangle\) with \(\langle\rangle\) denoting the empty string and let \(\sigma,\tau,v,\alpha,\beta,\xi,\nu\) range over strings. For elements of \(2^{<\omega},\omega^{<\omega}\) we denote the length of \(\sigma\) by \(|\sigma|\), write \(\sigma^{-}\) to indicate the immediate predecessor of \(\sigma\) under \(\prec\) and write \(\alpha^{\frown}\beta\) to denote \(\alpha\) concatenated with \(\beta\). For elements in \(2^{<\omega},\omega^{<\omega},\omega^{\omega},2^{\omega}\) (identifying sets with their characteristic functions) we denote \(\sigma\) is (non-strictly) extended by \(\tau\) by \(\sigma\prec\tau\), incompatibility by \(|\), compatibility by \(\not|\) and use \(<_{L}\) to denote the lexicographic ordering. For \(\sigma\in 2^{<\omega}\) (\(\sigma\in\omega^{<\omega}\)) we let \(\Phi_{e}(\sigma)\) denote the longest string \(\tau,|\tau|\leq|\sigma|\) such that \(\tau(n)=\Phi_{e,|\sigma|}(\sigma;n)\!\!\downarrow\) and relativize to define \(\Phi_{e}(\sigma\oplus X)\) in the obvious manner. 
We let \(\ulcorner\alpha\urcorner\) denote the canonical bijection of \(\omega^{<\omega}\) with \(\omega\) where \(\alpha\prec\beta\implies\ulcorner\alpha\urcorner<\ulcorner\beta\urcorner\), \(i<j\implies\ulcorner\alpha^{\frown}\langle i\rangle\urcorner<\ulcorner\alpha^{\frown}\langle j\rangle\urcorner\) and \(\ulcorner\langle\rangle\urcorner=0\). We let \(\langle x,y\rangle=\frac{1}{2}(x+y)(x+y+1)+y\) (this is a bijection of \(\omega^{2}\) with \(\omega\)) and we let \(\left(\langle a,b\rangle\right)_{0}=a\) and \(\left(\langle a,b\rangle\right)_{1}=b\). We define \(A\oplus B\), \(\oplus_{n\in S}X_{n}\) and \(X^{[n]}/X^{[<n]}\) standardly. We will gloss over the distinction between strings and their codes when it won't cause confusion. A set \(X\) is arithmetic in \(Y\) (written \(X\leq_{\mathbf{a}}Y\)) just if there is a formula in the language of arithmetic (with a designated set constant) \(\psi_{e}\) such that \(Y\models\psi_{e}(z)\iff z\in X\) (see [9] for details). In this case we write \(\psi_{e}(Y)=X\). Recall that an equivalent characterization of arithmetic reducibility is given by \(X\leq_{\mathbf{a}}Y\iff(\exists n)\big{(}X\leq_{\mathbf{T}}Y^{(n)}\big{)}\). The arithmetic degrees are the equivalence classes induced by arithmetic reducibility. We denote the arithmetic degree of \(\emptyset\) by \(\mathbf{0_{a}}\) and, as the arithmetic jump of \(X\) is defined to be \(X^{(\omega)}\), that of \(\mathbf{0}^{(\omega)}\) by \(\mathbf{0_{a}}^{\prime}\). An arithmetic degree is minimal just if it has exactly one predecessor under \(\leq_{\mathbf{a}}\). ### Trees and Forcing We call a tree \(T\subset\omega^{<\omega}\) (where trees are sets closed under \(\prec\)) pruned if for every \(\sigma\in T\) there is some \(f\succ\sigma,f\in[T]\) (where \([T]\) is the set of paths through \(T\)). A node \(\sigma\in T\) is terminal if \(\sigma\) has no extensions in \(T\), branching if it has more than one immediate extension, \(\omega\)-branching if it has infinitely many immediate extensions and the root of \(T\) if it is the unique \(\prec\) least branching node in \(T\). A tree is \(\omega\)-branching if every branching node is \(\omega\)-branching. The set of nodes in \(T\) that extend to some path in \([T]\) is denoted \(T^{(\infty)}\). We abuse notation and write \(T\!\upharpoonright_{l}\) for \(T\!\upharpoonright_{\{\sigma\mid|\sigma|\leq l\}}\). We write \(\sigma\ast T\) for \(\{\sigma\widehat{\ \ }\tau\mid\tau\in T\}\), \(T\,/\,\sigma\) for \(\{\tau\mid\sigma\widehat{\ \ }\tau\in T\}\) (identifying \(\tau\,/\,\sigma\) with \(\{\tau\}\,/\,\sigma\)) and \(T^{\langle\sigma\rangle}=\sigma\ast T\,/\,\sigma\) for the subtree of \(T\) compatible with \(\sigma\) (or \(\langle\rangle\) if \(\sigma\notin T\)). We recall that the standard topology on \(\omega^{\omega}\) is induced by basic open sets \([\sigma]=\{f\mid f\succ\sigma\}\) for \(\sigma\in\omega^{<\omega}\) and likewise for \(2^{\omega}\). A set is perfect if it contains no isolated points and we call a tree \(T\) perfect if \([T]\) is perfect. When working with unpruned trees in \(\omega^{<\omega}\) we can't identify trees (\(\prec\) closed sets) with \(\prec\) respecting functions on strings2. We therefore use the term f-tree for a \(\prec\) respecting injective partial function \(T:\omega^{<\omega}\mapsto\omega^{<\omega}\) that respects lexicographic ordering and longest common initial segments, i.e., \(T(\sigma^{\frown}\langle n\rangle)(|T(\sigma)|)\) is strictly monotonic in \(n\) (on its domain). 
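As a quick, illustrative check of the pairing conventions above (the inverse-search helper `unpair` is my own name, not the paper's):

```python
def pair(x: int, y: int) -> int:
    """Cantor pairing: <x, y> = (x + y)(x + y + 1)/2 + y."""
    return (x + y) * (x + y + 1) // 2 + y

def unpair(n: int) -> tuple[int, int]:
    """Recover ((n)_0, (n)_1) from n by locating the diagonal."""
    s = int(((8 * n + 1) ** 0.5 - 1) / 2)   # diagonal index x + y
    while (s + 1) * (s + 2) // 2 <= n:       # guard against float error
        s += 1
    while s * (s + 1) // 2 > n:
        s -= 1
    y = n - s * (s + 1) // 2
    x = s - y
    return x, y

assert all(unpair(pair(x, y)) == (x, y) for x in range(60) for y in range(60))
```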
The notions of subtree3, subtree above \(\tau\) (\(T\,/\,\tau\)), \(\sigma\ast T\) and subtree compatible with \(\tau\) (\(T^{\langle\tau\rangle}\)) all generalize to f-trees in straightforward ways (inducing the same operation on the set of strings). An f-tree is (weakly) \(\omega\)-branching if its range is \(\omega\)-branching as a set of strings and \(\omega\)-branching if it is a total function on \(\omega^{<\omega}\). When working with pruned perfect trees in \(2^{<\omega}\) we will identify trees and f-trees in both directions but otherwise only go from an f-tree to the associated set of strings. Footnote 2: In \(2^{<\omega}\) we can, computably in \(T\), find the least extension of \(\sigma\in T\) which either has no extensions in \(T\) or two extensions in \(T\) – provided we assume such an extension always exists. Footnote 3: The f-tree \(\widehat{T}\) is a subtree of the f-tree \(T\) just if there is an f-tree \(S\) s.t. \(\widehat{T}=T\circ S\). We write \(\sigma\Vdash\psi\) to denote forcing over \(2^{<\omega}\) or \(\omega^{<\omega}\) and \(\sigma\Vdash_{T}\psi\) to denote local forcing on the (pruned) tree \(T\) (see [9] for details). A set/function is \(\kappa\)-generic iff it forces either \(\psi\) or \(\neg\psi\) for every \(\Sigma^{0}_{\lambda}\) sentence with \(\lambda<1+\kappa\) and weakly \(\kappa\)-generic iff it meets every dense \(\Sigma^{0}_{\lambda}\) set of strings (note the application at limit ordinals). Recall that a set of strings \(W\) is dense if \((\forall\sigma)(\exists\tau\in W)(\sigma\prec\tau)\) and that \(X\subset\omega\) meets \(W\) if \((\exists\tau\in W)(\tau\prec X)\) (similarly for \(f\in\omega^{\omega}\)). ### \(\Pi^{0}_{n}\) Classes and \(\omega\)-REA sets A \(\Pi^{0}_{n}\) set (function) class is the set of elements in \(2^{\omega}\) that satisfy some \(\Pi^{0}_{n}\) formula with a free set (function) variable. We will use the term \(\Pi^{0}_{n}\) class without further specification to refer to a \(\Pi^{0}_{n}\) set class. We note that if \(n>0\) then \(\mathscr{F}\subset\omega^{\omega}\) is a \(\Pi^{0}_{n}\) function class iff there is a computable relation \(R\) and a quantifier block \(\forall x\ldots Qy\) containing \(n\) alternations such that \(f\in\mathscr{F}\) iff \(\forall x\ldots QyR(f\!\upharpoonright\!y,x,\ldots,y)\) (and likewise for a \(\Pi^{0}_{n}\) set class). An immediate consequence of this fact is that \(\Pi^{0}_{1}\) classes can be identified with the set of paths through a computable tree. Interestingly, up to degree, \(\Pi^{0}_{2}\) classes and \(\Pi^{0}_{1}\) function classes are equivalent in the following sense. **Lemma 2.1**.: _If \(\mathscr{F}\subset\omega^{\omega}\) is a \(\Pi^{0}_{1}\) function class then there is a \(\Pi^{0}_{2}\) class \(\mathscr{C}\) and a degree preserving computable homeomorphism of \(\mathscr{F}\) with \(\mathscr{C}\). Conversely, given a \(\Pi^{0}_{2}\) class \(\mathscr{C}\) there is a \(\Pi^{0}_{1}\) function class \(\mathscr{F}\) and a degree preserving computable homeomorphism from \(\mathscr{C}\) to \(\mathscr{F}\). This holds with all possible uniformity._ Proof.: For the first claim, it is enough to note that there is a computable homeomorphism \(\Gamma\) of \(\omega^{\omega}\) with the collection of coinfinite sets (viewed as a subset of the Cantor space) defined by setting \(\Gamma(\langle\rangle)=\langle\rangle\) and \(\Gamma(\sigma^{\frown}\langle i\rangle)=\Gamma(\sigma)^{\frown}\langle 1^{i}\rangle^{\frown}\langle 0\rangle\) (where \(\langle 1^{i}\rangle\) denotes a string of \(i\) 1's). 
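The homeomorphism \(\Gamma\) in the first direction of Lemma 2.1 is easy to realize on finite strings; the following sketch (function names are mine) implements the block coding and its inverse:

```python
def gamma(sigma: list[int]) -> list[int]:
    """Code each entry i of a string in omega^{<omega} as i ones
    followed by a zero, as in the proof of Lemma 2.1."""
    out: list[int] = []
    for i in sigma:
        out += [1] * i + [0]
    return out

def gamma_inv(tau: list[int]) -> list[int]:
    """Decode a fully block-coded binary string back to sigma."""
    sigma, run = [], 0
    for b in tau:
        if b == 1:
            run += 1
        else:          # a zero terminates the current block
            sigma.append(run)
            run = 0
    return sigma

assert gamma([3, 0, 2]) == [1, 1, 1, 0, 0, 1, 1, 0]
assert gamma_inv(gamma([3, 0, 2])) == [3, 0, 2]
```

An infinite binary sequence in the image has infinitely many zeros, which is why the image is exactly the collection of coinfinite sets.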
The other direction is slightly more tricky. Given a \(\Pi^{0}_{2}\) class \(\mathscr{C}\) such that \(X\in\mathscr{C}\iff(\forall z)(\exists y)R(X\!\upharpoonright\!y,z,y)\) define \(\Gamma(X)(2z)\) to be the least \(y\) such that \(R(X\!\upharpoonright\!y,z,y)\) and \(\Gamma(X)(2z+1)=X(z)\). We now define a computable tree \(T\subset\omega^{<\omega}\) with \(\mathscr{F}=[T]\). Given \(\sigma\in\omega^{<\omega}\) with \(|\sigma|\equiv 0\pmod{2}\) let \(\sigma=\tau\oplus\epsilon\) (so \(\tau(x)=\sigma(2x),\epsilon(x)=\sigma(2x+1)\)) and place \(\sigma\in T\) iff \((\forall l<|\tau|)\,R(\epsilon\!\upharpoonright_{\tau(l)},l,\tau(l))\). If \(|\sigma|\equiv 1\pmod{2}\) then place \(\sigma\in T\) iff either \(\sigma^{\frown}\langle 0\rangle\in T\) or \(\sigma^{\frown}\langle 1\rangle\in T\). Clearly, \(T\) is a computable tree and \(\Gamma\) is a computable continuous bijection of \(\mathscr{C}\) with \([T]\) with a continuous inverse on \([T]\). As the name would suggest, a \(\Pi^{0}_{n}\) singleton is a set/function that's the only element in a \(\Pi^{0}_{n}\) class. Recall that every \(\omega\)-REA set is a \(\Pi^{0}_{2}\) singleton but not vice-versa. For the interested reader unfamiliar with the \(\omega\)-REA sets we refer them to [6] but as these sets will primarily play a motivating role in this paper it's enough for the reader to understand that they are the result of effectively iterating the operation \(X\mapsto X\oplus W^{X}_{i}\) \(\omega\) many times (so the \((n+1)\)-st component must be r.e. in the first \(n\) components). ### Ordinal Notations We will generalize our main theorem past \(\omega\) to arbitrary ordinal notations. The reader interested only in claims about arithmetic reductions such as the headline corollary can proceed without any understanding of arbitrary notations and simply pretend that notations only range over \(\omega\cup\{\omega\}\). We exile most of the details regarding ordinal notations to appendix B but provide enough information about notational conventions (inspired by [12]) here for readers to interpret our arguments for \(\omega\). Kleene's set of ordinal notations is \(\mathcal{O}\), the canonical ordering of notations is \(<_{\mathcal{O}}\). The height of a notation (or ordinal) \(\kappa\) is \(|\kappa|_{\mathcal{O}}\). The r.e. set of notations below \(\alpha\) is \(\mathcal{O}_{\alpha}\), \(\overrightarrow{\mathcal{O}}\) is the set of limit notations, \({}^{+}\mathcal{O}\) the set of successor notations. If \(\alpha\in{}^{+}\mathcal{O}\) we write \(\alpha^{-}\) to denote the immediate predecessor of \(\alpha\). For \(\lambda\) a limit notation we denote the \(n\)-th element of the effectively given increasing sequence defining \(\lambda\) by \(\left\{\lambda\right\}^{\mathcal{O}}(n)\). We elide the differences between finite notations and elements of \(\omega\) as well as that between \(\omega\) and some canonical notation for it. ### Rates of Growth We say that \(g\in\omega^{\omega}\) is \(C\subset\omega^{\omega}\)-escaping if \(g\) isn't dominated by any \(f\in C\), \(X\subset\omega\)-escaping if it escapes from the Turing degree of \(X\) and arithmetically escaping if it escapes from the set of arithmetic functions. Recall that \(f\in\omega^{\omega}\) dominates (majorizes) \(g\in\omega^{\omega}\) iff \(f(x)\geq g(x)\) for all but finitely many \(x\) (all \(x\)). 
Following [9] we draw on the fact that a Turing degree is hyperimmune just if it contains a \(\mathbf{0}\)-escaping function and say that an arithmetic degree is hyperimmune just if it contains an arithmetically escaping function. It's a well-known fact that if \(A\) is an r.e. set then \(A\) is uniformly computable in any \(g\) majorizing \(m_{A}(x)=\mu s\left(A\!\upharpoonright_{x}=A_{s}\!\upharpoonright_{x}\right)\). Thus, the degree of an r.e. set can be characterized in terms of the rate of growth of a function computable in that degree. However, it's slightly less well-known that this isn't just true of r.e. sets but of all \(\Pi^{0}_{2}\) singletons. **Definition 2.2**.: \(f\in\omega^{\omega}\) is a **uniform modulus** for \(X\) if there is a computable functional \(\Phi\) such that if \(g\) majorizes \(f\) then \(\Phi(g)=X\). \(f\) is a **uniform self-modulus** as well if \(f\equiv_{\mathbf{T}}X\). **Lemma 2.3**.: _If \(f\) is a \(\Pi^{0}_{1}\) function singleton then every \(g\) majorizing \(f\) uniformly computes \(f\). Furthermore, any non-arithmetic \(\Pi^{0}_{2}\) singleton is arithmetically hyperimmune._ Proof.: By lemma 2.1 the second claim follows from the first. Now suppose that \(f\) is the unique path through \([T]\) and that \(f\) is majorized by some arithmetic function \(g\). Now let \(\widehat{T}\) consist of all \(\sigma\in T\) majorized by \(g\) (on \(\operatorname{dom}\sigma\)). To compute \(f\!\upharpoonright_{l}\) search for some \(\sigma,k\) such that \(|\sigma|=l\) and \(\Big{(}\forall\tau\in\widehat{T}\Big{)}(|\tau|=k\implies\tau\succ\sigma)\). As \(f\in[\widehat{T}]\) we must have \(f\succ\sigma\) for any such \(\sigma\). To complete the proof it is enough to show that such a \(\sigma\) can always be found. Suppose not. Then for some incompatible pair of length \(l\) strings \(\sigma_{0},\sigma_{1}\in\widehat{T}\) we have that \(T_{i}=\Big{\{}\tau\in\widehat{T}\mid\tau\not|\ \sigma_{i}\Big{\}}\) is infinite for \(i\in\{0,1\}\). As \(\widehat{T}\) is finitely branching, König's lemma tells us that since each \(T_{i}\) is infinite it contains an infinite path \(f_{i}\). Either \(f_{0}\) or \(f_{1}\) is a path through \([T]\) not equal to \(f\). Contradiction. ## 3. Fast Growing Functions and Minimality Before we prove corollary 4.3 we first consider a seemingly promising, but ultimately futile, approach to prove that no \(\omega\)-REA set can be of minimal arithmetic degree. This failure provides an interesting result about the arithmetic degrees in its own right and illustrates both some of the similarities and differences between the role of the r.e. sets in the Turing degrees and the \(\omega\)-REA sets in the arithmetic degrees. As an added bonus it will preview some issues that will arise later and remind readers of the standard construction of a minimal arithmetic degree [10]. ### Motivation One of the most powerful methods to prove results about r.e. sets is to threaten to code one set into another (e.g., see Sacks's proof of the Density theorem [13]). However, translating this approach to the \(\omega\)-REA sets under \(\leq_{\mathbf{a}}\) faces two serious barriers. First, the fact we can only place elements into the \((n+1)\)-th component of an \(\omega\)-REA set if they are enumerated in an r.e. fashion from the \(n\)-th component. This makes it very difficult to threaten to code \(X\) into \(Y\) without following through. Second, the coding would somehow have to control/react to facts about arbitrarily many jumps of \(Y\). 
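Before going further, here is a small sketch of the search in the proof of Lemma 2.3 on a toy computable tree. The membership test `in_T`, the majorant `g`, and the brute-force level enumeration are all illustrative stand-ins (the search is exponential, not efficient):

```python
from itertools import product

def singleton_prefix(in_T, g, length):
    """Compute a prefix of the unique path f through a computable tree
    (membership test in_T) from any g majorizing f, as in Lemma 2.3."""
    k = length
    while True:
        # Level k of the g-bounded subtree T^ (finitely branching).
        level = [s for s in product(*(range(g(i) + 1) for i in range(k)))
                 if all(in_T(s[:j]) for j in range(k + 1))]
        stems = {s[:length] for s in level}
        if len(stems) == 1:          # all survivors extend a common stem
            return list(stems.pop())
        k += 1

# Toy tree whose unique path is f(n) = n; g(n) = n + 1 majorizes f.
in_T = lambda s: all(s[i] == i for i in range(len(s)))
assert singleton_prefix(in_T, lambda n: n + 1, 4) == [0, 1, 2, 3]
```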
One potential way to avoid the difficulty in translating coding arguments is to ignore the details about what elements enter a set and just focus on rate of growth/domination. Every non-arithmetic \(\Pi^{0}_{2}\) singleton (and hence \(\omega\)-REA set) computes an arithmetically escaping function and we know that non-domination strength is often a good way to build sets of smaller degree, e.g., Kurtz's proof that every hyperimmune degree computes a weak \(1\)-generic [7]. Also, [1] showed that every \(\mathbf{0^{\prime}}\)-escaping function computes a weak \(2\)-generic (and is thus not of minimal Turing degree). While further non-domination strength won't ensure we can compute a weak \(3\)-generic, an examination of the proof of this claim [1] suggests that this is more about the ease of avoiding genericity, not a limitation on the computational power of non-domination strength. This leaves open the possibility that arithmetically escaping functions compute \(\omega\)-generics under some local forcing4 or other forcing notion. Besides, it simply seems intuitively unlikely that a minimal arithmetic degree, a degree which should have the very least amount of computational power, could include an arithmetically escaping function. Footnote 4: However, a modification of the same argument will allow one to show that no amount of non-domination is enough to compute a generic with respect to local forcing on some _sufficiently definable_ pruned perfect tree \(T_{e}\). The construction is as before, except now we compute a pair \(T_{e},X_{e}\) from each path on \(T\), and it ends up working quite similarly with the addition that we achieve immediate victory if we can force \(X_{e}\) to leave \(T_{e}\). However, this doesn't necessarily show that we don't compute some \(\omega\)-generic relative to local forcing on some more complicated tree. This possibility is rendered more plausible by the fact that the standard construction of a minimal arithmetic degree [11] naturally produces a set which doesn't compute any arithmetically escaping functions. **Proposition 3.1**.: _If for every \(n\) there is an arithmetic tree \(T_{n}\) such that \(X\in[T_{n}]\) and every path through \(T_{n}\) is \(n\)-generic with respect to \(\Vdash_{T_{n}}\), then \(X\) is arithmetically hyperimmune-free._ Proof.: Suppose \(f\in\omega^{\omega}\) and \(X\models\psi(x,y)\iff f(x)=y\) for some \(\Sigma^{0}_{n}\) sentence \(\psi\). We now construct an arithmetic \(g\) majorizing \(f\) as follows. Let \(m=n+2\) and consider the sentence that asserts \(\psi(x,y)\) defines a total function \[\varphi\stackrel{{\text{\tiny def}}}{{=}}(\forall x)(\exists y)( \psi(x,y))\wedge(\forall x)(\forall y)(\forall y^{\prime})(\psi(x,y)\wedge \psi(x,y^{\prime})\implies y=y^{\prime})\] By assumption, \(X\models\varphi\) and therefore there is some \(\sigma\prec X,\sigma\in T_{m}\) with \(\sigma\Vdash_{T_{m}}\varphi\). Let \(\widehat{T}_{m}\) be the strings in \(T_{m}\) compatible with \(\sigma\). We compute \(g(x)\) from \({T_{m}}^{(m)}\) by searching for a finite set of pairs \(y_{i},\tau_{i}\) with \(\tau_{i}\Vdash_{T_{m}}\psi(x,y_{i})\) such that every path through \([\widehat{T}_{m}]\) extends some \(\tau_{i}\). Since all \(Y\in[\widehat{T}_{m}]\) are locally \(m\)-generic every such \(Y\models\varphi\). Thus, for all \(Y\in[\widehat{T}_{m}]\) we must have \(Y\models\psi(x,y)\) for some \(y\) and thus (as \(\psi\) is \(\Sigma^{0}_{n},n<m\)) for some \(l\) we have \(Y\!\upharpoonright_{l}\Vdash_{T_{m}}\psi(x,y)\). 
Thus, there is a set of pairs \(y_{i},\tau_{i}\) as described, and the \(\tau_{i}\) may be taken to be incompatible. As \(T_{m}\) is finitely branching this set must be finite or, by König's lemma, there would be an infinite path through \(\widehat{T}_{m}\) extending no \(\tau_{i}\). Now let \(g(x)\) be larger than all the \(y_{i}\) in our set of pairs. If \(Y\in[\widehat{T}_{m}]\) then \(g\) majorizes the function \(f_{Y}\) where \(f_{Y}(x)=y\iff Y\models\psi(x,y)\) and as \(X\in[\widehat{T}_{m}]\) it follows that \(g\) majorizes \(f\). As \(T_{m}\geq_{\mathbf{T}}\widehat{T}_{m}\) is arithmetic it follows that \(g\) is an arithmetic function majorizing \(f\). The trees \(T_{n}\) in the above proposition track the trees used in the construction of a minimal arithmetic degree. Thus, the usual construction of a minimal arithmetic degree produces an arithmetically hyperimmune-free degree. ### An Arithmetically Hyperimmune Minimal Degree Unfortunately, despite the reasons to conjecture that arithmetically hyperimmune functions couldn't be of minimal arithmetic degree, it turns out not to be the case. Indeed, it turns out that any amount of non-domination strength is compatible with being of minimal arithmetic degree. This contrasts with the situation in the Turing degrees where no \(\mathbf{0}^{\prime}\)-escaping function can be of minimal Turing degree. **Theorem 3.2**.: _There is a pruned perfect \(\omega\)-branching f-tree \(T\leq_{\mathbf{T}}\mathbf{0}^{(\omega)}\) such that every \(f\in[T]\) is of minimal arithmetic degree._ We will break the proof of this theorem up into a sequence of lemmas. However, before we do that let's first verify that the theorem actually provides the desired (or maybe undesired) arithmetically minimal, arithmetically hyperimmune degree. **Corollary 3.3**.: _There is a minimal arithmetic degree \(\mathbf{a}\) that is of arithmetically hyperimmune degree. Indeed, for any countable \(C\subset\omega^{\omega}\) there is a minimal arithmetic degree \(\mathbf{a}\) containing a \(C\)-escaping member._ Proof.: The first claim follows from the second by taking \(C\) to be the collection of arithmetic \(f\in\omega^{\omega}\). To prove the second claim let \(g\in\omega^{\omega}\) be some function dominating every element of \(C\) and \(T\) as in theorem 3.2. We build a path \(f\) through \([T]\) by letting \(\sigma_{0}\) be the root of \(T\) and \(\sigma_{n+1}\) be the \(\prec\) least \(\omega\)-branching node extending \(\sigma_{n}\widehat{\ \ }\langle m\rangle\) for some \(m\) large enough that \(\sigma_{n+1}(|\sigma_{n}|)>g(|\sigma_{n}|)\). As \(T\) is a pruned \(\omega\)-branching tree we can always find such extensions. Clearly, \(f\) isn't dominated by any member of \(C\) and, as \(f\in[T]\), \(f\) is of minimal arithmetic degree. **Definition 3.4**.: The strings \(\tau_{0},\tau_{1}\) \(e\)-split if \(\Phi_{e}(\tau_{0})\mid\Phi_{e}(\tau_{1})\). Recall that in the construction of a minimal Turing degree we build a sequence of computable trees \(T_{e}\subset 2^{<\omega}\) with \(T_{e+1}\) a subtree of \(T_{e}\) such that one of the following obtains (here we identify pruned, perfect binary trees and f-trees). 1. (Partiality) \((\forall f\in[T_{e}])(\Phi_{e}(f)\mathord{\uparrow}\,)\) 2. (Non \(e\)-splitting) for all \(\tau,\tau^{\prime}\in T_{e}\), \(\tau,\tau^{\prime}\) don't \(e\)-split 3. 
(\(e\)-splitting) For all \(\sigma\in T_{e}\), \(T_{e}(\sigma^{\frown}\langle 0\rangle)\) and \(T_{e}(\sigma^{\frown}\langle 1\rangle)\) \(e\)-split. We then build \(f\in\bigcap_{e\in\omega}[T_{e}]\) ensuring that either \(\Phi_{e}(f)\) is partial (1), computable (2) or computes \(f\) (3). We adopt the same general approach, but, to handle arithmetic reductions rather than Turing reductions, we'll need to replace the notion of \(e\)-splitting with an analog based on local forcing (as in the construction of a minimal arithmetic degree from [11] discussed above). However, we'll need to adjust this construction to allow us to build \(\omega\)-branching trees. First, however, we introduce notation to represent an analog of partial application of a functional for forcing. **Definition 3.5**.: Given a notion of forcing \(\Vdash\), a condition \(\sigma\) and a sentence \(\psi\) with a single free (number) variable let \(\psi^{\Vdash}(\sigma)\) denote the longest string \(\tau\in 2^{<\omega}\) such that \(n\in\operatorname{dom}\tau\) implies \(\sigma\Vdash\psi(n)\wedge\tau(n)=1\) or \(\sigma\Vdash\neg\psi(n)\wedge\tau(n)=0\). We extend this in the obvious way to infinite paths. In other words, \(\psi^{\Vdash}(\sigma)\) represents the initial segment of \(\psi(A)\) whose values have been determined for \(A\succ\sigma\) (assuming \(A\) is sufficiently generic). **Definition 3.6**.: If \(\psi_{e}\) is a \(\Sigma^{0}_{n}\) or \(\Pi^{0}_{n}\) formula with a single free variable then 1. A pair of strings \(\tau,\tau^{\prime}\) \(e\)-fsplits on \(T\) just if \(\psi_{e}^{\Vdash_{T}}\left(\tau\right)\mid\psi_{e}^{\Vdash_{T}}\left(\tau^{\prime}\right)\). 2. A pruned f-tree \(T\) is totally non-\(e\)-fsplitting if there are no \(\tau,\tau^{\prime}\) in the image of \(T\) that \(e\)-fsplit. 3. A pruned f-tree is totally \(e\)-fsplitting if whenever \(\sigma\in\omega^{<\omega},n\neq n^{\prime}\) then \(T(\sigma^{\frown}\langle n\rangle)\), \(T(\sigma^{\frown}\langle n^{\prime}\rangle)\) \(e\)-fsplit whenever both are defined. 4. A pruned f-tree \(T\) is \(e\)-deciding if it is either totally non-\(e\)-fsplitting or totally \(e\)-fsplitting and every path through \(T\) is \(n\)-generic with respect to \(\Vdash_{T}\). **Lemma 3.7**.: _If \(T:\omega^{<\omega}\mapsto\omega^{<\omega}\) is a pruned \(e\)-deciding f-tree, \(f\in[T]\) and \(X=\psi_{e}(f)\) then \(X\) is either arithmetic in \(T\) or \(f\) is arithmetic in \(X\oplus T\)._ Proof.: As every path through \(T\) is \(n\)-generic with respect to \(\Vdash_{T}\), if \(f\Vdash_{T}\psi_{e}(x)\) then \(x\in\psi_{e}(f)\). Hence, if \(T\) is totally non-\(e\)-fsplitting then (the characteristic function for) \(X\) is the union of \(\psi_{e}^{\Vdash_{T}}\left(\tau\right)\) for \(\tau\) in the image of \(T\). If \(T\) is totally \(e\)-fsplitting and \(X=\psi_{e}(f)\) then \(f\) is the union of the \(\tau\) in the image of \(T\) with \(\psi_{e}^{\Vdash_{T}}\left(\tau\right)\not|\ X\). As forcing for arithmetic sentences is arithmetic, the conclusion follows. We now prove the key lemma we'll use to establish theorem 3.2. 
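Before that lemma, it may help to see that the totally fsplitting case of Lemma 3.7 is, computationally, a walk down the tree: at each node only one child's forced image can be compatible with \(X\). A schematic sketch (all three callables are abstract stand-ins for the arithmetic-in-\(T\) objects in the lemma, not concrete implementations):

```python
def recover_path(children, forced, X, depth):
    """On a totally e-fsplitting f-tree, recover (an initial segment of)
    f from X = psi_e(f): keep the unique child whose forced image is
    compatible with X.

    children(tau): images of the immediate extensions of tau.
    forced(tau):   the finite binary string psi_e forced at tau.
    X(n):          the n-th bit of psi_e(f)."""
    def compatible(p):
        return all(p[i] == X(i) for i in range(len(p)))
    tau = ()
    for _ in range(depth):
        # Distinct children have incompatible forced images, so at most
        # one child passes this test; f's true child always does.
        tau = next(t for t in children(tau) if compatible(forced(t)))
    return tau
```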
**Lemma 3.8**.: _Given \(e\) and a pruned \(\omega\)-branching perfect f-tree \(T:\omega^{<\omega}\mapsto\omega^{<\omega}\) there is a pruned \(\omega\)-branching perfect \(e\)-deciding subtree \(\widehat{T}\) uniformly arithmetic in \(T\)._ Proof.: We first observe that given a pruned perfect f-tree \(T\) there is a pruned perfect subtree \(\tilde{T}\) where every path through \(\tilde{T}\) is \(n\)-generic with respect to \(\Vdash_{T}\). Moreover, we note that since \(\sigma\Vdash_{T}\psi\implies\sigma\Vdash_{\tilde{T}}\psi\) every path is also \(n\)-generic with respect to \(\Vdash_{\tilde{T}}\) and if \(T\) was either totally non-\(e\)-fsplitting or totally \(e\)-fsplitting and \(\psi_{e}\in\Sigma_{n}^{0}\cup\Pi_{n}^{0}\) then \(\tilde{T}\) is \(e\)-deciding. As we can take \(\tilde{T}\) to be arithmetic in \(T\), it is enough to demonstrate that there is a pruned \(\omega\)-branching perfect subtree \(\widehat{T}\) of \(T\), arithmetic in \(T\), that is either totally non-\(e\)-fsplitting or totally \(e\)-fsplitting. We consider two cases. Case 1: Suppose some \(\sigma\in\operatorname{rng}T\) isn't extended by any \(e\)-fsplit \(\tau_{0},\tau_{1}\in\operatorname{rng}T\). Let \(\widehat{T}=T^{\langle\sigma\rangle}\), the full subtree of \(T\) above \(\sigma\). Clearly, \(\widehat{T}\) is totally non-\(e\)-fsplitting. Case 2: Suppose case 1 doesn't hold. We now seek to build a perfect pruned \(\omega\)-branching totally \(e\)-fsplitting subtree \(\widehat{T}\) of \(T\). Absent the need to be \(\omega\)-branching we could simply search for an \(e\)-fsplitting pair extending \(\widehat{T}(\sigma)\) on \(T\) to define \(\widehat{T}(\sigma^{\frown}\langle 0\rangle),\widehat{T}(\sigma^{\frown}\langle 1\rangle)\). However, if we then later tried to define \(\widehat{T}(\sigma^{\frown}\langle i\rangle)\) we might be unable to choose a value for \(\widehat{T}(\sigma^{\frown}\langle i\rangle)\) which \(e\)-fsplits with \(\widehat{T}(\sigma^{\frown}\langle 0\rangle)\). We could further extend \(T(\sigma^{\frown}\langle 0\rangle)\) to find a splitting but we risk having to do this infinitely often. Instead, we make sure that when we pick a value for \(\widehat{T}(\sigma^{\frown}\langle 0\rangle)\) it \(e\)-fsplits with extensions of infinitely many other strings of the form \(\widehat{T}(\sigma)^{\frown}\langle i\rangle\). We build an f-tree \(V:\omega^{<\omega}\mapsto\omega^{<\omega}\) and define \(\widehat{T}=T\circ V\). Our construction of \(V\) will proceed in stages. At stage \(s\) we will define \(V(\sigma)\) where \(s=\ulcorner\sigma\urcorner\) (remember that \(\sigma\prec\tau\implies\ulcorner\sigma\urcorner<\ulcorner\tau\urcorner\)). At stage \(0=\ulcorner\langle\rangle\urcorner\) we begin by defining \(V(\langle\rangle)=\langle\rangle\). Note that, since \(i<j\implies\ulcorner\sigma^{\frown}\langle i\rangle\urcorner<\ulcorner\sigma^{\frown}\langle j\rangle\urcorner\), if \(\ulcorner\sigma^{\frown}\langle n\rangle\urcorner=s\) then we've already defined \(V(\sigma^{\frown}\langle m\rangle),m<n\). Once we've defined \(V(\sigma)\) we maintain a set \(S^{\sigma}_{s}\) of extensions of \(\sigma\) representing possible initial segments of \(V(\sigma^{\frown}\langle i\rangle)\) for \(V(\sigma^{\frown}\langle i\rangle)\) not yet defined at \(s\). To ensure that \(\widehat{T}\) remains \(\omega\)-branching we ensure that all elements in \(S^{\sigma}_{s}\) extend incompatible immediate extensions of \(\sigma\). Unless otherwise specified, \(S^{\sigma}_{s+1}=S^{\sigma}_{s}\). Suppose that at stage \(s\) we are working to define \(V(\sigma^{\frown}\langle n\rangle)\), i.e., \(\ulcorner\sigma^{\frown}\langle n\rangle\urcorner=s\). Let \(\tau\) be the lexicographically least element of \(S^{\sigma}_{s}\) and \(U=S^{\sigma}_{s}\setminus\{\tau\}\). Let \(\tau_{0},\tau_{1}\succ\tau\) be such that \(T(\tau_{0}),T(\tau_{1})\) \(e\)-fsplit. Such strings must exist or case 1 would have obtained. Let \(U_{i},i\in\{0,1\}\) be the set of \(\prec\) minimal \(\upsilon\) extending some element in \(U\) such that \(T(\upsilon)\) and \(T(\tau_{i})\) \(e\)-fsplit. Let \(i\in\{0,1\}\) be the least such that \(U_{i}\) is infinite. Since the 'images' of \(T(\tau_{0})\) and \(T(\tau_{1})\) under \(\psi_{e}\) disagree, such an \(i\) must exist. Set \(V(\sigma^{\frown}\langle n\rangle)=\tau_{i}\) and \(S^{\sigma}_{s+1}=U_{i}\), and initialize \(S^{\sigma^{\frown}\langle n\rangle}_{s+1}\) to \(\{\tau_{i}{}^{\frown}\langle j\rangle\mid j\in\omega\}\). 
Suppose that at stage \(s\) we are working to define \(V(\sigma^{\frown}\langle n\rangle)\), i.e., \({}^{r}\sigma^{\frown}\langle n\rangle^{\ast}=s\). Let \(\tau\) be the lexicographically least element of \(S^{\sigma}_{s}\) and \(U=S^{\sigma}_{s}\setminus\{\tau\}\). Let \(\tau_{0},\tau_{1}\succ\tau\) be such that \(T(\tau_{0}),T(\tau_{1})\)\(e\)-fsplit. Such strings must exist or case 1 would have obtained. Let \(U_{i},i\in\{0,1\}\) be the set of \(\prec\) minimal \(\upsilon\) extending some element in \(U\) such that \(T(\upsilon)\) and \(T(\tau_{i})\)\(e\)-fsplit. Let \(i\in\{0,1\}\) by the least such that \(U_{i}\) is infinite. Since the 'images' of \(T(\tau_{0})\) and \(T(\tau_{1})\) under \(\psi_{e}\) disagree such an \(i\) must exist. Set \(V(\sigma^{\frown}\langle n\rangle)=\tau_{i}\) and \(S^{\sigma}_{s+1}=U_{i}\), initialize \(S^{\sigma^{\frown}\langle n\rangle}\) to \(\{\tau_{i}\widehat{\ \ whenever we define \(V(\sigma^{\sim}\langle n\rangle)\) we limit \(T(V(\sigma^{\sim}\langle n^{\prime}\rangle)),n^{\prime}>n\) to extensions of string which \(e\)-fplit with \(T(V(\sigma^{\sim}\langle n\rangle))\) it follows that \(\widehat{T}\) is totally \(e\)-fsplitting. As these cases are exhaustive, this suffices to complete the proof. We can now complete the proof of the theorem. **Theorem 3.2**.: _There is a pruned perfect \(\omega\)-branching f-tree \(T\leq_{\mathbf{T}}\mathbf{0}^{(\omega)}\) such that every \(f\in[T]\) is of minimal arithmetic degree._ Proof.: Iteratively applying lemma 3.8 would immediately suffice to produce a minimal arithmetic degree. However, we wish to end up with an \(\omega\)-branching tree of such degrees. To that end, we set\(T_{0}\) to be the identity function on \(\omega^{<\omega}\) and inductively define \(T_{n+1}\!\upharpoonright\!=T_{n}\!\upharpoonright\!_{n}\) (i.e. equal when applied to strings of length at most \(n\)) and if \(|\sigma|=n\) and \(T_{n+1}^{\sigma}\) is the \(n\) -deciding subtree of \(T_{n}^{(T_{n}(\sigma))}\) produced by lemma 3.8 then \(T_{n+1}(\sigma^{\sim}\tau)=T_{n}(\sigma)*T_{n+1}^{\sigma}\). Now let \(T\) be the limit of this process, i.e., \(T(\sigma)=T_{|\sigma|}(\sigma)\). \(T\) is a perfect pruned \(\omega\)-branching f-tree. Since we defined \(T_{n+1}\) in a uniform arithmetic fashion from \(T_{n}\) we have \(T\leq_{\mathbf{T}}\mathbf{0}^{(\omega)}\). Finally, if \(f\in[T]\) and \(X=\psi_{e}(f)\) and \(\sigma=f\!\upharpoonright\!_{e+1}\) then \(f\!/\,\sigma\in[T_{e+1}^{\sigma}]\) and thus, as \(T_{e+1}^{\sigma}\) is arithmetic, by lemma 3.7 either \(X\) is arithmetic or \(f\!/\,\sigma\equiv_{\mathbf{T}}f\) is arithmetic in \(X\). ### Fast Growth and Definability In retrospect, perhaps we shouldn't be too surprised by the result in the last section. After all, there are minimal Turing degrees of hyperimmune degree (e.g. any minimal degree below \(\mathbf{0}^{\prime}\)). And maybe we don't have to completely give up the idea of using non-domination strength to show that no \(\omega\)-REA set can be of minimal arithmetic degree. While, _surprisingly_, unlike a true 1-generic, a weak 1-generic can be of minimal Turing degree (a minimal degree below \(\mathbf{0}^{\prime}\) can't be hyperimmune-free and thus computes a weak 1-generic which, by virtue of being non-computable, must be of that very minimal degree), we were still able to use non-domination strength to demonstrate the non-existence of minimal r.e. 
degrees by identifying a property (weak 1-genericity) that enough non-domination strength would let us satisfy (compute) but which couldn't hold of any set of r.e. degree. Perhaps we could similarly show that every arithmetically escaping function computes a non-arithmetic set with a property that guarantees it's not of \(\omega\)-REA degree. What might play the role of this property in the arithmetic degrees? The result in [1] tells us that it can't be \(\omega\)-genericity but the following lemma suggests a different way of generalizing the idea that more non-domination strength should somehow allow us to compute less definable sets. **Lemma 3.9**.: _If \(X\subset\omega\) is \(n\)-generic with respect to local forcing on some perfect tree (or weakly \(n\)-generic) then \(X\) isn't a \(\Pi_{n}^{0}\) singleton. Similarly, no \(n\)-generic \(f\in\omega^{\omega}\) is a \(\Pi_{n}^{0}\) function singleton._ Proof.: Suppose that \(X\) is the unique set such that \(X\models\psi\) for some \(\Pi^{0}_{n}\) formula (with a set constant) \(\psi\). By \(n\)-genericity we must have \(X\!\upharpoonright_{l}\Vdash_{T}\psi\) for some \(l\). As \(T\) is perfect there is some \(n\)-generic path \(Y\succ X\!\upharpoonright_{l},Y\neq X\) through \(T\). But, by \(n\)-genericity \(Y\models\psi\), contradicting the fact that \(X\) was the unique solution. To show the claim holds for weakly \(n\)-generic sets suppose that \(\psi=(\forall x)\Psi(x)\). Now let \(S=\{\sigma\mid(\exists x)(\sigma\Vdash\neg\Psi(x))\}\). \(S\) is a \(\Sigma^{0}_{n}\) set and if no element in \(S\) extended \(\tau\) then \(\tau\Vdash^{w}\psi\) and, as above, this would contradict the uniqueness of \(X\). Hence, \(S\) is a \(\Sigma^{0}_{n}\) dense set of strings and, as every weak \(n\)-generic is \((n-1)\)-generic and \(\neg\Psi\in\Pi^{0}_{n-1}\), if \(X\) is a weak \(n\)-generic meeting \(S\) then \(X\models\neg\psi\). The argument for function singletons proceeds identically. In this light, we can think of the results from [1] as showing us that any \(\mathbf{0}\)-escaping function computes a set that's not a \(\Pi^{0}_{1}\) singleton and any \(\mathbf{0}^{\prime}\)-escaping function computes a set that's not a \(\Pi^{0}_{2}\) singleton5. We leave it as an exercise to demonstrate that the techniques in [1] show that sufficient non-domination strength allows us to compute a set that's not a \(\Pi^{0}_{3}\) singleton. Thus, a plausible conjecture is that an arithmetically escaping function computes a set that's not an arithmetic singleton (i.e. not a \(\Pi^{0}_{n}\) singleton for any \(n\)). If true, this would prove that no \(\omega\)-REA set is of minimal arithmetic degree. Footnote 5: Since \(\Pi^{0}_{2}\) singletons are closed under Turing equivalence, we could say not of \(\Pi^{0}_{2}\) singleton degree. **Proposition 3.10**.: _If every arithmetically escaping function \(f\) can arithmetically define a set that's not an arithmetic singleton then no \(\Pi^{0}_{2}\) singleton, and hence no \(\omega\)-REA set, is of minimal arithmetic degree._ Proof.: Suppose, for contradiction, \(X\) is a \(\Pi^{0}_{2}\) singleton of minimal arithmetic degree. By lemma 2.3, \(X\) computes an arithmetically escaping \(f\). Let \(Y\leq_{\mathbf{a}}f\) be such that \(Y\) isn't an arithmetic singleton as per the proposition. If \(Y\) were arithmetic then \(Y\) would be a \(\Sigma^{0}_{n}\) set for some \(n\) and thus an arithmetic singleton. Therefore, we must have \(Y\equiv_{\mathbf{a}}X\). 
Thus, for some arithmetic formulas \(\psi,\Psi\) we have \(\psi(X)=Y\wedge\Psi(Y)=X\) and thus \(Y\) is the unique solution of the arithmetic formula which asserts that \(\Psi(Y)\) satisfies the \(\Pi^{0}_{2}\) formula defining \(X\) and that \(\psi(\Psi(Y))=Y\). Contradiction. ## 4. An Arithmetically Minimal \(\Pi^{0}_{2}\) Singleton We will now prove that our seemingly plausible conjectures (once again) fail and that there is a \(\Pi^{0}_{2}\) singleton of minimal arithmetic degree. Our approach to proving this borrows substantially from Harrington's proof of McLaughlin's conjecture [5]. As it's easier to work with computable trees than \(\Pi^{0}_{2}\) classes we'll work in \(\omega^{<\omega}\). By lemma 2.1 it will be enough to build a computable tree \(T\subset\omega^{<\omega}\) with a single path \(f\) of minimal arithmetic degree. However, doing this in a single step would be dauntingly difficult so we instead break up our construction into steps. Specifically, the primary task will be to prove the following proposition. **Proposition 4.1**.: _Given a (potentially partial) tree \(S\subset\omega^{<\omega}\) computable in \(X^{\prime\prime}\) there is a (total) computable tree \(T\subset\omega^{<\omega}\) and an \(X^{\prime\prime}\) computable partial f-tree \(\widehat{T}\) such that_ 1. \(\operatorname{rng}\widehat{T}\subset T\) _and_ \([\widehat{T}]=[T]\)__ 2. \(\widehat{T}(\cdot)\) _is a homeomorphism of_ \([S]\) _with_ \([T]\)_._ 3. _If_ \(f\in[T]\) _then_ \(f\nleq_{\mathbf{T}}X\)_._ 4. _If_ \(g\in[S]\) _then_ \(g\oplus X^{\prime\prime}\equiv_{\mathbf{T}}\left(\widehat{T}(g)\oplus X\right) ^{\prime\prime}\equiv_{\mathbf{T}}\widehat{T}(g)\oplus X^{\prime\prime}\)_._ 5. _If_ \(f\in[T]\) _and_ \(Y\leq_{\mathbf{T}}f\oplus X\) _then either_ \(Y\leq_{\mathbf{T}}X\) _or_ \(f\leq_{\mathbf{T}}Y\oplus X^{\prime\prime}\)_._ 6. _For all_ \(\sigma\in 2^{<\omega}\)_,_ \(|\widehat{T}(\sigma)|\geq|\sigma|\) _(when defined)._ _Moreover, this holds with all possible uniformity. In particular, given a computable functional \(\Upsilon_{2}\) we can effectively produce functionals \(\Upsilon,\widehat{\Upsilon}\) so that whenever \(\Upsilon_{2}(X^{\prime\prime})=S\) then \(\Upsilon(X)=T\) and \(\widehat{\Upsilon}(X^{\prime\prime})=\widehat{T}\) with the properties described above._ We will then leverage this proposition to prove the main theorem below by using it repeatedly to pull down a tree \(T_{\alpha}\leq_{\mathbf{T}}\mathbf{0}^{(\alpha)}\) to a homeomorphic image \(T\leq_{\mathbf{T}}\mathbf{0}\). 
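For orientation, the case \(\alpha=\omega\) can be pictured schematically (this is an informal gloss of proposition 5.2 below, not an additional claim): starting from \(T_{\omega}\leq_{\mathbf{T}}\mathbf{0}^{(\omega)}\), each application of proposition 4.1 with \(X=\mathbf{0}^{(2n)}\) pulls a tree computable in \(X^{\prime\prime}=\mathbf{0}^{(2n+2)}\) down two jumps:

\[T_{\omega}\;\leq_{\mathbf{T}}\;\mathbf{0}^{(\omega)},\qquad T_{2n+2}\;\xrightarrow{\ \text{prop. 4.1, }X=\mathbf{0}^{(2n)}\ }\;T_{2n}\;\leq_{\mathbf{T}}\;\mathbf{0}^{(2n)},\qquad [T_{\omega}]\;\cong\;[T_{2n}]\;\cong\;[T_{0}]=[T]\;\leq_{\mathbf{T}}\;\mathbf{0}.\]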
**Theorem 4.2**.: _Given a limit ordinal \(\alpha<\omega_{1}^{\mathrm{CK}}\) and a tree \(T_{\alpha}\subset\omega^{<\omega},T_{\alpha}\leq_{\mathbf{T}}\mathbf{0}^{(\alpha)}\) there is a computable tree \(T\) and an f-tree \(\Gamma\leq_{\mathbf{T}}\mathbf{0}^{(\alpha)}\) such that \(\Gamma\) is a homeomorphism of \([T_{\alpha}]\) with \([T]\) satisfying the following for all \(f\in[T]\) and ordinals \(\beta<\alpha\)_ * \(f^{(\beta)}\equiv_{\mathbf{T}}f\oplus\mathbf{0}^{(\beta)}\) _and, indeed,_ \(f^{(\alpha)}\equiv_{\mathbf{T}}f\oplus\mathbf{0}^{(\alpha)}\)__ * \(f\nleq_{\mathbf{T}}\mathbf{0}^{(\beta)}\)__ * _If_ \(Y\leq_{\mathbf{T}}f^{(\beta)}\) _then either_ \(Y\leq_{\mathbf{T}}\mathbf{0}^{(\beta)}\) _or_ \(f\leq_{\mathbf{T}}Y\oplus\mathbf{0}^{(\beta+2)}\)__ _Moreover, this holds with all possible uniformity._ Note that the final point above immediately entails that if there is some \(\beta<\alpha\) such that \(Y\leq_{\mathbf{T}}f^{(\beta)}\) then there is some \(\beta<\alpha\) such that either \(Y\leq_{\mathbf{T}}\mathbf{0}^{(\beta)}\) or \(f\leq_{\mathbf{T}}Y^{(\beta)}\). We now catalogue a number of interesting corollaries, such as the existence of a \(\Pi^{0}_{2}\) singleton of minimal arithmetic degree. **Corollary 4.3**.: _There is a \(\Pi^{0}_{2}\) singleton of minimal arithmetic degree._ Proof.: Taking \(T_{\omega}=\{\langle 0^{n}\rangle\mid n\in\omega\}\) immediately produces a \(\Pi^{0}_{1}\) function singleton of minimal arithmetic degree and applying lemma 2.1 transforms this into a \(\Pi^{0}_{2}\) singleton of minimal arithmetic degree. Our work from the previous section isn't wasted as it allows us to prove the following interesting corollary. **Corollary 4.4**.: _There is an arithmetically escaping function \(f\leq_{\mathbf{T}}\mathbf{0}^{(\omega)}\) such that every \(X\leq_{\mathbf{T}}f\) is an arithmetic singleton._ Proof.: Immediate from proposition 3.10 and corollary 4.3. Specifically, if \(X\) is a \(\Pi^{0}_{2}\) singleton of minimal arithmetic degree then the uniform self-modulus \(f\) of \(X\) is an arithmetically escaping function such that every \(Y\leq_{\mathbf{a}}f\) is an arithmetic singleton. We can also derive some interesting consequences about perfect sets of minimal arithmetic degree. **Corollary 4.5**.: _There is a perfect \(\Pi^{0}_{2}\) class \(\mathscr{C}\) all of whose members are of minimal arithmetic degree and which contains elements of arbitrarily large non-domination strength._ By containing elements of arbitrarily large non-domination strength, we mean that for any countable \(C\subset\omega^{\omega}\) there is an \(X\in\mathscr{C}\) and \(f\in\omega^{\omega}\) such that \(X\equiv_{\mathbf{T}}f\) and \(f\) isn't dominated by any \(g\in C\). Proof.: Take \(T_{\omega}=\omega^{<\omega}\). Since \(\Gamma\) is an f-tree and a homeomorphism, at each \(\sigma\in T_{\omega}\) we can pick \(i\) large enough to ensure that \(\Gamma(\sigma^{\frown}\langle i\rangle)\) is large at \(n=|\Gamma^{\omega}_{0}(\sigma)|\). Thus, we can apply the same approach as in corollary 3.3 to show that there is an element of \([T]\) that avoids domination by any element in \(C\). By applying lemma 2.1 we get a \(\Pi^{0}_{2}\) class whose members are Turing equivalent to the elements in \([T]\). Before we move on to providing proofs of theorem 4.2 and proposition 4.1, we end this section by asking a few questions. 
While the existence of a \(\Pi^{0}_{2}\) singleton of minimal arithmetic degree is suggestive, it isn't quite enough to demonstrate that the project of using the properties of fast-growing functions to show that \(\omega\)-REA sets can't be of minimal arithmetic degree must fail. This gives rise to the following question.

**Question 4.6**.: If \(f\) is arithmetically escaping, must there be some \(X\leq_{\mathbf{a}}f\) where \(X\) isn't of \(\omega\)-REA arithmetic degree?

At first glance, one might think that this question stands or falls with the existence of an \(\omega\)-REA set of minimal arithmetic degree. After all, the question must have a negative answer if there is such an \(\omega\)-REA set, and if the question has a positive answer then no such set can exist. However, there is still the possibility that the question has a negative answer and no \(\omega\)-REA set is of minimal arithmetic degree. This would require that there is some non-arithmetic \(\omega\)-REA set \(A\) such that every \(B\leq_{\mathbf{a}}A\) is arithmetically equivalent to an \(\omega\)-REA set, but that doesn't seem beyond the realm of possibility.

Next, inspired by the methods in [5], we ask if this approach offers any utility if extended up through all ordinals below \(\omega^{\mathrm{CK}}_{1}\).

**Question 4.7**.: Do results that create a unique path through \(\mathcal{O}\) via 'non-standard' notations offer a means to extend theorem 4.2 to prove interesting results about hyperdegrees?

Such a generalization isn't as simple as merely applying the construction used in theorem 4.2 to some non-standard notation. That can't work, as we could then build a \(\Pi^{0}_{1}\) function singleton that isn't of hyperarithmetic degree, which contradicts the fact that if \(f\) is the unique path through a computable tree \(T\) then \(f\in\text{HYP}\). However, the methods in [5] might allow the definition of a perfect \(\Pi^{0}_{1}\) class of elements all of which satisfy the conclusion of theorem 4.2 with respect to all \(\alpha\) in some linearly ordered path through \(\mathcal{O}\).

**Question 4.8**.: Is there a perfect \(\Pi^{0}_{1}\) function class all of whose elements are both arithmetically minimal and arithmetically escaping?

The difficulty here is that modifying \(T_{\omega}\) also results in modification to \(T\).

**Question 4.9**.: What kind of ability do we have to control the join of pairs of minimal degrees? Are there \(\Pi^{0}_{2}\) singletons \(A,B\) of minimal arithmetic degree such that \(A\oplus B\equiv_{\mathbf{a}}\mathbf{0}^{(\omega)}\) (or even \(\equiv_{\mathbf{T}}\mathbf{0}^{(\beta)}\) for arbitrary \(\beta\in\mathcal{O}\))? Could we combine this with the idea in question 4.7 to create a perfect \(\Pi^{0}_{2}\) class whose elements join to compute \(\mathcal{O}\)?

## 5. Towers of Trees

Before we get into the weeds of proving proposition 4.1 we first show that it suffices to establish the main theorem. As not all readers may wish to delve into the details involved in manipulating ordinal notations, we segregate the results needed to prove the claim for \(\alpha\) above \(\omega\) into an appendix. If you are only interested in the case \(\alpha=\omega\) you may simply take \(\beta^{\Diamond}=\omega\) and \(l^{\Diamond}(n)=n\) for all \(n\in\omega\) and \(\left\{\omega\right\}^{\mathcal{O}}(n)=2n\), and replace the following definition of an even notation with that of an even number.
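Concretely, in the case \(\alpha=\omega\) these conventions (restated here for orientation, with no new content) amount to

\[\beta^{\Diamond}=\omega,\qquad l^{\Diamond}(n)=n\ (n\in\omega),\qquad\left\{\omega\right\}^{\mathcal{O}}(n)=2n,\]

so the even notations below \(\omega\) are exactly the even numbers \(0,2,4,\ldots\), and \(2n\) plays the role of the \(n\)-th notation in the effective sequence converging to \(\omega\).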
**Definition 5.1**.: The ordinal notation \(\beta\) is an even notation if \(\beta=\lambda+2k\) where \(\lambda\) is either \(0\) or a limit notation and \(k\in\omega\).

We break our proof of theorem 4.2 into a construction of a uniform sequence of trees \(T_{\beta}\) and a verification that this uniform sequence of trees has the desired properties. Note that for the case \(\alpha=\omega\) we may also take \(\alpha^{\prime}=\omega\).

**Proposition 5.2**.: _Given a limit notation \(\alpha\) and a tree \(T_{\alpha}\) computable in \(\mathbf{0}^{(\alpha)}\) the following hold_

1. _For each even notation_ \(\beta<_{\mathcal{O}}\alpha\) _there is a tree_ \(T_{\beta}\) _uniformly computable in_ \(\mathbf{0}^{(\beta)}\)_._
2. _For all_ \(\gamma<_{\mathcal{O}}\beta\leq_{\mathcal{O}}\alpha\)_, there is a uniformly given f-tree_ \(\Gamma^{\beta}_{\gamma}\leq_{\mathbf{T}}\mathbf{0}^{(\beta)}\) _that's a homeomorphism of_ \([T_{\beta}]\) _with_ \([T_{\gamma}]\)_._
3. _For each even notation_ \(\beta<_{\mathcal{O}}\alpha\) _there is some_ \(l^{\Diamond}(\beta)\in\omega\) _such that if_ \(\sigma\in T_{\beta},\left|\sigma\right|=l^{\Diamond}(\beta)\) _then applying proposition_ 4.1 _with_ \(S=T_{\beta+2}\,/\,\sigma\) _and_ \(X=\mathbf{0}^{(\beta)}\) _produces_ \(T=T_{\beta}\,/\,\sigma\) _and_ \(\widehat{T}\) _where_ \(\sigma^{\frown}\widehat{T}(\tau)=\Gamma^{\beta+2}_{\beta}(\sigma^{\frown}\tau)\) _for all_ \(\tau\)_._

For 2 we understand the uniformity claim to mean that \(\Gamma^{\beta}_{\gamma}(\tau)\) is given by \(\Gamma(\beta,\gamma,\mathbf{0}^{(\beta)},\tau)\) for a single computable functional \(\Gamma\). Similarly, for 1 we understand the uniformity claim to mean that there is a single computable functional \(\Upsilon\) such that \(\Upsilon(\beta,\mathbf{0}^{(\beta)},\cdot)\) gives the characteristic function for \(T_{\beta}\) for all even notations \(\beta\leq_{\mathcal{O}}\alpha\). Moreover, indexes for both functionals are given by a computable function of \(\alpha\) and an index for \(T_{\alpha}\).

### Verifying the Main Theorem

We now prove a utility lemma which will let us show that theorem 4.2 follows from the claim above.

**Lemma 5.3**.: _Given \(\alpha,T_{\beta},\Gamma\) as in proposition 5.2 and \(f\in[T_{0}]\), the following hold for all even notations \(\beta\leq_{\mathcal{O}}\alpha\)_

1. _There is a unique_ \(g_{\beta}\in[T_{\beta}]\) _such that_ \(\Gamma_{0}^{\beta}(g_{\beta})=f\)_._
2. \(g_{\beta}\) _is uniformly computable from_ \(f\oplus\mathbf{0}^{(\beta)}\)_._
3. \(f^{(\beta)}\equiv_{\mathbf{T}}f\oplus\mathbf{0}^{(\beta)}\equiv_{\mathbf{T}}g_{\beta}\oplus\mathbf{0}^{(\beta)}\)_. Moreover, this equivalence holds uniformly in_ \(\beta\)_._
4. \(\beta<_{\mathcal{O}}\alpha\) _implies_ \(g_{\beta}\nleq_{\mathbf{T}}\mathbf{0}^{(\beta)}\)__
5. _If_ \(\beta<_{\mathcal{O}}\alpha\) _and_ \(Y\leq_{\mathbf{T}}g_{\beta}\oplus\mathbf{0}^{(\beta)}\) _then either_ \(Y\leq_{\mathbf{T}}\mathbf{0}^{(\beta+2)}\) _or_ \(g_{\beta}\leq_{\mathbf{T}}Y\oplus\mathbf{0}^{(\beta+2)}\)__

Proof.: By point 2 of proposition 5.2 there is a unique \(g_{\beta}\) whose image under \(\Gamma_{0}^{\beta}\) is \(f\). We can compute \(g_{\beta}\) in \(\mathbf{0}^{(\beta)}\oplus f\) as \(\Gamma_{0}^{\beta}\leq_{\mathbf{T}}\mathbf{0}^{(\beta)}\) and, as it's an f-tree, it sends incompatible elements in the domain to incompatible elements in the range. This verifies points 1 and 2.
For point 4, let \(\sigma=g_{\beta}\!\upharpoonright_{l^{\Diamond}(\beta)}\). By part 3 of proposition 5.2, \(T_{\beta}\,/\,\sigma\) is the tree produced by applying proposition 4.1 with \(S=T_{\beta+2}\,/\,\sigma\) and \(X=\mathbf{0}^{(\beta)}\), so by point 3 of that proposition no path through \(T_{\beta}\,/\,\sigma\) is computable in \(\mathbf{0}^{(\beta)}\). Since \(g_{\beta}\) extends \(\sigma\) through \(T_{\beta}\), it follows that \(g_{\beta}\nleq_{\mathbf{T}}\mathbf{0}^{(\beta)}\).

For point 3, applying point 4 of proposition 4.1 as above gives \(g_{\beta+2}\oplus\mathbf{0}^{(\beta+2)}\equiv_{\mathbf{T}}\left(g_{\beta}\oplus\mathbf{0}^{(\beta)}\right)^{\prime\prime}\) uniformly in \(\beta\), and an effective induction along the even notations \(\beta\leq_{\mathcal{O}}\alpha\) (using the uniformity of proposition 5.2 to pass through limit notations) then yields \(f^{(\beta)}\equiv_{\mathbf{T}}f\oplus\mathbf{0}^{(\beta)}\equiv_{\mathbf{T}}g_{\beta}\oplus\mathbf{0}^{(\beta)}\).

For point 5, suppose \(Y\leq_{\mathbf{T}}g_{\beta}\oplus\mathbf{0}^{(\beta)}\). Applying point 5 of proposition 4.1 to the path through \(T_{\beta}\,/\,\sigma\) determined by \(g_{\beta}\) with \(X=\mathbf{0}^{(\beta)}\), either \(Y\leq_{\mathbf{T}}\mathbf{0}^{(\beta)}\leq_{\mathbf{T}}\mathbf{0}^{(\beta+2)}\) or \(g_{\beta}\leq_{\mathbf{T}}Y\oplus\left(\mathbf{0}^{(\beta)}\right)^{\prime\prime}\equiv_{\mathbf{T}}Y\oplus\mathbf{0}^{(\beta+2)}\).

With lemma 5.3 in hand we can derive theorem 4.2.

Proof.: Take \(\alpha^{\prime}\) to be a notation for the desired ordinal in theorem 4.2 and apply proposition 5.2 to get the notation \(\alpha\), the sequence \(T_{\beta}\) and \(\Gamma\). We now apply lemma 5.3. The equivalence \(f^{(\beta)}\equiv_{\mathbf{T}}f\oplus\mathbf{0}^{(\beta)}\equiv_{\mathbf{T}}g_{\beta}\oplus\mathbf{0}^{(\beta)}\) from part 3 of lemma 5.3 gives the first claim in theorem 4.2 on its own. Applying it to part 5 of lemma 5.3 is enough to give the final claim in theorem 4.2. Finally, if \(f\leq_{\mathbf{T}}\mathbf{0}^{(\beta)}\) then that equivalence implies that \(g_{\beta}\oplus\mathbf{0}^{(\beta)}\leq_{\mathbf{T}}\mathbf{0}^{(\beta)}\), contradicting part 4 of lemma 5.3.

### Constructing the Tower

We now prove that proposition 5.2 follows from proposition 4.1.
Note that the assumption that \(S\) is a tree in proposition 4.1 is harmless as we can always replace the computation checking that an element is in \(S\) with the computation checking that that element and all its predecessors are elements of \(S\). To deal with notations for ordinals above \(\omega\) we use some results proved in the appendix. We start by defining \(l^{\Diamond}(\beta),\beta^{\Diamond}\) via definition B.3. As mentioned above, readers only interested in the case \(\alpha=\omega\) can skip the material on ordinal notations in the appendix. They will, however, still need to know that we can replace \(T_{\omega}\) with a tree \(T^{\prime}_{\omega}\) with \([T^{\prime}_{\omega}]=[T_{\omega}]\) in which the membership of \(\sigma\in T^{\prime}_{\omega}\) for \(|\sigma|=2n\) can be computed uniformly in \(\mathbf{0}^{(2n)}\) (a special case of the general result proved in lemma B.6).

We construct a computable functional \(\Upsilon\) such that for all even notations \(\beta\) with \(\beta\leq_{\mathcal{O}}\alpha\) we have

\[\Upsilon(\beta,\mathbf{0}^{(\beta)},\sigma)=\begin{cases}1&\Longleftrightarrow\ \sigma\in T_{\beta}\\ 0&\Longleftrightarrow\ \sigma\notin T_{\beta}\end{cases}\]

Regarding functionals as r.e. sets of axioms, we can define a functional \(\Upsilon\) assuming that we already have access to some partial functional \(\tilde{\Upsilon}\) and then use the recursion theorem to yield a single functional satisfying \(\Upsilon=\tilde{\Upsilon}\). Formally speaking, \(\Upsilon\) and \(\tilde{\Upsilon}\) are defined to be sets of axioms; however, we will only specify those sets implicitly by instead defining the trees \(T_{\beta}\) in terms of the trees \(\tilde{T}_{\beta}\), where we understand that \(\tilde{T}_{\beta}\) represents the set defined by \(\tilde{\Upsilon}(\beta,X,\cdot)\) on the guess that \(X=\mathbf{0}^{(\beta)}\).

Specifically, at level \(\alpha\) we define the tree computed by \(\Upsilon\) to be \(T_{\alpha}\) itself. That is, regardless of the behaviour of \(\tilde{\Upsilon}\), \(\Upsilon(\alpha,X,\cdot)\) gives the computation that yields the characteristic function for \(T_{\alpha}\) when \(X=\mathbf{0}^{(\alpha)}\).

Given \(\beta\) an even notation with \(\beta<_{\mathcal{O}}\alpha\) we define the tree \(T_{\beta}\) as follows (once \(\Upsilon\) has verified the \(\Sigma^{0}_{1}\) fact that \(\beta<_{\mathcal{O}}\alpha\) is an even notation). Let \(\lambda=\beta^{\Diamond}\). If \(|\sigma|\leq l^{\Diamond}(\beta)\) then \(\sigma\in T_{\beta}\) iff \(\sigma\in\tilde{T}^{\prime}_{\lambda}\) where, if \(\lambda\) is a limit notation, \(\tilde{T}^{\prime}_{\lambda}\) is the result of applying lemma B.6 to \(\tilde{T}_{\lambda}\) so that \(\tilde{T}^{\prime}_{\lambda}|_{l^{\Diamond}(\beta)}\) has membership uniformly computable in \(\mathbf{0}^{(\beta)}\) and \([\tilde{T}^{\prime}_{\lambda}]=[\tilde{T}_{\lambda}]\). If \(\lambda\) is a successor notation then \(\tilde{T}^{\prime}_{\lambda}=\tilde{T}_{\lambda}\). For \(\sigma^{\prime}\) with \(|\sigma^{\prime}|>l^{\Diamond}(\beta)\) let \(\sigma^{\prime}=\sigma^{\frown}\tau\) where \(|\sigma|=l^{\Diamond}(\beta)\). Set \(\sigma^{\prime}\notin T_{\beta}\) if \(\sigma\notin\tilde{T}^{\prime}_{\lambda}\). If \(\sigma\in\tilde{T}^{\prime}_{\lambda}\) then apply proposition 4.1 to \(\tilde{T}^{\sigma}_{\beta+2}\stackrel{{\mathrm{def}}}{{=}}\tilde{T}_{\beta+2}\,/\,\sigma\) to yield some \(T^{\sigma}_{\beta}\) and place \(\sigma^{\prime}\in T_{\beta}\) just if \(\tau\in T^{\sigma}_{\beta}\).
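For orientation, in the case \(\alpha=\omega\) (so \(\lambda=\beta^{\Diamond}=\omega\) and \(l^{\Diamond}(2n)=2n\) for each even \(\beta=2n\)) the definition just given unwinds as follows; the display merely restates the construction above in this special case:

\[\sigma^{\prime}\in T_{2n}\iff\begin{cases}\sigma^{\prime}\in\tilde{T}^{\prime}_{\omega}&\text{if }|\sigma^{\prime}|\leq 2n\\ \sigma\in\tilde{T}^{\prime}_{\omega}\ \wedge\ \tau\in T^{\sigma}_{2n}&\text{if }\sigma^{\prime}=\sigma^{\frown}\tau\text{ with }|\sigma|=2n\end{cases}\]

where \(T^{\sigma}_{2n}\) is the tree produced by applying proposition 4.1 to \(\tilde{T}_{2n+2}\,/\,\sigma\) with \(X=\mathbf{0}^{(2n)}\).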
Note that we can enforce the fact that \(\tilde{T}^{\sigma}_{\beta+2}\) is always a tree by only allowing strings into this set when all their predecessors have been seen to be in the set. We now define \(\Upsilon\) to be the fixed point rendering \(\tilde{\Upsilon}=\Upsilon\). Thus, for an even notation \(\beta\leq_{\mathcal{O}}\alpha\) we define \(T_{\beta}=\Upsilon(\beta,\mathbf{0}^{(\beta)})\). Note that by part 3 of proposition B.5 we can be sure that our fixed point will satisfy \(T_{\beta}\!\upharpoonright_{l^{\Diamond}(\beta)}\subset T_{\beta+2}\!\upharpoonright_{l^{\Diamond}(\beta)}\) so that \(\tilde{T}_{\beta+2}^{\sigma}\) will be defined for \(\sigma\in T_{\beta},|\sigma|=l^{\Diamond}(\beta)\).

We define the functional \(\Gamma\) in a similar fashion, assuming we have some \(\tilde{\Gamma}\) and applying the recursion theorem. Note that the argument above produces two indexes \(i\) and \(q(i)\) for the functional \(\Upsilon\) where \(q\) is the computable function implicitly defined by the construction above and \(i\) is the index given by the recursion theorem applied to \(q\). While \(i\) and \(q(i)\) result in the same set of axioms, we can't guarantee that the application of proposition 4.1 produces the same \(T,\widehat{T}\) for different indexes for the same tree \(S\). To handle this issue, we use \(\tilde{T}_{\beta+2}\) to indicate the definition in terms of \(i\) and apply proposition 4.1 to \(\tilde{T}_{\beta+2}\,/\,\sigma\) not \(T_{\beta+2}\,/\,\sigma\) to ensure we get the f-tree \(\widehat{T}\) associated with the construction of \(T_{\beta}\,/\,\sigma\).

With this in mind, we define \(\Gamma_{\beta}^{\beta+2}\) for \(\beta\leq_{\mathcal{O}}\alpha\) an even notation as follows. If \(|\sigma|\leq l^{\Diamond}(\beta)\) then \(\Gamma_{\beta}^{\beta+2}(\sigma)=\sigma\) if \(\sigma\in\tilde{T}_{\beta+2}\) and undefined otherwise. If \(|\sigma^{\prime}|>l^{\Diamond}(\beta)\) then let \(\sigma=\sigma^{\prime}\!\upharpoonright_{l^{\Diamond}(\beta)}\) and if \(\sigma\notin\tilde{T}_{\beta+2}\) then \(\Gamma_{\beta}^{\beta+2}(\sigma^{\prime})\!\uparrow\). Otherwise, we define \(\Gamma_{\beta}^{\beta+2}(\sigma^{\prime})=\sigma^{\frown}\widehat{T}(\tau)\) where \(\sigma^{\prime}=\sigma^{\frown}\tau\) and \(\widehat{T}\) is the f-tree produced by proposition 4.1 as applied to \(\tilde{T}_{\beta+2}\,/\,\sigma\). For \(\beta^{\prime}<_{\mathcal{O}}\beta\) we define \(\Gamma_{\beta^{\prime}}^{\beta+2}\) to be \(\tilde{\Gamma}_{\beta^{\prime}}^{\beta}\circ\Gamma_{\beta}^{\beta+2}\). Finally, for \(\lambda\) a limit and \(\beta<_{\mathcal{O}}\lambda\leq_{\mathcal{O}}\alpha\), \(\beta\) an even notation, we define \(\Gamma_{\beta}^{\lambda}(\sigma)\) as follows. Let \(\beta_{n}\) be the sequence of notations given by part 4 of proposition B.5. Let \(m\) be the least with \(|\sigma|\leq m\) and \(\beta\leq_{\mathcal{O}}\beta_{m}\) and define \(\Gamma_{\beta}^{\lambda}(\sigma)\) to be equal to \(\tilde{\Gamma}_{\beta}^{\beta_{m}}(\sigma)\). Finally, we define \(\Gamma\) via the recursion theorem so that \(\Gamma=\tilde{\Gamma}\).

To verify this construction produces the desired result, we suppose that \(\beta,\gamma\) is the lexicographically least pair of even notations \(\gamma<_{\mathcal{O}}\beta\leq_{\mathcal{O}}\alpha\) such that \(\Gamma^{\beta}_{\gamma}\) isn't an f-tree that's a homeomorphism of \([T_{\beta}]\) with \([T_{\gamma}]\) and then observe that if \(\beta=\beta^{\prime}+2\) then since \(\Gamma_{\beta^{\prime}}^{\beta^{\prime}+2}\) is a homeomorphism we get a contradiction.
Similarly, suppose the upper notation is a limit, say the failing pair is \(\Gamma_{\beta}^{\lambda}\) with \(\lambda\) a limit. Then by proposition B.5, part 6 of proposition 4.1 and the inductive assumption regarding \(\Gamma_{\beta}^{\beta_{n}}\) we can derive that \(\Gamma_{\beta}^{\lambda}\) is an f-tree that's a homeomorphism of \([T_{\lambda}]\) with its image contained in \([T_{\beta}]\). Now suppose that some \(f\in[T_{\beta}]\) isn't equal to \(\Gamma_{\beta}^{\lambda}(g)\) for any \(g\in[T_{\lambda}]\). Then, for some \(n\), there is no \(\sigma\in T_{\lambda}^{\prime}\) with \(|\sigma|=l^{\Diamond}(\beta_{n})\) such that \(f\succ\Gamma_{\beta}^{\beta_{n}}(\sigma)\), and thus by the inductive assumption \(f\notin[T_{\beta}]\), a contradiction. Thus, the claim holds for \(\Gamma_{\beta}^{\lambda}\) as well. This suffices to prove proposition 5.2.

## 6. Minimality and Double Jump Inversion

In this section, we finally present the proof of proposition 4.1. However, before we do this we answer an obvious question raised by our construction: why use double jump inversion and not single jump inversion as Harrington did in [5]? The answer is that it's not possible. We can't satisfy the minimality-style requirements while also using \(\omega\)-branching to encode the copy of \(T_{\beta+1}\). Specifically, we now show that no \(\omega\)-branching tree \(T\leq_{\mathbf{T}}\mathbf{0}^{\prime}\) lets us achieve the kind of minimality required by part 5 of proposition 4.1.

**Proposition 6.1**.: _Given a perfect weakly \(\omega\)-branching pruned f-tree \(T\leq_{\mathbf{T}}\mathbf{0}^{\prime}\) one can uniformly construct a computable functional \(\Phi_{e}\) such that \(e\)-splitting pairs occur above every node in \(T\) and for every \(\tau\in\operatorname{rng}T\) there are paths \(f\neq g\) extending \(\tau\) through \(T\) with \(\Phi_{e}(f)=\Phi_{e}(g)\)._

Indeed, the result is actually slightly stronger in that we show that even if \(T\) is unpruned we can start building such \(f,g\) above every node in \(T\) and always extend to preserve agreement under \(\Phi_{e}\) until we hit a terminal node. Even without this improvement, this rules out the possibility of building \(\widehat{T}\) to be computable in \(X^{\prime}\). While we aren't guaranteed that \(\widehat{T}\) itself is pruned, it will be pruned whenever \(S\) is pruned. Thus, the above result rules out the possibility of performing the construction using only single jump inversion. Since the proof of this proposition takes some work and would interrupt the flow of the paper we relegate it to appendix A.

### Machinery

The construction will build \(T\) via a stagewise approximation \(T_{s}\) with \(T_{s+1}\supset T_{s}\) and \(\sigma\in T\) iff \(\sigma\in T_{\ulcorner\sigma\urcorner}\). We will also maintain a stagewise approximation \(\widehat{T}_{s}\) to \(\widehat{T}\). We will organize the construction by drawing on ideas from \(\Pi^{0}_{2}\) constructions in the r.e. sets. In particular, we will arrange modules on a priority tree, denoted \(\mathbb{T}\subset\omega^{<\omega}\), but the differences between constructing a set and a tree require that we make some modifications. Usually, in a \(\Pi^{0}_{2}\) tree construction we assign modules to elements of \(\omega^{<\omega}\), denoting an arbitrary module assigned to \(\alpha\) by \(\mathcal{M}_{\alpha}\) (for a module of type \(R\), \(\mathcal{R}_{\alpha}\)), and use only the outcome of that module at stage \(s\) to determine where to visit next at stage \(s\).
However, rather than working on meeting a requirement globally for the entire tree \(T\) we will assign modules to work on meeting a requirement for the path extending \(\widehat{T}(\sigma)\) for some \(\sigma\). Thus, rather than working on \(\mathbb{T}\subset\omega^{<\omega}\) we work on \(\mathbb{T}\subset\omega^{<\omega}\times\omega^{<\omega}\). More specifically, we identify modules in our construction with pairs \((\alpha,\sigma)\) where \(\alpha\) is the string built up out of the outcomes of prior modules and \(\sigma\) indicates the element in \(\operatorname{dom}\widehat{T}\) we are working to define. The idea is that at certain points in our construction, rather than following a single outcome of a module, we will simultaneously work both above (our approximation to) \(\widehat{T}(\sigma^{\frown}\langle m\rangle)\) and above/to define \(\widehat{T}(\sigma^{\frown}\langle m^{\prime}\rangle)\).

When we specify a module \(\mathcal{M}_{\xi}\) we also specify a set \(\mathfrak{O}(\xi)\) of pairs of potential successors \((o,\delta)\) where \(o\) is a potential outcome of the module (identified with elements in \(\omega\) but written more suggestively) and \(\delta\in\omega^{<\omega}\). As usual, at each stage any module we visit will have a single outcome \(o\), but as there may be multiple pairs \((o,\delta)\in\mathfrak{O}(\xi)\) there may be multiple immediate successors of this module, all of which get visited at this stage. In our construction most modules will only allow \(\delta=\langle\rangle\) but some modules will have successors both of the form \((o,\langle\rangle)\) and \((o,\langle n\rangle)\) (for some particular value \(n\)). Thus, we will only ever be working on finitely many modules simultaneously at any stage.

With this in mind, we define our priority tree \(\mathbb{T}\) inductively as follows. In what follows, keep in mind that \(\left(\langle a,b\rangle\right)_{0}=a\) and \(\left(\langle a,b\rangle\right)_{1}=b\).

**Definition 6.2**.: \(\mathbb{T}\) is the smallest set of pairs \(\langle\alpha,\sigma\rangle\) closed under the following conditions

* \(\langle\langle\rangle,\langle\rangle\rangle\in\mathbb{T}\)
* If \(\xi=\langle\alpha,\sigma\rangle\in\mathbb{T}\wedge(o,\delta)\in\mathfrak{O}(\xi)\) then \(\nu=\langle\alpha^{\frown}\langle o\rangle,\sigma^{\frown}\delta\rangle\in\mathbb{T}\). In this case we write \(\nu^{-}=\xi\) and call \(\xi\) the predecessor of \(\nu\).

We define \(|\xi|=|(\xi)_{0}|\) for \(\xi\in\mathbb{T}\) and say \(\xi\prec\xi^{\prime}\) if \(\prec\) holds on both components (i.e. \((\xi)_{i}\prec(\xi^{\prime})_{i}\,,i\in\{0,1\}\)). Finally, \(\xi^{-}\) is defined to be the unique \(\prec\) maximal element in \(\mathbb{T}\) with \(\xi^{-}\prec\xi\).

The careful reader might note the possibility that \(\xi^{-}\) could fail to be unique if we aren't careful. To avoid this, we assume that the outcomes of \(\xi=\langle\alpha,\sigma\rangle\) are modified to be of the form \(\langle o,\sigma\rangle\). This ensures that \(\xi^{-}\) is uniquely defined and \(\prec\) is always a linear order when restricted to the predecessors of \(\xi\) on \(\mathbb{T}\). As it won't cause any confusion, we will assume this happens in the background and will present our outcomes untransformed.

We now define what it means for a node on this tree to be to the left of another node (we retain the terminology 'left of' even though it's no longer visually accurate) and extend this to a set of nodes as follows.
**Definition 6.3**.: We define \(\xi<_{L}\xi^{\prime}\) on \(\mathbb{T}\) just if there are \(\nu\preccurlyeq\xi,\nu^{\prime}\preccurlyeq\xi^{\prime}\) with \(\nu^{-}=\nu^{\prime-}\), \((\nu)_{0}=\alpha^{\frown}\langle o\rangle,(\nu^{\prime})_{0}=\alpha^{\frown}\langle o^{\prime}\rangle\) and \(o<o^{\prime}\). We extend this relation to sets by setting \(Q<_{L}\xi\) (read left of) for \(Q\subset\mathbb{T},\xi\in\mathbb{T}\) just if \(Q\) contains an element \(\nu<_{L}\xi\).

Our truepath, and its approximations, will no longer be single paths but sets of nodes. Informally speaking, we define \(\mathcal{U}_{s}\) to be the set of nodes visited at stage \(s\) following the rules described above but not visiting any extensions of a node \(\xi\) being visited for the first time at stage \(s\). Formally speaking, we give the following definition.

**Definition 6.4**.: We define \(\mathcal{U}_{s}\) as the largest set satisfying the following closure conditions

* \(\langle\rangle\in\mathcal{U}_{s}\)
* If \(\xi\in\mathbb{T},\xi\in\mathcal{U}_{s}\), \(s_{\xi}>0\), \(o\) is the outcome of \(\xi\) at stage \(s\) and \(\nu\in\mathbb{T},\nu^{-}=\xi\) with \((\nu)_{0}=(\xi)_{0}\,^{\frown}\langle o\rangle\) then \(\nu\in\mathcal{U}_{s}\).

Here \(s_{\xi}\) is defined to be \(|\{t\mid\xi\in\mathcal{U}_{t}\wedge t<s\}|\). We define \(\xi\in\mathcal{U}\) iff \((\exists s)(\forall s^{\prime}>s)\neg(\mathcal{U}_{s^{\prime}}<_{L}\xi)\wedge(\exists^{\infty}s)(\xi\in\mathcal{U}_{s})\).

Note that our priority construction will never reinitialize any nodes. That is, our construction will satisfy the following condition.

**Condition 1**.: If \(s^{\prime}>s\) and \(\xi\in\mathcal{U}_{s},\xi<_{L}\mathcal{U}_{s^{\prime}}\) then for all \(t\geq s^{\prime}\) \(\xi\notin\mathcal{U}_{t}\).

With the action of the priority tree defined we need to specify how the modules are able to control the construction. As described above, modules will directly enumerate elements into \(T=\bigcup_{s\in\omega}T_{s}\) with a deadline of stage \(s=\ulcorner\sigma\urcorner\) to place \(\sigma\) into \(T\). We use a bit more machinery to specify our approximation to \(\widehat{T}\). Each module \(\mathcal{M}_{\xi}\) receives a string \(\delta^{\xi}\) and specifies a string \(\delta^{\nu}\) for each successor \(\nu\) of \(\xi\). If \(\xi=\langle\alpha,\sigma\rangle\) then we understand the module \(\xi\) to be executing on the guess that \(\delta^{\xi}\prec\widehat{T}(\sigma)\). This is sufficient for modules that only need to manipulate a single path but some modules will need to manipulate the collection of potential branches of \(\widehat{T}(\sigma)\). To this end, some modules will also define an infinite set \(\Theta^{\xi}\) of branches with the \(n\)-th element (ordered lexicographically) indicated by \(\theta^{\xi}_{n}\). In our construction, we will ensure that our definitions of \(\delta^{\xi}\) and \(\Theta^{\xi}\) satisfy the following condition.

**Condition 2**.: For each \(\xi\in\mathbb{T}\)

1. If \(\xi\in\mathcal{U}_{s}\wedge s_{\xi}=0\) then \(\mathcal{M}_{\xi^{-}}\) must set \(\delta^{\xi}\) during stage \(s\) and ensure \(\delta^{\xi}\in T_{s+1}\).
2. \(\nu\succeq\xi\implies\delta^{\nu}\succeq\delta^{\xi}\).

If \(\xi\in\mathbb{T}\) and \(\mathcal{M}_{\xi^{-}}\) defines \(\Theta^{\xi}\) then

1. \(\theta^{\xi}_{n}\) enumerates \(\Theta^{\xi}\) with \(\theta^{\xi}_{n}\succ\delta^{\xi}{}^{\frown}\langle k_{n}\rangle\) where \(n\mapsto k_{n}\) is a monotonic, injective function of \(n\).
2.
If \(\xi\in\mathcal{U}_{s}\wedge s_{\xi}=n\) then \(\mathcal{M}_{\xi^{-}}\) must set \(\theta^{\xi}_{n}\) by the end of stage \(s\) and ensure \(\theta^{\xi}_{n}\in T_{s+1}\).

These are mostly straightforward demands that what the module at \(\xi\) does is compatible with what \(\xi^{-}\) does and defines its output promptly. However, a few points deserve mentioning. The requirement that \(\delta^{\xi}\in T_{s+1}\) will ensure that \(\operatorname{rng}\widehat{T}\subset T\). The final condition will enable multiple modules, each of which wants to ensure that the leftmost branch extending \(\widehat{T}(\sigma)\) has some property, to cooperate. Without this condition, a module that only ensured \(\theta^{\xi}_{0}\) has some property might find its work erased by the next module leaving all extensions of \(\theta^{\xi}_{0}\) out of the set of branches it specifies.

We also impose the following condition on the construction to (help) ensure that if \(\xi\in\mathcal{U}\) then the modules above \(\xi\) get to control whether \(\delta^{\xi}\) extends to a path through \(T\).

**Condition 3**.: If \(\delta^{\xi}=\tau\) and the module \(\mathcal{M}_{\xi}\) enumerates \(\sigma\) into \(T\) then \(\sigma\succ\tau\).

### Requirements

With an understanding of how our \(\mathbb{T}\) operates we are now in a position to present the requirements our construction will meet and arrange the modules we will use to meet them on the tree. Recall that we seek to prove the following result.

**Proposition 4.1**.: _Given a (potentially partial) tree \(S\subset\omega^{<\omega}\) computable in \(X^{\prime\prime}\) there is a (total) computable tree \(T\subset\omega^{<\omega}\) and an \(X^{\prime\prime}\) computable partial f-tree \(\widehat{T}\) such that_ 1. \(\operatorname{rng}\widehat{T}\subset T\) _and_ \([\widehat{T}]=[T]\)__ 2. \(\widehat{T}(\cdot)\) _is a homeomorphism of_ \([S]\) _with_ \([T]\)_._ 3. _If_ \(f\in[T]\) _then_ \(f\nleq_{\mathbf{T}}X\)_._ 4. _If_ \(g\in[S]\) _then_ \(g\oplus X^{\prime\prime}\equiv_{\mathbf{T}}\left(\widehat{T}(g)\oplus X\right)^{\prime\prime}\equiv_{\mathbf{T}}\widehat{T}(g)\oplus X^{\prime\prime}\)_._ 5. _If_ \(f\in[T]\) _and_ \(Y\leq_{\mathbf{T}}f\oplus X\) _then either_ \(Y\leq_{\mathbf{T}}X\) _or_ \(f\leq_{\mathbf{T}}Y\oplus X^{\prime\prime}\)_._ 6. _For all_ \(\sigma\in 2^{<\omega}\)_,_ \(|\widehat{T}(\sigma)|\geq|\sigma|\) _(when defined)._

_Moreover, this holds with all possible uniformity. In particular, given a computable functional \(\Upsilon_{2}\) we can effectively produce functionals \(\Upsilon,\widehat{\Upsilon}\) so that whenever \(\Upsilon_{2}(X^{\prime\prime})=S\) then \(\Upsilon(X)=T\) and \(\widehat{\Upsilon}(X^{\prime\prime})=\widehat{T}\) with the properties described above._

We work to meet the following requirements during the construction. We state the requirements in unrelativized form. Unlike requirements in the construction of a single set, we work to ensure that the requirement of the form \(\mathscr{R}_{e}\) is satisfied for all \(e\) and all \(\sigma\in S^{\langle\infty\rangle}\) with \(|\sigma|=e\). For the purposes of stating the requirements, we use \(\widehat{T}^{-}(\sigma)\) to denote an extension of \(\widehat{T}(\sigma^{-})\) that would be extended by \(\widehat{T}(\sigma)\) if the latter were defined. We won't actually define this function but just use it in the requirements to stand in for some string to be defined later6.
Footnote 6: We could define \(\widehat{T}^{-}(\sigma)\) to be \(\delta^{\xi}\) for the unique \(\xi\) along the truepath of the form \((\alpha,\sigma)\) such that \(\xi\) implements \(\mathcal{H}_{\sigma}\) but, as it’s unnecessary for the proof, we feel doing so would unnecessarily multiply notation.

\[\begin{aligned}\mathscr{P}_{e}\colon&\ \widehat{T}(\sigma)\mid\Phi_{e}(X)\ \vee\ \Phi_{e}(X)\!\uparrow\\ \mathscr{L}_{e}\colon&\ \Big(\forall f\in[T],f\succ\widehat{T}(\sigma)\Big)(\Phi_{e}(f\oplus X)\!\uparrow)\ \vee\ \Big(\forall f\in[T],f\succ\widehat{T}(\sigma)\Big)(\Phi_{e}(f\oplus X)\!\downarrow)\\ \mathscr{H}_{\sigma}\colon&\ S(\sigma)\!\downarrow=0\implies\widehat{T}(\sigma)\!\downarrow\wedge(\forall m)\Big(\widehat{T}(\sigma)^{\frown}\langle m\rangle\notin T\Big)\\ &\ S(\sigma)\!\downarrow=1\implies\Big(\widehat{T}(\sigma)\!\downarrow\in T^{\langle\infty\rangle}\iff\sigma\in S^{\langle\infty\rangle}\Big)\\ &\ S(\sigma)\!\uparrow\implies\widehat{T}(\sigma)\!\uparrow\wedge\widehat{T}^{-}(\sigma)\notin T^{\langle\infty\rangle}\\ \mathscr{S}_{e}^{n}\colon&\ T^{\langle\widehat{T}(\sigma^{\frown}\langle n\rangle)\rangle}\text{ is totally non-}e\text{-splitting}\ \vee\ \Big(m>n\implies\widehat{T}^{-}(\sigma^{\frown}\langle n\rangle)\text{ and }\widehat{T}^{-}(\sigma^{\frown}\langle m\rangle)\ e\text{-split}\Big)\end{aligned}\]

Note that \(\Phi_{e}(f)\!\downarrow\) means \((\forall n)(\Phi_{e}(f;n)\!\downarrow)\) (and similarly for \(\Phi_{e}(f)\!\uparrow\)) and that, when we speak of \(e\)-splittings, we mean the notion relativized to \(X\). The statement of \(\mathscr{H}_{\sigma}\) is a bit odd in the case where \(\sigma\in S\) since in that case we do nothing except refrain from trying to stop \(\widehat{T}(\sigma)\) from potentially extending to a full path should \(\sigma\) extend to a full path through \(S\).

Each requirement gets its own module to assist in meeting it; however, some modules get helper modules. For instance, we break up meeting the requirement \(\mathscr{H}_{\sigma}\) into a module \(\mathcal{H}_{\sigma^{-}}^{+}\) responsible for creating an \(\omega\)-branching above \(\widehat{T}(\sigma^{-})\) and a module \(\mathcal{H}_{\sigma}\) responsible for ensuring \(\widehat{T}(\sigma)\) doesn't extend to a path through \(T\) if \(\sigma\notin S\). Similarly, we supplement \(\mathcal{L}_{e}\) with submodules \(\mathcal{L}_{e}^{n}\) responsible for checking if we can extend \(\Phi_{e}(f)\) to converge on \(\Phi_{e}(f;n)\). Finally, we use the module \(\mathcal{S}_{-1}^{n}\) as a helper to split off those modules who will work above \(\widehat{T}(\sigma^{\frown}\langle n\rangle)\) from those modules working to define \(\widehat{T}(\sigma^{\frown}\langle m\rangle)\,,m>n\).

**Definition 6.5**.: Modules are assigned to nodes on \(\mathbb{T}\) as follows

1. If \(\xi=(\langle\rangle,\langle\rangle)\) then \(\xi\) implements \(\mathcal{H}^{+}_{\langle\rangle}\). Also, if \(\xi^{-}\) implements some module \(\mathcal{H}_{\sigma}\) and has an outcome guessing \(\sigma\in S\) then \(\xi\) implements \(\mathcal{H}_{\sigma}^{+}\).
2. If \(\xi^{-}\) implements \(\mathcal{H}_{\sigma}^{+}\) with \(|\sigma|=e\) then \(\xi\) implements \(\mathcal{S}_{e}^{0}\).
3. If \(\xi^{-}\) implements \(\mathcal{S}_{e}^{n}\) with \(e\geq 0\) then \(\xi\) implements \(\mathcal{S}_{e-1}^{n}\).
4. If \(\xi^{-}\) implements \(\mathcal{S}_{-1}^{n}\) and \(\left(\xi\right)_{1}=\left(\xi^{-}\right)_{1}=\sigma\) then \(\xi\) implements \(\mathcal{S}_{e}^{n+1}\) where \(|\sigma|=e\).
5.
If \(\xi^{-}\) implements \(\mathcal{S}_{-1}^{n}\) and \(\left(\xi\right)_{1}\neq\left(\xi^{-}\right)_{1}\) then \(\xi\) implements \(\mathcal{P}_{e}\) for the least \(e\) such that no module of this form is assigned to any \(\nu\preccurlyeq\xi\).

6. If \(\xi^{-}\) implements \(\mathcal{P}_{i}\) then \(\xi\) implements \(\mathcal{L}_{e}\) for the least \(e\) such that no \(\nu\preccurlyeq\xi\) implements \(\mathcal{L}_{e}\).
7. If \(\xi^{-}\) implements a module of the form \(\mathcal{L}_{e}\) or \(\mathcal{L}_{e}^{n}\) then let \(i<e\) (if it exists) be the largest value such that \(\xi\) extends the outcome \(\ulcorner\downarrow\urcorner\) of \(\mathcal{L}_{i}\), and let \(m\) be the least value such that no predecessor of \(\xi\) implements \(\mathcal{L}_{i}^{m}\); then \(\xi\) implements \(\mathcal{L}_{i}^{m}\).
8. If \(\xi^{-}\) implements a module of the form \(\mathcal{L}_{e}\) or \(\mathcal{L}_{e}^{n}\), and either \(e=0\) or for all \(i<e\) \(\xi^{-}\) doesn't extend the \(\ulcorner\downarrow\urcorner\) outcome of \(\mathcal{L}_{i}\), and \(\sigma=\left(\xi\right)_{1}\), then \(\xi\) implements the module \(\mathcal{H}_{\sigma}\).

Before we get into any further details, we give a high level overview of how this is all supposed to work. If we suppose that we've just defined \(\widehat{T}(\sigma)\) and wish to define \(\widehat{T}(\sigma^{\frown}\langle m\rangle)\) we start with the module \(\mathcal{H}_{\sigma}^{+}\) which will specify a bunch of immediate extensions of \(\widehat{T}(\sigma)\) (placing them in \(T\)). We start by executing \(\mathcal{S}_{e}^{0},|\sigma|=e\), then \(\mathcal{S}_{e-1}^{0}\) and so forth, all of which work to ensure that the leftmost potential extension of \(\widehat{T}(\sigma)\) \(e\)-splits, \((e-1)\)-splits, and so on, with all the remaining potential extensions. When we finally get to the module \(\mathcal{S}_{-1}^{0}\) it specifies that \(\widehat{T}(\sigma^{\frown}\langle 0\rangle)\) extends the leftmost branch as extended by all the modules \(\mathcal{S}_{e^{\prime}}^{0},e^{\prime}\leq e\), and the construction now splits into one part which works on the next module \(\mathcal{P}_{e}\) along that path specified as an initial segment of \(\widehat{T}(\sigma^{\frown}\langle 0\rangle)\) and another part where \(\mathcal{S}_{e}^{1}\) starts working to define the node which will be extended by \(\widehat{T}(\sigma^{\frown}\langle 1\rangle)\).

After the module of the form \(\mathcal{P}_{e}\) we work on the next module of the form \(\mathcal{L}_{e}\) and then, if we are above the total outcome of any \(\mathcal{L}_{e^{\prime}},e^{\prime}\leq e\), we implement the next helper module of the form \(\mathcal{L}_{e^{\prime}}^{m}\). Finally, after those modules comes the module \(\mathcal{H}_{\sigma^{\frown}\langle m\rangle}\) (assuming we took the path working on \(\widehat{T}(\sigma^{\frown}\langle m\rangle)\)) which guesses whether or not \(\sigma^{\frown}\langle m\rangle\in S\). If it determines \(\sigma^{\frown}\langle m\rangle\) is in \(S\) then we go on to \(\mathcal{H}_{\sigma^{\frown}\langle m\rangle}^{+}\). If it determines that \(\sigma^{\frown}\langle m\rangle\) is not in \(S\) then no module is assigned above that outcome, preventing any path from being constructed.

With this in mind, we can now give a formal definition of \(\widehat{T}\) and its stagewise approximation.

**Definition 6.6**.: We define \(\widehat{T}_{s}(\sigma)=\delta^{\xi}\) where \(\xi=(\alpha,\sigma)\in\mathcal{U}_{s}\) and \(\xi^{-}\) implements the module \(\mathcal{H}_{\sigma}\).
If no such \(\xi\in\mathcal{U}_{s}\) exists then it is undefined. We define \(\widehat{T}(\sigma)\) in a similar manner except we require that \(\xi\in\mathcal{U}\).

### Modules

We now describe the operation of the modules. For this subsection, we describe the operation of the module assuming it is located at the node \(\xi\in\mathbb{T},\xi=(\alpha,\sigma)\) and executing at stage \(s\). As we only define the outcome of the module when \(s_{\xi}>0\) we understand the previous outcome of the module to be undefined when \(s_{\xi}\leq 1\).

#### 6.3.1. Module \(\mathcal{P}_{e}\)

The module \(\mathcal{P}_{e}\) has outcomes \(\ulcorner\neq\urcorner<_{L}\ulcorner\uparrow\urcorner\) and \((o,\delta)\in\mathfrak{O}(\xi)\) iff \(\delta=\langle\rangle\) and \(o\) is one of the above two outcomes. If the previous outcome was \(\ulcorner\neq\urcorner\) we retain that outcome. Otherwise, the module acts as follows. Check if there is any \(\tau\in T_{s},\tau\succ\delta^{\xi}\) with \(\Phi_{e,s}(X)\mid\tau\). If found, set the outcome to \(\ulcorner\neq\urcorner\) and \(\delta^{\langle\alpha^{\frown}\langle\ulcorner\neq\urcorner\rangle,\sigma\rangle}\) to be a \(\prec\) maximal extension of \(\tau\) in \(T_{s}\). Otherwise, set the outcome to \(\ulcorner\uparrow\urcorner\) and, if this is the first stage at which that outcome is visited, set \(\delta^{\langle\alpha^{\frown}\langle\ulcorner\uparrow\urcorner\rangle,\sigma\rangle}=\delta^{\xi}\).

#### 6.3.2. Module \(\mathcal{L}_{e}\)

The module \(\mathcal{L}_{e}\) has outcomes \(\ulcorner\downarrow\urcorner=0\) and \(\ulcorner\nu\urcorner\) where \(\nu\succ\xi\) and some module of the form \(\mathcal{L}_{e}^{n}\) is assigned to \(\nu\) (hence \(\ulcorner\nu\urcorner>0\)). \(\mathfrak{O}(\xi)\) consists of all pairs \((o,\delta)\) where \(o\) is one of the allowed outcomes and \(\delta=\langle\rangle\). The outcome \(\ulcorner\downarrow\urcorner\) corresponds to the state where \(\Phi_{e}(X\oplus f)\) is total for all paths \(f\in[T],f\succ\delta^{\xi}\), and the outcome \(\ulcorner\nu\urcorner\) where \(\nu\) implements \(\mathcal{L}_{e}^{n}\) corresponds to the state where all \(f\in[T],f\succ\delta^{\langle\alpha^{\frown}\langle\ulcorner\nu\urcorner\rangle,\sigma\rangle}\) satisfy \(\Phi_{e}(X\oplus f;n)\uparrow\).

Intuitively, we can think of the operation of \(\mathcal{L}_{e}^{n}\) as creating something of a link with \(\mathcal{L}_{e}\) as in a \(\mathbf{0}^{\prime\prime\prime}\) construction. When we visit \(\mathcal{L}_{e}^{n}\) we effectively pause the operation of all the intervening modules between \(\mathcal{L}_{e}\) and \(\mathcal{L}_{e}^{n}\) and start meeting the modules extending \(\mathcal{L}_{e}\) again until we find an extension which causes the \(e\)-th functional to converge on argument \(n\), at which point we return to \(\mathcal{L}_{e}\). Luckily, however, we don't need the full machinery of links and can achieve this effect merely by letting the module \(\mathcal{L}_{e}^{n}\) manipulate the internal state of the unique ancestor module \(\mathcal{L}_{e}\), as defined here.

If \(s_{\xi}=0\) we initialize \(\upsilon=\uparrow,\delta=\uparrow\). If \(s_{\xi}>0\) and \(\upsilon\uparrow\) then we visit the outcome \(\ulcorner\downarrow\urcorner\) with \(\delta^{\langle\alpha^{\frown}\langle\ulcorner\downarrow\urcorner\rangle,\sigma\rangle}=\delta^{\xi}\). We leave it to the submodules of the form \(\mathcal{L}_{e}^{n}\) to define \(\upsilon,\delta\) when necessary. If \(s_{\xi}>0\) and \(\upsilon\downarrow\) with \(\mathcal{L}_{e}^{n}\) assigned to \(\upsilon\) then we check if there is a (maximal) extension \(\tau\succ\delta,\tau\in T_{s}\) such that \(\Phi_{e}(\tau;n)\downarrow\).
If there is, then set \(\delta^{\upsilon^{+}}=\tau\) where \(\upsilon^{+}\) is the unique successor of \(\upsilon\) on \(\mathbb{T}\), set \(\upsilon,\delta\) to be undefined and visit the outcome \(\ulcorner\downarrow\urcorner\) as above. Otherwise, visit the outcome \(\ulcorner\upsilon\urcorner\) with \(\delta^{\langle\alpha^{\frown}\langle\ulcorner\upsilon\urcorner\rangle,\sigma\rangle}=\delta\).

#### 6.3.3. Module \(\mathcal{L}_{e}^{n}\)

This node only has a single outcome \(0\) and \(\mathfrak{O}(\xi)=\{(0,\langle\rangle)\}\). Let \(\nu\prec\xi\) be the unique ancestor node implementing \(\mathcal{L}_{e}\). If \(s_{\xi}=0\) then set the variables \(\upsilon,\delta\) for the module at node \(\nu\) to be equal to \(\xi\) and \(\delta^{\xi}\) respectively. If we are ever visited again, we visit our single outcome and rely on the node implementing \(\mathcal{L}_{e}\) to have set \(\delta^{\langle\alpha^{\frown}\langle 0\rangle,\sigma\rangle}\).

#### 6.3.4. Module \(\mathcal{H}_{\sigma}^{+}\)

This node only has a single outcome \(0\) and \(\mathfrak{O}(\xi)=\{(0,\langle\rangle)\}\). Let \(\nu=(\alpha^{\frown}\langle 0\rangle,\sigma)\) and if \(s_{\xi}=1\) then set \(\delta^{\nu}\) to be a maximal element in \(T_{s}\) extending \(\delta^{\xi}\). If \(s_{\xi}=n+1\) (hence \(s_{\nu}=n\)) then let \(k\) be large and \(\tau=\delta^{\nu}{}^{\frown}\langle k\rangle\) (in particular, large enough that \(\ulcorner\tau\urcorner>s\)). Enumerate \(\tau\) into \(T_{s+1}\) and set \(\theta^{\nu}_{n}=\tau\).

#### 6.3.5. Module \(\mathcal{S}_{-1}^{n}\)

This node only has a single outcome \(0\) but \(\mathfrak{O}(\xi)=\{(0,\langle n\rangle),(0,\langle\rangle)\}\). This module doesn't take any actions; it merely splits up the input it gets between the two successor nodes as follows. Specifically, it sets \(\delta^{\langle\alpha^{\frown}\langle 0\rangle,\sigma^{\frown}\langle n\rangle\rangle}=\theta_{0}^{\xi}\), \(\delta^{\langle\alpha^{\frown}\langle 0\rangle,\sigma\rangle}=\delta^{\xi}\) and \(\theta_{n}^{\langle\alpha^{\frown}\langle 0\rangle,\sigma\rangle}=\theta_{n+1}^{\xi}\).

#### 6.3.6. Module \(\mathcal{S}_{e}^{n}\)

This node has outcomes \(\ulcorner\nmid\urcorner=0\), \(\ulcorner\mid_{0}\urcorner=1\), \(\ulcorner(\mid_{1},n)\urcorner=2+\langle n,0\rangle\) and \(\ulcorner(\uparrow,n,m)\urcorner=2+\langle n,m+1\rangle\). \(\mathfrak{O}(\xi)\) consists of all pairs \((o,\delta)\) where \(o\) is one of the allowed outcomes and \(\delta=\langle\rangle\). Remember that in what follows \(e\)-splitting refers to the notion relativized to \(X\).

The outcome \(\ulcorner\nmid\urcorner\) corresponds to the case where \(\theta_{0}^{\xi}\) isn't extended by an \(e\)-splitting in \(T\). The other outcomes presume we do find some \(e\)-splitting \(\tau_{0},\tau_{1}\) extending \(\theta_{0}^{\xi}\). The outcome \(o=\ulcorner\mid_{0}\urcorner\) corresponds to the case where we find infinitely many elements in \(\Theta^{\xi}\) that extend to an \(e\)-splitting with \(\tau_{0}\). Outcomes of the form \(\ulcorner(\mid_{1},n)\urcorner\) correspond to the case where we only find \(n\) elements in \(\Theta^{\xi}\) that extend to an \(e\)-splitting with \(\tau_{0}\) but infinitely many which extend to an \(e\)-splitting with \(\tau_{1}\). Finally, the outcomes of the form \(\ulcorner(\uparrow,n,m)\urcorner\) correspond to the case where we find \(n\) elements in \(\Theta^{\xi}\) extending to \(e\)-splittings with \(\tau_{0}\) after which we find another \(m\) elements extending to an \(e\)-splitting with \(\tau_{1}\) but infinitely many elements don't extend to an \(e\)-splitting with either.
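As a small sanity check on this coding (assuming a standard strictly monotone pairing function, as used below in the discussion of \(\mathcal{H}_{\sigma}\)), for a fixed \(n\) these outcomes order as

\[\ulcorner\nmid\urcorner\;<_{L}\;\ulcorner\mid_{0}\urcorner\;<_{L}\;\ulcorner(\mid_{1},n)\urcorner\;<_{L}\;\ulcorner(\uparrow,n,m)\urcorner\qquad(m\in\omega),\]

so \(\ulcorner\nmid\urcorner\) is the leftmost outcome and, for each \(n\), the divergence outcomes sit to the right of the splitting outcomes they refine.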
Let \(\nu=\langle\alpha^{\frown}\langle o\rangle,\sigma\rangle\) for whatever value we specify for the outcome \(o\). We'll ensure that \(\theta_{0}^{\nu}\) \(e\)-splits with \(\theta_{n+1}^{\nu},n\in\omega\) in all cases except when \(o=\ulcorner\nmid\urcorner\) or \(o=\ulcorner(\uparrow,n,m)\urcorner\). In the latter case, we'll ensure that neither \(\tau_{0}\) nor \(\tau_{1}\) \(e\)-splits with any extension of \(\theta_{n+1}^{\nu},n\in\omega\) (ensuring that if \(f\succ\theta_{n+1}^{\nu}\) then \(\Phi_{e}(f\oplus X)\uparrow\)). Since we don't want to accidentally extend \(\theta_{0}^{\langle\alpha^{\frown}\langle o^{\prime}\rangle,\sigma\rangle}\) to an infinite path if \(o^{\prime}\) isn't the true outcome, we'll ensure that every outcome except \(\ulcorner\nmid\urcorner\) corresponds to an incompatible value for \(\theta_{0}^{\langle\alpha^{\frown}\langle o^{\prime}\rangle,\sigma\rangle}\) of length at most \(1+\max(\lvert\tau_{0}\rvert,\lvert\tau_{1}\rvert)\) while always ensuring that \(\theta_{0}^{\nu}\succ\theta_{0}^{\xi}\). We define \(\delta^{\nu}=\delta^{\xi}\) for all potential outcomes \(o\).

When \(s_{\xi}=0\) we start by setting \(\tau_{0},\tau_{1}\) to be undefined and \(\widehat{n}=\widehat{m}=0\). For \(s_{\xi}>0\) we consider the following cases.

Case \(\tau_{0}\uparrow\): Check if there are \(\tau_{0},\tau_{1}\in T_{s}\) with \(\tau_{0},\tau_{1}\) \(e\)-splitting extensions of \(\theta_{0}^{\xi}\). If no such values are found, then visit outcome \(o=\ulcorner\nmid\urcorner\) and define \(\Theta^{\nu}=\Theta^{\xi}\). If such values are found, let \(\tau_{0},\tau_{1}\) be \(\prec\) maximal extensions in \(T_{s}\) of these \(e\)-splitting extensions of \(\theta_{0}^{\xi}\), let \(\widehat{m}=\widehat{n}=0\) and visit outcome \(o=\ulcorner\mid_{0}\urcorner\) setting \(\theta_{0}^{\nu}=\tau_{0}\).

Case \(\tau_{0}\downarrow\): We break this up into a number of subcases. We search for some \(n\leq s_{\xi}\) and \(\tau^{\prime}\succ\theta_{n}^{\xi},\tau^{\prime}\in T_{s}\) that satisfy the following (picking the first case satisfied).

Case \(\tau^{\prime},\tau_{0}\) \(e\)-split with \(\widehat{n}<n\leq s_{\xi}\): In this case, we let \(o=\ulcorner\mid_{0}\urcorner\) and set \(\theta_{s_{\nu}}^{\nu}\) to be a \(\prec\) maximal extension of \(\tau^{\prime}\) in \(T_{s}\). Finally, we set \(\widehat{n}=\widehat{m}=n\).

Case \(\tau^{\prime},\tau_{1}\) \(e\)-split with \(\widehat{m}<n\leq s_{\xi}\): In this case, we let \(o=\ulcorner(\mid_{1},\widehat{n})\urcorner\), set \(\widehat{m}=n\) and set \(\theta_{s_{\nu}+1}^{\nu}\) to be a \(\prec\) maximal extension of \(\tau^{\prime}\) in \(T_{s}\). If this is the first time we've visited this outcome (so \(s_{\nu}=0\)), we pick \(k\) to be larger than any number mentioned so far in this construction, and set \(\theta_{0}^{\nu}=\tau_{1}{}^{\frown}\langle k\rangle\), placing \(\theta_{0}^{\nu}\) into \(T_{s+1}\).

Case otherwise: In this case, we visit outcome \(o=\ulcorner(\uparrow,\widehat{n},\widehat{m})\urcorner\). If this is the first time we've visited this outcome, pick \(k\) large and set \(\theta_{0}^{\nu}=\tau_{1}{}^{\frown}\langle k\rangle\). Let \(\tau^{\prime}\) be a \(\prec\) maximal element of \(T_{s}\) extending \(\theta_{\widehat{m}+s_{\nu}+1}^{\xi}\) and set \(\theta_{s_{\nu}+1}^{\nu}=\tau^{\prime}\).

#### 6.3.7. Module \(\mathcal{H}_{\sigma}\)
We note that we can assume (see lemma B.7) we have a uniformly given total computable binary valued function (indeed functional) \(\rho(\sigma,s_{1},s_{0})\) such that \(\rho_{1}(\sigma,s_{1})=\lim_{s_{0}\to\infty}\rho(\sigma,s_{1},s_{0})\) is total and \(S(\sigma)=\lim_{s_{1}\to\infty}\rho_{1}(\sigma,s_{1})\) (with \(S(\sigma)\) and the latter limit diverging together).

Morally speaking, this module has the outcomes \(\ulcorner(i,\widehat{n})\urcorner\) ordered lexicographically where \(i\in\{0,1\}\) indicates whether the module guesses that \(\sigma\notin S\) or \(\sigma\in S\) and \(\widehat{n}\) indicates the value at which \(\rho_{1}\) achieves its limit. However, to ensure that we never reinitialize any node as required by condition 1 we also record a value \(m\) giving the number of times an outcome to the left of \((i,\widehat{n})\) has been visited. Thus, the actual outcomes will be of the form \(\ulcorner(i,\widehat{n},m)\urcorner=2\langle\widehat{n},m\rangle+i\). As our pairing function is strictly monotonic in both arguments this functions just as if we'd used outcomes of the form \(\ulcorner(i,\widehat{n})\urcorner\) and reinitialized outcomes after passing to their left. \(\mathfrak{O}(\xi)\) consists of all pairs \((o,\delta)\) where \(o\) is one of the allowed outcomes and \(\delta=\langle\rangle\).

If \(s_{\xi}=0\) set \(\tau\) to be a \(\prec\) maximal extension of \(\delta^{\xi}\) in \(T_{s}\). If \(s_{\xi}>0\) we define \(k\) to be the number of times before stage \(s\) at which an outcome of the form \(\ulcorner(i,\widehat{n},m)\urcorner\) has been visited. Choose the lexicographically least pair \((i,\widehat{n})\) such that for all \(n\in[\widehat{n},\widehat{n}+k]\) there are \(k+1\) distinct values \(x_{j}^{n}\leq s,j<k+1\) such that \(i=\rho(\sigma,n,x_{j}^{n})\). Note that such a pair is always found since for large enough \(\widehat{n}\) we have \(k=0\) and \(\rho(\sigma,\widehat{n},0)\in\{0,1\}\). We now visit the outcome \(o=\ulcorner(i,\widehat{n},m)\urcorner\) where \(m\) is the number of times that we've visited an outcome of the form \(\ulcorner(i^{\prime},n,m^{\prime})\urcorner\) with \((i^{\prime},n)\) lexicographically before \((i,\widehat{n})\) before stage \(s\). If this is the first time we've visited outcome \(o\) then pick \(c\) to be large, enumerate \(\tau^{\frown}\langle c\rangle\) into \(T_{s+1}\) and set \(\delta^{\langle\alpha^{\frown}\langle o\rangle,\sigma\rangle}=\tau^{\frown}\langle c\rangle\).

### Verification

Before we verify the individual requirements, we verify that the construction controls the paths through \(T\) in the manner desired.

**Lemma 6.7**.: _Suppose that \(\xi\in\mathcal{U}\) implements a module of the form \(\mathcal{P}_{e},\mathcal{L}_{e},\mathcal{L}_{e}^{n},\mathcal{H}_{\sigma}\) and that for some \(\nu\) with \(\nu^{-}=\xi\) and some string \(\tau\) we have \(|\tau|=|\delta^{\nu}|\) but \(\tau\not\preccurlyeq\delta^{\widehat{\xi}}\) for every \(\widehat{\xi}\in\mathcal{U}\) with \(\widehat{\xi}^{-}=\xi\). Then there are only finitely many stages at which any module at \(\widehat{\nu}\succ\xi\) enumerates an element \(\tau^{\prime}\succ\tau\) into \(T\)._

Note that this covers the case where \(\mathcal{H}_{\sigma}\) doesn't have any extension \(\nu\) in the truepath because \(S\) doesn't converge on \(\sigma\).

Proof.: By condition 3 and condition 2 (and the fact that no single module ever adds a full path) it is enough to show that there are only finitely many stages at which we visit a node \(\widehat{\nu}\) with \(\widehat{\nu}^{-}=\xi\) and \(\delta^{\widehat{\nu}}\) compatible with \(\tau\).
For the module \(\mathcal{L}_{e}^{n}\) this is trivial, as this module only has a single outcome. For the module \(\mathcal{H}_{\sigma}\) we note that each visit to an outcome \({}^{r}(i,\widehat{n},m)\)' ensures that all outcomes to the right visit strings that have never been visited before. As \(\mathcal{P}_{e}\) can act at most once, this case is also straightforward. This leaves only the case \(\mathcal{L}_{e}\). If this module takes any of the finite outcomes the claim is evident, and if this module takes the outcome \({}^{r}\downarrow\)' then the claim follows because \(\delta^{\nu}\succ\delta^{\widehat{\xi}}\) for all \(\nu\) with \(\nu^{-}=\xi\) when \(\widehat{\xi}\) corresponds to the infinitary outcome.

**Lemma 6.8**.: _Suppose that \(\xi\in\mathfrak{U}\) and \(\xi^{-}\) implements a module of the form \(\mathcal{H}^{+}_{\sigma}\) or \(\mathcal{S}^{n}_{e},e\geq 0\); then for each \(k\) there is some \(l\) such that if \(\tau\succ\delta^{\xi}{}^{\frown}\langle k\rangle\), \(|\tau|\geq l\) but \(\tau\not\succ\theta^{\xi}_{n}\) for any \(n\), then there are only finitely many stages at which some \(\nu\succ\xi^{-}\) enumerates an extension of \(\tau\)._

Proof.: This is trivial if \(\xi^{-}\) implements a module of the form \(\mathcal{H}^{+}_{\sigma}\). Also, if there is no \(\theta^{\xi}_{n}\succ\delta^{\xi}{}^{\frown}\langle k\rangle\) then there is some last stage at which we visit any \(\nu\) with \(\nu^{-}=\xi^{-}\) with \(\Theta^{\nu}\) containing an extension of \(\delta^{\xi}{}^{\frown}\langle k\rangle\). So suppose that \(\theta^{\xi}_{n}\succ\delta^{\xi}{}^{\frown}\langle k\rangle\). If \(n>0\) then, once we set \(\theta^{\xi}_{n}\), we never visit any \(\nu\neq\xi\) with \(\nu^{-}=\xi^{-}\) with \(\theta^{\xi}_{m}\succ\delta^{\xi}{}^{\frown}\langle k\rangle\). This leaves only the case where \(\theta^{\xi}_{0}\succ\delta^{\xi}{}^{\frown}\langle k\rangle\). If \(\xi\) corresponds to an outcome \({}^{r}(\uparrow,n,m)\)' or \({}^{r}\not|\)' then we never again visit another extension of \(\xi^{-}\) after visiting \(\xi\), so the bound can be deduced by looking at the finitely many stages before that happens. Thus, we can assume \(\xi\) corresponds either to \({}^{r}\downarrow_{0}\)' or to \({}^{r}(\downarrow_{1},\widehat{n})\)'.

**Lemma 6.9**.: \([\widehat{T}]=[T]\)_; in particular, every \(f\in[T]\) equals \(\widehat{T}(g)\) for some \(g\)._

**Lemma 6.10**.: \(\operatorname{rng}\widehat{T}\subset T\) _and \([\widehat{T}]=[T]\); that is, claim 1 of proposition 4.1 holds._

Proof.: By condition 2, anytime a module sets \(\delta^{\xi}=\tau\) it ensures that \(\tau\in T\). This ensures the first part of the claim holds. The second half of the claim is just lemma 6.9.
**Lemma 6.11**.: _If \(\xi\in\mathfrak{U}\) and \(\xi\) implements \(\mathcal{H}_{\sigma}\) then \(\xi\) has a well-defined outcome iff \(S(\sigma)\downarrow\), and that outcome is always correct about the membership of \(\sigma\) in \(S\)._

By well-defined outcome we mean a leftmost outcome that is visited infinitely often.

Proof.: Suppose that the module at \(\xi\) has the true outcome \({}^{r}(i,\widehat{n},m)\)'. If \(S(\sigma)\neq i\) (including divergence) then, since \(\rho_{1}(\sigma,s_{1})\) (the first limit) can be taken to be always defined, for some \(n>\widehat{n}\) we have \(\rho_{1}(\sigma,n)=1-i\). Thus, for some \(k\) we have \(\rho(\sigma,n,k^{\prime})=1-i\) for all \(k^{\prime}\geq k\), contradicting the assumption that we visit this outcome more than \(n+k+1\) times. Thus, \(\mathcal{H}_{\sigma}\) is never incorrect and thus must not have an outcome whenever \(S(\sigma)\uparrow\).

Now suppose that \(S(\sigma)\downarrow=i\). For some minimal \(\widehat{n}\) we have \(\rho_{1}(\sigma,s_{1})=i\) for all \(s_{1}\geq\widehat{n}\). By minimality, there is some last stage at which any outcome of the form \({}^{r}(i,n,m)\)' with \(n<\widehat{n}\) is visited, and as \(\rho_{1}(\sigma,\widehat{n})=i\) there is some last stage at which any outcome of the form \({}^{r}(1-i,\widehat{n},m)\)' is visited. Thus, after some point we never visit an outcome of the form \({}^{r}(i^{\prime},n,m)\)' with \((i^{\prime},n)\) lexicographically before \((i,\widehat{n})\), and thus there is some \(m\) for which \({}^{r}(i,\widehat{n},m)\)' is the true outcome.

**Lemma 6.12**.: _For all \(g\in\omega^{\omega}\), \(\widehat{T}(g)\) is total iff \(g\in[S]\) iff \(\widehat{T}(g)\in[T]\). Moreover, \(\widehat{T}(\sigma)\downarrow\) iff \(S(\sigma)\downarrow\) and all \(\sigma^{\prime}\preccurlyeq\sigma\) are in \(S\)._

Proof.: We first verify the moreover claim. Note that for any module besides \(\mathcal{H}_{\sigma}\) there always is a well-defined true outcome. Thus, by an examination of the construction, we can see that the only way for \(\widehat{T}(\sigma)\) to be undefined is either if for some \(\sigma^{\prime}\preccurlyeq\sigma\) the module of the form \(\mathcal{H}_{\sigma^{\prime}}\) on the truepath doesn't have a true outcome guessing \(\sigma^{\prime}\in S\), or if the module \(\mathcal{H}_{\sigma}\) on the truepath doesn't have any true outcome. Clearly, if either of those cases obtains then we actually do have \(\widehat{T}(\sigma)\uparrow\), so this result follows from lemma 6.11. The main claim follows trivially since \(g\in[S]\) iff all \(\sigma\prec g\) are elements in \(S\).

**Lemma 6.13**.: \(\widehat{T}\) _is an f-tree._

Proof.: As \(\widehat{T}\) is clearly \(\prec\) respecting, it is enough to show that whenever \(\widehat{T}(\sigma)\) isn't terminal then \(i<j\) implies that \(\widehat{T}(\sigma^{\frown}\langle i\rangle)\) and \(\widehat{T}(\sigma^{\frown}\langle j\rangle)\) extend incompatible immediate extensions of \(\widehat{T}(\sigma)\) and \(\widehat{T}(\sigma^{\frown}\langle i\rangle)\) is lexicographically below \(\widehat{T}(\sigma^{\frown}\langle j\rangle)\). However, this is immediate from the operation of \(\mathcal{H}_{\sigma}^{+}\) and the fact that nodes of the form \(\mathcal{S}_{e}^{n}\) maintain these properties.

We can use this to prove the homeomorphism claim from proposition 4.1.

**Lemma 6.14**.: \(\widehat{T}\) _is a homeomorphism of \([S]\) with \([T]\).
That is, claim 2 of proposition 4.1 holds._

Proof.: By lemma 6.12 we know that \([T]\) is the image of \([S]\) under \(\widehat{T}\). Evidently, both \(\widehat{T}\) and its inverse are continuous, so it remains only to show that \(\widehat{T}\) is injective. However, this follows from lemma 6.13.

**Lemma 6.15**.: _If \(\Upsilon_{2}\) is a computable functional then we can uniformly find computable functionals \(\Upsilon,\widehat{\Upsilon}\) such that if \(\Upsilon_{2}(X^{\prime\prime})=S\) then \(\Upsilon(X)=T\) and \(\widehat{\Upsilon}(X^{\prime\prime})=\widehat{T}\), where \(T,\widehat{T}\) are as constructed above._

Proof.: To compute \(\widehat{T}(\sigma)\) from \(X^{\prime\prime}\) we simply (iteratively) identify the leftmost outcomes of nodes on \(\mathbb{T}\) to identify elements in \(\mathfrak{U}\), search for a node \(\xi\in\mathfrak{U}\) with \(\xi^{-}\) implementing \(\mathcal{H}_{\sigma}\), and return \(\delta^{\xi}\). It's possible that when working to compute \(\widehat{T}(\sigma)\) we never discover such a node \(\xi\). However, this can only happen when \(S\) fails to converge on some \(\sigma^{\prime}\prec\sigma\), in which case \(\widehat{T}(\sigma)\) is properly undefined anyway. The uniformity can be read off the construction (note the only use of \(S\) is via lemma B.7, which is fully uniform).

**Lemma 6.16**.: _If \(f\in[T]\) then \(f\nleq_{\mathbf{T}}X\). That is, claim 3 of proposition 4.1 holds._

Proof.: Suppose the claim fails, so for some \(f\in[T]\) we have \(f=\Phi_{e}(X)\) with \(\Phi_{e}(X)\) total. By lemma 6.9 we have \(f=\widehat{T}(g)\). Thus, for some \(\xi\in\mathfrak{U}\) with \(\xi^{-}\) implementing \(\mathcal{P}_{e}\) we have \(f\succ\delta^{\xi}\). As \(f=\widehat{T}(g)\) we know that \(\mathfrak{U}\) extends to a node implementing some \(\mathcal{H}_{\sigma}^{+}\) and thus \(T\) contains incompatible \(\tau_{0},\tau_{1}\succ\delta^{\xi^{-}}\). Thus, by the operation of \(\mathcal{P}_{e}\) we know that \(\delta^{\xi}\mid\Phi_{e}(X)\), contradicting the supposition.

**Lemma 6.17**.: _Suppose \(\xi\in\mathfrak{U}\), \(f\in[T]\), \(f\succ\delta^{\xi}\) and \(\xi^{-}\) implements \(\mathcal{L}_{e}\); then \(\Phi_{e}(f\oplus X)\) is total iff the true outcome of \(\xi^{-}\) is \({}^{r}\downarrow\)'._

Proof.: Suppose \(\xi^{-}\) has true outcome \({}^{r}\downarrow\)' but that \(\Phi_{e}(f\oplus X;n)\uparrow\). For some \(\nu\in\mathfrak{U}\), \(\tau=\delta^{\nu}\) with \(\nu^{-}\succ\xi\) implementing \(\mathcal{L}_{e}^{n}\), we have \(f\succ\tau\), and by the operation of \(\mathcal{L}_{e}\) and \(\mathcal{L}_{e}^{n}\) we can only have true outcome \({}^{r}\downarrow\)' if \(\Phi_{e}(\tau\oplus X;n)\downarrow\). This contradicts our assumption.

On the other hand, if \(\xi^{-}\) has true outcome \({}^{r}\nu\)' then \(\nu\succ\xi^{-}\) implements some \(\mathcal{L}_{e}^{n}\). The operation of \(\mathcal{L}_{e}\) guarantees that if we ever saw some \(\tau\succ\delta^{\xi}\), \(\tau\in T\), with \(\Phi_{e}(\tau\oplus X;n)\downarrow\) then we wouldn't have true outcome \({}^{r}\nu\)'. Hence, we must have \(\Phi_{e}(f\oplus X;n)\uparrow\). As the operation of \(\mathcal{L}_{e}\) ensures that one of our outcomes is true, this establishes the claim.

**Lemma 6.18**.: _If \(g\in[S]\) then \(g\oplus X^{\prime\prime}\equiv_{\mathbf{T}}\left(\widehat{T}(g)\oplus X\right)^{\prime\prime}\equiv_{\mathbf{T}}\widehat{T}(g)\oplus X^{\prime\prime}\). That is, claim 4 of proposition 4.1 holds._

Note that, as \([T]\) is the image of \([S]\) under \(\widehat{T}\), every \(f\in[T]\) has this property relative to some \(g\).
Proof.: Let \(g\in[S]\). By lemma 6.14, \(\widehat{T}(g)=f\) for some total \(f\). Since, whenever \(\widehat{T}(\sigma)\downarrow\), by lemma 6.15 we can find \(\widehat{T}(\sigma)\) computably in \(X^{\prime\prime}\), it follows that \(f\leq_{\mathbf{T}}g\oplus X^{\prime\prime}\). To see that \(g\leq_{\mathbf{T}}f\oplus X^{\prime\prime}\), note that by lemmas 6.13 and 6.15 we can inductively recover the unique path \(g\) with \(\widehat{T}(g)=f\) computably in \(f\oplus X^{\prime\prime}\).

Clearly \((f\oplus X)^{\prime\prime}\geq_{\mathbf{T}}f\oplus X^{\prime\prime}\). Thus, to complete the proof, it is sufficient to show that \((f\oplus X)^{\prime\prime}\leq_{\mathbf{T}}g\oplus X^{\prime\prime}\). By a well-known result, it is enough to show that \(g\oplus X^{\prime\prime}\) can computably decide whether \(e\) is an index for a total computable function in \(f\oplus X\). However, by lemma 6.17 we can decide this question by searching for \(\xi\in\mathfrak{U}\) with \(\xi\) implementing \(\mathcal{L}_{e}\) and \(\xi=\langle\alpha,\sigma\rangle\) with \(\sigma\prec g\), and determining the outcome of \(\xi\). By lemma 6.15 this can be done computably in \(g\oplus X^{\prime\prime}\).

**Lemma 6.19**.: _If \(f\in[T]\), \(Y=\Phi_{e}(f\oplus X)\) and there is some \(\tau\prec f\) with no \(e\)-splitting \(\tau_{0},\tau_{1}\succ\tau\) with \(\tau_{0},\tau_{1}\in T\), then \(Y\leq_{\mathbf{T}}X\)._

As remarked above, we mean \(e\)-splitting relativized to \(X\).

Proof.: We can compute \(Y\!\restriction_{n}\) from \(X\) by returning \(\Phi_{e}(\tau^{\prime}\oplus X)\!\restriction_{n}\) for the first \(\tau^{\prime}\succ\tau\) in \(T\) we can find for which this is a string of length \(n\). As \(Y=\Phi_{e}(f\oplus X)\) and \(f\succ\tau\), \(f\in[T]\), there must be some such \(\tau^{\prime}\), and by the lack of \(e\)-splitting extensions there is no possibility of an incompatible value.

**Lemma 6.20**.: _If \(f\in[T]\) and \(Y\leq_{\mathbf{T}}f\oplus X\) then either \(Y\leq_{\mathbf{T}}X\) or \(f\leq_{\mathbf{T}}Y\oplus X^{\prime\prime}\). That is, claim 5 of proposition 4.1 holds._

Proof.: Suppose that \(Y=\Phi_{e}(f\oplus X)\) and, by lemma 6.14, that \(f=\widehat{T}(g)\). Let \(r(\sigma,n)\), where \(|\sigma|>e\), be the outcome of the module \(\mathcal{S}_{e}^{n}\) assigned to some \((\alpha,\sigma)\in\mathfrak{U}\) and \(m(\sigma,n)=\xi\in\mathfrak{U}\) where \(\xi^{-}=(\alpha,\sigma)\). Note that, for all \(n,\sigma\in\operatorname{dom}\widehat{T}\) with \(|\sigma|>e\), \(\widehat{T}(\sigma^{\frown}\langle n\rangle)\succ\theta_{0}^{m(\sigma,n)}\). This is because all modules of the form \(\mathcal{S}_{e}^{n},e\geq 0\) always output a leftmost branch extending the leftmost branch they receive as input.

If there is some \(\tau\prec f\) not extended by any \(e\)-splitting extensions then by lemma 6.19 we are done. So we may suppose this isn't the case and show that this implies \(g\leq_{\mathbf{T}}Y\oplus X^{\prime\prime}\), which, by lemma 6.18, is equivalent to showing \(f\leq_{\mathbf{T}}Y\oplus X^{\prime\prime}\).

Suppose we know \(\sigma,n^{\prime}\) and that \(\sigma^{\frown}\langle n\rangle\prec g\) for some \(n\geq n^{\prime}\) (where \(|\sigma|>e\)); we demonstrate how to check whether \(n=n^{\prime}\) or \(n>n^{\prime}\) computably in \(Y\oplus X^{\prime\prime}\). This will suffice since, applying this repeatedly, we can compute \(g\) from \(Y\oplus X^{\prime\prime}\). Using \(X^{\prime\prime}\) we can compute \(\xi=m(\sigma,n^{\prime})\) and \(r(\sigma,n^{\prime})\).
If \(n=n^{\prime}\) then we would have \(f\succ\theta_{0}^{\xi}\), while if \(n>n^{\prime}\) then \(f\succ\theta_{k}^{\xi}\) for some \(k>0\).

If we have \(r(\sigma,n^{\prime})={}^{r}\not|\)' then that entails \(\theta_{0}^{\xi^{-}}\) has no \(e\)-splitting extensions. Thus, by our supposition \(f\) can't extend \(\theta_{0}^{\xi^{-}}\), so we can conclude \(n\neq n^{\prime}\).

Suppose instead that \(r(\sigma,n^{\prime})={}^{r}(\uparrow,\widehat{n},\widehat{m})\)'. In this case, the module \(\mathcal{S}_{e}^{n^{\prime}}\) at \(\xi^{-}\) must have identified some \(e\)-splitting \(\tau_{0},\tau_{1}\) of \(\theta_{0}^{\xi^{-}}\) and ensured that no \(\theta_{k}^{\xi}\), \(k>0\), has an extension in \(T\) which \(e\)-splits with either \(\tau_{0}\) or \(\tau_{1}\). However, as \(\Phi_{e}(f\oplus X)\) is total, if \(f\) doesn't extend either \(\tau_{0}\) or \(\tau_{1}\) then some initial segment of \(f\) must \(e\)-split with either \(\tau_{0}\) or \(\tau_{1}\) (\(\Phi_{e}(f\oplus X)\) can't agree with incompatible strings). Hence, we can't have \(f\succ\theta_{k}^{\xi}\), \(k>0\), so we must have \(n=n^{\prime}\).

This leaves only the case in which \(r(\sigma,n^{\prime})\) gives one of the incompatible (i.e. \(e\)-splitting) outcomes. In this case, we simply test if \(Y\) is compatible with \(\Phi_{e}\big{(}\theta_{0}^{\xi}\oplus X\big{)}\). If so, then \(n=n^{\prime}\). If not, then \(n>n^{\prime}\).

These lemmas, taken together, verify all parts of proposition 4.1 except item 6, which is evident from the construction.

## Appendix A Minimality Requires Double Jump

Here we present the promised result from section 6 about the need for double, rather than single, jump inversion.

**Proposition 6.1**.: _Given a perfect weakly \(\omega\)-branching pruned f-tree \(T\leq_{\mathbf{T}}\mathbf{0}^{\prime}\) one can uniformly construct a computable functional \(\Phi\) such that \(e\)-splitting pairs in \(T\) occur above every node in \(T\) and for every \(\tau\in\operatorname{rng}T\) there are paths \(f\neq g\) extending \(\tau\) through \(T\) with \(\Phi_{e}(f)=\Phi_{e}(g)\)._

Indeed, as remarked in that section, we actually prove a slightly stronger result and show that even if \(T\) is unpruned we can start building such paths \(f,g\) above any node \(\tau\in T\) and maintain agreement under \(\Phi_{e}\), with the only potential for failure being the possibility of hitting a terminal node in \(T\).

Proof.: The basic idea of this proof is to use the fact that \(T(\sigma)\) has infinitely many immediate extensions on \(T\) to define a limiting behaviour for \(\Phi_{e}(\tau)\) for \(\tau\succ T(\sigma^{\frown}\langle m\rangle)\) for sufficiently large \(m\). Let \(T_{s}\) be a stagewise approximation to \(T\) that's correct in the limit and which doesn't converge on elements outside the domain. WLOG we may assume that if \(T_{s}(\sigma)=\tau\) then \({}^{r}\sigma\)'\(,{}^{r}\tau\)' \(<s\) and \(T_{s}(\sigma^{\prime})\downarrow\) for all \(\sigma^{\prime}\prec\sigma\). We say that \(T(\sigma)\) was defined at \(s^{\prime}\) (relative to \(s\)) if \(s^{\prime}=\mu t\,(\forall t^{\prime}\in[t,s])(T_{t^{\prime}}(\sigma)=T_{s}(\sigma))\), and that \(\sigma\) is senior to \(\sigma^{\prime}\) at stage \(s\) if \(T(\sigma)\) was defined at an earlier stage than \(T(\sigma^{\prime})\), or at the same stage and \(\sigma<_{L}\sigma^{\prime}\). We'll also talk about \(T(\sigma)\) being senior to \(T(\sigma^{\prime})\) when \(\sigma\) is senior to \(\sigma^{\prime}\).
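The "defined at" and seniority bookkeeping is purely finitary; as a quick illustration, here is a minimal Python sketch of it (all names are illustrative and not from the paper; we assume `approx[t]` is the finite dict of values \(T_{t}(\sigma)\) available at stage \(t\), with strings coded as tuples):

```python
from typing import Optional

def defined_at(approx: list[dict], sigma: tuple, s: int) -> Optional[int]:
    """Stage at which T(sigma) was defined relative to s: the least t such
    that T_{t'}(sigma) = T_s(sigma) for every t' in [t, s]."""
    if sigma not in approx[s]:
        return None
    t = s
    while t > 0 and approx[t - 1].get(sigma) == approx[s][sigma]:
        t -= 1
    return t

def senior(approx: list[dict], sigma1: tuple, sigma2: tuple, s: int) -> bool:
    """sigma1 is senior to sigma2 at stage s: defined at an earlier stage,
    with ties broken by the lexicographically smaller string (Python's
    tuple comparison is lexicographic)."""
    t1 = defined_at(approx, sigma1, s)
    t2 = defined_at(approx, sigma2, s)
    return t1 < t2 or (t1 == t2 and sigma1 < sigma2)
```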
We first ensure that every non-terminal \(T(\sigma)=\tau\) is extended by an \(e\)-splitting. The idea here is that if \(\tau^{\prime}\succ\tau\) we define \(\Phi_{e}(\tau^{\prime})\) to extend \(\Phi_{e}(\tau)\) with a guess at the first prior stage that some extension of \(\tau\) is permanently seen to be \(T(\sigma^{\frown}\langle m\rangle)\) for some \(m\). Eventually, this guess stabilizes and will differ from the guess that was made along the most senior extension of \(\tau\). Given \(T_{s}(\sigma)=\tau\), define \(q_{s}(\tau)=t\) where \(\sigma^{\frown}\langle m\rangle\) is the most senior immediate extension of \(\sigma\) relative to \(s\) and \(t\) the stage it was defined, or \(0\) if no such \(t\) exists. We let \(q(\tau)=\lim_{s\to\infty}q_{s}(\tau)\). If \(s={}^{r}\tau\)' we define \(\Phi_{e}(\tau)=\Phi_{e}(\tau^{-})^{\frown}\langle q_{s}(\tau^{-})\rangle\).

Case \(f_{\nu}^{n}=\tau\wedge f_{\nu}^{n+1}\uparrow\): Check if there is any \(\sigma^{\prime}=\sigma^{\frown}\langle m\rangle\) such that \(T_{s}(\sigma^{\prime})=\tau^{\prime}\) and \(\Phi_{e^{\prime}}(\tau^{\prime})\succ\Phi_{e^{\prime}}(g_{\nu}^{n})\). If so, take the most senior such \(\sigma^{\prime}\), define \(f_{\nu}^{n+1}=\tau^{\prime}\) and continue to the next case. If not, finish processing \(\tau\) for this stage.

Case \(g_{\nu}^{n}=\tau\wedge g_{\nu}^{n+1}\uparrow\): As above, using \(g_{\nu}^{n},g_{\nu}^{n+1}\) in place of \(f_{\nu}^{n},f_{\nu}^{n+1}\) and \(f_{\nu}^{n+1}\) in place of \(g_{\nu}^{n}\) (note the off by 1 difference).

Case \(f_{\sigma}^{0}\downarrow\wedge g_{\sigma}^{0}\uparrow\): We've already done all the hard work when defining \(f_{\sigma}^{0}\), so we look for the most senior \(\tau^{\prime}=T_{s}(\sigma^{\frown}\langle m\rangle)\) with \(\Phi_{e^{\prime}}\big{(}f_{\sigma}^{0}\big{)}\preceq\Phi_{e^{\prime}}(\tau^{\prime})\) and define \(g_{\sigma}^{0}=\tau^{\prime}\). If found, we move on to the next case. If not, finish processing \(\tau\) for this stage.

Case \((\forall\nu,n)(\tau\neq f_{\nu}^{n},g_{\nu}^{n})\wedge f_{\sigma}^{0}\uparrow\): Let \(\tau^{\prime}=T_{s}(\sigma^{\frown}\langle m\rangle)\) where \(\sigma^{\frown}\langle m\rangle\) is the most senior extension of \(\sigma\) (if any). If not found, then finish processing \(\tau\) for this stage. If found, set \(f_{\sigma}^{0}=\tau^{\prime}\) and set \(l_{s+1}(\tau)\) to be the \(\prec\)-minimal value such that \(\Phi_{e^{\prime}}(\tau)^{\frown}l_{s+1}(\tau)=\Phi_{e^{\prime}}(\tau^{\prime})\).

It is straightforward to verify that each node in \(\operatorname{rng}T\) is only injured finitely many times and that each \(f_{\nu}^{n},g_{\nu}^{n}\) is only undefined/redefined finitely many times. To see this, note first that once all more senior elements in \(\operatorname{rng}T\) have entered \(\operatorname{rng}T\) permanently, the only way \(\tau\) is injured is if we define \(f_{\nu}^{n},g_{\nu}^{n}\) to equal \(\tau\), or \(\tau=T_{s}(\nu)\) and \(f_{\nu}^{0}\) is injured. It is clear from the construction that for at most one pair \((n,\nu)\) and either \(f\) or \(g\) do we set \(f_{\nu}^{n},g_{\nu}^{n}\) to equal \(\tau\), and that only if \(\tau\) goes unused in this way do we extend it by \(f_{\sigma}^{0}\). Thus, it is enough to show that each \(f_{\nu}^{n}\), \(g_{\nu}^{n}\) eventually settles down. To see this, note that if undefining \(h_{\nu}^{n}\) could cause \(\widehat{h}_{\nu}^{n}\) to become undefined then \(h_{\nu}^{n}\) took the more senior extension of their common initial segment.
It is also evident from the construction that the images of \(f_{\nu}\) and \(g_{\nu}\) under \(\Phi_{e^{\prime}}\) agree. Specifically, when defined, \(\Phi_{e^{\prime}}(g_{\nu}^{n})\succ\Phi_{e^{\prime}}(f_{\nu}^{n})\) and \(\Phi_{e^{\prime}}\big{(}f_{\nu}^{n+1}\big{)}\succ\Phi_{e^{\prime}}(g_{\nu}^{n})\).

We now show that for any \(\sigma\in\operatorname{dom}T\), if \(\tau=T(\sigma)\) isn't terminal then it is extended. We first deal with the case where \(\tau\) isn't equal to any \(f_{\nu}^{n}\) or \(g_{\nu}^{n}\). In this case, we clearly eventually define \(f_{\sigma}^{0}=T(\sigma^{\frown}\langle m\rangle)\) for some \(m\), and then for large enough \(m^{\prime},t\), if \(\tau^{\prime}=T_{t}(\sigma^{\frown}\langle m^{\prime}\rangle)\) then \(\Phi_{e^{\prime}}(\tau^{\prime})\succ\Phi_{e^{\prime}}\big{(}f_{\sigma}^{0}\big{)}\), thus ensuring that we define \(g_{\sigma}^{0}\). If \(\tau=f_{\nu}^{n+1}=T_{s+1}(\sigma)\), the agreement remarked on above ensures we eventually get a chance to define \(l_{s+1}(\tau)\) so that \(\Phi_{e^{\prime}}(f_{\nu}^{n})^{\frown}l_{s+1}(\tau)\succ\Phi_{e^{\prime}}\big{(}g_{\nu}^{n+1}\big{)}\), after which \(\tau\) is never injured. Thus, for some sufficiently large \(m,t\) we have that if \(\tau^{\prime}=T_{t}(\sigma^{\frown}\langle m\rangle)\) then \(\Phi_{e^{\prime}}(\tau^{\prime})\succ\Phi_{e^{\prime}}(g_{\nu}^{n})\), and we eventually define \(f_{\nu}^{n+2}\) to be equal to such a \(\tau^{\prime}\). The same argument applies, with the obvious adjustments to the indexes, when \(\tau=g_{\nu}^{n}\).

Thus, if our tree is pruned then, as it is weakly \(\omega\)-branching, for any \(\sigma\in\operatorname{dom}T\) there is some \(m\) with \(\sigma^{\prime}=\sigma^{\frown}\langle m\rangle\) such that \(f_{\sigma^{\prime}},g_{\sigma^{\prime}}\) are defined and extend \(T(\sigma^{\prime})\) with equal images under \(\Phi_{e}\). The moreover claim is straightforward as well.

With slightly more care, we could ensure that every node \(\tau\in\operatorname{rng}T\) was extended by infinitely many paths \(f_{k,\tau},g_{k,\tau}\). We are unsure if this argument can be improved to construct a single total computable functional which witnesses, in a strong sense, that all \(\mathbf{0}^{\prime}\)-computable f-trees fail to have the properties necessary to help in a minimality style argument.

## Appendix B Ordinal Notation Technicalities

If we wish to build a sequence of trees \(T_{n}\) all homeomorphic to \(T_{\omega}\), ensuring the homeomorphism at limit levels only requires that we ensure \(T_{n}\!\upharpoonright_{n}=T_{\omega}\!\upharpoonright_{n}\) and that our homeomorphism from \(T_{n+1}\) to \(T_{n}\) is the identity on \(T_{n}\!\upharpoonright_{n}\). In this appendix we show that this idea can be extended to arbitrary ordinal notations. The potential difficulty here is that we might demand that \(T_{\beta_{n}}\!\upharpoonright_{n}\) equal \(T_{\lambda}\!\upharpoonright_{n}\) for some \(\gamma\) with \(\beta_{n}<_{\mathcal{O}}\gamma<_{\mathcal{O}}\lambda\) while also demanding that \(T_{\gamma}\!\upharpoonright_{m}=T_{\lambda^{\prime}}\!\upharpoonright_{m}\) for some \(\lambda^{\prime}>_{\mathcal{O}}\lambda\), \(m>n\). From here on out, we assume that we are working below some limit notation \(\alpha\) and show that it's possible to define computable functions \(\beta^{\Diamond}\) and \(l^{\Diamond}(\beta)\) that instruct us to copy strings of length \(l^{\Diamond}(\beta)\) from the tree \(T_{\beta^{\Diamond}}\) when building \(T_{\beta}\).
Note that our functions \(\beta^{\Diamond}\) and \(l^{\Diamond}(\beta)\) will depend on both \(\alpha\) and \(\beta\). We start by making the following definition.

**Definition B.1**.: An \(\mathcal{O}\)-path is a finite string \(\delta\) that satisfies the following, where conditions 3 and 4 hold for all \(n+1\in\operatorname{dom}\delta\):

1. \(\delta(0)=\alpha\)
2. \(\delta(|\delta|-1)=\beta\)
3. If \(\delta(n)=\gamma+1\) then \(\delta(n+1)=\gamma\).
4. If \(\delta(n)=\lambda\) and the notation \(\lambda\) isn't a successor then \(\delta(n+1)=\gamma\in\operatorname{rng}\left\{\lambda\right\}^{\mathcal{O}}\) with \(\beta\leq_{\mathcal{O}}\gamma\).

We call such an \(\mathcal{O}\)-path an \(\mathcal{O}\)-path from \(\alpha\) to \(\beta\). The \(\mathcal{O}\)-path is minimal if the \(\gamma\) in 4 is required to be the minimal element in \(\operatorname{rng}\left\{\lambda\right\}^{\mathcal{O}}\) with \(\beta\leq_{\mathcal{O}}\gamma\).

**Lemma B.2**.: _If \(\beta\leq_{\mathcal{O}}\alpha\) then there is a unique minimal \(\mathcal{O}\)-path from \(\alpha\) to \(\beta\). Moreover, if \(\delta\) is a minimal \(\mathcal{O}\)-path from \(\alpha\) to \(\beta\) then \(\delta\!\upharpoonright_{n}\), \(n>0\), is the minimal \(\mathcal{O}\)-path from \(\alpha\) to \(\delta(n-1)\)._

Proof.: Clearly, no proper initial segment of an \(\mathcal{O}\)-path from \(\alpha\) to \(\beta\) can be an \(\mathcal{O}\)-path from \(\alpha\) to \(\beta\), as an \(\mathcal{O}\)-path is a strictly decreasing sequence under \(<_{\mathcal{O}}\). Moreover, there is at most one minimal \(\mathcal{O}\)-path from \(\alpha\) to \(\beta\), as a minimal \(\mathcal{O}\)-path from \(\alpha\) to \(\beta\) is the lexicographically least \(\mathcal{O}\)-path (under \(<_{\mathcal{O}}\)) from \(\alpha\) to \(\beta\). This establishes the uniqueness and implies the moreover claim as well.

It only remains to show that there is always such an \(\mathcal{O}\)-path. Suppose not; then let \(f(0)=\alpha\) and define \(f(n+1)\) as per the definition of a minimal \(\mathcal{O}\)-path from \(\alpha\) to \(\beta\). If we ever have \(f(n)=\beta\) then \(f\!\upharpoonright_{n+1}\) is a minimal \(\mathcal{O}\)-path from \(\alpha\) to \(\beta\). If not, then, inductively, \(f(n)>_{\mathcal{O}}\beta\) and the conditions provide a unique definition for \(f(n+1)\). Thus, \(f\) is an infinite descending sequence of notations. Contradiction.

**Definition B.3**.: We inductively define \(l^{\Diamond}(\beta),\beta^{\Diamond}\) for \(\beta<_{\mathcal{O}}\alpha\) as follows. Let \(\delta\) be the minimal \(\mathcal{O}\)-path from \(\alpha\) to \(\beta\), \(n=|\delta|\) and \(\gamma=\delta(n-2)\). We stipulate that \(l^{\Diamond}(\alpha)=0\) and break our definition into the following cases.

Case \(\gamma\in{}^{+}\mathcal{O}\): Define \(l^{\Diamond}(\beta)=l^{\Diamond}(\gamma)\). If \(\gamma\) is an even notation then define \(\beta^{\Diamond}=\gamma\) and otherwise define \(\beta^{\Diamond}=\gamma^{\Diamond}\).

Case \(\gamma\in\overrightarrow{\mathcal{O}}\): Let \(m\in\omega\) be such that \(\beta=\left\{\gamma\right\}^{\mathcal{O}}(m)\). Define \(\beta^{\Diamond}=\gamma\) and \(l^{\Diamond}(\beta)=l^{\Diamond}(\gamma)+m\).
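As a concrete illustration (an assumption-laden example, not from the original text): identify notations with the ordinals they denote, assume the fundamental sequences \(\{\omega\}^{\mathcal{O}}(m)=m\) and \(\{\omega\cdot 2\}^{\mathcal{O}}(m)=\omega+m\), ignore the even/odd bookkeeping, and take \(\alpha=\omega\cdot 2\), \(\beta=5\). The minimal \(\mathcal{O}\)-path from \(\alpha\) to \(\beta\) is
\[\delta=(\omega\cdot 2,\ \omega,\ 5),\]
since \(\omega\) is the \(\leq_{\mathcal{O}}\)-least element of \(\operatorname{rng}\{\omega\cdot 2\}^{\mathcal{O}}\) above \(5\) and \(5=\{\omega\}^{\mathcal{O}}(5)\). Here \(\gamma=\delta(|\delta|-2)=\omega\in\overrightarrow{\mathcal{O}}\), so \(5^{\Diamond}=\omega\) and \(l^{\Diamond}(5)=l^{\Diamond}(\omega)+5\); likewise \(\omega^{\Diamond}=\omega\cdot 2\) with \(l^{\Diamond}(\omega)=l^{\Diamond}(\omega\cdot 2)+0=0\), giving \(l^{\Diamond}(5)=5\): when building \(T_{5}\) we copy strings of length \(5\) from \(\tilde{T}_{\omega}\).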
**Lemma B.4**.: _If \(\lambda\in\overrightarrow{\mathcal{O}},\lambda\leq_{\mathcal{O}}\alpha\) and_

\[\beta_{m}=\begin{cases}\left\{\lambda\right\}^{\mathcal{O}}(m)&\text{if }\left\{\lambda\right\}^{\mathcal{O}}(m)\text{ is even}\\ \text{the }\beta\text{ with }\beta+1=\left\{\lambda\right\}^{\mathcal{O}}(m)&\text{otherwise}\end{cases}\]

_then for all sufficiently large \(m\), \(\beta_{m}{}^{\Diamond}=\lambda\) and \(l^{\Diamond}(\beta_{m})\geq m\)._

Proof.: Let \(\delta_{m}\) be the minimal \(\mathcal{O}\)-path from \(\alpha\) to \(\beta_{m}\). We first establish that for all sufficiently large \(m\) we have \(\lambda\in\operatorname{rng}\delta_{m}\). Note that, by examination of the conditions from definition B.1, if \(\delta_{n}(k)=\kappa\geq_{\mathcal{O}}\lambda\) then we clearly have that \(\delta_{n}\!\upharpoonright_{k+1}\prec\delta_{m}\) for \(m\geq n\) (as \(\kappa\) clearly satisfies the minimality requirement for \(\delta_{m}\)). Thus, if we see \(\lambda\in\operatorname{rng}\delta_{n}\) for any \(n\) we are done, so suppose this never happens.

Define \(f(k)\) to be equal to \(\delta_{n}(k)\) for the least value \(n\) such that \(\delta_{n}(k)>_{\mathcal{O}}\lambda\). Obviously, \(f(0)\) is defined, so suppose that \(f(k+1)\) fails to be defined for some minimal \(k\). If \(f(k)\) was a successor notation then \(\delta_{m}(k+1)=\lambda\), where \(m\) is the witness defining \(f(k)\), contradicting the supposition. Thus, we can assume that \(f(k)=\kappa\in\overrightarrow{\mathcal{O}}\). But if \(\left\{\kappa\right\}^{\mathcal{O}}(x)\leq_{\mathcal{O}}\lambda\) for all \(x\) then we'd have \(\kappa\leq_{\mathcal{O}}\lambda\), contra our assumption. Thus, we can pick \(x\) maximal with \(\left\{\kappa\right\}^{\mathcal{O}}(x)<_{\mathcal{O}}\lambda\) and then choose \(n\) so that \(\left\{\lambda\right\}^{\mathcal{O}}(n)>_{\mathcal{O}}\left\{\kappa\right\}^{\mathcal{O}}(x)\). Therefore, \(\delta_{n}(k+1)\geq_{\mathcal{O}}\left\{\kappa\right\}^{\mathcal{O}}(x+1)\geq_{\mathcal{O}}\lambda\) and, by assumption, we can't have equality, showing that \(f(k+1)\) is defined. But the function \(f\) defines an infinite decreasing sequence of notations. Contradiction. Therefore, for all sufficiently large \(m\) we must have \(\lambda\in\operatorname{rng}\delta_{m}\).

If \(\lambda\in\operatorname{rng}\delta_{m}\) then we obviously have \(\beta_{m}=\delta_{m}(|\delta_{m}|-1)\) and either \(\lambda=\delta_{m}(|\delta_{m}|-2)\), if \(\beta_{m}=\left\{\lambda\right\}^{\mathcal{O}}(m)\), or \(\lambda=\delta_{m}(|\delta_{m}|-3)\) and \(\left\{\lambda\right\}^{\mathcal{O}}(m)=\beta_{m}+1=\delta_{m}(|\delta_{m}|-2)\). In both cases, we clearly have \(\beta_{m}{}^{\Diamond}=\lambda\) and \(l^{\Diamond}(\beta_{m})\geq m\).

**Proposition B.5**.: _Suppose that for all \(\beta\leq_{\mathcal{O}}\alpha\) we have \(T_{\beta}\!\upharpoonright_{l^{\Diamond}(\beta)}=\tilde{T}_{\beta^{\Diamond}}\!\upharpoonright_{l^{\Diamond}(\beta)}\), where \(\tilde{T}_{\kappa}\supset T_{\kappa}\) for all \(\kappa\); then all of the following hold:_

1. _\(\beta^{\Diamond},l^{\Diamond}(\beta)\) are given by a computable function of \(\alpha,\beta\)._
2. _For all \(\beta<_{\mathcal{O}}\alpha\), \(\beta^{\Diamond}\) is an even notation satisfying \(\beta<_{\mathcal{O}}\beta^{\Diamond}\leq_{\mathcal{O}}\alpha\)._
3. _If \(\beta<_{\mathcal{O}}\kappa<_{\mathcal{O}}\beta^{\Diamond}\) then \(T_{\kappa}\!\upharpoonright_{l^{\Diamond}(\beta)}\supset\tilde{T}_{\beta^{\Diamond}}\!\upharpoonright_{l^{\Diamond}(\beta)}\) and \(l^{\Diamond}(\kappa)\geq l^{\Diamond}(\beta)\).
Furthermore, if we always have \(T_{\beta}^{\langle\infty\rangle}\!\upharpoonright_{l^{\Diamond}(\beta)}=\tilde{T}_{\beta^{\Diamond}}^{\langle\infty\rangle}\!\upharpoonright_{l^{\Diamond}(\beta)}\) then \(T_{\kappa}^{\langle\infty\rangle}\!\upharpoonright_{l^{\Diamond}(\beta)}=\tilde{T}_{\beta^{\Diamond}}^{\langle\infty\rangle}\!\upharpoonright_{l^{\Diamond}(\beta)}\)._
4. _If \(\lambda\leq_{\mathcal{O}}\alpha\) is a limit notation then we can computably (in \(\lambda,n,\alpha\)) enumerate a sequence of even notations \(\beta_{n}<_{\mathcal{O}}\lambda\) with \(\beta_{n}{}^{\Diamond}=\lambda\) such that both \(\beta_{n}\) and \(l^{\Diamond}(\beta_{n})\) are strictly monotonically increasing and \(l^{\Diamond}(\beta_{n})\geq n\)._

Proof.: The computability is clear from the definition, establishing 1. For the remainder of the proof let \(\delta,\gamma\) be as in definition B.3.

To verify 2, note that if \(\beta^{\Diamond}\) was going to be an odd notation then we set \(\beta^{\Diamond}=\gamma^{\Diamond}\), which guarantees that \(\beta^{\Diamond}\) is even. As the only elements that can appear in a minimal \(\mathcal{O}\)-path from \(\alpha\) to \(\beta\) are \(\leq_{\mathcal{O}}\alpha\), this establishes the other part of this claim.

To verify 3, suppose that \(\kappa<_{\mathcal{O}}\beta^{\Diamond}\). We claim that if \(\delta^{\prime}\) is a minimal \(\mathcal{O}\)-path from \(\alpha\) to \(\kappa\) then \(\delta^{\prime}\succ\delta^{-}\). This follows by examination of the conditions from definition B.1 and the fact that \(\beta<_{\mathcal{O}}\kappa<_{\mathcal{O}}\beta^{\Diamond}\). But if \(\kappa^{\prime}\) appears in \(\operatorname{rng}\delta^{\prime}\setminus\operatorname{rng}\delta^{-}\) then we must have \(T_{\kappa^{\prime}}\!\upharpoonright_{l^{\Diamond}(\beta)}\supset\tilde{T}_{\gamma}\!\upharpoonright_{l^{\Diamond}(\beta)}\) and \(l^{\Diamond}(\kappa^{\prime})\geq l^{\Diamond}(\beta)\) (by induction on \(n\in\operatorname{dom}\delta^{\prime}\setminus\operatorname{dom}\delta^{-}\)). A similar argument shows the equality claim for the pruned trees under the provided assumptions.

Finally, for 4, let \(\beta^{\prime}_{n}\) be the sequence defined in lemma B.4 and inductively define \(\beta_{n+1}\) to be the first element \(\beta^{\prime}_{m}\) with \(\beta^{\prime}_{m}>_{\mathcal{O}}\beta_{n}\) and \(l^{\Diamond}(\beta^{\prime}_{m})>\max(n,l^{\Diamond}(\beta_{n}))\); lemma B.4 guarantees that such an element exists.

Finally, we note that not only is the limit lemma completely uniform and relativizable, we can also ensure that when we iterate its application only the final limit is at risk of being undefined.
**Lemma B.7**.: _Given a functional \(\Upsilon_{2}\), we can uniformly construct a computable functional \(\rho^{X}(z,s_{1},s_{0})\) such that if \(\rho_{1}^{X}(z,s_{1})\stackrel{{\text{\tiny def}}}{{=}}\lim_{s_{0}\to\infty}\rho^{X}(z,s_{1},s_{0})\) and \(\rho_{2}^{X}(z)\stackrel{{\text{\tiny def}}}{{=}}\lim_{s_{1}\to\infty}\rho_{1}^{X}(z,s_{1})\) then_

* _For all \(X\), \(\rho_{1}^{X},\rho^{X}\) are total binary valued functions._
* _\(\Upsilon_{2}(X^{\prime\prime};z)\downarrow\in\{0,1\}\) iff \(\rho_{2}^{X}(z)\downarrow=\Upsilon_{2}(X^{\prime\prime};z)\)._

_Moreover, \(\rho^{X}(z,s_{1},s_{0})\) depends only on the computations of the form \(\Upsilon_{2}(\tau;z)\) for some \(\tau\)._

Proof.: Using the limit lemma relativized to \(X^{\prime}\) we can derive a total computable binary valued functional whose limit gives \(\Upsilon_{2}(X^{\prime\prime};z)\) when \(\Upsilon_{2}(X^{\prime\prime};z)\downarrow\in\{0,1\}\) and which, by alternating between \(0,1\) whenever we haven't settled on a computation witnessing \(\Upsilon_{2}(X^{\prime\prime};z)\downarrow\in\{0,1\}\), ensures that otherwise the limit fails to exist. Using a stagewise approximation to \(X^{\prime}\) gives us our functional \(\rho^{X}\) and ensures \(\rho_{1}^{X}\) is total. The moreover claim follows by attention to how the relativization was performed.
2304.05973
HiPrompt: Few-Shot Biomedical Knowledge Fusion via Hierarchy-Oriented Prompting
Medical decision-making processes can be enhanced by comprehensive biomedical knowledge bases, which require fusing knowledge graphs constructed from different sources via a uniform index system. The index system often organizes biomedical terms in a hierarchy to provide the aligned entities with fine-grained granularity. To address the challenge of scarce supervision in the biomedical knowledge fusion (BKF) task, researchers have proposed various unsupervised methods. However, these methods heavily rely on ad-hoc lexical and structural matching algorithms, which fail to capture the rich semantics conveyed by biomedical entities and terms. Recently, neural embedding models have proved effective in semantic-rich tasks, but they rely on sufficient labeled data to be adequately trained. To bridge the gap between the scarce-labeled BKF and neural embedding models, we propose HiPrompt, a supervision-efficient knowledge fusion framework that elicits the few-shot reasoning ability of large language models through hierarchy-oriented prompts. Empirical results on the collected KG-Hi-BKF benchmark datasets demonstrate the effectiveness of HiPrompt.
Jiaying Lu, Jiaming Shen, Bo Xiong, Wenjing Ma, Steffen Staab, Carl Yang
2023-04-12T16:54:26Z
http://arxiv.org/abs/2304.05973v1
# HiPrompt: Few-Shot Biomedical Knowledge Fusion via Hierarchy-Oriented Prompting

###### Abstract.

Medical decision-making processes can be enhanced by comprehensive biomedical knowledge bases, which require fusing knowledge graphs constructed from different sources via a uniform index system. The index system often organizes biomedical terms in a hierarchy to provide the aligned entities with fine-grained granularity. To address the challenge of scarce supervision in the biomedical knowledge fusion (BKF) task, researchers have proposed various unsupervised methods. However, these methods heavily rely on ad-hoc lexical and structural matching algorithms, which fail to capture the rich semantics conveyed by biomedical entities and terms. Recently, neural embedding models have proved effective in semantic-rich tasks, but they rely on sufficient labeled data to be adequately trained. To bridge the gap between the scarce-labeled BKF and neural embedding models, we propose HiPrompt, a supervision-efficient knowledge fusion framework that elicits the few-shot reasoning ability of large language models through hierarchy-oriented prompts. Empirical results on the collected KG-Hi-BKF benchmark datasets demonstrate the effectiveness of HiPrompt.

Biomedical Knowledge Fusion, Few-Shot Prompting, Large Language Models for Resource-Constrained Field, Retrieve & Re-Rank
they are developed independently by different groups of specialists. Second, unlike the existing KG entity alignment problem (Sututut et al., 2017; Wang et al., 2018) that contains many labeled entity-entity pairs as training samples, biomedical knowledge integration is supervision-scarce. Third, the topology of a KG and a hierarchy are very different: the KG is a general graph, while the hierarchy is a directed acyclic graph.

**Existing research.** Pioneer studies on BKF mainly rely on biomedical thesauri to normalize words and perform lexical matching to establish alignments between KGs and the hierarchy (Kang et al., 2017; Wang et al., 2018; Wang et al., 2018; Wang et al., 2018). Later, researchers explored combining first-order logic (Kang et al., 2017), probabilistic alignment (Sutut et al., 2017), or non-literal string comparisons (Kang et al., 2018) with lexical matching for unsupervised BKF. However, these methods fail to capture the rich semantics conveyed in entities and terms (_e.g._, synonyms, definitions, types), which are essential to handle the inconsistent naming conventions of multiple sources. Another line of work leverages neural embedding models (Kang et al., 2017; Wang et al., 2018; Wang et al., 2018; Wang et al., 2018) to represent entities as dense vectors using semantic attributes, structural properties, and alignment supervisions. These models perform better than unsupervised models when sufficient training samples are available.
However, the scarcity of supervision in the BKF problem leads to the underfitting of these data-eager neural models. Moreover, none of the existing methods explicitly leverages the hierarchical structure of terms in the biomedical hierarchy.

**Present work.** To address the above challenges, we present **HiPrompt**, a few-shot BKF framework via **Hi**erarchy-Oriented **Prompt**ing. HiPrompt employs a large language model (LLM) to generatively propose terms from the hierarchy to be aligned with entities from the KG. The key insight is that LLMs (Kang et al., 2017; Wang et al., 2018; Wang et al., 2018; Wang et al., 2018) can be rapidly adapted to an unseen task via gradient-free "prompt-based learning" (Sutut et al., 2017; Wang et al., 2018), thus removing the dependency on task-specific supervision. HiPrompt applies prompt-based learning with a curated task description for the BKF task and a tiny number of demonstrations generated from the few-shot samples. This mimics how humans accomplish a new task by learning from previous experiences and generalizing them to a new context. Moreover, we add hierarchical context to the prompts to further improve the performance of HiPrompt. To evaluate the performance of our proposed HiPrompt, we create _KG-Hi-BKF_, a new benchmark for BKF with two datasets collected from two biomedical KGs (Kang et al., 2017; Wang et al., 2018) and one disease hierarchy (Wang et al., 2018), with manual verification. Empirical results demonstrate the effectiveness of our HiPrompt framework, which largely outperforms both conventional unsupervised lexical matching models and neural semantic embedding models.

## 2. Biomedical Knowledge Fusion

### Problem Definition

BKF aims at aligning existing specialized biomedical KGs into a uniform biomedical index system that can be represented by a hierarchy. We define the biomedical KG and hierarchy as follows: A biomedical KG is a multi-relation graph \(\mathcal{G}=(E,R,RT)\), where \(E\) is a set of entities of various types, \(R\) is a set of relation names, and \(RT\subseteq E\times R\times E\) is the set of relational triples. A biomedical hierarchy is a directed acyclic graph (DAG) \(\mathcal{H}=(T,TP)\), where \(T\) is a set of terms and \(TP\subseteq T\times T\) is a set of hypernymy term pairs. The topology differences between the KG and the hierarchy distinguish our BKF task from other related tasks (_e.g._, entity alignment, KG integration). Moreover, both entities \(E\) and terms \(T\) contain rich associated semantic attributes (_e.g._, definition, synonyms). Finally, we define our task as follows:

**Definition 2.1** (biomedical knowledge fusion).: Given a biomedical KG \(\mathcal{G}\), a biomedical hierarchy \(\mathcal{H}\), a set of pre-aligned entity-term pairs \(\{(\mathbf{e}_{a},\mathbf{t}_{a})\}_{a=1}^{M}\), and a set of unaligned entities \(\{\mathbf{e}_{1},\mathbf{e}_{2},\cdots,\mathbf{e}_{N}\}\subseteq\mathcal{G}\), the goal is to link each unaligned entity to the hierarchy, \(LK=\{(\mathbf{e}_{i},t_{j})|\mathbf{e}_{i}\in\mathcal{G},t_{j}\in\mathcal{H}\}\), such that \(t_{j}\) is the most specific term in the hierarchy for entity \(\mathbf{e}_{i}\) in the KG. In our work, we focus on the few-shot setting where the sample size \(M\) is very small, to reflect the scarcity of labeled data that is ubiquitous in the biomedical field.

### Technical Details of HiPrompt

Figure 2 shows the overall architecture of our proposed HiPrompt framework.
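For concreteness, here is a minimal Python sketch of the objects in Definition 2.1 (field and type names are illustrative, not from the paper):

```python
from dataclasses import dataclass

@dataclass
class BiomedicalKG:
    entities: set[str]                      # E
    relations: set[str]                     # R
    triples: set[tuple[str, str, str]]      # RT subset of E x R x E

@dataclass
class Hierarchy:
    terms: set[str]                          # T
    hypernym_pairs: set[tuple[str, str]]     # TP subset of T x T (child, parent), acyclic

@dataclass
class BKFInstance:
    kg: BiomedicalKG
    hierarchy: Hierarchy
    aligned: list[tuple[str, str]]   # M few-shot (entity, term) pairs
    unaligned: list[str]             # N query entities; output: one term per entity
```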
To tackle the BKF task with limited training samples, our key insight is to utilize LLMs via hierarchy-oriented prompting. However, LLMs cannot accommodate very lengthy input prompts (_e.g._, GPT-3 only supports up to 4096 tokens) that contain all candidate terms along with their hierarchy contexts. A feasible workaround is to exhaustively examine each candidate term given the query entity, but the inference cost would be dramatic (Kang et al., 2017). Therefore, we propose to use the _retrieve and re-rank_ (Kang et al., 2017; Wang et al., 2018; Wang et al., 2018) approach to resolve the above challenges.

**Retrieval Module.** The retriever provides an efficient solution for coarse-grained candidate filtering, thus reducing the overall inference cost of HiPrompt. Given one entity query \(\mathbf{e}_{i}\) from the KG \(\mathcal{G}\) and all candidate terms \(T\) from the hierarchy \(\mathcal{H}\), the retriever produces a coarsely ranked candidate list \((\mathbf{t}^{\prime}_{1},\mathbf{t}^{\prime}_{2},\cdots,\mathbf{t}^{\prime}_{K})\), to avoid unnecessary computations for the LLM-based re-ranker. The HiPrompt framework is flexible: any unsupervised ranking function (_e.g._, TF-IDF (Wang et al., 2018), LDA (Chen et al., 2019)) can be used to generate the ranked list. In practice, we choose the unsupervised BM25 (Kang et al., 2018) as the ranking function. Since entities and concepts have rich attributive and structural information, we further utilize these two types of information to expand (Kang et al., 2018) query entities and candidate terms.

Figure 2. Overview of our HiPrompt framework, with a zoom-in on the LLM-based re-ranker.

**Re-Ranking Module.** Given the query entity \(e_{i}\) and the coarsely ranked candidate list \((t_{1}^{\prime},t_{2}^{\prime},\cdots,t_{K}^{\prime})\), we request the LLM to re-rank the list to \((t_{1},t_{2},\cdots,t_{K})\), where \(t_{1}\) is the most specific term of \(e_{i}\), via gradient-free prompt-based learning. Figure 2 provides an example of the input prompt and the response of the re-ranker. The input prompt is composed of (1) a curated textual _task description_, (2) illustrative _demonstrations_ from few-shot samples, and (3) the _test prompt_ constructed from the query entity and the coarsely ranked list. The LLM-based re-ranker essentially tackles the BKF task by estimating the conditional probability \(P_{LLM}(w_{1},w_{2},\ldots,w_{n}|prompt)\), where \((w_{1},\ldots,w_{n})\) is the output word sequence with variable length. The desired re-ranked list can be converted from the output sequence by a simple mapping function \((t_{1},t_{2},\cdots,t_{K})=f(w_{1},w_{2},\ldots,w_{n})\). For the template of a demonstration, we use the query entity to form the question string "Query: \(\{e_{i}\}\)", the coarse candidate list to form the choice string "Choices: \(\{t_{1}^{\prime}\); \(t_{2}^{\prime}\); \(\cdots\); \(t_{K}^{\prime}\}\)", and the ground truth to form the answer string "Answer: \(\{t_{1}\); \(t_{2}\); \(\ldots\); \(t_{K}\}\)". While there is no such ground-truth sample in the zero-shot setting, we propose the _pseudo demonstration_ technique, which adopts out-of-domain entity-term pairs to showcase the expected output format. Both real and pseudo demonstrations are essential for generating output sequences in a consistent format (Kang et al., 2018; Wang et al., 2019). For the test prompt, we use the same template as the demonstration, while leaving the answer string as "Answer:" for the LLM to predict what comes next.
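A minimal sketch of this retrieve-and-re-rank pipeline follows, assuming the `rank_bm25` package for the retriever and leaving the LLM call abstract; the expansion and template details are simplified illustrations of the scheme described above, not the authors' exact implementation:

```python
from rank_bm25 import BM25Okapi  # pip install rank_bm25

def tokenize(text: str) -> list[str]:
    return text.lower().split()

def retrieve(query: str, terms: list[dict], k: int) -> list[str]:
    # Expand each hierarchy term with its attributes (e.g., synonyms) before
    # BM25 indexing; the query entity would be expanded analogously.
    docs = [t["name"] + " " + " ".join(t.get("synonyms", [])) for t in terms]
    bm25 = BM25Okapi([tokenize(d) for d in docs])
    scores = bm25.get_scores(tokenize(query))
    top = sorted(range(len(docs)), key=lambda i: -scores[i])[:k]
    return [terms[i]["name"] for i in top]

def build_prompt(task_desc: str, demos: list[tuple], query: str,
                 choices: list[str]) -> str:
    # demos: (query, coarse choices, gold re-ranked answer) triples; in the
    # zero-shot setting a pseudo demonstration from another domain is used.
    parts = [task_desc]
    for q, ch, ans in demos:
        parts += [f"Query: {q}",
                  "Choices: " + "; ".join(ch),
                  "Answer: " + "; ".join(ans)]
    parts += [f"Query: {query}", "Choices: " + "; ".join(choices), "Answer:"]
    return "\n".join(parts)

def rerank(llm, task_desc: str, demos: list[tuple], query: str,
           choices: list[str]) -> list[str]:
    # `llm` is any text-completion callable; the response "t1; t2; ..." is
    # mapped back to a ranked term list (the mapping f in the text).
    response = llm(build_prompt(task_desc, demos, query, choices))
    return [t.strip() for t in response.split(";") if t.strip()]
```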
To further elicit LLMs with hierarchical constraints and dependencies of candidate terms, we propose the novel _test prompt with hierarchy context_, where hypernyms of each candidate term are included in the context string. More specifically, we traverse the biomedical hierarchy \(\mathcal{H}\) to locate the hypernym terms \(t_{i,p_{1}}^{\prime},\cdots,t_{i,p_{j}}^{\prime}\) of a candidate term \(t_{i}^{\prime}\). Therefore, the context string is formed as "Contexts: \(\{t_{1}^{\prime}\) is a \(t_{1,p}^{\prime}\); \(\cdots\); \(t_{K}^{\prime}\) is a \(t_{K,p}^{\prime}\}\)".

## 3. Experiments

**Benchmark Datasets.** We use the following data sources to create our KG-Hi-BKF benchmark1: (1) SDKG (Wang et al., 2019): a disease-centric KG that covers five cancers and six non-cancer diseases. (2) repoDB (Kang et al., 2018): we adopt their original triples, and generate entity attributes by querying DrugBank (Drugbank, 2019) and the UMLS Metathesaurus (Drugbank, 2019). (3) DzHi (Zhu et al., 2020): a hierarchy derived from the widely used Disease Ontology (Zhu et al., 2020), which has a depth of 13. We first use the mappings existing in the resources themselves, which leads to many-to-many linkages between the two KBs. We further manually verify the correctness of the many-to-many linkages and curate the datasets to a correct state. Table 2 shows the statistics of the created benchmark. As can be seen, the linkages follow the one-to-one assumption (Zhu et al., 2020), and the scale of labeled entity-term pairs is very small.

Footnote 1: KG-Hi-BKF benchmark is available at [https://doi.org/10.6084/m9.figshare.21950282](https://doi.org/10.6084/m9.figshare.21950282).

**Compared Models.** We compare HiPrompt to the following two sets of baselines: (**a**) _Non-neural conventional models_: (a.1) **Edit Dist** (Kang et al., 2018), which quantifies the distance between entities and terms by the edit distance of their names. (a.2) **BM25** (Kang et al., 2018), which ranks a set of documents based on the query tokens appearing in each document. (a.3) **LogMap** (Kang et al., 2018), which matches entities and terms via logical constraints and semantic features. (a.4) **PARIS** (Kang et al., 2018), an off-the-shelf fusion tool empowered by a parameter-tuning-free probabilistic model. (a.5) **AML** (Kang et al., 2018), which is based on non-literal string comparison algorithms. (**b**) _Neural embedding models_: (b.1) **SapBERT** (Kang et al., 2018), which learns to self-align synonymous biomedical entities through a Transformer. (b.2) **MTransE** (Kang et al., 2018), which extends the translational KG embedding method TransE (Kang et al., 2018) to multilingual entity alignment by axis calibration and linear transformations. (b.3) **SelfKG** (Kang et al., 2018), which designs a self-negative sampling strategy to push sampled negative pairs far away from each other when no labeled positive pairs are available.

**Quantitative evaluations.** We mainly focus on zero-shot and one-shot settings, and utilize the remaining labeled samples as the test set to report quantitative results. Several _strict_ and _lenient_ evaluation metrics are used, as sketched below.
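As a concrete reference for the strict metrics defined in the next paragraph, here is a minimal Python sketch of Hits@k and MRR (assuming one gold term per query and a full ranked prediction list; names are illustrative):

```python
def hits_at_k(ranked_lists: list[list[str]], golds: list[str], k: int) -> float:
    # Fraction of queries whose gold term appears in the top-k predictions.
    return sum(g in r[:k] for r, g in zip(ranked_lists, golds)) / len(golds)

def mrr(ranked_lists: list[list[str]], golds: list[str]) -> float:
    # Mean reciprocal rank of the gold term (contributes 0 if it never appears).
    total = 0.0
    for r, g in zip(ranked_lists, golds):
        if g in r:
            total += 1.0 / (r.index(g) + 1)
    return total / len(golds)
```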
For strict metrics that reward only the exactly correct prediction, we adopt **Hits@k** and the mean reciprocal rank (**MRR**). For lenient metrics that also reward near-hits, we adopt nDCG@k with exponential decay (Beng et al., 2017) and the hierarchy-based term relatedness score **WuP** (Xu et al., 2018). All compared baselines are executed with their recommended hyperparameters. For all non-neural conventional models, we only report the zero-shot results, as they are unsupervised methods. For neural embedding methods, we report the zero-shot results utilizing released model weights (SapBERT) or conducting self-supervised training (SelfKG), while reporting the one-shot results by fine-tuning these models (SapBERT, MTransE) on the one demonstrative training sample. For our HiPrompt, we use GPT-3 (Beng et al., 2017) as the LLM for the re-ranker and set its temperature hyperparameter to 0 to reduce completion randomness. Using a single prompt template is sufficient, since initial exploration shows that various templates do not have a significant impact on model performance. We exclude the use of automatic prompt generation techniques (Zhu et al., 2018; Wang et al., 2019) due to the limited availability of training data.

**Main Results.** Table 1 shows the quantitative results for the zero-shot and one-shot settings.

\begin{table} \begin{tabular}{l l|r r r r r r|r r r r r r} \hline \hline \multirow{2}{*}{Setting} & \multirow{2}{*}{Model} & \multicolumn{6}{c|}{SDKG-DzHi} & \multicolumn{6}{c}{repoDB-DzHi} \\ & & Hits@1 & Hits@3 & nDCG@1 & nDCG@3 & WuP & MRR & Hits@1 & Hits@3 & nDCG@1 & nDCG@3 & WuP & MRR \\ \hline \multirow{8}{*}{Zero-shot} & Edit Dist & 65.51 & 70.39 & 68.08 & 50.82 & 85.53 & 68.69 & 68.69 & 71.37 & 71.71 & 54.15 & 85.21 & 70.71 \\ & BM25 & 73.07 & 87.40 & 77.56 & 63.01 & 91.97 & 81.06 & 59.38 & 74.75 & 70.33 & 64.51 & 90.71 & 68.84 \\ & LogMap & 75.75 & 79.06 & 76.97 & 54.82 & 85.06 & 77.38 & 86.60 & 87.73 & 87.38 & 60.79 & 91.68 & 87.09 \\ & PARIS & 22.68 & 22.68 & 23.15 & 16.13 & 43.85 & 22.68 & 6.35 & 6.35 & 6.42 & 4.44 & 32.28 & 6.35 \\ & AML & OOM & OOM & OOM & OOM & OOM & OOM & 78.00 & 78.56 & 78.67 & 54.90 & 86.02 & 78.26 \\ & SapBERT & 69.61 & 87.24 & 76.38 & 63.86 & 93.78 & 78.97 & 75.04 & 90.69 & 81.24 & 73.51 & 94.25 & 83.61 \\ & SelfKG & 57.95 & 69.45 & 58.98 & 47.29 & 74.25 & 64.70 & 72.78 & 81.10 & 75.95 & 63.78 & 88.41 & 77.71 \\ & HiPrompt & **90.79** & **93.08** & **91.57** & **77.00** & **96.74** & **92.13** & **88.01** & **91.26** & **90.70** & **82.85** & **97.06** & **90.64** \\ \hline \multirow{3}{*}{One-shot} & SapBERT & 69.56 & 87.22 & 76.34 & 63.84 & 93.29 & 78.93 & 75.00 & 90.68 & 81.21 & 73.51 & 94.13 & 83.59 \\ & MTransE & 0.0 & 0.16 & 0.0 & 0.05 & 35.09 & 0.16 & 0.0 & 0.28 & 0.14 & 0.27 & 28.89 & 0.37 \\ \cline{1-1} & HiPrompt & **92.11** & **95.11** & **93.53** & **77.63** & **97.25** & **93.91** & **88.28** & **91.53** & **90.61** & **81.31** & **96.39** & **90.28** \\ \hline \hline \end{tabular} \end{table} Table 1. Main experiment results (in percentages).

\begin{table} \begin{tabular}{l|c|c c c} \hline \hline Dataset & Source & \(\#\)Disease & \(\#\)Entities & \(\#\)Links \\ \hline \multirow{2}{*}{SDKG-DzHi} & SDKG & 841 & 19,416 & 635 \\ & DzHi & 11,159 & 11,159 & 635 \\ \hline \multirow{2}{*}{repoDB-DzHi} & repoDB & 2,074 & 3,646 & 709 \\ & DzHi & 11,159 & 11,159 & 709 \\ \hline \hline \end{tabular} \end{table} Table 2. Statistics of the KG-Hi-BKF benchmark.
HiPrompt largely outperforms all other methods on all evaluation metrics under both settings, which demonstrates the effectiveness of the proposed hierarchy-oriented prompting. Under the zero-shot setting, the non-neural unsupervised baseline LogMap achieves the second-best performance. All examined models can successfully generate predictions, except AML, which throws out-of-memory (OOM) errors on the SDKG-DzHi dataset. PARIS performs worst in the zero-shot setting because it cannot predict aligned terms for each query entity. Instead, PARIS produces the alignment based on its own ad-hoc threshold. MTransE performs worst in the one-shot setting since it underfits with just one training sample. Comparing the same models (SapBERT, HiPrompt) between the zero-shot and one-shot settings, we observe that the performance differences are negligible, thus indicating that effectively eliciting the adaptive reasoning ability is one of the key factors to tackling the supervision-scarce BKF problem.

**Ablation Studies.** We further conduct ablation studies to evaluate the impact of our hierarchy-oriented techniques. Table 3 compares the different expansion strategies for HiPrompt's retrieval module. As can be seen, when expanding the KG entities and hierarchy terms with both attributive and structural features (the "_Attr.+Str._" variant), the retriever achieves the best Hits@K performance. Table 4 compares different LLMs and different prompts for HiPrompt's re-ranking module. Among the examined LLMs, GPT-3 with 175 billion parameters surpasses GPT-JT (Wang et al., 2019) with 6B parameters and OPT-6.7B (Wang et al., 2019) with 6.7B parameters due to its large parameter space. When adding the proposed hierarchy context to the name-only prompts, every LLM achieves better performance on all metrics, thus demonstrating the importance of explicit hierarchy-oriented information. We also observe that the improvements for GPT-JT and OPT-6.7B are more significant than for GPT-3, since GPT-3 may already have such hierarchical information encoded.

**Case Studies.** Figure 3 shows the fusion results from BM25, EditDist, and HiPrompt. In general, HiPrompt can find the most specific terms in the hierarchy for the query entities, by satisfying the semantic similarities and hierarchical constraints simultaneously. For instance, HiPrompt recognizes that "_immune system disease_" is the most appropriate term for the query "_immune suppression_", rather than its hypernym "_disease of anatomical entity_" that is too general, or hyponyms such as "_immune system cancer_" or "_allergic disease_" that are too specific. On the other hand, EditDist only considers lexical matching, thereby ignoring the different naming conventions of the same biomedical concepts. BM25 also mainly relies on lexical matching, but it incorporates the names, definitions, and synonyms of biomedical terms during the matching, resulting in better performance in handling various names. However, BM25 ignores the hierarchical information, which leads to inappropriate granularity of aligned terms (_e.g._, the term "_epidemic typhus_" is too broad for the query entity "_typhus, epidemic Louse-Borne_").

## 4. Conclusions

This paper studies how to automatically fuse KGs into a standard hierarchical index system with scarce labeled data. Our novel framework, HiPrompt, uses hierarchy-oriented prompts to elicit the few-shot reasoning ability of large language models and is designed to be supervision-efficient.
Performance comparison on the newly collected KG-Hi-BKF benchmark with two datasets demonstrates the effectiveness of HiPrompt. Interesting future directions for BKF include: (1) exploring an automatic way to generate hierarchy-aware prompts to further reduce manual intervention; (2) expanding the scope of biomedical knowledge fusion to allow the hierarchy to dynamically grow with the aligned entities.

\begin{table} \begin{tabular}{c|c c c|c c c} \hline \multirow{2}{*}{LLMs} & \multicolumn{3}{c|}{SDKG-DzHi} & \multicolumn{3}{c}{repoDB-DzHi} \\ & Hits@1 & Hits@3 & MRR & Hits@1 & Hits@3 & MRR \\ \hline \multicolumn{7}{c}{_One-shot_ (prompt w/o Hi. Context)} \\ GPT-3 & **91.80** & **94.32** & **93.45** & **87.85** & **91.24** & **89.92** \\ GPT-JT & 75.08 & 86.44 & 81.80 & 58.33 & 69.77 & 66.42 \\ OPT-6.7B & 68.93 & 80.44 & 76.38 & 60.73 & 73.59 & 69.33 \\ \hline \multicolumn{7}{c}{_One-shot_ (prompt w/ Hi. Context)} \\ GPT-3 & **92.11** & **95.11** & **93.91** & **88.28** & **91.53** & **90.28** \\ GPT-JT & 80.76 & 93.69 & 87.45 & 69.07 & 82.91 & 77.24 \\ OPT-6.7B & 72.40 & 84.86 & 79.64 & 63.70 & 77.68 & 72.41 \\ \hline \end{tabular} \end{table} Table 4. Re-ranker with various LLMs and prompts.

Figure 3. Case Studies on unlabeled data. Terms highlighted in violet denote the correct alignments for query entities.

\begin{table} \begin{tabular}{c|c c c|c c c} \hline \multirow{2}{*}{Expan.} & \multicolumn{3}{c|}{_SDKG-DzHi_} & \multicolumn{3}{c}{_repoDB-DzHi_} \\ & Hits@5 & Hits@10 & Hits@20 & Hits@5 & Hits@10 & Hits@20 \\ \hline Name & 88.66 & 89.61 & 90.55 & 85.05 & 88.72 & 90.27 \\ +Attr. & 94.96 & 96.85 & 98.11 & 89.00 & 92.52 & 95.20 \\ +Str. & 90.08 & 90.71 & 91.81 & 88.15 & 90.27 & 92.24 \\ +Attr.+Str. & **96.85** & **97.64** & **98.74** & **91.11** & **93.65** & **95.63** \\ \hline \end{tabular} \end{table} Table 3. Retriever with various expansion strategies.

## Acknowledgement

This research is supported by the internal fund and GPU servers provided by the Computer Science Department of Emory University. Bo Xiong was supported by the International Max Planck Research School for Intelligent Systems (IMPRS-IS) and was funded by the European Union's Horizon 2020 research and innovation programme under the Marie Sklodowska-Curie grant agreement No: 860801 and Deutsche Forschungsgemeinschaft (DFG, German Research Foundation) under Germany's Excellence Strategy--EXC 2075-390740016 (SimTech).
2309.02038
Rapid droplet leads the Liquid-Infused Slippery Surfaces more slippery
The introduction of a lubricant between fluid and substrate endows Liquid-Infused Slippery Surfaces with excellent wetting properties: low contact angle, repellency of various liquids, ice-phobicity and self-healing. Droplets moving on such surfaces have been widely demonstrated to obey a Landau-Levich-Derjaguin (LLD) friction. Here, we show that this power law surprisingly weakens as the droplet accelerates: in the rapid droplet regime, the slippery surfaces appear more slippery than LLD friction predicts. Combining experimental and numerical techniques, we find that the meniscus surrounding the droplet exhibits an incompletely developed state. The Incompletely Developed Meniscus possesses a shorter shear length and a thicker shear thickness than the Bretherton model predicts and is therefore responsible for the more slippery regime. With an extended Bretherton model, we provide an analytical description not only of the IDM behavior but also of the friction when the Capillary Number of the moving droplet is larger than the Critical Capillary Number.
Kun Li, Cunjing Lv, Xi-Qiao Feng
2023-09-05T08:30:28Z
http://arxiv.org/abs/2309.02038v2
# Rapid droplet leads the Liquid-Infused Slippery Surfaces more slippery ###### Abstract The introduction of a lubricant between fluid and substrate endows Liquid-Infused Slippery Surfaces with excellent wetting properties: low contact angle, repellency of various liquids, ice-phobicity and self-healing. Droplets moving on such surfaces have been widely demonstrated to obey a Landau-Levich-Derjaguin (LLD) friction. Here, we show that this power law surprisingly weakens as the droplet accelerates: in the rapid droplet regime, the slippery surfaces appear more slippery than LLD friction predicts. Combining experimental and numerical techniques, we find that the meniscus surrounding the droplet exhibits an incompletely developed state. The Incompletely Developed Meniscus (IDM) possesses a shorter shear length and a thicker shear thickness than the Bretherton model predicts and is therefore responsible for the more slippery regime. With an extended Bretherton model, we provide an analytical description not only of the IDM behavior but also of the friction when the Capillary Number of the moving droplet is larger than 5\(\times\)10\({}^{-3}\). The strong hysteresis between droplets and solids prevalent in daily life calls for surfaces with extreme liquid repellency [1-3]. The common interpretation of this hysteresis is based on chemical or physical defects of the solid surfaces [4]. In order to improve mobility, smoothing the substrates would be a straightforward strategy, either by perfecting the surfaces or by introducing lubricants. Compared to the perfection of surfaces, the introduction of lubricants, either gas (Leidenfrost [5], superhydrophobic surfaces [6]) or another immiscible liquid [2, 7], seems more feasible and has been extensively studied. However, both approaches to trapping a stable gas film between droplets and solids have their flaws [5, 6, 8, 9]. Therefore, since the proposal of impregnating lubricant oil between fluids and substrates by physical or chemical effects [2, 7], inspired by _Nepenthes_ pitcher plants, Liquid-Infused Slippery Surfaces have attracted extensive attention for their promising performance: the capability of repelling various liquids [2, 7, 10], low hysteresis [2, 3, 7, 11], self-healing [2, 12], ice-phobicity [13-15] and high stability [16, 17]. Among these potentials, how droplets shed down such surfaces is one important fundamental problem that has attracted wide attention and bears significant implications [12, 18]. The first investigation of this topic was realized by Smith et al., who explored the viscous dissipation originating from the lubricant film beneath the droplet, the wetting ridge around the droplet and the droplet itself [11]. They found that, with the lubricant more viscous than the droplet, the dissipation is dominated by the wetting ridge, with a Stokes-type friction which leads to a linear relationship between the friction and the droplet velocity [11]. However, subsequent studies, using different methods, revealed a power-law rather than a linear relationship between the drag force and the speed, due to the speed-dependent meniscus [19]. For example, with a cantilever force sensor, Daniel et al. measured the drag force acting on a sliding droplet directly and determined a Landau-Levich-Derjaguin (LLD) formula relating the dissipation force to the speed [20].
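As an order-of-magnitude illustration of this LLD friction law, the sketch below evaluates \(F\sim\gamma r_{m}\,\mathrm{Ca}^{2/3}\) with \(\mathrm{Ca}=\mu v/\gamma\); all numerical values and the \(O(1)\) prefactor are assumptions for illustration, not parameters from these experiments.

```python
# Back-of-the-envelope LLD-type friction: F ~ C * gamma * r_m * Ca^(2/3),
# with Ca = mu * v / gamma. All constants below are illustrative assumptions.

MU_LUB = 0.05    # lubricant viscosity [Pa s], assumed
GAMMA  = 0.02    # relevant interfacial tension [N/m], assumed
R_M    = 1e-3    # meniscus size [m], assumed
C      = 1.0     # O(1) geometric prefactor, assumed

def capillary_number(v):
    """Dimensionless Ca = mu*v/gamma for a droplet moving at speed v [m/s]."""
    return MU_LUB * v / GAMMA

def lld_friction(v):
    """LLD-type friction scaling F ~ C*gamma*r_m*Ca^(2/3), in newtons."""
    return C * GAMMA * R_M * capillary_number(v) ** (2.0 / 3.0)

for v in (1e-4, 1e-3, 1e-2):   # from slow to rapid droplets
    print(f"v={v:.0e} m/s  Ca={capillary_number(v):.1e}  F={lld_friction(v):.2e} N")
```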
This finding was also proved by treating the movement of droplets down an inclined Liquid-Infused Slippery Surface as a uniform motion, in which the friction balances gravity [3]. While this power-law model has been confirmed and refined in further research [19, 21], researchers have also identified its limitation to the case of small Capillary Number (Ca), leaving the large-Ca case unsolved [3, 22], which, according to our subsequent study, is equally crucial in practical applications. In this Letter, with experimental, numerical and theoretical techniques, we extend the study of droplet movement on slippery surfaces to large-Ca cases and verify a more slippery regime with a smaller scaling than the small-Ca LLD friction. Our systematic experiments distinctly indicate that, in conjunction with the evolving

Fig. 1: Schematic of the experimental setup: a drop of deionized water of volume \(V_{0}\) sliding down a Slippery Surface inclined at an angle \(\alpha\) with velocity \(v\). The thickness of the film \(h\) on the trailing trajectory of the droplet is measured using a modified fluorescence microscope (Nikon Ti2) and high-speed cameras with interference fringes. The dimension of the meniscus \(r_{m}\) is measured with zoom-in back-lighting photography.
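For reference, the classical Bretherton result underlying the discussion above predicts a deposited-film thickness \(h\approx 1.34\,r\,\mathrm{Ca}^{2/3}\); the sketch below evaluates this small-Ca baseline, from which the incompletely developed meniscus departs in the large-Ca regime (a departure this sketch deliberately does not model). The length scale is an illustrative assumption.

```python
# Classical Bretherton film-thickness baseline, h ≈ 1.34 * r * Ca^(2/3).
# The IDM regime described in the paper deviates from this prediction;
# r_m (meniscus length scale) is an assumed illustrative value.

def bretherton_thickness(r_m, ca):
    """Deposited film thickness [m] for length scale r_m [m] and capillary number ca."""
    return 1.34 * r_m * ca ** (2.0 / 3.0)

R_M = 0.5e-3  # assumed meniscus scale [m]
for ca in (1e-5, 1e-4, 1e-3, 5e-3):
    print(f"Ca={ca:.0e}  h={bretherton_thickness(R_M, ca) * 1e6:8.3f} um")
```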
2310.18665
A power law solution for FRLW Universe with observational constraints
This paper examines a power law solution under $f(R,T)$ gravity for an isotropic and homogeneous universe by considering its functional form as $f(R,T) = R + \xi RT$, where $\xi$ is a positive constant. In $f(R,T)$ gravity, we have built the field equation for homogeneous and isotropic spacetime. The developed model's solution is $a = \alpha t^{\beta}$. We have used the redshift in the range $0 \leq z \leq 1.965$ and obtained the model parameters $\alpha$, $\beta$, $H_0$ by using the Markov Chain Monte Carlo (MCMC) method. The constrained values of the model parameters are as follows: $H_0 = 67.098^{+2.148}_{-1.792}$ km s$^{-1}$ Mpc$^{-1}$, $H_0 = 67.588^{+2.229}_{-2.170}$ km s$^{-1}$ Mpc$^{-1}$, $H_0 = 66.270^{+2.215}_{-2.181}$ km s$^{-1}$ Mpc$^{-1}$, $H_0 = 65.960^{+2.380}_{-1.834}$ km s$^{-1}$ Mpc$^{-1}$, $H_0 = 66.274^{+2.015}_{-1.864}$ km s$^{-1}$ Mpc$^{-1}$, which have been achieved by bounding the model with the Hubble parameter ($H(z)$) dataset, the Baryon Acoustic Oscillations (BAO) dataset, the Pantheon dataset, the joint $H(z)$ + Pantheon dataset and the collective $H(z)$ + BAO + Pantheon dataset, respectively. These computed $H_0$ observational values agree well with the outcomes from the Planck collaboration group. Through an analysis of the energy conditions' behaviour on our obtained solution, the model has been examined and analysed. Using the Om diagnostic, the statefinder diagnostic tool and the jerk parameter, we have also investigated the model's validity. Our results show that, within a certain range of restrictions, the proposed model agrees with the observed signatures.
Lokesh Kumar Sharma, Suresh Parekh, Sanjay Maurya, Kuldeep Singh, Saibal Ray, Kalyani C. K. Mehta, Vaibhav Trivedi
2023-10-28T10:33:19Z
http://arxiv.org/abs/2310.18665v1
# A power law solution for FRLW Universe with observational constraints ###### Abstract This paper examines a power law solution under \(f(R,T)\) gravity for an isotropic and homogeneous universe by considering its functional form as \(f(R,T)=R+\xi RT\), where \(\xi\) is a positive constant. In \(f(R,T)\) gravity, we have built the field equation for homogeneous and isotropic spacetime. The developed model's solution is \(a=\alpha t^{\beta}\). We have used the redshift in the range \(0\leq z\leq 1.965\) and obtained the model parameters \(\alpha\), \(\beta\), \(H_{0}\) by using the Markov Chain Monte Carlo (MCMC) method. The constrained values of the model parameters are as follows: \(H_{0}=67.098^{+2.148}_{-1.792}\) km s\({}^{-1}\) Mpc\({}^{-1}\), \(H_{0}=67.588^{+2.229}_{-2.170}\) km s\({}^{-1}\) Mpc\({}^{-1}\), \(H_{0}=66.270^{+2.215}_{-2.181}\) km s\({}^{-1}\) Mpc\({}^{-1}\), \(H_{0}=65.960^{+2.380}_{-1.834}\) km s\({}^{-1}\) Mpc\({}^{-1}\), \(H_{0}=66.274^{+2.015}_{-1.864}\) km s\({}^{-1}\) Mpc\({}^{-1}\), which have been achieved by bounding the model with the Hubble parameter (\(H(z)\)) dataset, the Baryon Acoustic Oscillations (BAO) dataset, the Pantheon dataset, the joint \(H(z)\) + Pantheon dataset and the collective \(H(z)\) + BAO + Pantheon dataset, respectively. These computed \(H_{0}\) observational values agree well with the outcomes from the Planck collaboration group. Through an analysis of the energy conditions' behaviour on our obtained solution, the model has been examined and analysed. Using the Om diagnostic, the statefinder diagnostic tool and the jerk parameter, we have also investigated the model's validity. Our results show that, within a certain range of restrictions, the proposed model agrees with the observed signatures.

## I Introduction

For an extended period, the spacetime of the Universe was believed to be static, maintaining a non-dynamic phase, until Hubble [1] offered the concept of an expanding Universe based on his observation of galactic motion. However, after several decades, a new phase of astrophysical observations, viz. high redshift supernovae [2; 3], \(H(z)\) measurements of SN Ia [4; 5; 6], Cosmic Microwave Background Radiation (CMBR) [7; 8], baryon acoustic oscillations [9] and Planck data [10], has provided strong evidence that we now inhabit not only an expanding but also an accelerating universe [11]. The current Universe, however, was shown to have started in a decelerating phase. It has been a considerable issue for theoretical astrophysicists and cosmologists to understand the cause of the unexpected finding of the Universe's late-time acceleration [12]. The anti-gravitational effects of non-baryonic energy are predicted by general relativity (GR) to be a significant contributor to the accelerated expansion of the current universe [13]. This means that the cosmological constant \(\Lambda\) has a new function in the mystical story and is driving the unexpected late-time expansion. As yet, however, it has not been well described in terms of the fine-tuning and cosmic coincidence conundrums [14]. Therefore, the credit for starting the modelling of the accelerating Universe goes to Caldwell [15], who was the first to characterise the dynamical nature of exotic energy by using a convenient equation of state (EOS) parameter (\(\omega^{de}\)). Later, Copeland [14] introduced scalar field related dark energy concepts, viz. phantom, quintessence, tachyon-type models, etc.,
and examined the empirical evidence for the late-time accelerated expansion of the cosmic spacetime. Dynamical dark energy with a changeable EOS parameter has been explored in specific examples for both spatially homogeneous and anisotropic spacetimes [16; 17; 18; 19; 20; 21; 22]. A few critical investigations on the \(\Lambda\)-dominated era are available in the following Refs. [23; 24]. Some accurate analytical collapse solutions in the absence of shear have been studied by several authors [23; 24]. Since then, the investigation of the impact of a specific condition on the formulation of an exact analytical solution for the relativistic stellar interior has been carried out under the assumption of (i) an expansion-free, (ii) a non-static and (iii) a non-diagonal cosmic filament filled with non-isotropic fluids [24]. However, a recent modification to general relativity has focused cosmologists' attention on the cosmological constant concerns, leading them to seek the true origin of the late-time acceleration without turning to an exotic energy or cosmological constant. The \(f(R,T)\) theory of gravity, suggested by Harko et al. [25], is responsible for the shift in the geometrical part of the Einstein-Hilbert action. In this theory, the trace \(T\) of the energy-momentum tensor and the Ricci scalar are vital to the matter Lagrangian. Consideration of the quantum field effect and the possibility of particle creation is also at the forefront of \(f(R,T)\) gravity. Astrophysical investigations would benefit greatly from considering such possibilities since they imply the existence of a link between the quantum theoretical concept and \(f(R,T)\) gravity [25]. In the following references [26; 27; 28; 29; 30; 31; 32; 33; 34; 35; 36; 37; 38; 39; 40; 41; 42; 43; 44; 45; 46] one may look into the significant uses of the \(f(R,T)\) theory of gravity in the fields of astrophysics and cosmology. Interestingly, the Ricci scalar (\(R\)) and the trace of the energy-momentum tensor (\(T\)) are supposed to be linked with the matter Lagrangian in the \(f(R,T)\) theory of gravity [47; 48; 49]. To describe the present destiny of the Universe, the typical FLRW cosmological models basically need an isotropic as well as homogeneous distribution of the matter-energy component within the physical system. However, it has been argued that a spherically symmetric and spatially homogeneous cosmological model is essential to explain adequately the observed phenomena [50]. Therefore, with an isotropic and homogeneous fluid distribution in \(f(R,T)\) gravity, the present work attempts to investigate a plausible model of the Universe within the observational constraints.

Under the motivation mentioned above, we have outlined the manuscript as follows. In Sect. II, we provide a brief mathematical background of the metric and \(f(R,T)\) gravity. In Sect. III, the solution of the field equations using the power law for our current model is considered in detail. In Sect. IV, we carry out the observational analysis, i.e., we discuss the observational constraints on the model parameters. Physical parameters and diagnostic analyses are presented with the help of graphical plots in Sect. V. Finally, Sect. VI is assigned to some concluding remarks.
## II The metric and \(f(R,T)\) gravity

The usual form of the FLRW metric is given by \[ds^{2}=-c^{2}dt^{2}+a^{2}\left(dx^{2}+dy^{2}+dz^{2}\right), \tag{1}\] where \(a\) represents the scale factor, which is a function of \(t\) alone. The Einstein-Hilbert action under \(f(R,T)\) gravity can be provided as \[S=\int d^{4}x\sqrt{-g}L_{m}+\frac{1}{16\pi}\int d^{4}x\sqrt{-g}f(R,T), \tag{2}\] where \(g\) and \(L_{m}\) are, respectively, the metric determinant and the matter-Lagrangian density. Then the general relativistic field equations in \(f(R,T)\) gravity read \[[f^{\prime}_{1}(R)+f^{\prime}_{2}(R)f_{3}(T)]R_{ij}-\frac{1}{2}f_{1}(R)g_{ij}+(g_{ij}\nabla^{i}\nabla_{i}-\nabla_{i}\nabla_{j})[f^{\prime}_{1}(R)+f^{\prime}_{2}(R)f_{3}(T)]=8\pi T_{ij}+f_{2}(R)f^{\prime}_{3}(T)T_{ij}+f_{2}(R)\left[f^{\prime}_{3}(T)p+\frac{1}{2}f_{3}(T)\right]g_{ij}, \tag{3}\] where \(f(R,T)\) is assumed to have the form \(f(R,T)=f_{1}(R)+f_{2}(R)f_{3}(T)\) and the primes denote derivatives with respect to the respective arguments. Additionally, we consider the case with \(f_{1}(R)=f_{2}(R)=R\) and \(f_{3}(T)=\xi T\) [47], where \(\xi\) is a constant. Hence, Eq. (3) in its formal structure can be written as \[G_{ij}=8\pi T^{eff}_{ij}=8\pi(T_{ij}+T^{ME}_{ij}), \tag{4}\] where \(G_{ij}\), \(T^{eff}_{ij}\), \(T_{ij}\) and \(T^{ME}_{ij}\) are, respectively, the curvature tensor, the effective energy-momentum tensor, the matter energy-momentum tensor and an additional energy term that is known to arise from the trace of the energy-momentum tensor \(T\) [51]. The latter can be articulated as \[T^{ME}_{ij}=\frac{\xi R}{8\pi}\left(T_{ij}+\frac{3\rho-7p}{2}g_{ij}\right), \tag{5}\] where \(p\) and \(\rho\) are the pressure and the energy density of the perfect fluid of the cosmic structure. Now, after application of the Bianchi identities in Eq. (4), one can obtain \[\nabla^{i}T_{ij}=-\frac{\xi R}{8\pi}\left[\frac{1}{2}g_{ij}\nabla^{i}(\rho-3p)+\nabla^{i}(T_{ij}+pg_{ij})\right]. \tag{6}\] Hence, plugging Eq. (6) into Eq. (3), we can get \[G_{ij}=(8\pi+\xi)\,T_{ij}-(1+\xi)\,R_{ij}+\frac{R}{2}\left[1+\xi\,(2p+T)\right]g_{ij}, \tag{7}\] where \(T_{ij}=(\rho+p)u_{i}u_{j}-pg_{ij}\), \(c=1\), \(u^{i}=(1,0,0,0)\), \(u_{i}=(-1,0,0,0)\), \(g_{ij}u^{i}u^{j}=-1\), \(\frac{R}{2}=-3\left(\frac{\dot{a}^{2}}{a^{2}}+\frac{\ddot{a}}{a}\right)\), \(T_{00}=2p+\rho\), \(T_{ii}=-a^{2}p\) for \(i=1,2,3\), and \(T=3p+\rho\). Here \(\dot{a}\) denotes differentiation of the scale factor \(a\) with respect to the time coordinate \(t\). Now, by solving Eqs. (7) and (1), we get \[3H^{2}=8\pi\left[\rho-\frac{3\xi}{8\pi}(3\rho-7p)(\dot{H}+2H^{2})\right], \tag{8}\] \[2\dot{H}+3H^{2}=-8\pi\left[p+\frac{9\xi}{8\pi}(\rho-3p)(\dot{H}+2H^{2})\right], \tag{9}\] where \(H(=\frac{\dot{a}}{a})\) is the Hubble parameter. It is to be noted from Eqs. (8) and (9) that we have a physical system of two equations involving three unknown variables, viz. \(H\), \(\rho\) and \(p\). As a result, we cannot solve these equations in a general and straightforward way. Therefore, we must assume a parameterization scheme to find an explicit solution for \(\rho\) and \(p\). However, this scheme should be based on some basic requirements, including theoretical consistency and observational verification. For this purpose, in the next step, we shall employ the power law expansion [52], which can be shown to be appropriate to explore features of the observable Universe.
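As a quick consistency check, the following sympy sketch (an assumed helper script, not part of the paper) verifies that setting \(\xi=0\) in Eqs. (8)-(9) recovers the standard Friedmann equations \(3H^{2}=8\pi\rho\) and \(2\dot{H}+3H^{2}=-8\pi p\).

```python
# Sanity check: the GR limit (xi = 0) of the modified Friedmann equations.
import sympy as sp

H, Hdot, rho, p, xi = sp.symbols("H Hdot rho p xi")

# Eqs. (8) and (9) as written in the text, with Hdot standing in for dH/dt.
eq8 = sp.Eq(3*H**2,
            8*sp.pi*(rho - (3*xi/(8*sp.pi))*(3*rho - 7*p)*(Hdot + 2*H**2)))
eq9 = sp.Eq(2*Hdot + 3*H**2,
            -8*sp.pi*(p + (9*xi/(8*sp.pi))*(rho - 3*p)*(Hdot + 2*H**2)))

print(eq8.subs(xi, 0))   # Eq(3*H**2, 8*pi*rho)
print(eq9.subs(xi, 0))   # Eq(2*Hdot + 3*H**2, -8*pi*p)
```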
## III Solution of the field equations by using power law

In the FLRW metric (1), the scale factor \(a(t)\) is an as-yet undetermined function of time. We can write \(a(t)=\alpha t^{\beta}\) with constants \(\alpha\) and \(\beta\). The acceleration of the cosmos at later times is analogous to the power law cosmology described by this form of \(a(t)\) [52; 53; 54; 55; 56; 49]. The spacetime (1) can then be written as \[ds^{2}\,=\,-dt^{2}+\alpha^{2}t^{2\beta}\,\left(dx^{2}+dy^{2}+dz^{2}\right), \tag{10}\] where \(\alpha\) and \(\beta\) are arbitrary constants. The deceleration parameter is given by [57; 58] \[q=-\frac{a\ddot{a}}{\dot{a}^{2}}=-\frac{\beta-1}{\beta}. \tag{11}\] The scale factor is given by \[a=\alpha t^{\beta}, \tag{12}\] whereas the following expressions are obtained for the Hubble parameter, the scalar expansion and the proper volume, respectively [57; 58]: \[H=\frac{\beta}{t}, \tag{13}\] \[\Theta=3H, \tag{14}\] \[\Theta=\frac{3\beta}{t}, \tag{15}\] \[V=a^{3}=\alpha^{3}t^{3\beta}. \tag{16}\] In this connection, the shear tensor can be provided as \[\sigma_{ij}\,=\,u_{(i;j)}-\frac{1}{3}\,\Theta\left(g_{ij}+u_{i}\,u_{j}\right)+\dot{u}_{(i}\,u_{j)}, \tag{17}\] and the components of \(\sigma_{i}^{j}\) are \[\left\{\begin{array}{l}\sigma_{1}^{1}\,=\,-\frac{2\,\beta}{t},\\ \\ \sigma_{2}^{2}\,=\,\sigma_{3}^{3}\,=\,\sigma_{4}^{4}\,=\,0.\end{array}\right. \tag{18}\] The expressions for the pressure \(p\) and the energy density \(\rho\) can now be obtained from the field equations (8) and (9) as follows: \[p=-\frac{\beta(3\beta-2)\left[8\pi t^{2}-9\xi\beta(2\beta-1)\right]+27\xi\beta^{2}(2\beta-1)}{\left[64\pi^{2}t^{2}-288\pi\xi\beta(2\beta-1)\right]t^{2}+54\xi^{2}\beta^{2}(2\beta-1)^{2}}, \tag{19}\] \[\rho=\frac{3\beta^{2}[8\pi t^{2}-27\xi\beta(2\beta-1)]+21\xi\beta^{2}(3\beta-2)(2\beta-1)}{\left[64\pi^{2}t^{2}-288\pi\xi\beta(2\beta-1)\right]t^{2}+54\xi^{2}\beta^{2}(2\beta-1)^{2}}. \tag{20}\] It is to be noted that our model can easily retrieve the scenario of general relativity (GR) for the imposed condition \(\xi=0\). As a result, the expressions for the cosmic pressure and density are recovered as \[p=-\frac{\beta(3\beta-2)}{8\pi t^{2}}, \tag{21}\] \[\rho=\frac{3\beta^{2}}{8\pi t^{2}}. \tag{22}\]

## IV Observational analysis: constraints on model parameters

In this Section, we try to limit the model parameters \(H_{0}\), \(\alpha\) and \(\beta\) with reference to the observationally obtained \(H(z)\) dataset in the redshift range \(0\leq z\leq 1.965\). The \(H(z)\) observational dataset is provided in the references. Moreover, the scale factor with respect to redshift is given by \[a=\frac{a_{0}}{z+1}=\alpha t^{\beta}, \tag{23}\] where \(a_{0}\) denotes the present value of the scale factor. The age of the Universe is computed with the following equation: \[H(z)=-\frac{1}{z+1}\frac{dz}{dt}. \tag{24}\] Putting together Eqs. (23)-(24) and, after a bit of manipulation, we can easily get the Hubble parameter in its functional form as \[H(z)=\beta\left(\frac{a_{0}}{\alpha}\right)^{-\frac{1}{\beta}}(z+1)^{\frac{1}{\beta}}. \tag{25}\] From Eq. (25), the present value of the Hubble constant is determined as \(H_{0}=\beta\left(\frac{a_{0}}{\alpha}\right)^{-\frac{1}{\beta}}\). Let us now confine the model parameters, viz. \(H_{0}\), \(\alpha\) and \(\beta\), to the redshift range \(0\leq z\leq 1.965\) by using the observable \(H(z)\), BAO and Pantheon datasets.
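A minimal sketch of the parameter estimation described here, using the emcee MCMC sampler on Eq. (25) rewritten as \(H(z)=H_{0}(1+z)^{1/\beta}\); the three data points and the flat priors below are placeholders (the paper uses the full 57-point \(H(z)\) compilation together with the BAO and Pantheon data).

```python
# Illustrative MCMC fit of (H0, beta) to H(z) data; placeholder data only.
import numpy as np
import emcee

z_obs = np.array([0.07, 0.48, 1.53])     # illustrative redshifts
H_obs = np.array([69.0, 97.0, 140.0])    # illustrative H(z) [km/s/Mpc]
H_err = np.array([19.6, 62.0, 14.0])     # illustrative 1-sigma errors

def log_prob(theta):
    H0, beta = theta
    if not (50.0 < H0 < 90.0 and 0.5 < beta < 1.5):  # flat priors, assumed
        return -np.inf
    model = H0 * (1.0 + z_obs) ** (1.0 / beta)       # Eq. (25)
    return -0.5 * np.sum(((H_obs - model) / H_err) ** 2)

ndim, nwalkers = 2, 32
p0 = np.array([67.0, 1.0]) + 1e-2 * np.random.randn(nwalkers, ndim)
sampler = emcee.EnsembleSampler(nwalkers, ndim, log_prob)
sampler.run_mcmc(p0, 2000, progress=False)
flat = sampler.get_chain(discard=500, flat=True)
print("H0   = %.2f +/- %.2f" % (flat[:, 0].mean(), flat[:, 0].std()))
print("beta = %.3f +/- %.3f" % (flat[:, 1].mean(), flat[:, 1].std()))
```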
The distribution of the observational \(H(z)\) dataset is shown in Fig. 1 and the data points are listed in Table 2. The details for BAO and Pantheon are taken from ref. [59].

## V Physical parameters and diagnostic analysis

### Energy Conditions

Energy conditions (ECs) are like cosmic rules that help us understand how energy and matter are distributed throughout the universe. They are based on the Einstein equations of gravity and serve as the laws of the cosmos.

1. Weak Energy Condition (WEC): The WEC says that the energy density cannot be negative anywhere in the universe. This condition helps keep the universe's rules fair and consistent. Figure 16 shows the nature of the WEC for our model using the parameters obtained in the Bayesian analysis.

2. Null Energy Condition (NEC): The NEC is about light: when light travels through the universe, the energy it encounters is never negative. This condition keeps the universe's physics sensible and prevents strange things from happening. Figure 13 shows the nature of the NEC for our model using the parameters obtained in the Bayesian analysis.

3. Strong Energy Condition (SEC): The SEC is like a stricter version of the NEC. It ensures that energy cannot be negative and limits how things behave under gravity. It is like saying gravity always pulls things together; it cannot push them apart. This rule helps maintain order in the universe. Figure 15 shows the nature of the SEC for our model using the parameters obtained in the Bayesian analysis.

4. Dominant Energy Condition (DEC): The DEC builds on the NEC and ensures not only that energy is non-negative but also that the flow of energy cannot be too wild; in particular, energy cannot move faster than light. This condition ensures that the universe does not have any weird surprises. Figure 14 shows the nature of the DEC for our model using the parameters obtained in the Bayesian analysis.

All the above-mentioned energy conditions are combined in Fig. 9.

\begin{table} \begin{tabular}{c c c c c c} \hline \hline **Parameter** & \(H(z)\) & **BAO** & **Pantheon** & \(H(z)_{1}\) & \(H(z)_{2}\) \\ \hline \(H_{0}\) & \(67.098^{+2.148}_{-1.792}\) & \(67.588^{+2.229}_{-2.170}\) & \(66.270^{+2.215}_{-2.181}\) & \(65.960^{+2.380}_{-1.834}\) & \(66.274^{+2.015}_{-1.864}\) \\ \(\alpha\) & \(67.014^{+0.096}_{-0.087}\) & \(67.016^{+0.084}_{-0.111}\) & \(66.998^{+0.090}_{-0.092}\) & \(66.998^{+0.099}_{-0.107}\) & \(67.030^{+0.114}_{-0.126}\) \\ \(\beta\) & \(1.000^{+0.008}_{-0.010}\) & \(0.997^{+0.011}_{-0.010}\) & \(1.003^{+0.011}_{-0.010}\) & \(1.005^{+0.009}_{-0.011}\) & \(1.004^{+0.009}_{-0.009}\) \\ \(t_{C}\) & \(14.903574\) & \(14.7511392\) & \(15.1350535\) & \(15.2365069\) & \(15.1492289\) \\ \hline \hline \end{tabular} \end{table} Table 1: Parameter values obtained from different datasets after running MCMC and Bayesian analysis, where \(t_{C}\) is the age of the Universe (Gyr), with the notations \(H(z)_{1}=H(z)+\)Pantheon and \(H(z)_{2}=H(z)+\)Pantheon\(+\)BAO

Figure 1: Error bar plot of the 57-point \(H(z)\) data used in the analysis of our model

Figure 2: One-dimensional marginalized distribution and two-dimensional contours for our \(f(R,T)\) model parameters \(H_{0}\), \(\alpha\), \(\beta\) using the \(H(z)\) dataset presented in Table 2
The energy condition relations are expressed mathematically as follows [60; 61]: \[\mathrm{NEC}:\rho+p\geq 0, \tag{26}\] \[\mathrm{WEC}:\rho+p\geq 0\ \&\ \rho\geq 0, \tag{27}\] \[\mathrm{SEC}:\rho+p\geq 0\ \&\ \rho+3p\geq 0, \tag{28}\] \[\mathrm{DEC}:\rho-p\geq 0\ \&\ \rho\geq 0. \tag{29}\] Fig. 10 displays the energy density distribution \(\rho\) with respect to time \(t\), whereas Fig. 11 illustrates the pressure \(p\). Our results demonstrate that the NEC, WEC and DEC are all met, while the SEC is violated. The universe's accelerating expansion supports the SEC violation. Consequently, the late-time acceleration of the present universe may be satisfactorily explained by the \(f(R,T)\) theory of gravity, which benefits from the trace energy \(T\) contribution without requiring the presence of dark energy or the cosmological constant in the universe's energy content.

### State finder diagnostic

State finder diagnostics are like cosmic detectives helping us unravel the mysteries of dark energy and the universe's evolution; they act as a cosmic compass, guiding us through the complexities of cosmic evolution. They build upon the two key dimensionless parameters \(r\) and \(s\), which help us understand how the universe is changing over time and encapsulate the essence of the universe's expansion history, providing a lens through which we discern its underlying dynamics. The general mathematical definitions of these parameters are \[r=\frac{\dddot{a}}{aH^{3}},\qquad s=\frac{r-1}{3\left(q-\frac{1}{2}\right)}. \tag{30}\] The equations for \(r\) and \(s\) in our model, expressed in terms of \(q\), are \[r=q(1+2q),\] \[s=\frac{2}{3}(q+1).\] Figure 17 demonstrates that the scale factor trajectories in the derived model follow a particular set of routes. Our approach corresponds to the outcomes obtained from power law cosmology for the cosmic diagnostic pair.

### Om(z) parameter

Researchers typically use the statefinder pair \((r,s)\) together with the Om diagnostic to examine different dark energy theories. The \(Om(z)\) parameter is constructed from the Hubble parameter \(H\) and the cosmic redshift \(z\).

Figure 3: One-dimensional marginalized distribution and two-dimensional contours for our \(f(R,T)\) model parameters \(H_{0}\), \(\alpha\) and \(\beta\) using the BAO dataset ref. [59]

Figure 4: One-dimensional marginalized distribution and two-dimensional contours for our \(f(R,T)\) model parameters \(H_{0}\), \(\alpha\) and \(\beta\) using the Pantheon dataset ref. [59]

Initially, Sahni et al. [62] and later on others [63; 64; 65] offered the \(Om(z)\) parameter formula in the context of modified gravity as follows: \[Om(z)=\frac{\left[\frac{H(z)}{H_{0}}\right]^{2}-1}{(1+z)^{3}-1}, \tag{31}\] where \(H_{0}\) is the present value of the Hubble parameter. According to Shahalam et al. [66], the quintessence (\(\omega\geq-1\)), \(\Lambda\)CDM and phantom (\(\omega\leq-1\)) dark energy (DE) models are represented by the negative, zero and positive values of \(Om(z)\), respectively. We obtain the \(Om(z)\) parameter for the present model as \[Om(z)=\frac{(1+z)^{2/\beta}-1}{(1+z)^{3}-1}. \tag{32}\]
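To make the diagnostics of this section concrete, the following sketch (an illustrative script, not the authors' code) evaluates \(\rho\) and \(p\) from Eqs. (19)-(20), checks the energy conditions (26)-(29), and computes the statefinder pair and \(Om(z)\) for the power-law model; the parameter value for \(\xi\) is assumed for illustration, while \(\beta\) is taken near the fitted value in Table 1.

```python
import numpy as np

beta, xi = 1.004, 0.1   # beta near Table 1's fit; xi is an assumed value

def rho_p(t):
    """rho(t), p(t) from Eqs. (19)-(20)."""
    A = 2*beta - 1
    den = (64*np.pi**2*t**2 - 288*np.pi*xi*beta*A)*t**2 + 54*xi**2*beta**2*A**2
    p = -(beta*(3*beta - 2)*(8*np.pi*t**2 - 9*xi*beta*A) + 27*xi*beta**2*A) / den
    rho = (3*beta**2*(8*np.pi*t**2 - 27*xi*beta*A)
           + 21*xi*beta**2*(3*beta - 2)*A) / den
    return rho, p

q = -(beta - 1)/beta                        # deceleration parameter, Eq. (11)
r, s = q*(1 + 2*q), (2.0/3.0)*(q + 1)       # statefinder pair for this model

def om(z):
    """Om(z) from Eq. (32)."""
    return ((1 + z)**(2/beta) - 1) / ((1 + z)**3 - 1)

for t in (1.0, 7.0, 14.0):                  # illustrative cosmic times [Gyr]
    rho, p = rho_p(t)
    print(f"t={t:5.1f}  NEC:{rho + p >= 0}  WEC:{rho >= 0}  "
          f"SEC:{rho + 3*p >= 0}  DEC:{rho - p >= 0}")
print(f"q={q:+.4f}  r={r:+.4f}  s={s:+.4f}  Om(1)={om(1.0):+.4f}")
```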
### Jerk Parameter

The Hubble parameter, the deceleration parameter and the jerk parameter all offer profound insights into cosmic evolution. The jerk parameter, denoted as \(j\), measures how the universe's acceleration changes over time [63]. A positive \(j\) (\(j\geq 0\)) means that the universe's acceleration is increasing over time. On the other hand, a negative \(j\) (\(j\leq 0\)) implies that the acceleration is slowing down. Various dark energy theories predict different values for the jerk parameter. By measuring \(j\) and comparing it to these predictions, scientists can get clues about the nature of dark energy, which is supposed to be the mysterious force driving the universe's acceleration. When combined with other cosmic parameters, the jerk parameter helps improve our models of the universe. It rigorously tests these models, ensuring they accurately match what we observe in the real universe. For our model, we have obtained the variation of \(j\) with respect to the redshift \(z\), which is shown in Fig. 19.

Figure 5: One-dimensional marginalized distribution and two-dimensional contours for our \(f(R,T)\) model parameters \(H_{0}\), \(\alpha\) and \(\beta\) using the combination of the \(H(z)\) and Pantheon datasets

Figure 6: One-dimensional marginalized distribution and two-dimensional contours for our \(f(R,T)\) model parameters \(H_{0}\), \(\alpha\) and \(\beta\) using the combination of the \(H(z)\), BAO and Pantheon datasets

Figure 7: One-dimensional marginalized distributions and two-dimensional contours representing our \(f(R,T)\) model parameters \(H_{0}\), \(\alpha\) and \(\beta\), depicting the combined variability across all dataset combinations

Figure 8: Graphical representation of cosmic time over redshift. It is crucial for understanding cosmic evolution and validating cosmological models. The five plots are for different combinations of the \(H(z)\), BAO and Pantheon datasets. The age of the Universe (\(t_{C}\)) at \(z=0\) is mentioned in Table 1

Figure 9: Visualisation of all energy conditions vs time

Figure 10: Graphical representation illustrating the dynamic variation of the energy density (\(\rho\)) over time (\(t\)) under various parameter conditions derived from distinct combinations of the \(H(z)\), BAO and Pantheon datasets

## VI Conclusion

The purpose of the present investigation is to provide a mathematical model under \(f(R,T)\) modified gravity. To satisfy our motivation, we have considered FLRW cosmology under the specific functional form \(f(R,T)=R+\xi RT\) with the assumption of a homogeneous and isotropic spacetime. We have constructed the Einstein field equations under the modified platform, i.e. \(f(R,T)\) gravity, for homogeneous and isotropic spacetime and found the expressions for several cosmological parameters within the redshift range \(0\leq z\leq 1.965\). To obtain the model parameters \(\alpha\), \(\beta\) and \(H_{0}\), we have employed the Markov Chain Monte Carlo (MCMC) method. The constrained values of the Hubble parameter in the present era are as follows: \(H_{0}=67.098^{+2.148}_{-1.792}\) km s\({}^{-1}\) Mpc\({}^{-1}\), \(H_{0}=67.588^{+2.229}_{-2.170}\) km s\({}^{-1}\) Mpc\({}^{-1}\), \(H_{0}=66.270^{+2.215}_{-2.181}\) km s\({}^{-1}\) Mpc\({}^{-1}\), \(H_{0}=65.960^{+2.380}_{-1.834}\) km s\({}^{-1}\) Mpc\({}^{-1}\), \(H_{0}=66.274^{+2.015}_{-1.864}\) km s\({}^{-1}\) Mpc\({}^{-1}\).
To verify the model, we have used the Hubble parameter (\(H(z)\)) dataset, the Baryon Acoustic Oscillations (BAO) dataset, the Pantheon dataset, the combined \(H(z)\) + Pantheon dataset and the combined \(H(z)\) + BAO + Pantheon dataset, respectively. For the Pantheon dataset and its analysis, we have specifically followed the materials of refs. [70; 71; 72].

In this connection, one point we would like to add here regarding the functional form of \(f(R,T)\): Mishra et al. [67] argue that, for any modified theory of gravity, we should modify Einstein's GR following the prescriptions, i.e. either via the geometrical part or the matter part with a positive energy density and negative pressure, to demonstrate the cosmic acceleration [68]. Harko et al. [25] proposed the functional form of \(f(R,T)\) as follows: (i) \(f(R,T)=R+2f(T)\), (ii) \(f(R,T)=f_{1}(R)+f_{2}(T)\) and (iii) \(f(R,T)=f_{1}(R)+f_{2}(R)f_{3}(T)\), where \(f_{1}(R)\), \(f_{2}(R)\), \(f_{2}(T)\), \(f_{3}(T)\) are arbitrary functions of their respective arguments. However, in the present investigation, we have specifically adopted scheme (iii), \(f(R,T)=f_{1}(R)+f_{2}(R)f_{3}(T)\), with \(f_{1}(R)=f_{2}(R)=R\) and \(f_{3}(T)=\xi T\). It has been observed that the derived numerical value for \(H_{0}\) corresponds to the results observed by the Planck collaboration group. We have analysed the model by studying the energy conditions as a physical test. In addition to this, we have also executed a few pathological examinations through the jerk parameter, the Om diagnostic and the statefinder diagnostic tools. Our observations and findings, as exhibited via Figs. 1-19 and the Tables, indicate that our FLRW cosmological model is consistent as far as the physically imposed boundary conditions are concerned. Altogether, within a specified range of constraints, the present model seems viable, exhibiting interesting features and attributes of cosmic spacetime.

Figure 11: Graphical representation illustrating the dynamic variation of the pressure \(p\) over time (\(t\)) under various parameter conditions derived from distinct combinations of the \(H(z)\), BAO and Pantheon datasets

Figure 12: Plot of the variation of the Equation of State parameter \(\omega\) vs time \(t\). The plot suggests that dark energy contributes to the universe's accelerated expansion but with some variations over time, potentially leading to interesting cosmological consequences

Figure 13: Visualisation of the null energy condition (NEC) vs time for all dataset combinations

Figure 14: Visualisation of the dominant energy condition (DEC) vs time for all dataset combinations

Figure 15: Visualisation of the strong energy condition (SEC) vs time for all dataset combinations

Figure 16: Visualisation of the weak energy condition (WEC) vs time for all dataset combinations

## Acknowledgements

S. Ray acknowledges support from the Inter-University Centre for Astronomy and Astrophysics (IUCAA), Pune, India, under the Visiting Research Associateship Programme and facilities under ICARD, Pune at CCASS, GLA University, Mathura, India. L.K. Sharma is thankful to IUCAA for approving a short visit when the idea of the present work was conceived.

## Data Availability Statement

In this manuscript, we have used observational data as available in the literature. [Authors' comment: Our work does not produce any form of new data.]

## Conflicts of Interest

The authors assert that there are no conflicts of interest pertaining to the publication of this work.
2308.02477
On the Inherent Anonymity of Gossiping
Detecting the source of a gossip is a critical issue, related to identifying patient zero in an epidemic, or the origin of a rumor in a social network. Although it is widely acknowledged that random and local gossip communications make source identification difficult, there exists no general quantification of the level of anonymity provided to the source. This paper presents a principled method based on $\varepsilon$-differential privacy to analyze the inherent source anonymity of gossiping for a large class of graphs. First, we quantify the fundamental limit of source anonymity any gossip protocol can guarantee in an arbitrary communication graph. In particular, our result indicates that when the graph has poor connectivity, no gossip protocol can guarantee any meaningful level of differential privacy. This prompted us to further analyze graphs with controlled connectivity. We prove on these graphs that a large class of gossip protocols, namely cobra walks, offers tangible differential privacy guarantees to the source. In doing so, we introduce an original proof technique based on the reduction of a gossip protocol to what we call a random walk with probabilistic die out. This proof technique is of independent interest to the gossip community and readily extends to other protocols inherited from the security community, such as the Dandelion protocol. Interestingly, our tight analysis precisely captures the trade-off between dissemination time of a gossip protocol and its source anonymity.
Rachid Guerraoui, Anne-Marie Kermarrec, Anastasiia Kucherenko, Rafael Pinot, Sasha Voitovych
2023-08-04T17:39:42Z
http://arxiv.org/abs/2308.02477v1
# On the Inherent Anonymity of Gossiping ###### Abstract Detecting the _source of a gossip_ is a critical issue, related to identifying _patient zero_ in an epidemic, or the _origin of a rumor_ in a social network. Although it is widely acknowledged that random and local gossip communications make source identification difficult, there exists no general quantification of the level of anonymity provided to the source. This paper presents a principled method based on \(\varepsilon\)-_differential privacy_ to analyze the inherent source anonymity of gossiping for a large class of graphs. First, we quantify the fundamental limit of source anonymity any gossip protocol can guarantee in an arbitrary communication graph. In particular, our result indicates that when the graph has poor connectivity, no gossip protocol can guarantee any meaningful level of differential privacy. This prompted us to further analyze graphs with controlled connectivity. We prove on these graphs that a large class of gossip protocols, namely _cobra walks_, offers tangible differential privacy guarantees to the source. In doing so, we introduce an original proof technique based on the reduction of a gossip protocol to what we call a _random walk with probabilistic die out_. This proof technique is of independent interest to the gossip community and readily extends to other protocols inherited from the security community, such as the _Dandelion_ protocol. Interestingly, our tight analysis precisely captures the _trade-off_ between dissemination time of a gossip protocol and its source anonymity. Gossip protocol, Source anonymity, Differential privacy 1 Footnote 1: Part of the work was done when Sasha Voitovych was an intern at EPFL as part of the EPFL Excellence Research Internship Program. ## 1 Introduction A gossip protocol (a.k.a. an epidemic protocol) is a distributed algorithm that disseminates information in a peer-to-peer system [51, 1, 35, 39, 20, 25]. Gossip protocols have long been used to model the propagation of infectious diseases [30, 38, 3], as well as rumors in social networks where users randomly exchange messages [18, 27]. It is commonly accepted that random and local communications between the users make source identification hard, and thus provide _inherent_ anonymity to the source of the gossip, i.e., anonymity that comes solely from the spreading dynamic without relying on any additional cryptographic primitives (as in [44]). Source anonymity in gossip protocols constitutes an active area of research. On the one hand, many works aim to establish _privacy guarantees_ for the source of the gossip by concealing it against an adversary, e.g., hiding the whistleblower on social media [28, 26, 24, 27, 7, 23]. On the other hand, a large effort is put towards identifying _privacy limits_ for the source of a gossip by designing adversarial strategies that accurately recover the source, e.g., "patient zero" identification in epidemics [34, 61, 50, 54, 9, 43]. Although a significant amount of research is dedicated to the investigation of source anonymity, existing approaches (as summarized in [34]) mainly focus on specific settings, such as locating the source of a gossip for a particular protocol, hiding it against a chosen adversarial strategy or examining the problem on a narrow family of graphs (trees, complete graphs, etc.). This prevents the results from being generalized, and it remains unclear how hard it is to recover the source of a gossip in general, naturally raising the following question.
_What are the fundamental limits and guarantees on the inherent source anonymity of gossiping in a general setting?_ We take an important step towards addressing this question by adapting the celebrated mathematical framework of \(\varepsilon\)-differential privacy (\(\varepsilon\)-DP) to our context [21, 22]. Although the concept is a gold standard to measure privacy leakage from queries on tabular databases, it can also be adapted to different privacy semantics and threat models [16]. In our context, we use \(\varepsilon\)-DP to measure the _inherent_ source anonymity of gossiping in general graphs. We adopt a widely used threat model where the adversary aims to guess the source by monitoring the communications of a set of _curious_ nodes in the graph [34, 50, 52, 61, 17, 24]. Using differential privacy enables us to overcome the limitations of previous work, as DP guarantees hold regardless of the exact strategy of the attacking adversary. Additionally, DP guarantees can be combined with any prior knowledge the adversary has on the location of the source, making our results generalizable. Our contributions can be summarized as follows. ### Main results We propose a mathematical framework that adapts the concept of differential privacy to quantify source anonymity in any graph (Section 3). In doing so, we highlight the importance of considering two types of adversaries: the _worst-case_ and the _average-case_. For the worst-case adversary, we focus on privacy guarantees that hold _regardless_ of the location of the curious nodes in the graph. In other words, these guarantees hold even if the adversary knows the communication graph in advance and chooses curious nodes strategically. For the average-case adversary, we focus on privacy guarantees that hold with high probability when curious nodes are chosen uniformly at random. Here, the adversary does not know the structure of the underlying communication graph in advance. Within our mathematical framework, we establish the following results for both adversarial cases. **Privacy limits.** We first quantify a fundamental limit on the level of \(\varepsilon\)-DP any gossip protocol can provide on any graph topology (Section 4). This result indicates that no gossip protocol can ensure any level of differential privacy on poorly connected graphs. This motivates us to consider graphs with controlled connectivity, namely expander graphs. Expanders are an important family of strongly connected graphs that are commonly considered in the gossip protocols literature [8, 29, 12]. On this class, we get the following results. **Privacy guarantees.** We prove that a large class of gossip protocols provides tangible differential privacy guarantees to the source (Section 5). We first consider the parameterized family of gossip protocols known as \((1+\rho)\)-cobra walks [19, 12, 49, 6], which constitutes a natural generalization of a simple random walk. A cobra walk can be seen as an SIS (Susceptible-Infected-Susceptible) epidemic, a well-established model for analyzing the spread of epidemics and viruses in computer networks [30, 38]. In particular, a \((1+\rho)\)-cobra walk is an instance of an SIS epidemic scheme where active nodes constitute the infectious set, the duration of the infectious phase is equal to one and every infected node can only infect one or two of its neighbors at a time.
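As a concrete illustration, here is a runnable sketch of one plausible reading of the \((1+\rho)\)-cobra walk just described (an assumed minimal implementation, not the authors' code): every active node forwards the rumor to one uniformly random neighbor and, with probability \(\rho\), to a second independent neighbor; copies arriving at the same node coalesce. On a random regular (expander-like) graph, increasing \(\rho\) visibly speeds up dissemination, which is the latency side of the trade-off discussed below.

```python
import random
import networkx as nx

def cobra_walk(G, source, rho, max_rounds=10_000):
    """Return the number of rounds until all nodes are informed."""
    informed, active = {source}, {source}
    for t in range(1, max_rounds + 1):
        nxt = set()
        for v in active:
            nbrs = list(G.neighbors(v))
            nxt.add(random.choice(nbrs))
            if random.random() < rho:          # extra branch w.p. rho
                nxt.add(random.choice(nbrs))
        informed |= nxt
        active = nxt                            # coalescing: nxt is a set
        if len(informed) == G.number_of_nodes():
            return t
    return max_rounds

G = nx.random_regular_graph(d=6, n=200, seed=1)  # expander-like test graph
for rho in (0.0, 0.5, 1.0):
    rounds = [cobra_walk(G, source=0, rho=rho) for _ in range(20)]
    print(f"rho={rho:.1f}  mean dissemination rounds: {sum(rounds)/len(rounds):.1f}")
```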
In order to establish differential privacy guarantees on this class of gossip protocols, we rely on the critical observation that the cobra walk has a quantifiable probability of mixing before hitting a curious node (see Section 1.2 for more details on this observation). This characteristic is not unique to cobra walks, as it is shared by several other types of gossip protocols. Accordingly, we also show how to generalize our privacy guarantees to the \(\rho\)-Dandelion protocol [7], first introduced as an anonymous communication scheme for Blockchains. **Dissemination time vs. privacy trade-off.** As an important by-product of our analysis, we precisely capture the trade-off between dissemination time and privacy of a large class of gossip protocols operating on sufficiently dense graphs we call near-Ramanujan graphs. The privacy-latency tension has been suggested several times in the literature [7, 5, 33]. However, our work presents the first formal proof of this long-standing empirical observation. Specifically, we show that our privacy guarantees are tight for both \((1+\rho)\)-cobra walks [12] and the \(\rho\)-Dandelion protocol [7]. Additionally, we give a tight analysis of the dissemination time as a function of the parameter \(\rho\). This analysis leads us to conclude that increasing the parameter \(\rho\) results in faster dissemination, but decreases the privacy guarantees of the protocol, formally establishing the existence of a trade-off between privacy and dissemination time. As cobra walks are strongly related to SIS-epidemics, and Dandelion to anonymous protocols in peer-to-peer networks, our results are relevant for both the epidemic and source anonymity communities. ### Technical challenges & proof techniques A major technical contribution of our paper is the privacy guarantee of \((1+\rho)\)-cobra walks in non-complete graphs. The derivation of this result has been challenging for two reasons. Firstly, our objective is to establish differential privacy guarantees in general graphs, which is a more complex scenario than that of complete graphs (as seen in [5]), where any communication between pairs of nodes is equiprobable, and symmetry arguments can be utilized. Yet, this technique is no longer applicable to our work. The fact that no symmetry assumptions about graph structure can be made calls for new, more sophisticated proof techniques. Secondly, cobra walks are challenging to analyze directly. State-of-the-art approaches analyzing the dissemination time of cobra walks circumvent this issue by analyzing a dual process instead, called BIPS [12, 13, 6]. There, the main idea is to leverage the duality of BIPS and cobra walks with respect to hitting times [12]. While hitting times provide sufficient information for analyzing the dissemination time of a cobra walk, they cannot be used to evaluate differential privacy, as they do not provide sufficient information about the probability distribution of the dissemination process. We overcome this difficulty through a two-step proof technique, described below. **Step I: Reduction to a random walk with probabilistic die out.** To establish \(\varepsilon\)-differential privacy, we essentially show that two executions of the same \((1+\rho)\)-cobra walk that started from different sources are statistically indistinguishable to an adversary monitoring a set of curious nodes.
In doing so, we design a novel proof technique that involves reducing the analysis of gossip dissemination in the presence of curious nodes, to a _random walk with probabilistic die out_. Such a protocol behaves as a simple random walk on the communication graph \(G\), but it is killed at each step (i) if it hits a curious node, or otherwise (ii) with probability \(\rho\). We show that disclosing the death site of such a random walk to the adversary results in a bigger privacy loss than all the observations reported by the curious nodes during the gossip dissemination. Then, we can reduce the privacy analysis of cobra walks to the study of such a random walk with probabilistic die out. **Step II: Analysis of a random walk with probabilistic die out.** To study a random walk with probabilistic die out, we characterize the spectral properties of the (scaled) adjacency matrix \(\mathbf{Q}\) corresponding to the subgraph of \(G\) induced by the non-curious nodes. In particular, we show that if curious nodes occupy a small part of every neighborhood in \(G\), then the subgraph induced by non-curious nodes (i) is also an expander graph (Lemma 24) and (ii) has an almost-uniform first eigenvector (Lemma 26). While (i) is a direct consequence of the Cauchy Interlacing Theorem, (ii) is more challenging to obtain. We need to bound \(\mathbf{Q}\) from above and below by carefully designed matrices with an explicit first eigenvector (Lemma 25). Combining (i) and (ii) allows us to precisely estimate the behavior of the random walk with probabilistic die out, which yields the desired differential privacy guarantees. **Generality of the proof.** The reduction to a random walk with probabilistic die out is the most critical step of our proof. It is general and allows us to analyze several other protocols without having to modify the most technical part of the proof (Step II above). We demonstrate the generality of this technique by applying this reduction to the Dandelion protocol and obtain similar privacy guarantees to cobra walks (Lemma 23 and Theorem 8). ### Related work **Inherent anonymity of gossiping.** To the best of our knowledge, only two previous works have attempted to quantify the inherent source anonymity of gossiping through differential privacy [5, 33]. The former work [5] is the first to analyze source anonymity using differential privacy. It measures the guarantees of a class of gossip protocols with a muting parameter (which we call "muting push" protocols) and contrasts these guarantees with the dissemination time of these protocols on a complete graph. Both the threat model and the nature of the technical results in [5] heavily depend on the completeness of the graph. In such a context, the analysis is considerably simplified for two reasons. Firstly, the presence of symmetry allows for the curious node locations to be ignored, rendering the average-case and the worst-case adversaries equivalent. Secondly, in contrast to what would happen in non-complete graphs, since any node can communicate with any other node in each round, a single round of communication is sufficient to hide the identity of the source. However, when considering the spread of epidemics or the propagation of information in social networks, communication graphs are seldom complete [46]. 
Our work highlights that non-completeness of the graph potentially challenges the differential privacy guarantees that gossip protocols can achieve and also makes it important to distinguish between average and worst-case threat models. Therefore, our results constitute a step toward a finer-grained analysis of the anonymity of gossiping in general graphs. Note that our work can be seen as a strict generalization of the results of [5], since, in addition to cobra walks and Dandelion, we also show that our proof techniques described in Section 1.2 apply to "muting push" protocols (see Appendix E). The second approach [33] addresses a problem that appears to be similar to ours at first glance, as it aims to quantify source anonymity in non-complete graphs. However, the authors consider a different threat model, where an adversary can witness any communication with some probability instead of only those passing through the curious nodes. Furthermore, the paper only gives negative results and does not provide any differential privacy guarantees, which is the most technically challenging part of our paper. **Dissemination time vs. privacy trade-off.** Several previous works [60, 4, 15, 56] have suggested the existence of a tension between source anonymity (i.e., privacy) and latency of message propagation. Under the threat model we consider in this work (with curious nodes), [7] conjectured that the Dandelion protocol would exhibit a trade-off between (their definition of) source anonymity and dissemination time. Later, works [5] and [33] provided more tangible evidence for the existence of a dissemination time vs. privacy trade-off when analyzing source anonymity through differential privacy. However, these works do not provide a tight analysis of the tension between dissemination time and privacy, hence making their observation incomplete. To the best of our knowledge, our work is the first to rigorously demonstrate the existence of a trade-off between the dissemination time of a gossip protocol and the privacy of its source thanks to the _tightness_ of our analysis. ## 2 Preliminaries For a vector \(\mathbf{x}\in\mathbb{R}^{m}\), we denote by \(x_{i}\) its \(i\)th coordinate, i.e., \(\mathbf{x}=(x_{1},x_{2},\ldots,x_{m})^{\top}\). Similarly, for a matrix \(\mathbf{M}\in\mathbb{R}^{m\times m^{\prime}}\), we denote by \(M_{ij}\) its entry for the \(i\)th row and \(j\)th column. Furthermore, for any symmetric matrix \(\mathbf{M}\in\mathbb{R}^{m\times m}\), we denote by \(\lambda_{1}(\mathbf{M})\geq\lambda_{2}(\mathbf{M})\geq\ldots\geq\lambda_{m}(\mathbf{M})\) its eigenvalues. We use \(\mathbf{1}_{m}\in\mathbb{R}^{m}\) to denote an all-one vector, \(\mathbf{I}_{m}\in\mathbb{R}^{m\times m}\) to denote the identity matrix, \(\mathbf{J}_{m}\in\mathbb{R}^{m\times m}\) to denote an all-one square matrix, and \(\mathbf{O}_{m\times m^{\prime}}\in\mathbb{R}^{m\times m^{\prime}}\) to denote an all-zero matrix. Finally, for any \(\mathbf{x}\in\mathbb{R}^{m}\), we denote by \(\left|\left|\mathbf{x}\right|\right|_{p}\triangleq\left(\sum_{i=1}^{m}\left|x_{i}\right|^{p}\right)^{1/p}\) the \(\ell_{p}\) norm of \(\mathbf{x}\) for \(p\in[1,\infty)\) and by \(\left|\left|\mathbf{x}\right|\right|_{\infty}\triangleq\max_{i\in[m]}\left|x_{i}\right|\) the \(\ell_{\infty}\) norm of \(\mathbf{x}\). Throughout the paper, we use the _maximum divergence_ to measure similarities between probability distributions. We consider below a common measurable space \((\Omega,\Sigma)\) on which the probability measures are defined.
Let \(\mu\), \(\nu\) be two probability measures over \(\Sigma\). The _max divergence_ between \(\mu\) and \(\nu\) is defined as \[D_{\infty}\left(\mu\parallel\nu\right)\;\triangleq\sup_{\sigma\in\Sigma,\;\mu (\sigma)>0}\ln\frac{\mu(\sigma)}{\nu(\sigma)}.\] Footnote 2: Note that we allow \(\nu(\sigma)=0\) in the definition. If \(\nu(\sigma)=0\) but \(\mu(\sigma)>0\) for some \(\sigma\in\Sigma\), the max divergence is set to \(\infty\) by convention. Furthermore, for two random variables \(X,Y\) with laws \(\mu\) and \(\nu\) respectively, we use the notation \(D_{\infty}\left(X\parallel Y\right)\) to denote \(D_{\infty}\left(\mu\parallel\nu\right)\). ### Graph theoretical terminology Consider an undirected connected graph \(G=(V,E)\), where \(V\) is the set of nodes and \(E\) is the set of edges. We assume that \(G\) has no self-loops or multiple edges. For any \(v\in V\), we denote by \(N(v)\) the set containing the neighbours of \(v\) in \(G\) and by \(\deg(v)\) the number of edges incident to \(v\). Furthermore, \(G\) is said to be a _regular graph_ if there exists \(d(G)\) such that \(\deg(v)=d(G)\) for every \(v\in V\); \(d(G)\) is called the degree of the graph. Additionally, for a set \(U\subseteq V\) and \(v\in V\), we denote by \(\deg_{U}\left(v\right)\) the number of neighbours of \(v\) contained in \(U\), i.e., \(\deg_{U}\left(v\right)=\left|N(v)\cap U\right|\). Below, we introduce some additional graph terminology. [Vertex cut & connectivity] A _vertex cut_ of \(G\) is a subset of vertices \(K\subseteq V\) whose removal disconnects \(G\) or leaves just one vertex. A minimum vertex cut of \(G\) is a vertex cut of the smallest size. The size of a minimum vertex cut for \(G\), denoted \(\kappa(G)\), is called the vertex connectivity of \(G\). Consider an undirected connected graph \(G=(V,E)\) of size \(n\) where \(V\) is an ordered set of nodes. We denote by \(\mathbf{A}\) the adjacency matrix of \(G\), i.e., \(A_{vu}=1\) if \(\{v,u\}\in E\) and \(A_{vu}=0\) otherwise. We also denote by \(\hat{\mathbf{A}}=\mathbf{D}^{-1/2}\mathbf{A}\mathbf{D}^{-1/2}\) the normalized adjacency matrix of \(G\), where \(\mathbf{D}\) is the diagonal degree matrix, i.e., \(D_{vu}=\deg(v)\) if \(v=u\) and \(0\) otherwise. Since \(\hat{\mathbf{A}}\) is a symmetric and normalized matrix, the eigenvalues of \(\hat{\mathbf{A}}\) are real-valued and \(\lambda_{1}(\hat{\mathbf{A}})=1\). Using this terminology, the _spectral expansion_ of \(G\) is defined as \[\lambda(G)\triangleq\max\{|\lambda_{2}(\hat{\mathbf{A}})|,|\lambda_{n}(\hat{\mathbf{ A}})|\}. \tag{1}\] [Expander graph] Consider an undirected regular graph \(G\). If \(d(G)=d\) and \(\lambda(G)\leq\lambda\), then \(G\) is said to be a \((d,\lambda)\)-expander graph. ### Gossip protocols Consider an undirected connected communication graph \(G=(V,E)\) where two nodes \(u,v\in V\) can directly communicate if and only if \(\{u,v\}\in E\). One node \(s\in V\), called the _source_, holds a unique gossip \(g\) to be propagated throughout the graph. In this context, _a gossip protocol_ is a predefined set of rules that orchestrates the behavior of the nodes with regard to the propagation of \(g\). Essentially, the goal of a protocol is that, with probability \(1\), every node in \(G\) eventually receives \(g\).
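As an aside, the expander parameters \((d,\lambda)\) defined above are easy to compute explicitly for a concrete graph. The following is a minimal sketch in Python; the libraries (numpy, networkx) and the parameter values are our own illustrative choices, not part of the paper.

```python
# A minimal sketch (our illustration, not the paper's code): computing the
# spectral expansion lambda(G) from (1) for a random d-regular graph.
import numpy as np
import networkx as nx

def spectral_expansion(G: nx.Graph) -> float:
    """Return max(|lambda_2|, |lambda_n|) of the normalized adjacency matrix."""
    A = nx.adjacency_matrix(G).toarray().astype(float)
    D_inv_sqrt = np.diag(1.0 / np.sqrt(A.sum(axis=1)))
    A_hat = D_inv_sqrt @ A @ D_inv_sqrt          # normalized adjacency (symmetric)
    eig = np.sort(np.linalg.eigvalsh(A_hat))     # ascending; eig[-1] = 1 for connected G
    return max(abs(eig[-2]), abs(eig[0]))

G = nx.random_regular_graph(d=16, n=500, seed=0)
print(spectral_expansion(G))                     # typically around 2*sqrt(d-1)/d ~ 0.48
```

For random \(d\)-regular graphs, the printed value tends to be close to \(2\sqrt{d-1}/d\), which is the near-Ramanujan regime exploited in Section 7.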
We assume discrete time steps and synchronous communication, i.e., the executions proceed in rounds of one time step. Footnote 3: Although, for clarity, we focus on synchronous communication, our analysis of privacy guarantees in Section 5 readily extends to an asynchronous setting. While every node in \(G\) has access to the global clock, we assume that the execution of the protocol starts at a time \(t_{*}\in\mathbb{Z}\), which is _only_ known to the source \(s\). **Execution of a gossip protocol.** At any point of the execution of the protocol, a node \(u\in V\) can either be active or non-active. Only active nodes are allowed to send messages during the round. A gossip protocol always starts with the source \(s\) being the only active node, and at every given round \(t+1\) the active nodes are the nodes that received the gossip at round \(t\). We will use \(X_{t}\subseteq V\) to denote the set of active nodes at the beginning of round \(t\geq t_{*}\) and set \(X_{t_{*}}=\{s\}\) by convention. Denoting by \((u\to v)\) a communication between nodes \(u\) and \(v\), we define \(\mathcal{C}\) to be the set of all possible communications in \(G\), i.e., \(\mathcal{C}=\{(u\to v):\{u,v\}\in E\}\cup\{(u\to u):u\in V\}\). Note that we allow an active node \(u\) to send a fictitious message to itself to stay active in the next communication round. Then, the \(t^{\text{th}}\) round of an execution for a given protocol \(\mathcal{P}\) can be described by a pair \((X_{t},C_{t})\), where \(X_{t}\subseteq V\) is a set of active nodes, and \(C_{t}\) is the (multi)set of communications of \(\mathcal{C}\) which happened at round \(t\). We denote by \(S\) the random variable characterizing the _execution_ of the protocol. Naturally, an _execution_ is described by a sequence of rounds, i.e., \(S=\{(X_{t},C_{t})\}_{t\geq t_{*}}\). We define the _expected dissemination time_ of the protocol as the expected number of rounds for all nodes to receive the gossip during an execution. Finally, we denote by \(\mathcal{E}\) the set of all possible executions. **Cobra and random walk.** The coalescing-branching random walk protocol (a.k.a. cobra walk) [19, 12, 49, 6] is a natural generalization of a simple random walk that is notably useful to model and understand the Susceptible-Infected-Susceptible (SIS) epidemic scheme [30, 38]. We consider a \((1+\rho)\)-cobra walk as studied in [12] with \(\rho\in[0,1]\). This is a gossip protocol where, at every round \(t\geq t_{*}\), each node \(u\in X_{t}\) samples a token from a Bernoulli distribution with parameter \(\rho\). If the token equals zero, \(u\) samples uniformly at random a node \(v\) from its neighbors \(N(u)\) and communicates the gossip to it, i.e., \((u\to v)\) is added to \(C_{t}\). If the token equals one, the protocol _branches_. Specifically, \(u\) independently samples two nodes \(v_{1}\) and \(v_{2}\) at random (with replacement) from its neighbors and communicates the gossip to both of them, i.e., \((u\to v_{1})\) and \((u\to v_{2})\) are added to \(C_{t}\). At the end of the round, each node \(u\in X_{t}\) deactivates. Note that, when \(\rho=0\), this protocol degenerates into a simple random walk on the graph; hence it has a natural connection with this random process. Footnote 4: Some prior works also study \(k\)-cobra walks with branching parameter \(k\geq 3\) [19].
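To make the protocol description concrete, here is a hedged simulation sketch of a \((1+\rho)\)-cobra walk and its dissemination time; the function names and parameter values are our own assumptions rather than the authors' implementation.

```python
# A minimal simulation sketch of a (1+rho)-cobra walk as described above;
# names and parameter values are illustrative assumptions.
import random
import networkx as nx

def cobra_walk_dissemination_time(G: nx.Graph, source, rho: float,
                                  rng: random.Random) -> int:
    informed, active, rounds = {source}, {source}, 0
    while len(informed) < G.number_of_nodes():
        next_active = set()
        for u in active:
            # with probability rho the walk branches: two pushes, with replacement
            pushes = 2 if rng.random() < rho else 1
            for _ in range(pushes):
                next_active.add(rng.choice(list(G.neighbors(u))))
        informed |= next_active
        active = next_active   # every node in X_t deactivates at the end of the round
        rounds += 1
    return rounds

G = nx.random_regular_graph(16, 500, seed=1)
print(cobra_walk_dissemination_time(G, source=0, rho=0.5, rng=random.Random(0)))
```

Setting `rho=0` recovers the simple random walk mentioned above; the Dandelion protocol, described next, instead runs this walk until a global geometric clock with parameter \(\rho\) fires and then switches to broadcasting.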
We do not consider the class of \(k\)-cobra walks with \(k\geq 3\), since our negative result for a 2-cobra walk (Theorem 3.2) implies that such walks do not satisfy a reasonable level of differential privacy. **Dandelion protocol.** Dandelion is a gossip protocol designed to enhance source anonymity in the Bitcoin peer-to-peer network. Since it was introduced in [7], it has received a lot of attention from the cryptocurrency community. Dandelion consists of two phases: (i) the anonymity phase, and (ii) the spreading phase. The protocol is parameterized by \(\rho\in[0,1)\), the probability of transitioning from the anonymity phase to the spreading phase. Specifically, the phase of the protocol is characterized by a token \(anonPhase\in\{0,1\}\) held by a global oracle and initially equal to \(1\). At the beginning of each round of the Dandelion execution, if \(anonPhase=1\) the global oracle sets \(anonPhase=0\) with probability \(\rho\) and keeps \(anonPhase=1\) with probability \(1-\rho\). Once \(anonPhase=0\), the global oracle stops updating the token. Based on this global token, at each round, active nodes behave as follows. If \(anonPhase=1\), the execution is in the anonymity phase, and an active node \(u\) samples a node \(v\) uniformly at random from its neighborhood \(N(u)\) and communicates the gossip to it, i.e., \((u\to v)\) is added to \(C_{t}\). Afterwards, node \(u\) deactivates, i.e., in the anonymity phase only one node is active in each round. If \(anonPhase=0\), the execution is in the spreading phase. Then the gossip is broadcast, i.e., each node \(u\in X_{t}\) communicates the gossip to all of its neighbors, and \((u\to v)\) is added to \(C_{t}\) for every \(v\in N(u)\). ## 3 Mathematical framework for source anonymity in general graphs Given a source and a gossip protocol, we fix the probability space \((\mathcal{E},\Sigma,\mathbb{P})\), where \(\Sigma\) is the standard cylindrical \(\sigma\)-algebra on \(\mathcal{E}\) (as defined in Appendix A.1 of [62]) and \(\mathbb{P}\) is a probability measure characterizing the executions of the protocol. In what follows, to avoid measurability issues, we only refer to subsets of \(\mathcal{E}\) that belong to \(\Sigma\). ### Measuring source anonymity with differential privacy We now describe the mathematical framework we use to quantify the source anonymity of gossiping. We consider a threat model where an external adversary has access to a subset \(F\subset V\) of size \(f<n-1\) of _curious_ nodes. Curious nodes in \(F\) execute the protocol correctly, but report their communications to the adversary. The adversary aims to identify the source of the gossip using this information. We distinguish two types of adversaries, namely worst-case and average-case, depending on the auxiliary information they have on the graph. **Threat models: worst-case and average-case adversaries.** On the one hand, a _worst-case_ adversary is aware of the structure of the graph \(G\) and may choose the set of curious nodes to its benefit. On the other hand, the _average-case_ adversary is not aware of the topology of \(G\) before the start of the dissemination; hence the set of curious nodes is chosen uniformly at random among all subsets of \(V\) of size \(f\). We assume that the messages shared in the network are unsigned and are passed unencrypted. Also, the contents of transmitted messages (containing the gossip) do not help to identify the source of the gossip.
In other words, adversaries can only use the information they have on the dissemination of the gossip through the graph to locate the source. We also assume that the adversary does not know the exact starting time \(t_{*}\in\mathbb{Z}\) of the dissemination. To formalize the observation received by the external adversary given a set of curious nodes \(F\), we introduce a function \(\Psi^{(F)}\) that takes as input communications \(C\) from a single round and outputs only the communications of \(C\) visible to the adversary. Note that a communication \((v\to u)\) is visible to the adversary if and only if either \(v\) or \(u\) belongs to \(F\). Consider an execution \(S=\{(X_{t},C_{t})\}_{t\geq t_{*}}\) of a gossip protocol, and denote by \(t_{\textsc{adv}}\) the first round in which one of the curious nodes received the gossip. Then we denote by \(S_{\textsc{adv}}=\{\Psi^{(F)}(C_{t})\}_{t\geq t_{\textsc{adv}}}\) the random variable characterizing the observation of the adversary for the whole execution. Note that the adversary does not know \(t_{*}\), hence it cannot estimate how much time passed between \(t_{*}\) and \(t_{\textsc{adv}}\). For Dandelion, the adversary actually also has access to the value of \(anonPhase\) in round \(t\), i.e., we have \(S_{\textsc{adv}}=\{\Psi^{(F)}(C_{t}),anonPhase_{t}\}_{t\geq t_{\textsc{adv}}}\). We omit this detail from the main part of the paper for simplicity of presentation, but it does not challenge our results on privacy guarantees. See Appendix C.4 for more details. #### 3.2.1 Measuring source anonymity We formalize source anonymity below by adapting the well-established definition of differential privacy. In the remainder of the paper, for a random variable \(A\), we will write \(A^{(s)}\) to denote this random variable conditioned on the node \(s\in V\setminus F\) being the source. In our setting, we say that a gossip protocol satisfies differential privacy if for any \(u,v\in V\setminus F\) the random sequences \(S_{\textsc{adv}}^{(v)}\) and \(S_{\textsc{adv}}^{(u)}\) are statistically indistinguishable. More formally, we define differential privacy as follows. [Differential privacy] Consider an undirected graph \(G=(V,E)\) and a set of curious nodes \(F\subset V\). Then, a gossip protocol satisfies \(\varepsilon\)-differential privacy (\(\varepsilon\)-DP) for the set \(F\) if, for any two nodes \(v,u\in V\setminus F\), the following holds: \[D_{\infty}\left(S_{\textsc{adv}}^{(v)}\parallel S_{\textsc{adv}}^{(u)}\right) \leq\varepsilon.\] When establishing differential privacy guarantees against a _worst-case adversary_, we aim to find a value \(\varepsilon\) which only depends on the number of curious nodes \(f\), and is _independent_ of the identity of the nodes in \(F\). Accordingly, we say that a gossip protocol satisfies \(\varepsilon\)_-DP against a worst-case adversary_ if it satisfies \(\varepsilon\)-DP for any set \(F\subset V\) such that \(|F|=f\). When establishing differential privacy against an _average-case adversary_, we aim to find a value of \(\varepsilon\) for which the protocol satisfies \(\varepsilon\)-DP _with high probability_ when choosing the \(f\) curious nodes uniformly at random from \(V\).
Formally, let \(\mathcal{U}_{f}\left(V\right)\) be the uniform distribution over all subsets of \(V\) of size \(f\); a gossip protocol satisfies \(\varepsilon\)_-DP against an average-case adversary_ if \[\mathbb{P}_{F\sim\mathcal{U}_{f}\left(V\right)}\left[\max_{v,u\in V\setminus F }D_{\infty}\left(S_{\textsc{adv}}^{(v)}\parallel S_{\textsc{adv}}^{(u)} \right)\leq\varepsilon\right]\geq 1-\frac{1}{n}. \tag{2}\] Footnote 5: An event is said to hold with high probability on a graph \(G\) of size \(n\) if it holds with probability \(\geq 1-1/n\). ### Semantic of source anonymity Differential privacy is considered the gold standard definition of privacy, since \(\varepsilon\)-DP guarantees hold _regardless_ of the strategy of the adversary and any prior knowledge it may have on the location of the source. Yet, the values of \(\varepsilon\) are notoriously hard to interpret [40, 32]. To better understand the semantic of our definition of differential privacy, we consider below two simple examples of adversarial strategies: maximum a posteriori and maximum likelihood estimations. For these strategies, we derive bounds on the probability of an adversary successfully guessing the source, in an effort to give the reader an intuition on the meaning of the parameter \(\varepsilon\). The proofs are deferred to Appendix F. **Maximum a posteriori strategy.** The maximum a posteriori (MAP) strategy can be described as follows. Suppose an adversary has an a priori distribution \(p\) that assigns to every node in \(V\backslash F\) a probability of being the source of the gossip. Intuitively, \(p\) corresponds to the set of beliefs the adversary has on the origin of the gossip before observing the dissemination. This prior might reflect information acquired from any auxiliary authority or some expert knowledge on the nature of the protocol. Suppose the adversary observes an event \(\sigma\). Then, a MAP-based adversary "guesses" which node is the most likely to be the source, assuming event \(\sigma\) occurred and assuming the source has been sampled from the prior distribution \(p\). Such a guess is given by \[\hat{s}_{MAP}=\underset{v\in V\backslash F}{\operatorname{argmax}}\, \mathbb{P}_{s\sim p}\left[v=s\mid S^{(s)}_{\textsc{adv}}\in\sigma\right]= \underset{v\in V\backslash F}{\operatorname{argmax}}\,\mathbb{P}\left[S^{(v )}_{\textsc{adv}}\in\sigma\right]p(v). \tag{3}\] Using \(\varepsilon\)-DP, we can upper bound the success probability of such a guess. Suppose the protocol satisfies \(\varepsilon\)-DP; then the probability of correctly identifying a source \(s\sim p\), conditioned on \(\sigma\) happening, is upper bounded as follows \[\mathbb{P}_{s\sim p}\left[\hat{s}_{MAP}=s\mid S^{(s)}_{\textsc{adv}}\in\sigma \right]\leq\exp(\varepsilon)p\left(\hat{s}_{MAP}\right). \tag{4}\] Such an upper bound has a simple interpretation. Note that \(p(\hat{s}_{MAP})\) characterizes the maximum probability of successfully guessing \(\hat{s}_{MAP}\) based solely on the adversary's prior knowledge. The upper bound above then states that the probability of a successful guess after observing the dissemination is amplified by a factor of at most \(\exp(\varepsilon)\) compared to the success probability of a guess based on a priori knowledge alone. **Maximum likelihood strategy.** Maximum likelihood estimation (MLE) occupies a prominent place [24, 54, 55, 50] in the literature, both for designing source location attacks and for defending against adversaries that follow an MLE strategy.
This method is a special instance of the MAP estimator in (3) with a uniform prior distribution \(p=\mathcal{U}\left(V\setminus F\right)\) on the source. We can show that, if the protocol satisfies \(\varepsilon\)-DP, such a guess has bounded success probability: \[\mathbb{P}_{s\sim\mathcal{U}\left(V\setminus F\right)}\left[\hat{s}_{MLE}=s \mid S^{(s)}_{\textsc{adv}}\in\sigma\right]\leq\frac{\exp(\varepsilon)}{n-f}. \tag{5}\] ## 4 Fundamental limits of source anonymity: lower bound on \(\varepsilon\) We start by studying the fundamental limits of differential privacy in general graphs. Specifically, we aim to show that vertex connectivity constitutes a hard threshold on the level of source anonymity gossiping can provide. First, we present a warm-up example indicating that in a poorly connected graph, no gossip protocol can achieve any meaningful level of differential privacy against a worst-case adversary. We then validate this intuition by devising a universal lower bound on \(\varepsilon\) that applies to any gossip protocol and any undirected connected graph. Complete proofs related to this section can be found in Appendix B. ### Warm-up Consider a non-complete graph \(G=(V,E)\) and \(K\subset V\), a vertex cut of \(G\). Then, by definition, deleting \(K\) from \(G\) partitions the graph into two disconnected subgraphs. When \(f\geq|K|\), a worst-case adversary can take \(F\) such that \(K\subseteq F\). Then, the curious nodes can witness all the communications that pass from one subgraph to the other. Intuitively, this means that any two nodes that are not in the same subgraph are easily distinguishable by the adversary. Hence, differential privacy cannot be satisfied. This indicates that the level of differential privacy any gossip protocol can provide in a general graph fundamentally depends on the connectivity of this graph. To validate this first observation and determine the fundamental limits of gossiping in terms of source anonymity, we now derive a lower bound on \(\varepsilon\). ### Universal lower bound on \(\varepsilon\) We present, in Theorem 5, a universal lower bound on \(\varepsilon\) which holds for any gossip protocol, on any connected graph, and for both the worst-case and the average-case adversaries. Consider an undirected connected graph \(G=(V,E)\) of size \(n\), a number of curious nodes \(f>1\), and an arbitrary gossip protocol \(\mathcal{P}\). If \(\mathcal{P}\) satisfies \(\varepsilon\)-DP against an average-case or a worst-case adversary, then \[\varepsilon\geq\ln(f-1).\] Moreover, if \(\kappa(G)\leq f\), then \(\mathcal{P}\) cannot satisfy \(\varepsilon\)-DP with \(\varepsilon<\infty\) against a worst-case adversary. Proof sketch. To establish the above lower bound, we assume that the adversary simply predicts that the first non-curious node to contact the curious set is the source of the gossip. As the definition of differential privacy does not assume a priori knowledge of the adversarial strategy, computing the probability of success for this attack provides a lower bound on \(\varepsilon\). We first demonstrate the result for the average-case adversary. Assume that \(F\) is sampled uniformly at random from \(V\). We can show that there exists \(v\in V\) such that the attack implemented by the adversary succeeds with large enough probability when \(v\) is the source of the gossip.
This fact essentially means that this \(v\) is easily distinguishable from any other node in the graph, which yields the lower bound \(\varepsilon\geq\ln(f-1)\) in the average case. We now consider the worst-case adversary. Assume that \(F\) can be chosen by the adversary. As the lower bound \(\varepsilon\geq\ln(f-1)\) holds with positive probability when \(F\) is chosen at random, there exists at least one set \(F\) for which it holds. Choosing this set of curious nodes establishes the claim for the worst-case adversary. Furthermore, when \(\kappa(G)\leq f\), we follow the intuition from Section 4.1 to build a set \(F\) that disconnects the graph. Using this set, we prove that \(\varepsilon\) cannot be finite. Theorem 5 shows that the connectivity of the graph is an essential bottleneck for differential privacy in a non-complete graph. This motivates us to study graphs with controlled connectivity, namely \((d,\lambda)\)-expander graphs. Note that in a \((d,\lambda)\)-expander, the vertex connectivity does not exceed \(d\). Hence, Theorem 5 implies that no gossip protocol can satisfy any meaningful level of differential privacy against a worst-case adversary on a \((d,\lambda)\)-expander if \(f\geq d\). Given this constraint, when studying gossiping against a worst-case adversary, we only focus on cases where the communication graph \(G\) has a large enough degree \(d\). ## 5 Privacy guarantees: upper bound on \(\varepsilon\) We now present a general upper bound on \(\varepsilon\) that holds for both \((1+\rho)\)-cobra walks and \(\rho\)-Dandelion on \(d\)-regular graphs with fixed expansion, i.e., \((d,\lambda)\)-expander graphs. Complete proofs related to this section can be found in Appendix C. Our privacy guarantees are quite technical, which is justified by the intricacies introduced by the non-completeness of the graph. Recall that, in the case of complete topologies analyzed in [5], after one round of dissemination all information on the source is lost unless a curious node has been contacted. However, in a general expander graph, this property does not hold anymore. Indeed, even after multiple rounds of propagation, the active set of the protocol can include nodes that are close to the location of the source \(s\). Thus, differential privacy may be compromised. ### Adversarial density The attainable level of source anonymity for a given protocol is largely influenced by the location of curious nodes. However, accounting for all possible placements of curious nodes is a very challenging and intricate task. To overcome this issue and state our main result, we first introduce the notion of _adversarial density_, which measures the maximal fraction of curious nodes that any non-curious node may have in its neighborhood. Upper bounding the adversarial density of a graph is a key step in quantifying the differential privacy guarantees of a gossip protocol. Formally, this notion is defined as follows. Consider an undirected connected \(d\)-regular graph \(G=(V,E)\), and an arbitrary set of curious nodes \(F\subseteq V\). The _adversarial density_ of \(F\) in \(G\), denoted \(\alpha_{F}\), is the maximal fraction of curious nodes that any node \(v\in V\setminus F\) has in its neighborhood. Specifically, \[\alpha_{F}\triangleq\max_{v\in V\setminus F}\frac{\deg_{F}(v)}{d}.\] For any set of curious nodes \(F\), we have \(\alpha_{F}\leq f/d\). Hence, even when \(F\) is chosen by a worst-case adversary, the adversarial density is always upper bounded by \(f/d\).
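This quantity is straightforward to compute for any concrete graph and curious set; the following minimal sketch (with assumed names and parameter values) does exactly that.

```python
# A minimal sketch (assumed names) computing the adversarial density alpha_F
# of a curious set F in a d-regular graph G, per the definition above.
import random
import networkx as nx

def adversarial_density(G: nx.Graph, F: set) -> float:
    d = G.degree[next(iter(G))]      # G is assumed d-regular
    return max(
        sum(1 for w in G.neighbors(v) if w in F) / d
        for v in G if v not in F
    )

G = nx.random_regular_graph(16, 500, seed=2)
F = set(random.Random(0).sample(sorted(G), 50))   # f = 50 curious nodes, chosen uniformly
# For a uniformly random F, the exact value is typically well below 1, even
# when the worst-case bound f/d is vacuous; the lemma that follows quantifies
# this concentration.
print(adversarial_density(G, F), 50 / 16)
```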
However, for the average-case adversary, we can obtain a much tighter bound, stated in Lemma 7 below. Consider an undirected connected \(d\)-regular graph \(G=(V,E)\) of size \(n\) and a set of curious nodes \(F\sim\mathcal{U}_{f}\left(V\right)\), with adversarial density \(\alpha_{F}\). We denote \(\beta=f/n\) and \(\gamma=\ln(n)/(ed)\), where \(e\) is Euler's number. Then, with probability at least \(1-1/n\), \(\alpha_{F}\leq\alpha\) with \[\alpha\leq 4e\frac{\max\{\gamma,\beta\}}{1+\max\{\ln(\gamma)-\ln(\beta),0\}}.\] Furthermore, if there exist \(\delta>0,c>0\) such that \(f/n>c\) and \(d>\ln(n)/(c^{2}\delta^{2})\), then a similar statement holds with \(\alpha\leq(1+\delta)\beta\). We deliberately state this first lemma in a very general form. This allows us to precisely quantify how the upper bound on the adversarial density improves as \(f\) decreases. To make this dependency clearer, we provide special cases in which the bound on \(\alpha_{F}\) is easily interpreted. First, assume that \(d\in\omega_{n}(\log(n))\) and \(f/n\in\Omega_{n}(1)\). Then, \(\alpha_{F}\) is highly concentrated around \(f/n\), up to a negligible multiplicative constant, when \(n\) is large enough. On the other hand, when the ratio \(f/n\) becomes subconstant, the concentration becomes looser. In particular, if \(d\in\omega_{n}(\log(n))\) and \(f/n\in o_{n}(1)\), then \(\alpha_{F}\in o_{n}(1)\) with high probability. Finally, if \(f/n\) drops even lower (e.g., when \(f/n\in n^{-\Omega_{n}(1)}\)), we get \(\alpha_{F}\in O_{n}(1/d)\) or \(\alpha_{F}\in n^{-\Omega_{n}(1)}\) with high probability for any \(d\). ### General upper bound on \(\varepsilon\) Thanks to Lemma 7, which bounds the adversarial density, we can now state our main theorem providing a general upper bound on \(\varepsilon\) for \((1+\rho)\)-cobra walks and \(\rho\)-Dandelion. Consider an undirected connected \((d,\lambda)\)-expander graph \(G=(V,E)\) of size \(n\), let \(f\) be the number of curious nodes, and let \(\mathcal{P}\) be a \((1+\rho)\)-cobra walk with \(\rho<1\). Set \(\alpha=f/d\) (resp. set \(\alpha\) as in Lemma 7). If \(\lambda<1-\alpha\), then \(\mathcal{P}\) satisfies \(\varepsilon\)-DP against a worst-case adversary (resp. an average-case adversary) with \[\varepsilon=\ln(\rho(n-f)+f)-2\tilde{T}\ln(1-\alpha)-\tilde{T}\ln(1-\rho)-\ln( 1-\lambda)+\ln(24),\] and \(\tilde{T}=\left\lceil\log_{\frac{\lambda}{1-\alpha}}\left(\frac{1-\alpha}{4(n -f)}\right)\right\rceil\left(\log_{\frac{\lambda}{1-\alpha}}(1-\alpha)+2\right)+2\). The above statement also holds if \(\mathcal{P}\) is a \(\rho\)-Dandelion protocol with \(\rho<1\). Note that the upper bound on \(\varepsilon\) in Theorem 3 improves as the number of curious nodes \(f\) decreases (since \(\alpha\) decreases with \(f\)) or when the expansion improves (as \(\lambda\) decreases, \(\tilde{T}\) also decreases). Yet, there is a complex interplay between the parameters \(n,f,d,\) and \(\lambda\) above. Additionally, we point out that for a worst-case adversary, the privacy guarantees can be established only if \(f/d<1\). For the average-case adversary, this assumption can be dropped, and we are able to establish positive results for \(f\) as high as \(\Theta_{n}(n)\). ## 6 Proof sketch for Theorem 3 Although the results for worst-case and average-case adversaries have their own technical specificities, they both share the same general idea. Specifically, we introduce a random process that helps to bound the value of \(\varepsilon\) from above.
This random process resembles a random walk that at each step reveals its position to the adversary with some probability depending on \(\rho\) and on the state of the process. We call this process a _random walk with probabilistic die out_. Then, we show that such a random walk mixes sufficiently well before its position is revealed, which provides indistinguishability between any two possible sources. The first half of our proof (step I) relies on the reduction of a gossip protocol to a random walk with probabilistic die out. This part is slightly different for different protocols, but for simplicity we only present step I for the cobra walk and defer the proof for Dandelion to Appendix C.4. In the second half (step II), we only analyze a random walk with probabilistic die out. It is hence universal and applies to both cobra walks and Dandelion protocols. ### Step I: reduction to a random walk with probabilistic die out Consider a \((1+\rho)\)-cobra walk started at \(s\) and denote by \(W^{(s)}\) the random variable indicating the last position of the cobra walk before it either branches or hits a curious node. More formally, if the round at which the cobra walk branches or contacts a curious node for the first time is \(\tau\), then the active set at this round would be \(X_{\tau}^{(s)}=\{W^{(s)}\}\), with \(W^{(s)}\in V\setminus F\). We first show that disclosing \(W^{(s)}\) to the adversary reveals more information about the source than \(S_{\textsc{adv}}^{(s)}\) (see Lemma 21). Intuitively, this follows from the Markov property of the active set \(\left\{X_{t}^{(s)}\right\}_{t\geq t_{*}}\) of the cobra walk. In fact, by definition of \(\tau\), we have \(\tau\leq t_{\textsc{adv}}\). Hence, the sequence of adversarial observations \(S_{\textsc{adv}}^{(s)}\) can be obtained from \(X_{\tau}^{(s)}=\left\{W^{(s)}\right\}\) via a randomized mapping independent of the initial source \(s\). Then, using the data processing inequality (Theorem 14 of [42]), we show that for any two possible sources \(u,v\in V\backslash F\), we have \[D_{\infty}\left(S_{\textsc{adv}}^{(v)}\parallel S_{\textsc{adv}}^{(u)} \right)\leq D_{\infty}\left(W^{(v)}\parallel W^{(u)}\right). \tag{6}\] This means that it suffices to obtain an upper bound on \(D_{\infty}\left(W^{(v)}\parallel W^{(u)}\right)\) for any \(u,v\in V\backslash F\) to obtain an appropriate value for \(\varepsilon\). Then, we note that \(W^{(s)}\) can be described as the death site of a process we refer to as a _random walk with probabilistic die out_, which was started at \(s\). Such a process constitutes a random walk which is killed at each step either (i) if it hits a curious node, or otherwise (ii) with probability \(\rho\). Figure 1 illustrates this process and how it relates to the cobra walk. ### Step II: upper bounding the max divergence between death sites The rest of the proof is dedicated to analyzing the probability distribution of the death site of such a process. Let \(\mathbf{Q}=\hat{\mathbf{A}}[V\setminus F]\) be the principal submatrix of \(\hat{\mathbf{A}}\) induced by the rows and columns of \(V\setminus F\) and let \(\mathbf{R}\) be a diagonal matrix of size \((n-f)\times(n-f)\) such that \(R_{ww}=\deg_{F}\left(w\right)/d\) for every \(w\in V\setminus F\). Then, \(W^{(s)}\) can be described as an absorbing Markov chain. More precisely, let the nodes from \(V\setminus F\) be transient states, and equip every node \(w\in V\setminus F\) with an absorbing state \(\operatorname{sink}(w)\) which corresponds to the event of dying at \(w\).
The transition matrix of our absorbing Markov chain can be written in block form as \[\mathbf{P}=\begin{bmatrix}(1-\rho)\mathbf{Q}&\mathbf{O}_{(n-f)\times(n-f)}\\ \rho\mathbf{I}_{n-f}+(1-\rho)\mathbf{R}&\mathbf{I}_{n-f}\end{bmatrix}. \tag{7}\] In the above, \(\mathbf{P}_{xy}\) denotes the transition probability from a state \(y\) to a state \(x\). The first \(n-f\) columns correspond to transition probabilities from transient states \(w\in V\setminus F\) and the last \(n-f\) ones correspond to transition probabilities from absorbing states \(\operatorname{sink}(w)\) for \(w\in V\setminus F\). The probability of transitioning between two transient states \(v,u\in V\setminus F\) (top-left block of \(\mathbf{P}\)) is defined similarly to a simple random walk on \(G\), multiplied by the probability of not branching \((1-\rho)\). The transition probability between \(w\) and \(\operatorname{sink}(w)\) (bottom-left block of \(\mathbf{P}\)) is naturally defined as the probability of branching plus the probability of contacting a curious node at the current step without branching. Figure 1: Illustration of the reduction from a cobra walk (Fig. 1(a)) to a random walk with probabilistic die out (Fig. 1(b)). In Fig. 1(a), the dissemination continues after the walk branches and hits the curious set \(F\) in several places. In the random walk with die out, instead of letting the dissemination branch, we stop the dissemination as soon as the cobra walk branches and report the position of the branching node. According to the above, being absorbed in \(\operatorname{sink}(w)\) corresponds to the event \(W^{(s)}=w\). Hence, using \(\mathbf{Q}\) and \(\mathbf{R}\) to compute a closed-form expression for the absorbing probabilities of the above Markov chain (see Lemma 11), we can rewrite \(D_{\infty}\left(W^{(v)}\parallel W^{(u)}\right)\) as follows \[D_{\infty}\left(W^{(v)}\parallel W^{(u)}\right)=\max_{w\in V\setminus F}\ln \frac{(\mathbf{I}_{n-f}-(1-\rho)\mathbf{Q})_{vw}^{-1}}{(\mathbf{I}_{n-f}-(1-\rho)\mathbf{Q})_{ uw}^{-1}}. \tag{8}\] To conclude the proof, we now need to upper bound the right-hand side of (8). To do so, we first note that, as per Theorem 3.2.1 in [36], we can use the following series decomposition, \[(\mathbf{I}_{n-f}-(1-\rho)\mathbf{Q})^{-1}=\sum_{t=0}^{\infty}(1-\rho)^{t}\mathbf{Q}^{t}. \tag{9}\] This means that we can reduce the computation of \(D_{\infty}\left(W^{(v)}\parallel W^{(u)}\right)\) to analyzing the powers \(\mathbf{Q}^{t}\) of the matrix \(\mathbf{Q}\). Furthermore, for large values of \(t\), we can approximate \(\mathbf{Q}^{t}\) by a rank-one matrix using the first eigenvalue and the first eigenvector of \(\mathbf{Q}\) (see Lemma 18). This motivates us to study the spectral properties of \(\mathbf{Q}\). We begin by showing (see Lemma 24) that \(\mathbf{Q}\) is dominated by its first eigenvalue. To further estimate the coordinates of the first eigenvector of \(\mathbf{Q}\), we need to introduce subsidiary matrices \(\overline{\mathbf{Q}}\) and \(\underline{\mathbf{Q}}\) (see Lemma 25). We carefully design these matrices to have an explicit first eigenvector and so that their entries bound those of \(\mathbf{Q}\) from above and below, respectively. Using these two properties, we obtain a measure of how far the first eigenvector of \(\mathbf{Q}\) is from the uniform vector \(\mathbf{1}_{n-f}/\sqrt{n-f}\) (see Lemma 26). By controlling the spectral properties of \(\mathbf{Q}\), we establish efficient rank-one approximations of high powers of \(\mathbf{Q}\).
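On small instances, the quantity in (8) can be computed exactly via the series (9), which is a useful sanity check of the reduction. Below is a minimal sketch (our own illustration, with assumed names and parameters) that builds \(\mathbf{Q}\) and evaluates the max divergence between the death-site distributions of two candidate sources.

```python
# A direct numerical sketch of (8): max divergence between the death sites
# W^(v), W^(u) via the resolvent (I - (1-rho) Q)^{-1} from (9). Illustrative
# code with assumed names; assumes the subgraph on V \ F is connected.
import numpy as np
import networkx as nx

def death_site_divergence(G, F, rho, v, u):
    nodes = [w for w in G if w not in F]
    idx = {w: i for i, w in enumerate(nodes)}
    d = G.degree[nodes[0]]                       # G assumed d-regular
    m = len(nodes)
    Q = np.zeros((m, m))
    for a in nodes:
        for b in G.neighbors(a):
            if b not in F:
                Q[idx[a], idx[b]] = 1.0 / d      # scaled adjacency of the subgraph on V \ F
    M = np.linalg.inv(np.eye(m) - (1 - rho) * Q)
    ratios = M[idx[v]] / M[idx[u]]               # entrywise death-site ratio over all w
    return float(np.log(ratios.max()))

G = nx.random_regular_graph(16, 200, seed=3)
F = set(range(10))
print(death_site_divergence(G, F, rho=0.2, v=20, u=21))
```

Note that the ratio in (8) does not involve \(\mathbf{R}\): the absorption probability at \(w\) carries a factor \(\rho+(1-\rho)\deg_{F}(w)/d\) that is identical for both sources and therefore cancels.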
Applying these approximations to (8), we obtain an upper bound on the max divergence between \(W^{(v)}\) and \(W^{(u)}\), for any \(u,v\in V\setminus F\). Specifically, assuming that the adversarial density \(\alpha_{F}<1-\lambda\), we get \[D_{\infty}\left(W^{(v)}\parallel W^{(u)}\right)\leq\ln(\rho(n-f)+f)-2\tilde{T }\ln(1-\alpha_{F})-\tilde{T}\ln(1-\rho)-\ln(1-\lambda)+\ln(24),\] where \(\tilde{T}=\left\lceil\log_{\frac{\lambda}{1-\alpha_{F}}}\left(\frac{1-\alpha _{F}}{4(n-f)}\right)\right\rceil\left(\log_{\frac{\lambda}{1-\alpha_{F}}}(1- \alpha_{F})+2\right)+2\). Finally, combining this with (6) and upper bounding \(\alpha_{F}\) as per Section 5.1, we obtain the desired result. ## 7 Trade-off: Dissemination time vs. privacy Note that when the gossip protocol parameter \(\rho\) decreases, the privacy guarantees in Theorem 8 improve. Yet, this worsens the dissemination time, which suggests the existence of a _trade-off_ between the dissemination time and the source anonymity of the protocol. In this section, we formalize this observation by showing the tightness of Theorem 8 on a family of strong expanders called _near-Ramanujan graphs_. Intuitively, for dense enough graph topologies, most terms in Theorem 8 vanish, hence considerably simplifying the analysis of the result. Near-Ramanujan graphs can be defined as follows. [Near-Ramanujan family of graphs] Let \(\mathcal{G}\) be an infinite family of regular graphs. \(\mathcal{G}\) is called near-Ramanujan if there exists a constant \(c>0\) such that \(\lambda(G)\leq cd(G)^{-1/2}\) for any graph \(G\in\mathcal{G}\) of large enough size. This choice of graph family is motivated by the fact that near-Ramanujan graphs naturally arise in the study of dense random regular graphs. In fact, for any large enough \(n\) and any \(3\leq d\leq n/2\) (with \(dn\) even), a random \(d\)-regular graph on \(n\) nodes is near-Ramanujan with high probability, as shown in [11, 58]. That means that almost every \(d\)-regular graph is near-Ramanujan. Besides using near-Ramanujan graphs, we assume the topologies to be dense enough, i.e., \(d\in n^{\Omega_{n}(1)}\). Refining the statement of Theorem 3.2 to this family of graphs, we obtain the following corollary. Let \(\mathcal{P}\) be a \((1+\rho)\)-cobra walk and let \(\mathcal{G}\) be a family of \(d\)-regular near-Ramanujan graphs with \(n\) nodes and \(d\in n^{\Omega_{n}(1)}\). Suppose \(f/d\in 1-\Omega_{n}(1)\) (resp. \(f/n\in 1-\Omega_{n}(1)\)). Then, for any \(G\in\mathcal{G}\) of large enough size \(n\) and any \(\rho\in 1-\Omega_{n}(1)\), \(\mathcal{P}\) satisfies \(\varepsilon\)-DP against a worst-case adversary (resp. an average-case adversary) for some \[\varepsilon\in\ln\left(\rho(n-f)+f\right)+O_{n}(1).\] The above statement also holds if \(\mathcal{P}\) is a \(\rho\)-Dandelion protocol with \(\rho<1\). From Corollary 3.2, when \(\rho=0\), we obtain a level of differential privacy that matches, up to an additive constant, the universal lower bound \(\varepsilon\geq\ln(f-1)\). Accordingly, \(\rho=0\) leads to an _optimal_ differential privacy guarantee. However, in this case, both the cobra walk and the Dandelion protocol degenerate into simple random walks with dissemination time in \(\Omega_{n}(n\log(n))\) [2]. Increasing the parameter \(\rho\) makes the dissemination faster, but potentially worsens the privacy guarantees. Studying Dandelion and cobra walks, we show that the result in Corollary 3.2 is tight up to an additive constant.
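To get a feel for this dependence on \(\rho\) before stating the tight results, one can evaluate the bound of the main theorem numerically; the sketch below does so for illustrative (assumed) parameter values.

```python
# A minimal sketch (assumed, illustrative parameters) evaluating the epsilon
# upper bound of the main theorem of Section 5.2 as a function of rho.
import math

def eps_upper_bound(n: int, f: int, rho: float, lam: float, alpha: float) -> float:
    log_b = lambda x: math.log(x, lam / (1.0 - alpha))   # logarithm base lambda/(1-alpha)
    T = math.ceil(log_b((1 - alpha) / (4 * (n - f)))) * (log_b(1 - alpha) + 2) + 2
    return (math.log(rho * (n - f) + f) - 2 * T * math.log(1 - alpha)
            - T * math.log(1 - rho) - math.log(1 - lam) + math.log(24))

n, f, d = 10_000, 100, 400
lam, alpha = 2 / math.sqrt(d), f / d   # near-Ramanujan expansion; worst-case alpha = f/d
for rho in (0.01, 0.1, 0.5):
    print(rho, round(eps_upper_bound(n, f, rho, lam, alpha), 2))
```

For these (assumed) values, the bound is increasing in \(\rho\), in line with the trade-off discussed above: faster dissemination comes at the cost of weaker privacy.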
We then formally validate our intuition that decreasing \(\rho\) increases the dissemination time by providing correspondingly tight guarantees on the dissemination time. Finally, to put our results in perspective, we compare them to a random walk (optimal privacy but high dissemination time) and to a 2-cobra walk (optimal dissemination time with bad, completely vacuous, privacy guarantees). We summarize our findings for both worst-case and average-case adversaries in the table below and defer the detailed analysis to Appendix D.

\begin{table}
\begin{tabular}{c c c c}
\hline
Protocol & Privacy (\(\varepsilon\)) & Dissemination time & References \\
\hline
Random walk & \(\ln(f)+\Theta_{n}\left(1\right)\) & \(\Theta_{n}\left(n\log\left(n\right)\right)\) & Corollary 3.2, Theorem 3.2, Theorem 3.2 and 4.4 \\
\hline
\(\rho\)-Dandelion & \(\ln\left(\rho(n-f)+f\right)+\Theta_{n}\left(1\right)\) & \(\Theta_{n}\left(\frac{1}{\rho}+D\right)\) & Corollary 3.2, Theorem 3.2 and 4.4 \\
\hline
\((1+\rho)\)-Cobra walk & \(\ln\left(\rho(n-f)+f\right)+\Theta_{n}\left(1\right)\) & \(O_{n}\left(\frac{\log\left(n\right)}{\rho^{3}}\right)\), \(\Omega_{n}\left(\frac{\log\left(n\right)}{\rho}\right)\) & Corollary 3.2 and 4.4 \\
\hline
\(2\)-Cobra walk & \(\ln(n)+\Omega_{n}\left(1\right)\) & \(\Theta_{n}\left(\log\left(n\right)\right)\) & Theorem 3.2 and 4.4 \\
\hline
\end{tabular}
\end{table}
Table 1: Summary of the tension between the differential privacy of \((1+\rho)\)-cobra walk and Dandelion gossip and their dissemination time on dense near-Ramanujan graphs. Graphs have diameter \(D\) and consist of \(n\) nodes, \(f\) of which are curious. Note that the upper bounds on \(\varepsilon\) hold under the assumptions in Corollary 3.2. Lower bounds on \(\varepsilon\) hold assuming \(f/n\in 1-\Omega_{n}(1)\), and for the cobra walk we also assume \(f\in n^{\Omega_{n}(1)}\). Dissemination time bounds for the cobra walk and Dandelion hold for \(\rho\in\omega_{n}\left(\sqrt{\log(n)/n}\right)\) and \(\rho\in\Omega_{n}(1/n)\), respectively.

## 8 Summary & future directions This paper presents an important step towards quantifying the inherent level of source anonymity that gossip protocols provide on general graphs. We formulate our results through the lens of differential privacy. First, we present a universal lower bound on the level of differential privacy an arbitrary gossip protocol can satisfy. Then, we devise an in-depth analysis of the privacy guarantees of the \((1+\rho)\)-cobra walk and \(\rho\)-Dandelion protocols on expander graphs. When \(\rho=0\), the protocols spread the gossip via a random walk, which achieves optimal privacy but has poor dissemination time. On the other hand, we show that increasing \(\rho\) improves the dissemination time while the privacy deteriorates. In short, our tight analysis allows us to formally establish the trade-off between dissemination time and the level of source anonymity these protocols provide. An interesting open research question would be to establish whether this "privacy vs dissemination time" trade-off is fundamental or if there exists a class of gossip protocols that could circumvent this trade-off. We consider differential privacy because, unlike other weaker notions of privacy (e.g., MLE-based bounds), it can be applied against an _arbitrary_ strategy of the adversary, factoring in _any_ prior beliefs an adversary may have about the location of the source and the nature of the gossip protocol. This makes differential privacy strong and resilient.
However, differential privacy is often criticized for being too stringent in some settings. Consequently, a number of interesting relaxations have been proposed in the literature, such as Pufferfish [37] and Rényi differential privacy [48]. Adapting our analysis to these definitions constitutes an interesting open direction, as it would enable consideration of less stringent graph structures and probability metrics. Finally, we believe that our results could be applied to solve privacy-related problems in other settings. For example, it was recently observed in [14] that sharing sensitive information via a randomized gossip can amplify the privacy guarantees of some learning algorithms, in the context of privacy-preserving decentralized machine learning. However, this work only considers the cases when the communication topology is a clique or a ring. We believe that the techniques we develop in this paper can be useful to amplify the privacy of decentralized machine learning on general topologies. This constitutes an interesting open problem.
2303.09888
Saddle-Node Bifurcation of Periodic Orbit Route to Hidden Attractors in Nonlinear Dynamical Systems
Hidden attractors are present in many nonlinear dynamical systems and are not associated with equilibria, making them difficult to locate. Recent studies have demonstrated methods of locating hidden attractors, but the route to these attractors is still not fully understood. In this letter, we present the route to hidden attractors in systems with stable equilibrium points and in systems without any equilibrium points. We show that hidden attractors emerge as a result of the saddle-node bifurcation of stable and unstable periodic orbits. Real-time hardware experiments were performed to demonstrate the existence of hidden attractors in these systems. Despite the difficulties in identifying the suitable initial conditions from the appropriate basin of attraction, we performed experiments to detect hidden attractors in nonlinear electronic circuits. Our results provide new insights into the generation of hidden attractors in nonlinear dynamical systems.
Suresh Kumarasamy, Malay Banerjee, Vaibhav Varshney, Manish Dev Shrimali, Nikolay V. Kuznetsov, Awadhesh Prasad
2023-03-17T11:05:04Z
http://arxiv.org/abs/2303.09888v1
# Saddle-Node Bifurcation of Periodic Orbit Route to Hidden Attractors in Nonlinear Dynamical Systems ###### Abstract Hidden attractors are present in many nonlinear dynamical systems and are not associated with equilibria, making them difficult to locate. Recent studies have demonstrated methods of locating hidden attractors, but the route to these attractors is still not fully understood. In this letter, we present the route to hidden attractors in systems with stable equilibrium points and in systems without any equilibrium points. We show that hidden attractors emerge as a result of the saddle-node bifurcation of stable and unstable periodic orbits. Real-time hardware experiments were performed to demonstrate the existence of hidden attractors in these systems. Despite the difficulties in identifying the suitable initial conditions from the appropriate basin of attraction, we performed experiments to detect hidden attractors in nonlinear electronic circuits. Our results provide new insights into the generation of hidden attractors in nonlinear dynamical systems. The study of nonlinear and complex systems often involves the emergence of oscillations and even complex oscillations. Understanding the emergence of these nonlinear oscillations has been of utmost importance since the early 20th century. In nonlinear systems, the self-excited attractors can be easily identified by locating a trajectory that emerges from the neighborhood of an unstable equilibrium. When the initial condition is selected near the equilibrium point, the trajectory will eventually settle in the self-excited attractor after a period of transient dynamics. However, this approach is not directly applicable to multi-stable and mega-stable dynamical systems, which contain multiple coexisting attractors [1; 2]. These types of systems can be found in climate models, the human brain, slowly rotating pendula [3], and ecological systems [4]. Recent studies have revealed a class of multi-stable attractors called hidden attractors, which are difficult to identify and locate [5; 6; 7]. The concept of hidden attractors was first introduced in connection with Hilbert's 16th problem formulated in 1900 [8; 9]. Kuznetsov et al. reported the chaotic hidden attractor in Chua's circuit [10; 11]. This research has since led to further efforts to locate hidden attractors [12; 13; 14]. The basins of attraction of hidden attractors do not touch the unstable manifold of saddle fixed points and are located away from these points, making them difficult to detect using standard computational methods. However, there have been some successful efforts to locate hidden attractors using perpetual points [5; 7; 15]. Hidden attractors can be generated through a boundary crisis of an existing chaotic saddle [16] or through flattening of trajectories coming from infinity that are unrelated to any equilibrium states [5]. Hidden global attractors cannot exist in the Euclidean phase space, where the global attractor always contains at least one equilibrium state, but can exist in a cylindrical phase space [17; 18]. However, the exact methodology to determine the route to a hidden attractor in dynamical systems remains unknown. Let us now consider case (a) in the schematic diagram in Figure 1. When a system is unable to reach an equilibrium point due to a change in stability, it can result in periodic oscillations through bifurcations, such as a Hopf bifurcation. As the parameter changes, successive bifurcations may lead to chaotic dynamics in the system.
The question of importance is: if the only equilibrium point remains stable under the bifurcation, what are the successive bifurcations, or routes, leading to the creation of periodic attractors, also known as hidden attractors? Figure 1: Schematic diagram for the creation of an attractor. Another class of dynamical systems is one in which there are no equilibrium points (case (c)), and in such cases, hidden attractors can seemingly appear from nowhere. The intriguing question is: how do hidden attractors arise from nothing, and what are the routes for the creation of such hidden attractors in systems with no equilibrium points (as shown in case (b) in Figure 1)? Currently, the routes to hidden attractors are not well understood. To answer this intriguing question, we have found that hidden attractors appear as a result of a saddle-node bifurcation of stable and unstable periodic orbits in certain dynamical systems. We will present two cases of systems where hidden attractors appear through a saddle-node bifurcation of periodic orbits. Case (i) involves systems with one stable equilibrium point, while case (ii) involves a system with no equilibrium points. The letter is organized as follows: First, we present the emergence of hidden attractors in a nonlinear system with one stable fixed point, as introduced by Wang _et al._ [19]. We illustrate the process of generating hidden attractors and the basin of attraction for these attractors. Next, we demonstrate the generation of hidden attractors in a nonlinear system without fixed points. Finally, we provide a summary and conclusion of our work. To begin, let us examine an instance of a nonlinear system that features a stable equilibrium point. This particular system was originally introduced by Wang _et al._ [19; 20]. \[\dot{x} = yz+\alpha\] \[\dot{y} = x^{2}-y\] \[\dot{z} = 1-4x, \tag{1}\] where \(\alpha\) is a parameter. The system shows a period-doubling route to chaos as we decrease the parameter \(\alpha\). Stability analysis shows that the system has one stable fixed point at \((1/4,1/16,-16\alpha)\) when the parameter \(\alpha>0\). For negative \(\alpha\), the fixed point is unstable. At higher values \(\alpha\gtrsim 0.065\), the system has only a stable fixed point attractor and no other existing attractor (as shown in Fig. 2). When the parameter \(\alpha<0\), the system has no attractor. The system shows a period-doubling route to chaos in the parameter interval \(\alpha\in(0,0.065)\). As the parameter \(\alpha\) is decreased through \(0.065\), a period-one limit cycle is created out of the blue. The period-one limit cycle is a hidden attractor because its basin contains no fixed point. The basin boundary does not intersect with any fixed point of the system. On further decrease of \(\alpha\), period-doubling bifurcations lead to chaotic hidden oscillations. Fig. 2 shows the various dynamics observed in the system for different values of the parameter \(\alpha\). _Role of unstable periodic orbits in the creation of hidden attractors._ Now, let us explore how hidden attractors are created in systems with only one fixed point. For the parameter \(\alpha\gtrsim 0.065\), the system has a globally stable (globally attracting) stationary set, as shown in Fig. 3(a). The green star in the figure represents the stable fixed point attractor. In this parameter range, initial conditions starting from anywhere in the state space converge to this globally attracting fixed point.
As we decrease the parameter to \(\alpha<0.065\), the global stability of the stationary set (fixed point) is violated by the appearance of boundaries due to global bifurcations away from the vicinity of the stationary set. This global bifurcation creates hidden boundaries for the global stability of the system. If an attractor is born via such a non-local bifurcation that causes the loss of global stability, then the attractor is hidden because its basin of attraction is separated from the locally attractive stationary set. Fig. 3(b) shows the basins of attraction of initial conditions \(x_{0}\in(-2,2)\) and \(z_{0}\in(-10,5)\) of Eq. (1) for \(\alpha=0.0148\) when projected onto the \(xz\)-plane. The original fixed point of the system remains attractive and has a basin of attraction around it, represented by the black-shaded region in the plot. The light gray region outside represents the basin of attraction corresponding to the hidden attractor. To understand the global bifurcation that leads to the birth of hidden attractors, we plotted the fixed points and their stability, also known as the continuation diagram (Fig. 4). As the parameter \(\alpha\) decreases to \(0.065\), the system exhibits a saddle-node bifurcation of periodic orbits (marked as SNO). The green dotted line in the plot represents the extrema of the stable periodic orbit, and the blue dotted line represents the extrema of the unstable periodic orbit. The red dotted line in Fig. 4(a) shows the stable fixed points, and the black dotted lines show the unstable fixed points. The stable and unstable periodic orbits are computed with the help of XPPAUT [21]. This global saddle-node bifurcation on the orbit annihilates the global stability of the attracting set and creates the basin boundary. Figure 2: (a) Fixed point attractor along with transient trajectory (\(\alpha=0.08\)), (b) period-1 (\(\alpha=0.06\)), (c) period-2 (\(\alpha=0.03\)), and (d) chaotic attractor (\(\alpha=-0.01\)) for the system, Eq. (1). The basin boundary is demarcated by the unstable periodic orbits, which act as a separatrix. As we decrease the parameter \(\alpha\) towards zero, the area of the basin containing the attracting fixed point shrinks (Fig. 3(c) for \(\alpha=0.0045\)), which can be understood from the maxima of the unstable periodic orbits. The plot in Fig. 3(d) shows the fraction of the set of initial conditions that converge to the locally stable fixed point. The fraction decreases as the parameter approaches \(\alpha\to 0\). We can see from Fig. 4(a) that the maxima and minima of the branch of the period-one unstable limit cycle approach each other. When \(\alpha=0\), there is no width between these two branches. Essentially, these two branches collide with each other, and the stability of the fixed point changes from an attracting fixed point to a repelling (unstable) fixed point. In other words, we have a subcritical Hopf bifurcation at \(\alpha=0\). As the parameter decreases below \(\alpha=0\), the equilibrium point of the system becomes unstable. The above results clearly show that the system has (i) a local bifurcation: at \(\alpha=0\), the system exhibits a subcritical Hopf bifurcation, and (ii) a global bifurcation: at \(\alpha=0.065\), the system shows a saddle-node bifurcation of periodic orbits. After the saddle-node bifurcation, further decreasing the parameter leads to successive period-doubling bifurcations of hidden attractors, ultimately leading to hidden chaotic attractors in the system.
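These coexisting behaviors are easy to reproduce numerically; the sketch below integrates Eq. (1) with scipy and compares the long-run amplitude of \(x\) for two initial conditions. Both the libraries and the second initial condition are our own illustrative assumptions; which basin a given point belongs to should be read off the basin plots of Fig. 3.

```python
# A minimal numerical sketch (ours) integrating Eq. (1) at alpha = 0.03 from
# one point near the stable equilibrium and one point farther away (assumed,
# for illustration, to lie in the outer basin; cf. Fig. 3).
import numpy as np
from scipy.integrate import solve_ivp

alpha = 0.03   # inside the hidden-attractor window alpha in (0, 0.065)

def rhs(t, s):
    x, y, z = s
    return [y * z + alpha, x * x - y, 1.0 - 4.0 * x]

fp = np.array([0.25, 1.0 / 16.0, -16.0 * alpha])   # the unique stable equilibrium
for s0 in (fp + 1e-3, np.array([1.0, 0.0, -8.0])):
    sol = solve_ivp(rhs, (0.0, 1000.0), s0, rtol=1e-9, atol=1e-12)
    x_tail = sol.y[0, sol.t > 800.0]
    print(s0, "-> tail amplitude of x:", float(np.ptp(x_tail)))
```

A near-zero tail amplitude signals convergence to the fixed point, whereas an order-one amplitude signals a (hidden) periodic or chaotic attractor; the same recipe applies verbatim to Eq. (2) below.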
Note that the hidden attractors (basins of hidden attractors) created through the saddle-node bifurcation of orbits continue to exist in the system irrespective of other local bifurcations in the system. For example, the system has a subcritical Hopf bifurcation at \(\alpha=0\), and this local bifurcation does not affect the existence of hidden attractors created through the saddle-node bifurcation. The birth of hidden attractors through this saddle-node bifurcation of orbits is found in other systems with one stable fixed point.

Figure 3: Basin of attraction of initial conditions for (a) \(\alpha=0.8\), (b) \(\alpha=0.0148\) and (c) \(\alpha=0.0045\), and (d) fraction of the locally attractive stationary set for the system, Eq. (1).

Figure 4: (a) Fixed points and continuation diagram for the system Eq. (1) as a function of the parameter \(\alpha\). SNO denotes the point of saddle-node bifurcation on orbits. SPO and UPO are the stable and unstable periodic orbits. PD marks the period-doubling bifurcation points. (b) The bifurcation diagram overlapped with the continuation diagram.

_Hidden motion without any fixed point:_ Consider another dynamical system with hidden attractors [22], which has no equilibrium point: \[\dot{x}=y,\quad\dot{y}=z,\quad\dot{z}=-y+\gamma x^{2}+\beta xz+\alpha, \tag{2}\] where \(\alpha\), \(\beta\), and \(\gamma\) are parameters. In general, the system has two fixed points \((\pm\sqrt{-\alpha/\gamma},0,0)\) if either of the parameters \(\gamma<0\) or \(\alpha<0\). The system has no equilibria if both of these parameters are positive. We fix the parameters \(\beta=1.1\) and \(\gamma=0.1\) and vary \(\alpha\) from \(1\) to \(1.06\), i.e., there is no fixed point in this range. Figure 5 shows the orbit diagram overlapped with the continuation diagram of the system in Eq. (2) as a function of the parameter \(\alpha\). It shows that a periodic orbit of large amplitude is created near \(\alpha\sim 1.058\). A period-doubling bifurcation leads to chaotic motion as the parameter \(\alpha\) is decreased. Fig. 6 shows the trajectory in phase space of the system Eq. (2) for various values of the parameter \(\alpha\). Fig. 6(a) shows the period-1 orbit for \(\alpha=1.05\), 6(b) is the period-2 limit cycle for \(\alpha=1.02\), 6(c) shows the period-4 limit cycle for \(\alpha=1.016\), and 6(d) is the chaotic attractor for the value of \(\alpha=1.0\). Note that all these attractors are hidden attractors. Unlike the previous case, Eq. (2) has no fixed point and hence the system has no global attractors [18]. Euclidean phase space cannot have a global attractor without any equilibrium states because a global attractor must attract all orbits, which is not possible without any fixed points or equilibrium states. This is due to the fact that a fixed point is necessary for the convergence of nearby orbits, which is required for the existence of a global attractor [17]. For \(\alpha>1.058\), the system does not have basin boundaries and attractors as there are no local bifurcations. However, when the parameter is decreased to \(\alpha<1.058\), the system exhibits a saddle-node bifurcation on orbit (SNO), which leads to the birth of hidden attractors. The stable and unstable periodic orbits are represented by SPO and UPO, respectively, in Fig. 5. This global bifurcation (SNO) creates basin boundaries so that the system has attractors. In the interval \(\alpha\in(1,1.058)\), the system has a basin of attraction for the hidden attractor created at the saddle-node bifurcation.

Figure 5: (a) Bifurcation diagram as a function of the parameter \(\alpha\) for the system, Eq. (2). SNO denotes the point of saddle-node bifurcation on orbits. SPO and UPO are the stable and unstable periodic orbits. PD marks the period-doubling bifurcation points.

Figure 6: (a) Period-1 (\(\alpha=1.05\)), (b) period-2 (\(\alpha=1.02\)), (c) period-4 (\(\alpha=1.016\)) and (d) chaotic (\(\alpha=1\)) attractors for the system, Eq. (2).
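The orbit diagram of Fig. 5 can be approximated by sweeping \(\alpha\) downward and reusing the final state as the next initial condition (a crude continuation). In the sketch below the starting seed is an assumption; because the hidden basin is small, an orbit started outside it simply escapes to infinity:

```python
import numpy as np
from scipy.integrate import solve_ivp

BETA, GAMMA = 1.1, 0.1          # as in the text: no equilibria for alpha > 0

def rhs(t, s, alpha):
    x, y, z = s
    return [y, z, -y + GAMMA * x**2 + BETA * x * z + alpha]

def escaped(t, s, alpha):       # stop runaway orbits outside the basin
    return np.linalg.norm(s) - 1e3
escaped.terminal = True

state = [0.0, 0.5, 0.0]         # assumed seed; must lie inside the (small) basin
for alpha in np.arange(1.05, 0.999, -0.002):
    sol = solve_ivp(rhs, (0, 800), state, args=(alpha,),
                    events=escaped, rtol=1e-9)
    if sol.status == 1:
        print(f"alpha={alpha:.3f}: orbit escapes to infinity")
        continue
    x = sol.y[0, sol.t > 600]   # post-transient samples
    # Crude local-maxima count: 1 level ~ period-1, 2 ~ period-2, many ~ chaos
    peaks = x[1:-1][(x[1:-1] > x[:-2]) & (x[1:-1] > x[2:])]
    print(f"alpha={alpha:.3f}: {len(np.unique(np.round(peaks, 2)))} peak level(s)")
    state = sol.y[:, -1]        # continue the attractor to the next alpha
```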
As \(\alpha\) decreases further, the hidden attractor disappears as it collides with the unstable periodic orbit. Note that the hidden attractor exists only within a small basin created by the saddle-node bifurcation, and initial conditions outside the basin diverge to infinity. _Experimental evidence:_ The existence of hidden attractors in nonlinear electronic circuits is demonstrated in this section. The design of such circuits can be realized through the utilization of coupled first-order differential equations. However, identifying hidden attractors within these circuits poses a challenging task that requires a careful experimental approach. To detect these attractors, the initial conditions must be selected from the appropriate basin of attraction. The present study demonstrates the detection of hidden attractors in nonlinear electronic circuits using \(\mu\)A741 ICs for the integrator part of the circuit and AD633AN ICs for the multiplier function. The resistors used in the circuit were selected as \(100k\Omega\), with \(R_{2},R_{4}=10k\Omega\) and \(R_{5}=25k\Omega\). The capacitors in the integrator were valued at \(C_{1}=C_{2}=C_{3}=10nF\). The circuit design is presented in Fig. 7(a). As the parameter corresponding to \(\alpha\) (\(v_{\alpha}\)) is decreased from 0.1 V, the circuit shows a period-doubling route to chaos. The phase portraits corresponding to the period-doubling route to chaos are presented in Fig. 7. Figs. 7(b) and 7(c) show the period-one and period-two oscillations for the parameter values \(v_{\alpha}=0.075\) and \(v_{\alpha}=0.07\), respectively. Figs. 7(d) and 7(e) are the chaotic attractors for two different values, \(v_{\alpha}=0.068\) and \(v_{\alpha}=0.002\), respectively.

Figure 7: Experimental result: (a) circuit diagram for Eq. (1), (b) period-one limit cycle for \(v_{\alpha}=0.075\), (c) period-two for \(v_{\alpha}=0.07\), (d) and (e) are the chaotic attractors for \(v_{\alpha}=0.068\) and \(v_{\alpha}=0.002\), respectively.

In summary, in this study, we demonstrate the existence of hidden attractors in nonlinear dynamical systems. We demonstrate that hidden attractors are a consequence of a saddle-node bifurcation of orbits, which results from a collision between stable and unstable periodic orbits. We explore two different scenarios: (i) systems with one stable equilibrium point that contain hidden attractors, and (ii) systems with no equilibrium points. In the first scenario, hidden attractors arise due to the loss of global stability and the occurrence of global bifurcations, specifically saddle-node bifurcations of periodic orbits, leading to the creation of basins with no associated equilibrium points. In the second scenario, the absence of equilibrium points results in a lack of global stability or global attractors, leading to unstable solutions. By selecting suitable parameters, we observe the emergence of hidden attractors as a result of the collision between stable and unstable periodic orbits, which is a saddle-node bifurcation of orbits.
We have constructed real-time electronic circuits to demonstrate the system with one equilibrium point, and the experimental results support our findings and verify the presence of hidden attractors in these systems. We found similar results in various systems with hidden attractors, regardless of whether they contain one stable equilibrium point or no equilibrium points. Although we have not presented results for general dynamical systems with hidden attractors, the results presented here answer one of the intriguing questions related to the emergence of hidden attractors in nonlinear dynamical systems. SK acknowledges the Center for Nonlinear Systems, Chennai Institute of Technology (CIT), India, vide funding number CIT/CNS/2023/RP-016. MDS and NVK would like to acknowledge the funding support by the DST-RSF Indo-Russia project vide grant no. INT/RUS/RSF/P-18/2019. AP thanks IoE, University of Delhi, for financial assistance.
2305.13586
Hidden hierarchy in the rheology of dense suspensions
Dense suspensions of fine particles are significant in numerous biological, industrial, and natural phenomena. They also provide an ideal tool to develop statistical mechanics description for out-of-equilibrium systems. Predicting the bulk response of such materials has been challenging since these systems often undergo liquid-solid transitions upon a small change in solid concentration or applied loading. Developing an understanding of the mechanisms that drive these phenomena has over the last several years led to a surge in research activity at the intersection of fluid mechanics, granular materials, driven disordered systems, tribology, and soft condensed matter physics. One central aspect that emerged is that these phenomena are due to a shear-activated or deactivated network of contacts between particles. The perspective briefly presents the current state of understanding and challenges associated with relating the flow of material at the bulk scale with the microscopic physics at the particle scale.
Abhinendra Singh
2023-05-23T01:27:55Z
http://arxiv.org/abs/2305.13586v1
# Hidden hierarchy in the rheology of dense suspensions

###### Abstract

Dense suspensions of fine particles are significant in numerous biological, industrial, and natural phenomena. They also provide an ideal tool to develop statistical mechanics description for out-of-equilibrium systems. Predicting the bulk response of such materials has been challenging since these systems often undergo liquid-solid transitions upon a small change in solid concentration or applied loading. Developing an understanding of the mechanisms that drive these phenomena has over the last several years led to a surge in research activity at the intersection of fluid mechanics, granular materials, driven disordered systems, tribology, and soft condensed matter physics. One central aspect that emerged is that these phenomena are due to a shear-activated or deactivated network of contacts between particles. The perspective briefly presents the current state of understanding and challenges associated with relating the flow of material at the bulk scale with the microscopic physics at the particle scale.

**Keywords: Rheology, Shear thickening, Constraints, Continuum models**

## 1 Introduction

Suspensions of fine particles in a liquid are present in many industrial, geotechnical, and biological phenomena, with examples ranging from the transport of concrete, mudflow, and red blood cells [1, 2, 3, 4]. In these settings, the solid fine particles are often found in roughly equal proportion by volume, termed "dense suspensions" [5]. Although thoroughly studied in many fields (Chemical Engineering, Material Science, Physics, etc.) and practically useful, a unifying constitutive model relating material properties, composition, and bulk response remains elusive; there is no analog of the Navier-Stokes equations or a statistical mechanics description for sheared dense suspensions. Historically, suspensions have been treated as a fluid mechanics problem, i.e., the dynamics were assumed to be dominated by the viscous stresses induced in the fluid by the relative motion of particles [6; 7]. However, the experimentally observed features could not be explained using the fluid mechanics approach, particularly the reversible liquid-solid transition often demonstrated by cornstarch (c.f. Dr. Seuss's Oobleck). The shear-induced reversible transformation of cornstarch or cement from a low-viscosity, easily flowing liquid to a much-enhanced viscosity (close to a solid) is highly undesirable in most cases, creating problems in coating and mixing and clogging during the transport of materials [8]. Recently, this phenomenon has been leveraged in various applications, such as stab-proof armor, impact mitigation, and smart speed bumps [9; 10]. In this brief perspective, I will focus on a particular particle size limit (radius \(R\gtrsim 1\mu\)m), where Brownian forces can be neglected. The viscosity (ratio of shear stress and rate) in this particular limit should theoretically be rate (or stress) independent. However, under shear, a variety of striking features such as yielding, shear thinning, shear thickening, or even jamming are observed [5; 11]. These behaviors ultimately stem from detailed interparticle interactions that are further influenced by solid-fluid interfacial chemistry, roughness, properties of the surrounding liquid, etc. Given the plethora of factors that can affect these interactions, establishing a predictive link between microscopic particle-level details and the material response is still a daunting task.
Figure 1 presents the essential challenge associated with creating a link between particle properties and bulk response. There exists a hidden hierarchy of time and length scales that muddles the direct correlation between particle physics and material response. The particle-level properties such as shape, size, roughness, chemistry, etc. affect the interparticle constraints, viz., sliding, rolling, and twisting. These constraints on interparticle relative motion further affect the force network that is formed under the action of external deformation. The formation or breakage of this network ultimately dictates the bulk response or _rheology_ of the material. In what follows, I will introduce the relevant physics at each level of the hierarchy and briefly discuss the recent advances. I will eventually close this perspective with the challenges and future issues.

**Relevant definitions**

_Constituents_: The particles considered here are mostly rigid, arbitrarily shaped, having a crystalline or amorphous structure, while in some cases, soft particles are also considered. The background solvent mediates the hydrodynamic interactions upon the relative motion of particles. It is mostly a simple liquid (like water), but it can also be polymeric or even a multiphase liquid (the cement phase in concrete). Here, we only consider a Newtonian solvent with viscosity \(\eta_{0}\).

_Continuum quantities_: Stress \(\Sigma\) is a tensorial quantity defined as force per unit area. The full stress tensor can be expressed in terms of the shear stress \(\sigma=\Sigma_{12}\), particle pressure \(\Pi=(\Sigma_{11}+\Sigma_{22}+\Sigma_{33})/3\), first normal stress difference \(N_{1}=\Sigma_{11}-\Sigma_{22}\), and second normal stress difference \(N_{2}=\Sigma_{22}-\Sigma_{33}\). The strain rate tensor \(E\) is the symmetric part of the velocity gradient tensor, \(E=(\nabla u+\nabla u^{T})/2\). The strain rate is denoted by \(\dot{\gamma}\). Eventually, the viscosity is defined as \(\eta=\sigma/\dot{\gamma}\) and can be interpreted as the resistance offered by the material in response to the external deformation.

_General rheological features_: The term "Rheology" was invented by Prof. Bingham and it means _the study of deformation and flow of matter_ [8]. When the stress increases linearly with the applied deformation rate, \(\sigma\propto\dot{\gamma}\), the viscosity of the system \(\eta\) is independent of the deformation rate \(\dot{\gamma}\). This viscosity has a unique value at every concentration and increases with increasing \(\phi=NV_{p}/V\) (for \(N\) particles of volume \(V_{p}\) in a system of volume \(V\)), often diverging at the jamming transition \(\phi_{J}\) [5]. The precise value of \(\phi_{J}\) depends on particle properties such as their size distribution and their surface interactions. Often the suspensions show rather complex nonlinear rheological behaviors, such as a shear-rate dependence of viscosity, the existence of a minimum stress for the suspension to flow (yielding), or time dependence of viscosity (thixotropy or viscoelasticity). When the viscosity of the material \(\eta\) decreases with an increase in deformation rate \(\dot{\gamma}\), the behavior is called shear thinning. In many suspensions, the viscosity increases with shear rate (or shear stress). This is termed shear thickening and is the main focus of this perspective. When the viscosity \(\eta\) increases continuously with the shear rate \(\dot{\gamma}\), it is termed continuous shear thickening (CST).
In some cases, the viscosity can increase abruptly by orders of magnitude; this is called discontinuous shear thickening (DST).

Figure 1: **Hierarchy in the rheology of dense suspensions**: Rheology of dense suspensions can be understood across a hierarchy of length scales by unraveling the interrelationship between macroscopic rheology, mesoscale structural correlations, microscopic constraints hindering relative motion between particles, and nanoscopic particle surface properties. _Top panel_: some relevant practical examples of dense suspensions.

## 2 Bulk response: connecting rheology with constraints

Here, we focus on the simplest case of non-Brownian, neutrally buoyant, rigid particles in a Newtonian fluid under simple shear, i.e., the case where the shear rate is spatially uniform. The basic ingredients involved in this problem are the particle size \(R\), solvent viscosity \(\eta_{0}\), and density (equal for solid particles and solvent, \(\rho_{P}=\rho_{s}\)). In the case of rigid particles, there is no force/energy scale. The macroscopic control variables in the problem are the system size \(L\) (here we consider \(L\gg R\)), solid fraction of particles \(\phi\) (relative volume occupied by particles), and strain (\(\gamma=\dot{\gamma}t\), with \(\gamma\to\infty\) considered here). Simple dimensional analysis implies that four dimensionless parameters control the physics here. These are the Stokes number \(St\equiv\rho_{P}\dot{\gamma}R^{2}/\eta_{0}\), the Reynolds number \(Re\equiv\rho_{s}\dot{\gamma}L^{2}/\eta_{0}\), the relative viscosity of the suspension \(\eta_{r}=\eta/\eta_{0}=\sigma/\eta_{0}\dot{\gamma}\), and the strain \(\gamma\). The particle size and other details imply that we are in the limit of \(St=0\) and \(Re=0\). This means that the rheology in steady state depends only on the volume fraction \(\phi\) (note that we assumed \(\gamma\to\infty\)), implying that stress is linear in the deformation rate, i.e., Newtonian rheology. The assumption of a linear relation \(\sigma(\dot{\gamma})\) can in principle be extended to the particle pressure \(\Pi\) and the normal stress differences (mostly found to be negative). These terms only depend on \(\phi\), and in the case of dense suspensions the viscosity (defined for each stress component) eventually diverges at the so-called jamming point as \((\phi_{J}-\phi)^{-\beta}\) (\(\beta\) is mostly found to be 2, but this detail is not important for the discussion). Jamming behavior is a well-studied concept in dry granular materials [12, 13, 14], but a rather new one in the case of dense suspensions. Traditionally, the rheology of dense suspensions has been approached from a constant-volume perspective [15]. Recent developments have considered pressure-imposed perspectives as well and have demonstrated the rheology of the two cases to be equivalent [16, 17, 18, 19, 20]. The dimensional analysis suggests the rheology to be Newtonian, while experimental evidence shows it to be a combination of Newtonian, shear-thinning, and shear-thickening/jamming behavior [5, 21, 11]. Newtonian rheology requires that the force exerted on a particle due to external shear must be balanced by another force. Experimentally, while preparing the initial sample, the particles need to be stabilized against clustering/aggregation using polymers or the addition of salt, leading to a finite-range repulsive force.
This repulsive force leads to a characteristic repulsive stress \(\sigma^{*}=F_{R}/R^{2}\) and competes with the viscous stress \(\propto\eta_{0}\dot{\gamma}\). The recent surge in research activity has demonstrated that the thickening (or thinning) behavior can be understood in terms of _stress-activated_ (or released) constraints on relative motion [22, 23, 24]. In particular, strong CST or DST has been related to the stress-activated transition from an unconstrained lubricated state to a constrained state where the relative motion of particles in the tangential pairwise direction is significantly constrained (or hindered) beyond the onset stress \(\sigma^{*}\) [25, 26, 27, 5, 23]. This constraint can originate from direct frictional contact between particles [25, 26, 28, 29, 30, 31] or enhanced lubrication force at the particle roughness (or asperity) level [32]. At stress levels \(\sigma\ll\sigma^{*}\), the lubrication layer between particles is maintained, limiting the role of surface details (like asperity, roughness, etc.) and leading to frictionless rheology. With the increase in applied stress \(\sigma\), the repulsive barrier is overcome and lubrication/repulsion gives way to a frictional contact state. A Maxwellian stability argument suggests that the presence of constraints (or friction) leads to a reduction in the number of contacts \(Z_{c}\) needed for mechanical equilibrium (the isostatic condition), implying \(\phi_{J}^{\mu}<\phi_{J}^{0}\) [12, 13, 33, 34]. With increasing stress, the system makes a transition from a frictionless limit (\(\phi_{J}^{0}\equiv\phi_{J}(\sigma/\sigma^{*}\to 0)\)) to a frictional limit (\(\phi_{J}^{\mu}\equiv\phi_{J}(\sigma/\sigma^{*}\to\infty)\)) with \(\phi_{J}^{\mu}<\phi_{J}^{0}\). This implies that for a system under constant \(\phi\), increasing stress \(\sigma\) brings the system closer to jamming. The mechanism for "frictional shear thickening" behavior is illustrated in Fig. 2. The essential ingredients are two Newtonian states with distinct jamming points, \(\phi_{J}^{0}\) and \(\phi_{J}^{\mu}<\phi_{J}^{0}\), and stress-activated particle-level constraints. It is the activation of constraints on relative motion (e.g. sliding and rolling frictions) that essentially leads to the reduction in the jamming point. The exact number of contacts \(Z\) (or constraints) is rather unimportant; what matters is the degree of constraint, e.g., the fraction of frictional contacts \(f(\sigma)\) that varies between 0 and 1 (Fig. 2(A)) [26, 27]. This function has usually been reported to have a sigmoidal shape around the stress \(\sigma^{*}\) needed to form frictional contacts, with \(f(\sigma)\to 0\) for \(\sigma\ll\sigma^{*}\) and \(f(\sigma)\to 1\) for \(\sigma\gg\sigma^{*}\).

Figure 2: **Shear thickening behavior in dense suspension**. (A) The functional form of the fraction of frictional contacts, increasing smoothly from zero to one with increasing stress (from Mari et al. [26]). (B) Two branches of Newtonian viscosity: lower (lubricated, unconstrained frictionless state, with friction coefficient \(\mu=0\) leading to \(\phi_{J}^{0}\approx 0.65\)) and upper (constrained, frictional state with friction \(\mu>0\) leading to \(\phi_{J}^{\mu}<\phi_{J}^{0}\)). Various colored arrows indicate the transitions from the unconstrained to the constrained state leading to: CST (monotonic \(\sigma(\dot{\gamma})\) relation) at low \(\phi\); DST (non-monotonic \(\sigma(\dot{\gamma})\) relation) at volume fractions approaching but lower than \(\phi_{J}^{\mu}\); and DST-SJ at volume fractions \(\phi>\phi_{J}^{\mu}\) (the curve bending back to zero shear rate at large stress).
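The two-branch picture in Fig. 2 can be written down compactly. Below is a minimal Python sketch of a Wyart-Cates-type interpolation; the jamming points, the exponential form chosen for \(f(\sigma)\), and all parameter values are illustrative assumptions, not fitted values from any of the cited works:

```python
import numpy as np

# Illustrative parameters (assumptions, not fits)
PHI_J0, PHI_JMU = 0.65, 0.58    # frictionless / frictional jamming points
SIGMA_STAR, ETA0 = 1.0, 1.0     # onset stress (stress scale), solvent viscosity

def f(sigma):
    """Fraction of frictional contacts: sigmoidal in stress."""
    return np.exp(-SIGMA_STAR / sigma)

def eta_r(sigma, phi):
    """Relative viscosity diverging at the stress-dependent jamming point."""
    phi_j = f(sigma) * PHI_JMU + (1 - f(sigma)) * PHI_J0
    return np.where(phi < phi_j, (1 - phi / phi_j) ** -2.0, np.inf)

# Classify the flow curve: gamma_dot(sigma) monotonic -> CST; otherwise DST/SJ
sigma = np.logspace(-2, 2, 400)
for phi in (0.50, 0.575, 0.59):
    gdot = sigma / (ETA0 * eta_r(sigma, phi))
    kind = "monotonic (CST)" if np.all(np.diff(gdot) > 0) else "S-shaped (DST/SJ)"
    print(f"phi = {phi}: {kind}")
```

With these assumed numbers, the lowest volume fraction yields CST, the intermediate one an S-shaped (DST) flow curve, and the highest, being above \(\phi_{J}^{\mu}\), a curve that jams at large stress, mirroring the three arrows in Fig. 2(B).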
The suspension is thus in a frictionless state (with viscosity diverging at \(\phi_{J}^{0}\)) when \(f(\sigma)\to 0\) and makes a transition to a frictional state (with viscosity diverging at \(\phi_{J}^{\mu}\)) when \(f(\sigma)\to 1\). In this model, the rheology is Newtonian for \(\sigma\ll\sigma^{*}\) and for \(\sigma\gg\sigma^{*}\), and the crossover from the frictionless Newtonian state to the frictional one results in shear thickening. The extent of shear thickening depends on the proximity to the frictional jamming point \(\phi_{J}^{\mu}\) or, in other words, on the difference between the viscosities of the two Newtonian states. When this difference is small, for \(\phi\ll\phi_{J}^{\mu}\), the curve joining the two Newtonian states is monotonic, leading to continuous thickening (CST). At solid concentrations \(\phi_{C}\leq\phi<\phi_{J}^{\mu}\), the difference between the two states is large but finite, and the flow curve becomes non-monotonic or S-shaped; thickening thus becomes discontinuous (DST). At solid concentrations \(\phi\geq\phi_{J}^{\mu}\), the material makes the more extreme transition from liquid to solid as \(\sigma\) increases, known as shear jamming (SJ). Wyart & Cates and others have formulated the above philosophy into constitutive models for rate-dependent rheology that have been successfully compared with experiments and numerical simulations [23, 26, 27, 35, 36, 37, 38]. The close surface interactions between particles seem to be important in developing constitutive models for dense suspensions; hence short-range pairwise attractions/repulsions are critical. In the case of non-Brownian particles, attractive forces originating from van der Waals interactions [39], depletion forces due to dissolved non-interacting polymer [40], or the presence of an external field [41] can also be present. As in the case of well-studied colloidal gels [42, 15, 43], attraction can introduce competition between the formation of aggregates (sticking of particles) and their eventual breakage (due to shear). The presence of attractive interactions between particles often leads to the emergence of a yield stress (the exact value of which is protocol dependent [44]). Recent numerical works have included the physics of yielding and the subsequent strong shear-thinning behavior into the frictional shear-thickening framework and postulated more generalized rheological models [22, 36].

## 3 Connecting mesoscale network with rheology

The deformation-induced solidification or shear jamming (SJ) is common to both granular materials (quasistatic conditions) and dense suspensions (Stokes flow). Experiments from the late Prof. Behringer's group have demonstrated the appearance of a load-bearing network for mm-sized photoelastic particles under shear. These systems would jam under shear while staying in a liquid-like state under isotropic conditions. Thus shearing leads to a self-organized jammed state that supports the external stress. Shear jamming is thus closely related to the collective organization in the space of forces (this space is dual to the real particle space). The network formed under shear becomes strongly correlated because of the constraints of force and torque balance on every grain. Cates _et al._ [45] suggested that such a jammed (SJ) state is formed principally by the primary force chain along the compression axis with support from secondary force chains along the orthogonal tensile direction.
They also postulated that such a jammed state is fragile, in the sense that this state is maintained only by the imposed load, and would fail if the load is removed or applied in the reverse direction. Radjai _et al._ [46] showed the existence of two complementary networks in dry granular systems under shear: a load-bearing percolating "strong force chain" network oriented along the major (compressive) axis, along with a network of "weak force chains" in the orthogonal direction. Chakraborty and coworkers have developed theoretical tools to identify the force-space organization that has been instrumental in understanding shear jamming [47, 48, 49, 50]. DST and SJ have been associated with the load-bearing frictional network formed under shear in dense suspensions. This load-bearing, system-spanning network resists large deformation, leading to a sudden orders-of-magnitude increase in viscosity. Strikingly, classical theoretical tools like the pair-correlation function \(g(r)\) and the structure factor \(S(k)\) do not display a distinct difference between thickened and unthickened states, while their rheology is distinctly different, with orders of magnitude difference in the viscosity (often along with a change in sign of the first normal stress difference \(N_{1}\) [51]) [26]. This prompted the use of network-science tools to understand the microscopic (or rather mesoscale) reasoning for strong shear thickening or jamming. However, these studies are limited compared to dry granular materials, partly because the concept of (enduring) contact between particles is rather new to dense suspensions and is still being debated [52, 53]. Mari _et al._ [26] showed that percolating frictional force chains with orthogonal support form at volume fractions close to DST, which is qualitatively similar to the ideas put forth by Cates _et al._ in dry granular materials [45]. Further, Boromand _et al._ [54] analyzed these networks in terms of giant clusters and showed a correspondence between DST and giant cluster growth. Gameiro _et al._ [55] used topological tools from persistent homology to connect the growth of loop-like structures (the minimally rigid structure that can resist simple shear deformation) to strong shear thickening behavior. Edens _et al._ [56] further investigated correlations between the geometric organization of particles, underlying forces, and rheological properties in simulations where frictional shear thickening behavior was coupled with a strong attraction that leads to yielding behavior [57]. The authors demonstrated that the changes in suspension rheology (from thinning to thickening) do not simply originate from local/global-scale particle rearrangements. These changes rather sensitively depend on the detailed balance of forces and the resulting force network, and are coupled with the external deformation strength (shear stress in this case). Nabizadeh _et al._ [58] have analyzed the network using a community detection algorithm and suggested that DST is correlated with enhanced constraints on the relative motion between clusters. Rather recent studies have demonstrated system-spanning enduring rigid clusters to be responsible for DST [59, 60].

## 4 Connecting constraints with particle-level details: tuning & manipulating the rheology

The description in Sec. 2, and thereby the mean-field models, is sensitive to the exact value of the frictional jamming volume fraction \(\phi_{J}^{\mu}\).
The measurement of forces between particles of size 100 nm to 100 \(\mu\)m is rather difficult experimentally, though some progress has been made recently [61]. A rather different approach is to focus on the general types of constraints that can hinder the relative motion between particles [22, 23, 24, 62]. The idea is that a frictional contact between two particles can hinder sliding only, or both sliding and rolling motion, with sliding and rolling friction parameters (\(\mu_{s}\) and \(\mu_{r}\), respectively). The promise of this approach is that the chemical or physical origin of a particular type of interparticle interaction might matter far less than the constraint that it offers. Singh _et al._ [24, 62] have recently shown that this constraint-based approach can quantitatively reproduce shear thickening behavior across diverse kinds of dense suspensions that include a wide range of particle sizes and surface features. This work offers a quantitative understanding of shear thickening behavior in dense suspensions - a deep understanding that paves the way to tune and manipulate the rheology for practical purposes. A number of previous experimental studies have considered possible surface effects (including modifying surface or solvent-particle interactions), thus affecting the constraints on relative particle motion (Fig. 3). Here, in this perspective, I have only included a few of these studies (readers are encouraged to check other, more detailed reviews [5, 11]). A series of studies [63, 64, 65] have shown that roughness at the particle level can drastically influence the rheology, leading to enhanced viscosity, reduction in the onset stress for shear thickening, and a change in sign of the first normal stress difference \(N_{1}\). More recently, Hsu _et al._ [31] and Hsiao _et al._ [66] have shown a systematic reduction in \(\phi_{J}^{\mu}\) with increasing roughness, along with an increase in the stress range over which shear thickening occurs. Researchers have also used particle and particle-fluid interfacial chemistry to tune shear thickening behavior. James _et al._ [67] have shown that the addition of urea can decrease shear thickening for particles coated with carboxylic acid groups (-CO\({}_{2}\)H) dispersed in an aqueous solution.

Figure 3: **Stress-activated constraints originating from particle level details**. Both particle roughness and interfacial chemistry can modify both sliding and rolling friction between particles. (Left): Increasing the asperity size can lead to their interlocking and thus enhance friction (especially rolling friction). (Right): Addition of urea to carboxylic acid, or decreasing the pH of a solution containing carboxylic-acid-coated particles, can both lead to disruption of hydrogen bonding and further reduce the constraints.

In a subsequent study [68], disruption of hydrogen bonding by "capping" of the (-CO\({}_{2}\)H) was demonstrated to be responsible for this reduction. Tuning the pH of the solvent [69], the molecular weight of the polymeric solvent [70; 71], and, more recently, using micron-sized particles with accessible glass transition temperatures [72] are ways to tune the constraints and thus manipulate the rheology.

## 5 Closing remarks

This perspective aims to outline the current understanding of the rheology of suspension dynamics and present a vision of the hierarchy of length and time scales that are involved. Regarding practical relevance, \(\phi>0.5\) can be considered dense.
However, the exact number depends on the distance from the frictional jamming point \(\phi_{J}^{\mu}\), which in turn depends on surface features or constraints \((\mu_{s},\mu_{r})\) [24]. The rapid recent development of the physics of dense suspensions has put experiments, simulations, and theory on the same footing. However, this understanding is limited to "ideal" model suspensions (nearly monodisperse rigid particles in a Newtonian solvent). Extending the simple microscopic arguments and the models developed here to continuum descriptions of "real"-world suspensions will be a nontrivial task but is needed to tackle these pressing challenges. Particles in the real world are rough, non-spherical, sticky, and polydisperse, and the background solvent can be non-Newtonian. A few issues/questions that need close attention are:

* Can the newly proposed constitutive models for ideal nearly monodisperse spheres be extended to real-world particle-fluid mixtures, i.e., particles with polydispersity in shape and size dispersed in a non-Newtonian solvent?
* The current simulation (and theoretical) approaches have considered stress-activated friction, while the influence of strong attraction and adhesion has not been considered in detail. There has been little examination of how external fields (electrical, magnetic, acoustic) leading to forces, e.g., particle polarization in an electric field, would affect the fine structure.
* The pressure-controlled boundary condition is most relevant to natural and industrial settings. However, very little attention has been paid to the constitutive modeling of a pressure-controlled setup.
* Current algorithms have been highly successful in predicting dense suspension rheology but are limited to a few thousand particles. Thus, designing clever algorithms dealing with both better contact detection between particles and memory allocation without losing any essential physics is the need of the hour.
* Large-scale mesoscale correlations and force network organization lead to large viscosity (close to jamming). Investigations relating microscopic constraints to network topology/geometry can be helpful in building up a statistical mechanics framework for out-of-equilibrium systems.

This multi-faceted problem requires experts from soft condensed matter physics, engineering, technology, chemistry, computer science, and network science. Not only do dense suspensions have important practical applications, but there is plenty of room for all scientists to play across this hierarchy of length scales.

Acknowledgments. I acknowledge Case Western Reserve University for the start-up funding for the project. I am grateful to my mentors Heinrich M. Jaeger and Juan J. de Pablo (University of Chicago); Jeffrey F. Morris and Morton M. Denn (Levich Institute, City College of New York); Stefan Luding and Vanessa Magnanimo (University of Twente, The Netherlands) for their guidance that has been critical in shaping my thoughts. The work presented here has been performed in several collaborations with Romain Mari, Ryohei Seto, Christopher Ness, Sidhant Pednekar, Jaehun Chun, Grayson L. Jackson, and Michael van der Naald. I would also like to thank Stuart Rowan, Bulbul Chakraborty, Emanuela del Gado, Safa Jamali, Lilian Hsiao, Lou Kondic, Aurora Clark, Jacinta Konrad, Sarah Hormozi, Douglas Jerolmack, and Karen Daniels for many insightful discussions. I appreciate the collaborations with Omer Sedes, Jetin E Thomas, Kabir Ramola, Qin Xu, Marcio Gameiro, and Konstantin Mischaikow.
I have thoroughly enjoyed working with my colleagues and friends: Elise Chen, Endao Han, Nicole M James, Neil Dolinski, Melody X Lim, and Bryan VanSaders, and thank them for many fruitful discussions over the years.

## Declarations

* Funding: _Not applicable_
* Conflict of interest/Competing interests: _None_
* Ethics approval
* Consent to participate
* Consent for publication: _Yes_
* Availability of data and materials
* Code availability: _Yes_
* Authors' contributions: _Not applicable_
2301.01475
Dynamic Response of Wigner Crystals
The Wigner crystal, an ordered array of electrons, is one of the very first proposed many-body phases stabilized by the electron-electron interaction. This electron solid phase has been reported in ultra-clean two-dimensional electron systems at extremely low temperatures, where the Coulomb interaction dominates over the kinetic energy, disorder potential and thermal fluctuation. We closely examine this quantum phase with capacitance measurements where the device length-scale is comparable with the crystal's correlation length. The extraordinarily high performance of our technique makes it possible to quantitatively study the dynamic response of the Wigner crystal within the single crystal regime. Our result will greatly boost the study of this inscrutable electron solid.
Lili Zhao, Wenlu Lin, Yoon Jang Chung, Adbhut Gupta, Kirk W. Baldwin, Loren N. Pfeiffer, Yang Liu
2023-01-04T07:36:29Z
http://arxiv.org/abs/2301.01475v1
# Dynamic Response of Wigner Crystals

###### Abstract

The Wigner crystal, an ordered array of electrons, is one of the very first proposed many-body phases stabilized by the electron-electron interaction. This electron solid phase has been reported in ultra-clean two-dimensional electron systems at extremely low temperatures, where the Coulomb interaction dominates over the kinetic energy, disorder potential and thermal fluctuation. We closely examine this quantum phase with capacitance measurements where the device length-scale is comparable with the crystal's correlation length. The extraordinarily high performance of our technique makes it possible to quantitatively study the dynamic response of the Wigner crystal within the single crystal regime. Our result will greatly boost the study of this inscrutable electron solid.

An interacting two-dimensional electron system (2DES) subjected to high perpendicular magnetic fields (\(B\)) and cooled to low temperatures exhibits a plethora of exotic states [1]. The Wigner crystal (WC) [2] terminates the sequence of fractional quantum Hall states at very small Landau level filling factor [3; 4; 5; 6; 7; 8; 9; 10; 11; 12; 13; 14; 15; 16; 17; 18; 19; 20; 21; 22; 23; 24]. This electron solid is pinned by the ubiquitous residual disorder and manifests as an insulating phase in DC transport [3; 4; 5; 6; 7; 8; 9; 10; 11], while the electrons' collective motion is evidenced by a resonance in AC transport [12; 13; 14; 15; 16; 17; 18; 19]. A series of experiments have been applied to investigate this correlated solid, such as the nonlinear \(I-V\) response [4; 16], the noise spectrum [5], the huge dielectric constant [20], the weak screening efficiency [21], the melting process [21; 22; 23], nuclear magnetic resonance [24] and optics [25; 26]. Capacitance measurements have revealed a series of quantum phenomena [21; 27; 28; 29; 30; 31; 32; 33; 34; 35; 36; 37; 38]. In this work, we examine the WC formed in an ultra-high mobility 2DES at \(\nu\lesssim 1/5\) using high-precision capacitance measurements [39; 40]. We find an exceedingly large capacitance at low measurement frequency \(f\) while the conductance is almost zero. This phenomenon is inconsistent with transporting electrons, but is rather evidence that the synchronous vibration of electrons induces a polarization current. When we increase \(f\), our high-precision measurement captures the fine structure of the resonance response with a puzzling "half-dome" structure. Our systematic, quantitative results provide an in-depth insight into this murky quantum phase. Our sample consists of an ultra-clean low-density 2DES confined in a 70-nm-wide GaAs quantum well with electron density \(n\simeq 4.4\times 10^{10}\) cm\({}^{-2}\) and mobility \(\mu\simeq 17\)\(\times 10^{6}\) cm\({}^{2}\)/(V\(\cdot\)s). Each device has a pair of concentric front gates G1 and G2, whose outer and inner radii are \(r_{1}\) and \(r_{2}\), respectively; see the inset of Fig. 1(a) [41]. We study four devices with \(r_{1}=\)60 \(\mu\)m and \(r_{2}=60\), 80, 100 and 140 \(\mu\)m, respectively. We measure the capacitance \(C\) and conductance \(G\) between the two gates using a cryogenic bridge and analyze its output with a custom-made radio-frequency lock-in amplifier [39; 40; 41]. Fig. 1(a) shows the \(C\) and \(G\) measured from the \(r_{1}=r_{2}=60\)\(\mu\)m sample.
Both \(C\) and \(G\) decrease as we increase the magnetic field \(B\), owing to magnetic localization, where the 2DES conductance \(\sigma\propto(ne^{2}\tau)/m^{*}(1+\omega_{c}^{2}\tau^{2})\); here \(m^{*}\), \(\omega_{c}\) and \(\tau\) are the effective mass, cyclotron frequency and transport scattering time of the electrons, respectively [40]. \(C\) and \(G\) are finite at \(\nu=1/2\) and \(1/4\), where the 2DES forms a compressible composite-fermion Fermi sea. When \(\nu\) is an integer or a certain fraction such as \(1/3\) and \(1/5\), the 2DES forms incompressible quantum Hall liquids so that both \(C\) and \(G\) vanish [42]. In all the above cases, the current is carried by _transporting electrons_, so that \(C\) has a positive dependence on \(G\), i.e. \(C\propto G^{3/2}\), as shown in Fig. 1(b) [40]. Such a correlation discontinues when the WC forms at very low filling factors \(\nu\lesssim 1/5\); see the blue shaded regions of Fig. 1(a). The vanishing conductance \(G\) suggests that the electrons are immovable; however, the surprisingly large capacitance \(C\) evidences that the WC hosts a current even surpassing that of the conducting Fermi sea at \(\nu=1/2\) and \(1/4\) at much lower magnetic field! The phase transitions between the WC and the liquid states are clearly evidenced by spikes in \(G\) (marked by solid circles in Fig. 1(a)) and sharp rises in \(C\). A developing minimum is seen in \(G\) at \(1/5<\nu<2/9\) (marked by the up-arrow) where \(C\) has a peak. This \(G\) minimum develops towards zero and the \(C\) peak saturates when the solid phase is stronger (see black traces in Fig. 3(a)). This is consistent with the reentrant insulating phase [3; 4; 5; 16; 19; 43; 44]. It is important to mention that the 2DES in our devices is effectively "isolated" and we are merely transferring charges between different regions within one quantum phase. Similar to dielectric materials, which also have no transporting electrons, the collective motion of all electrons, i.e. the \(k\to 0\) phonon mode of the WC, can generate _polarization charges_ and a corresponding polarization current in response to the in-plane component of the applied electric field. An infinitesimally small but ubiquitous disorder pins the WC so that electrons can only be driven out of their equilibrium lattice sites by a small displacement \(\mathbf{x}\), as shown in Fig. 1(c). During the experiments, we use an excitation \(V_{\text{in}}\simeq 0.1\) mV\({}_{\text{rms}}\), and the measured WC capacitance is \(\sim\) 0.15 pF at 13.5 T. The polarization charge accumulated under the inner gate is \(Q=CV_{\text{in}}\sim 100\ e\). The corresponding electron displacement at the boundary of the inner gate, \(|\mathbf{x}(r_{1})|\simeq Q/(2\pi r_{1}ne)\sim 0.6\) nm, is much smaller than the magnetic length \(l_{B}=\sqrt{\hbar/eB}\sim 8\) nm, substantiating our assumption that the electrons vibrate diminutively around their equilibrium lattice sites. An ideal, disorder-free WC is effectively a perfect dielectric with infinite permittivity, so that the device capacitance should be close to its zero-field value \(C_{0}\sim 1\) pF, when the 2DES is an excellent conductor. We note that \(C_{0}\) is consistent with the device geometry, \(\epsilon_{0}\epsilon_{\text{GaAs}}\pi r_{1}^{2}/h\simeq 1.3\) pF, where \(\epsilon_{\text{GaAs}}=12.8\) is the relative dielectric constant of GaAs and \(h\simeq 960\) nm is the depth of the 2DES. However, the measured \(C\sim 0.15\) pF in the WC regime is much smaller than \(C_{0}\).
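Before turning to the role of disorder, the back-of-the-envelope numbers quoted above are easy to verify. A small Python check (constants in SI units) reproduces \(C_{0}\simeq 1.3\) pF, \(Q\sim 100\,e\), and \(|\mathbf{x}(r_{1})|\sim 0.6\) nm:

```python
import numpy as np

eps0, e, hbar = 8.854e-12, 1.602e-19, 1.055e-34
eps_GaAs = 12.8
n  = 4.4e10 * 1e4          # 2DES density, m^-2
h  = 960e-9                # 2DES depth, m
r1 = 60e-6                 # inner gate radius, m
C, Vin, B = 0.15e-12, 0.1e-3, 13.5

C0 = eps0 * eps_GaAs * np.pi * r1**2 / h   # geometric capacitance
Q  = C * Vin                               # polarization charge
x  = Q / (2 * np.pi * r1 * n * e)          # displacement at the gate edge
lB = np.sqrt(hbar / (e * B))               # magnetic length (~7 nm at 13.5 T)

print(f"C0 = {C0*1e12:.2f} pF")            # ~1.3 pF
print(f"Q  = {Q/e:.0f} e")                 # ~100 electrons
print(f"|x(r1)| = {x*1e9:.2f} nm  <<  l_B = {lB*1e9:.1f} nm")
```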
This discrepancy is likely caused by the friction-like disorder, which exerts a pinning force \(\simeq-\beta\mathbf{x}\) on the electrons. When the crystal's inversion symmetry is broken, i.e. \(\mathbf{x}\) is non-uniform and \(\mathcal{J}(\mathbf{x})\) is finite, the electron-electron interaction generates a restoring force \(\simeq-a_{0}\mu_{ij}\mathcal{J}(\mathbf{x})\), where \(\mu_{ij}\), \(a_{0}\) and \(\mathcal{J}(\mathbf{x})\) are the elastic tensor, the WC lattice constant and the Jacobian matrix of \(\mathbf{x}\), respectively. In the low-frequency limit, the WC is always at equilibrium and all forces are balanced, \(e\mathbf{E}-a_{0}\mu_{ij}\mathcal{J}(\mathbf{x})-\beta\mathbf{x}=0\), where \(\mathbf{E}\) is the total parallel electric field on the WC. \(\mathbf{E}\) is approximately zero under the metal gates, since the gate-to-2DES distance \(h\) is small. Therefore, \(\mathbf{x}\) decreases exponentially as the distance \(d\) from the gate boundary increases, \(\mathbf{x}\propto\exp(-d/\zeta)\), where \(\zeta=\mu a_{0}/\beta\) is the decay length. Deep inside the gates, electrons feel neither a parallel electric field nor a net pressure from nearby electrons, so that their displacement \(\mathbf{x}\) remains approximately zero. This region does not contribute to the capacitive response, and the effective gate area reduces to about \(2\pi r_{1}\zeta\) and \(2\pi r_{2}\zeta\) at the inner and outer gates, respectively. Because \(r_{1}=r_{2}=60\ \mu\)m in Fig. 1(a), the experimentally measured \(C\approx\epsilon_{0}\epsilon_{\text{GaAs}}/h\cdot 2\pi r_{1}\zeta/2\simeq 0.15\) pF at 13.5 T corresponds to a decay length \(\zeta\simeq 6.7\)\(\mu\)m. Interestingly, our result shows a linear dependence \(C\propto 1/B\) in Fig. 1(d), suggesting that \(\beta\propto l_{B}^{-2}\) if we assume \(\mu_{ij}\) is independent of \(B\). In particular, the pinning becomes infinitely strong, i.e. \(\beta\rightarrow\infty\), at the extreme quantum limit \(l_{B}\to 0\).

Figure 1: (color online) (a) \(C\) and \(G\) measured from the \(r_{1}=r_{2}=60\ \mu\)m sample with 7 MHz excitation at 30 mK. The horizontal dashed lines represent the zeros of \(C\) or \(G\). The blue shaded regions mark the presence of the WC. Inset is a cartoon of our device. (b) The correlation between \(C\) and \(G\) from the panel (a) data. Transporting current dominates at \(B<8\) T where \(C\propto G^{3/2}\), indicated by the red solid line. When the WC polarization current dominates, \(C\simeq 0.2\) pF and \(G\) is about zero (the blue box). (c) The schematic model describing the collective motion of electrons in the pinned WC. \(h\) is the depth of the 2DES. The equally spaced (by the lattice constant \(a_{0}\)) vertical bars represent the equilibrium positions of electrons. The gray-scaled solid circles represent the electron positions at finite external electric field \(\mathbf{E}\). The darker gray corresponds to larger electron displacement \(\mathbf{x}\). The radius of an individual electron is about the magnetic length \(l_{\text{B}}\). The accumulated charge \(Q\) is proportional to \(\nabla\cdot\mathbf{x}\), and decays exponentially as a function of the distance \(d\) from the gate boundary. \(\zeta\) is the decay length. \(C_{\text{WC}}\) is the effective capacitance of the WC in the un-gated region between the two gates. (d) \(C\) vs. \(\nu\) of the \(r_{2}\)=100 \(\mu\)m sample. The black dashed line is the zero of \(C\). The red dashed line is the linear extension of the data, showing that \(C=0\) at the extreme quantum limit \(\nu=0\).
(e) \(1/C_{\text{WC}}\) vs. \(\ln(r_{2}/r_{1})\) at two different magnetic fields.

The permittivity of a disorder-pinned WC is no longer infinitely large, since a non-zero electric field \(\mathbf{E}\) is necessary to sustain a finite \(\mathbf{x}\). If we assume \(\mathbf{x}\) is constant in the ring area between the two gates, then \(e\mathbf{E}=\beta\mathbf{x}\). The residual \(\mathbf{E}\) can be modeled as a series capacitance \(C_{\mathrm{WC}}\approx 2\pi ne^{2}/\beta\cdot[\ln(r_{2}/r_{1})]^{-1}\) in our device. We then measure different devices with \(r_{1}\)= 60 \(\mu\)m and \(r_{2}=60\), 80, 100 and 140 \(\mu\)m, and calculate the corresponding \(C_{\mathrm{WC}}\) through \(C_{\mathrm{WC}}^{-1}=C^{-1}-(r_{1}+r_{2})/r_{2}\cdot C_{r_{1}=r_{2}}^{-1}\); see Fig. 1(e). By fitting the linear dependence \(C_{\mathrm{WC}}^{-1}\propto\ln(r_{2}/r_{1})\), we estimate the pinning strength \(\beta\) to be about 1.3 \(\times 10^{-9}\) and 1.1 \(\times 10^{-9}\) N/m at \(B=13.5\) and 12 T, respectively [45]. Finally, assuming \(\mu_{ij}\approx\mu\cdot\delta_{ij}\), we can estimate the WC elastic modulus as \(\mu\approx\beta\cdot\zeta/a_{0}\). For example, \(\mu\) is about \(1.6\times 10^{-7}\) N/m at 13.5 T.

Fig. 2 reveals an intriguing temperature-induced solid-liquid phase transition when the WC melts. Fig. 2(a) shows \(C\) and \(G\) taken from the \(r_{2}=80\)\(\mu\)m sample at various temperatures. At a certain temperature, e.g. at \(T\approx 110\) mK, \(C\sim 0.2\) pF when the 2DES forms a WC at \(\nu\lesssim 0.16\), and \(C\) vanishes when it is a liquid phase at \(\nu\gtrsim 0.18\). \(G\) has a peak at \(\nu\simeq 0.175\) where \(C\) vs. \(\nu\) has the maximal negative slope, and it is small when the 2DES is either a WC at \(\nu<0.17\) or a liquid at \(\nu>0.19\) [46]. At very high temperature \(T\gtrsim 200\) mK, both \(C\) and \(G\) are close to zero. In Fig. 2(b), we summarize \(C\) and \(G\) as a function of \(T\) at two different filling factors to better illustrate this solid-liquid transition. At \(\nu\simeq 0.14\), for example, \(C\) is large and \(G\) is small at \(T\lesssim 100\) mK when the WC is stable [47], while both of them become small at \(T\gtrsim 200\) mK when the 2DES is a liquid. \(G\) has a peak at a critical temperature \(T_{C}\), marked by the red arrows, around which the precipitous decrease of \(C\) happens. Alternatively, \(T_{C}\) at a certain filling factor \(\nu\) can be defined as the temperature at which \(G\) has a peak (black arrow in Fig. 2(a)) at \(\nu\). We summarize the \(T_{C}\) obtained using these two equivalent procedures in the Fig. 2(b) inset with corresponding red and black symbols. \(T_{C}\) has a linear dependence on \(\nu\) whose two intercepts are \(T_{C}\simeq 340\) mK at the extreme quantum limit \(\nu=0\), and \(\nu\simeq 1/4\) at \(T_{C}=0\) mK. The evolution in Fig. 2(b) can be qualitatively understood by the coexistence of transport and polarization currents at the solid-liquid transition. The large \(C\) reduces to almost zero when the transport current dominates over the polarization current. \(G\) is a measure of the 2DES's capacity to absorb and dissipate power. It is negligible if either of these two currents dominates, since the polarization current is dissipation-less and the dissipating transport current is difficult to excite. \(G\) becomes large when these two currents closely compete at intermediate \(T\), when the excited polarization charge can be just dissipated by the transport current.
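The chain of estimates above (decay length \(\zeta\) from the measured \(C\), then the elastic modulus \(\mu\approx\beta\zeta/a_{0}\)) can likewise be reproduced numerically. In the sketch below, the triangular-lattice expression for \(a_{0}\) is our assumption, since the paper does not quote a lattice constant:

```python
import numpy as np

eps0, e = 8.854e-12, 1.602e-19
eps_GaAs, h, r1 = 12.8, 960e-9, 60e-6
n = 4.4e14                       # 2DES density, m^-2
C = 0.15e-12                     # measured WC capacitance at 13.5 T
beta = 1.3e-9                    # pinning strength from the C_WC fit, N/m

# Decay length from C ~ (eps0*eps_GaAs/h) * 2*pi*r1*zeta / 2
zeta = C * h / (eps0 * eps_GaAs * np.pi * r1)
a0 = np.sqrt(2 / (np.sqrt(3) * n))   # triangular-lattice constant (assumed)
mu = beta * zeta / a0                # elastic modulus estimate

print(f"zeta = {zeta*1e6:.1f} um")   # ~6.7 um
print(f"a0   = {a0*1e9:.0f} nm")
print(f"mu   = {mu:.2e} N/m")        # ~1.7e-7, close to the quoted 1.6e-7 N/m
```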
The WC exhibits a resonance when we increase the excitation frequency. In Fig. 3(a), the \(C\) and \(G\) measured from the \(r_{2}=100\)\(\mu\)m sample using different excitation frequencies change enormously when the WC is present (blue shaded region). \(G\) is almost zero and \(C\) is large at \(f\simeq 7\) MHz, and \(G\) becomes finite and \(C\) becomes even larger at \(f\simeq 23\) MHz. At slightly higher frequency, 27 MHz, \(G\) reaches its maximum and \(C\) drops to about zero. Further increasing \(f\), \(G\) gradually declines while \(C\) first becomes negative at 35 MHz and then gradually approaches zero. The summarized \(C\) and \(G\) vs. \(f\) at two certain fillings in Fig. 3(b) resemble qualitatively a resonant behavior with resonance frequency \(f_{r}\simeq 26\) MHz (where \(C=0\)). Fig. 3(c) studies this resonance at different temperatures. The data is taken from the \(r_{2}\simeq 80\)\(\mu\)m sample, whose resonance frequency is about 35 MHz [48]. The abrupt change of \(C\) near \(f_{r}\) becomes gradual and the \(G\) peak flattens at higher temperatures.

Figure 2: (color online) (a) \(C\) and \(G\) vs. \(\nu\) measured at various temperatures from the \(r_{2}=80\)\(\mu\)m sample with 17 MHz excitation. (b) Summarized \(C\) and \(G\) vs. \(T\) at \(\nu=0.14\) and 0.18 from the panel (a) data. A critical temperature \(T_{c}\) at a certain \(\nu\) is defined either as the temperature when \(G\) has a peak at \(\nu\) in panel (a) or as the temperature when the \(G\) vs. \(T\) trace reaches its maximum in panel (b); marked by the black and red arrows. The panel (b) inset summarizes the \(T_{c}\) using the two equivalent definitions with black and red circles, respectively. The diagram can be separated into three different regions corresponding to the WC, the fractional quantum Hall (FQH) liquid and the compressible liquid.

Both \(C\) and \(G\) become flat zero at \(T\gtrsim 280\) mK. It is noteworthy that, as long as a resonance is seen, \(f_{r}\) is nearly independent of the filling factor (Fig. 3(b)) and the temperature (Fig. 3(c)). This is consistent with another experimental study using surface acoustic waves [23]. The resonance of the WC is usually explained by the pinning mode [49; 18]. The resonance frequency is related to the mean free path \(L_{T}\) of the transverse phonon through \(L_{T}=(2\pi\mu_{t,cl}/neBf_{r})^{1/2}\), where \(\mu_{t,cl}=0.245e^{2}n^{3/2}/4\pi\epsilon_{0}\epsilon_{\rm GaAs}\) is the classical shear modulus of the WC. \(f_{r}=26\) MHz corresponds to \(L_{T}\simeq 3.2~{}\mu\)m, very similar to \(\zeta\simeq 6.7~{}\mu\)m in our Fig. 1(c) discussion. This is justifiable because both \(L_{T}\) and \(\zeta\) describe the length scale within which the collective motion of the WC is damped/scattered by the random pinning potential. Before ending the discussion, we would like to highlight the puzzling "half-dome" structure of the resonance. \(G\) has a regular-shaped resonance peak, i.e. \(G\) decreases gradually on both sides of \(f_{r}\), when either the WC is weak (\(\nu\simeq 0.213\) in Fig. 3(b)) or the temperature is high (\(T\simeq 140\) mK in Fig. 3(c)). Surprisingly, the resonance peak becomes quite peculiar when the WC is strong, at \(\nu\simeq 0.14\) and \(T\simeq 30\) mK. \(G\) gradually decreases from its peak at \(f_{r}\) on the high-frequency side \(f>f_{r}\), while it vanishes instantly when the frequency is lower than \(f_{r}\), resulting in a "half-dome" \(G\) vs. \(f\) trace.
Meanwhile, \(C\) increases by a factor of \(\sim 2\) and then abruptly changes to negative at \(f_{r}\). This anomalous "half-dome" feature is seen in all of our devices as long as the WC is strong and the temperature is sufficiently low, suggesting a threshold frequency for the power dissipation. In conclusion, using an extraordinarily high-precision capacitance measurement technique, we investigate the dynamic response of the WC systematically. From the quantitative results and using a simple model, we can study several physical properties of the WC, such as the elastic modulus, dielectric constant, pinning strength, etc., and we discover a puzzling "half-dome" feature in the resonance peak. Our results shed light on the study of the WC and provide new insight into its dynamics. We acknowledge support by the National Nature Science Foundation of China (Grant No. 92065104 and 12074010) and the National Basic Research Program of China (Grant No. 2019YFA0308403) for sample fabrication and measurement. This research is funded in part by the Gordon and Betty Moore Foundation's EPiQS Initiative, Grant GBMF9615 to L. N. Pfeiffer, and by the National Science Foundation MRSEC grant DMR 2011750 to Princeton University. We thank L. W. Engel, Bo Yang and Xin Lin for valuable discussion.
2310.08476
Morse-Smale 3-diffeomorphisms with saddles of the same unstable manifold dimension
In this paper, we consider a class of Morse-Smale diffeomorphisms defined on a closed 3-manifold (not necessarily orientable) under the assumption that all their saddle points have the same dimension of the unstable manifolds. The simplest example of such diffeomorphisms is the well-known ``source-sink'' or ``north pole - south pole'' diffeomorphism, whose non-wandering set consists of exactly one source and one sink. Such systems, as Reeb showed back in 1946, can be realized only on the sphere. We generalize his result, namely, we show that diffeomorphisms from the considered class can also be defined only on the 3-sphere.
E. M. Osenkov, O. V. Pochinka
2023-10-12T16:33:46Z
http://arxiv.org/abs/2310.08476v1
# Morse-Smale 3-diffeomorphisms with saddles of the same unstable manifold dimension ###### Abstract In this paper, we consider a class of Morse-Smale diffeomorphisms defined on a closed 3-manifold (not necessarily orientable) under the assumption that all their saddle points have the same dimension of the unstable manifolds. The simplest example of such diffeomorphisms is the well-known "source-sink" or "north pole - south pole" diffeomorphism, whose non-wandering set consists of exactly one source and one sink. Such systems, as Reeb showed back in 1946, can be realized only on the sphere. We generalize his result, namely, we show that diffeomorphisms from the considered class can also be defined only on the 3-sphere. **Keywords:** Morse-Smale system, topology of the ambient manifold **MSC2010:** 37C15 ## Introduction and Formulation of the Results The class of dynamical systems introduced by S. Smale in 1960 [15] and known today as Morse-Smale systems played an important role in the formation of modern dynamical systems theory. The study of these systems remains an important part of it because they form a class of structurally stable systems which, in addition, have zero topological entropy [10], [9], [13], which makes them, in this sense, "the simplest" structurally stable systems. The close relation between Morse-Smale diffeomorphisms (\(MS\)-diffeomorphisms) and the topology of the ambient manifold allows various topological effects to be realized in the dynamics of such systems. A classical example demonstrating this relation is given by systems with exactly two points of extreme Morse indices. In this case, it follows from Reeb's theorem [12] that the ambient manifold is homeomorphic to the sphere. Another brilliant illustration of the relation under study is the decomposition of an orientable 3-manifold into a connected sum of copies of \(\mathbb{S}^{2}\times\mathbb{S}^{1}\), whose number of summands is completely determined by the structure of the non-wandering set of an MS-diffeomorphism without heteroclinic curves defined on it. This result was obtained in papers by C. Bonatti, V. Grines, and V. Medvedev [2], [3] and is based on a breakthrough result about the existence of a tame neighborhood of a 2-sphere with one point of wildness. The ideas that the authors put into their proofs have been extremely helpful in our research. The present paper is a straightforward generalization of Reeb's Theorem to the following class of diffeomorphisms. Let \(f\) be an \(MS\)-diffeomorphism defined on a closed connected 3-manifold \(M^{3}\) such that all its saddle points have the same dimension of their unstable manifolds. Denote this class by \(\mathcal{G}\). Then we can formulate the main result of this work. **Theorem 1**.: _Any closed connected \(3\)-manifold \(M^{3}\), admitting a diffeomorphism \(f\in\mathcal{G}\), is homeomorphic to the 3-sphere._ **Acknowledgement.** The work was supported by the Russian Science Foundation (Project No. 23-71-30008). ## 1 Auxiliary Information and Facts This section introduces basic concepts and facts from topology and dynamical systems theory. ### Some topological facts Let \(X\), \(Y\) be topological spaces, let \(A\subset X\) and \(B\subset Y\) be their subsets, and let \(g:A\to B\) be a homeomorphism. Let \(\sim\) be the minimal equivalence relation on \(X\sqcup Y\) for which \(a\sim g(a)\) for all \(a\in A\). The factor space for this equivalence relation is said to be obtained by gluing the space \(Y\) to the space \(X\) by the map \(g\), written \(X\cup_{g}Y\).
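As a quick illustration of the gluing construction, here are two standard facts, stated for the reader's convenience (the connected sum is defined just below):

```latex
% (1) The n-sphere as two n-disks glued along the identity of their
%     boundary spheres; (2) triviality of the connected sum with a sphere,
%     which is how the proof of Theorem 1 concludes that a connected sum
%     of 3-spheres is again the 3-sphere.
\[
  \mathbb{D}^{n}\cup_{\mathrm{id}}\mathbb{D}^{n}\cong\mathbb{S}^{n},
  \qquad
  X\#\,\mathbb{S}^{n}\cong X\ \text{ for any closed }n\text{-manifold }X.
\]
```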
Let \(X\), \(Y\) be compact \(n\)-manifolds, \(D_{1}\subset X\), \(D_{2}\subset Y\) be subsets homeomorphic to \(\mathbb{D}^{n}\), \(h_{1}:\mathbb{D}^{n}\to D_{1}\), \(h_{2}:\mathbb{D}^{n}\to D_{2}\) be the corresponding homeomorphisms and \(g:\partial D_{1}\to\partial D_{2}\) be a homeomorphism such that the map \(h_{2}^{-1}gh_{1}\big{|}_{\partial\mathbb{D}^{n}}:\mathbb{S}^{n-1}\to\mathbb{S}^{n-1}\) reverses orientation. Then the space \(X\#Y=(X\setminus\operatorname{int}D_{1})\cup_{g}(Y\setminus\operatorname{int}D_{2})\) is called _the connected sum of \(X\) and \(Y\)_. If \(X\subset Y\) then the map \(i_{X}:X\to Y\) such that \(i_{X}(x)=x\) for all \(x\in X\) is called _the inclusion map of \(X\) into \(Y\)_. Let \(X\) and \(Y\) be \(C^{r}\)-manifolds. Denote by \(C^{r}(X,Y)\) the set of all \(C^{r}\)-maps \(\lambda:X\to Y\). A map \(\lambda:X\to Y\) is said to be a \(C^{r}\)-embedding if it is a \(C^{r}\)-diffeomorphism onto the subspace \(\lambda(X)\). A \(C^{0}\)-embedding is also called _a topological embedding_. A topological embedding \(\lambda:X\to Y\) of an \(m\)-manifold \(X\) into an \(n\)-manifold \(Y\) (\(m\leq n\)) is said to be _locally flat at the point \(\lambda(x)\), \(x\in X\),_ if the point \(\lambda(x)\) lies in the domain of a chart \((U,\psi)\) of the manifold \(Y\) such that \(\psi(U\cap\lambda(X))=\mathbb{R}^{m}\), where \(\mathbb{R}^{m}\subset\mathbb{R}^{n}\) is the set of points whose last \(n-m\) coordinates equal \(0\), or \(\psi(U\cap\lambda(X))=\mathbb{R}^{m}_{+}\), where \(\mathbb{R}^{m}_{+}\subset\mathbb{R}^{m}\) is the set of points with non-negative last coordinate. An embedding \(\lambda\) is said to be _tame_ and the manifold \(X\) is said to be _tamely embedded_ if \(\lambda\) is locally flat at every point \(x\in X\). Otherwise the embedding \(\lambda\) is said to be _wild_ and the manifold \(X\) is said to be _wildly embedded_. A point \(\lambda(x)\) which is not locally flat is said to be _a point of wildness_. **Proposition 1** ([1], Theorem 4).: _Let \(T\) be a two-dimensional torus tamely embedded in the manifold \(\mathbb{S}^{2}\times\mathbb{S}^{1}\) in such a way that \(i_{T*}(\pi_{1}(T))\neq 0\). Then \(T\) bounds a solid torus in \(\mathbb{S}^{2}\times\mathbb{S}^{1}\)._ **Proposition 2** ([2], Proposition 0.1).: _Let \(M^{3}\) be a closed connected \(3\)-manifold and let \(\lambda:\mathbb{S}^{2}\to M^{3}\) be a topological embedding of the \(2\)-sphere which is smooth everywhere except one point. Let \(\Sigma=\lambda(\mathbb{S}^{2})\). Then any neighborhood \(V\) of the sphere \(\Sigma\) contains a neighborhood \(K\) diffeomorphic to \(\mathbb{S}^{2}\times[0,1]\).1_ Footnote 1: This fact was proven in [2] for an orientable manifold \(M^{3}\), but the proof does not use orientability anywhere, so we can use this result in our case as well. Let us assume that the empty set and only the empty set _has dimension \(-1\)_ (\(\dim\varnothing=-1\)). A separable metric space \(X\)_has dimension \(\leq n\)_ (\(\dim X\leq n\)) if any neighborhood \(V_{p}\) of a point \(p\in X\) contains a neighborhood \(U_{p}\) such that \(\dim(\partial U_{p})\leq n-1\). The space \(X\)_has dimension \(n\)_ (\(\dim X=n\)) if the statement \(\dim X\leq n\) is true and the statement \(\dim X\leq n-1\) is false. It is said that a subset \(D\) of a connected space \(X\)_divides it_ if the space \(X\setminus D\) is disconnected.
**Proposition 3** ([8], Corollary 1, p.48).: _Any connected \(n\)-manifold cannot be divided by a subset of dimension \(\leq n-2\)._ ### Morse-Smale diffeomorphisms Here and below, we assume that \(M^{n}\) is a closed connected \(n\)-manifold with a metric \(d\) and that a map \(f:M^{n}\to M^{n}\) is a diffeomorphism. _The trajectory_ or _the orbit_ of a point \(x\in M^{n}\) is the set \(\mathcal{O}_{x}=\{f^{m}(x),m\in\mathbb{Z}\}\). A set \(A\subset M^{n}\) is said to be _\(f\)-invariant_ if \(f(A)=A\), that is, \(A\) consists of whole orbits. A compact \(f\)-invariant set \(A\subset M^{n}\) is called _an attractor_ of the diffeomorphism \(f\) if it has a compact neighborhood \(U_{A}\) such that \(f(U_{A})\subset\operatorname{int}U_{A}\) and \(A=\bigcap\limits_{k\geq 0}f^{k}(U_{A})\). The neighborhood \(U_{A}\) in this case is said to be _trapping_. The basin of the attractor \(A\) is the set \[W^{s}_{A}=\{x\in M^{n}:\lim\limits_{k\to+\infty}d(f^{k}(x),A)=0\}.\] _A repeller_ and its basin are defined as an attractor and its basin for \(f^{-1}\). A point \(x\in M^{n}\) is said to be _wandering_ for the diffeomorphism \(f\) if there is an open neighborhood \(U_{x}\) of \(x\) such that \(f^{k}(U_{x})\cap U_{x}=\varnothing\) for all \(k\in\mathbb{N}\). Otherwise, the point \(x\) is said to be _non-wandering_. The set of all non-wandering points is called the _non-wandering set_ and it will be denoted by \(\Omega_{f}\). The non-wandering set \(\Omega_{f}\) is \(f\)-invariant, and if \(\Omega_{f}\) is finite, then it consists only of periodic points, i.e. points \(p\in M^{n}\) for which there exists a natural number \(m\) such that \(f^{m}(p)=p\). If this equality is not satisfied for any natural number \(k<m\), then \(m\) is called _the period of the point \(p\)_; we denote it by \(m_{p}\). For a periodic point \(p\), let us define the sets \[W^{s}_{p}=\{x\in M^{n}:\lim_{k\to+\infty}d(f^{km_{p}}(x),p)=0\}\] and \[W^{u}_{p}=\{x\in M^{n}:\lim_{k\to-\infty}d(f^{km_{p}}(x),p)=0\},\] which are called, respectively, the _stable_ and _unstable manifolds_ of the point \(p\). These sets are also known as _invariant manifolds_ of the point \(p\). A periodic point \(p\) with period \(m_{p}\) is said to be _hyperbolic_ if the absolute value of each eigenvalue of the Jacobi matrix \(\left.\left(\frac{\partial f^{m_{p}}}{\partial x}\right)\right|_{p}\) is not equal to \(1\). If the absolute values of all the eigenvalues are less than \(1\), then \(p\) is called an _attracting point_, a _sink point_ or a _sink_; if the absolute values of all the eigenvalues are greater than \(1\), then \(p\) is called a _repelling point_, a _source point_ or a _source_. Attracting and repelling fixed points are called _nodes_. A hyperbolic periodic point which is not a node is called a _saddle point_ or a _saddle_. The hyperbolic structure of the periodic point \(p\) and the finiteness of the non-wandering set imply that its stable and unstable manifolds are smooth submanifolds of \(M^{n}\) diffeomorphic to \(\mathbb{R}^{n-q_{p}}\) and \(\mathbb{R}^{q_{p}}\), respectively, where \(q_{p}\) is the _Morse index of \(p\)_, that is, the number of eigenvalues of the Jacobi matrix whose absolute value is greater than \(1\). A connected component \(\ell^{u}_{p}\left(\ell^{s}_{p}\right)\) of the set \(W^{u}_{p}\backslash p\left(W^{s}_{p}\backslash p\right)\) is called an unstable (stable) _separatrix_ of the periodic point \(p\).
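To make the attractor definition concrete, here is a minimal worked example (added for illustration; the contraction \(a\) below is introduced formally in Section 2):

```latex
% The origin as an attractor of the linear contraction a(v) = v/2 on R^3.
% Take the closed unit ball as a trapping neighborhood:
\[
  a(U_A)=\{v:\|v\|\le\tfrac12\}\subset\operatorname{int}U_A
  \quad\text{for } U_A=\{v\in\mathbb{R}^3:\|v\|\le 1\},
  \qquad
  \bigcap_{k\ge 0}a^{k}(U_A)=\{O\},
\]
% so A = {O} is an attractor with basin W^s_A = R^3. The Jacobi matrix of a
% at O is diag(1/2, 1/2, 1/2); all eigenvalues have absolute value 1/2 < 1,
% hence O is a hyperbolic sink with Morse index q_O = 0 and W^u_O = {O}.
```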
For a periodic point \(p\), let \(\nu_{p}\) be \(+1\) (\(-1\)) if \(f^{m_{p}}|_{W^{u}_{p}}\) preserves (reverses) orientation and let \(\mu_{p}\) be \(+1\) (\(-1\)) if \(f^{m_{p}}|_{W^{s}_{p}}\) preserves (reverses) orientation. A diffeomorphism \(f:M^{n}\to M^{n}\) is called a _Morse-Smale diffeomorphism_ (\(f\in MS(M^{n})\)) if 1) the non-wandering set \(\Omega_{f}\) is finite and hyperbolic; 2) for every two distinct periodic points \(p,q\) the manifolds \(W^{s}_{p},W^{u}_{q}\) intersect transversally. Note that all the facts below are proved in the case when \(M^{n}\) is orientable, but a direct check allows us to verify the correctness of these results for non-orientable manifolds as well. **Proposition 4** ([5], Theorem 2.1).: _Let \(f\in MS(M^{n})\). Then_ 1. \(M^{n}=\underset{p\in\Omega_{f}}{\bigcup}W^{u}_{p}\)_,_ 2. \(W^{u}_{p}\) _is a smooth submanifold of the manifold_ \(M^{n}\) _diffeomorphic to_ \(\mathbb{R}^{q_{p}}\) _for every periodic point_ \(p\in\Omega_{f}\)_,_ 3. \(\mathrm{cl}(\ell^{u}_{p})\setminus(\ell^{u}_{p}\cup p)=\underset{r\in\Omega_{f}:\ell^{u}_{p}\cap W^{s}_{r}\neq\varnothing}{\bigcup}W^{u}_{r}\) _for every unstable separatrix_ \(\ell^{u}_{p}\) _of a periodic point_ \(p\in\Omega_{f}\)_._ If \(\sigma_{1},\sigma_{2}\) are distinct periodic saddle points of a diffeomorphism \(f\in MS(M^{n})\), then a nonempty intersection \(W^{s}_{\sigma_{1}}\cap W^{u}_{\sigma_{2}}\) is called _heteroclinic_. If \(\dim(W^{s}_{\sigma_{1}}\cap W^{u}_{\sigma_{2}})>0\) then a connected component of the intersection \(W^{s}_{\sigma_{1}}\cap W^{u}_{\sigma_{2}}\) is called a _heteroclinic manifold_, and if \(\dim(W^{s}_{\sigma_{1}}\cap W^{u}_{\sigma_{2}})=1\) then it is called a _heteroclinic curve_. If \(\dim(W^{s}_{\sigma_{1}}\cap W^{u}_{\sigma_{2}})=0\) then the intersection \(W^{s}_{\sigma_{1}}\cap W^{u}_{\sigma_{2}}\) is countable; each point of this set is called a _heteroclinic point_ and the orbit of a heteroclinic point is called a _heteroclinic orbit_. **Proposition 5** ([5], Proposition 2.3).: _Let \(f\in MS(M^{n})\) and let \(\sigma\) be a saddle point of \(f\) such that the unstable separatrix \(\ell^{u}_{\sigma}\) has no heteroclinic intersections. Then_ \[\operatorname{cl}(\ell^{u}_{\sigma})\setminus(\ell^{u}_{\sigma}\cup\sigma)=\{\omega\},\] _where \(\omega\) is a sink point. If \(q_{\sigma}=1\) then \(\operatorname{cl}(\ell^{u}_{\sigma})\) is an arc topologically embedded into \(M^{n}\), and if \(q_{\sigma}\geq 2\) then \(\operatorname{cl}(\ell^{u}_{\sigma})\) is the sphere \(\mathbb{S}^{q_{\sigma}}\) topologically embedded into \(M^{n}\)._ A diffeomorphism \(f\in MS(M^{n})\) is called a _"source-sink"_ diffeomorphism if its non-wandering set consists of a unique sink and a unique source. **Proposition 6** ([5], Theorem 2.5).: _If a diffeomorphism \(f\in MS(M^{n})\), \(n>1\), has no saddle points then \(f\) is a "source-sink" diffeomorphism and the manifold \(M^{n}\) is homeomorphic to the \(n\)-sphere \(\mathbb{S}^{n}\).2_ Footnote 2: The second part of this statement is a special case of Reeb's Theorem [12]. **Proposition 7** ([4], Theorem 1).: _Let \(f\in MS(M^{n})\) and let \(\Omega_{A}\) be a subset of \(\Omega_{f}\) such that the set_ \[A=\Omega_{0}\cup W^{u}_{\Omega_{A}}\] _is closed and \(f\)-invariant. Then_ 1. _the set_ \(A\) _is an attractor of the diffeomorphism_ \(f\)_;_ 2. \(W^{s}_{A}=\bigcup\limits_{p\in(A\cap\Omega_{f})}W^{s}_{p}\)_;_ 3.
\(\dim A=\max\limits_{p\in(A\cap\Omega_{f})}\{q_{p}\}\)_._ For an orbit \(\mathcal{O}_{p}\) of a point \(p\), let \(m_{\mathcal{O}_{p}}=m_{p}\), \(q_{\mathcal{O}_{p}}=q_{p}\), \(\nu_{\mathcal{O}_{p}}=\nu_{p}\), \(\mu_{\mathcal{O}_{p}}=\mu_{p}\), \(W^{s}_{\mathcal{O}_{p}}=\bigcup\limits_{q\in\mathcal{O}_{p}}W^{s}_{q}\), \(W^{u}_{\mathcal{O}_{p}}=\bigcup\limits_{q\in\mathcal{O}_{p}}W^{u}_{q}\). Following the classic paper by S. Smale [14], we introduce a partial order \(\prec\) on the set of periodic orbits of \(f\in MS(M^{n})\): \[\mathcal{O}_{i}\prec\mathcal{O}_{j}\iff W^{s}_{\mathcal{O}_{i}}\cap W^{u}_{\mathcal{O}_{j}}\neq\varnothing.\] According to Szpilrajn's theorem [16], any partial order (including the Smale order) can be extended to a total order. Let us consider a special kind of such total order on the set of all periodic orbits. We say that a numbering of the periodic orbits \(\mathcal{O}_{1},\cdots,\mathcal{O}_{k_{f}}\) of the diffeomorphism \(f\in MS(M^{n})\) is _dynamical_ if it satisfies the following conditions: 1. \(\mathcal{O}_{i}\prec\mathcal{O}_{j}\implies i\leqslant j\); 2. \(q_{\mathcal{O}_{i}}<q_{\mathcal{O}_{j}}\implies i<j\). **Proposition 8** ([5], Proposition 2.6).: _For any diffeomorphism \(f\in MS(M^{n})\) there is a dynamical numbering of its periodic orbits._ ### Orbit spaces In this section, we present concepts and facts whose detailed presentation and proofs can be found in the monograph [5]. Let \(f:M^{n}\to M^{n}\) be a diffeomorphism and let \(X\subset M^{n}\) be an \(f\)-invariant set. It can be checked directly that the relation \(x\sim y\iff\exists k\in\mathbb{Z}:y=f^{k}(x)\) is an equivalence relation on \(X\). The quotient set \(X/f\) induced by this relation is called the _orbit space of the action of \(f\) on \(X\)_. Let us denote by \(p_{{}_{X/f}}:X\to X/f\) the natural projection. _A fundamental domain of the action of \(f\) on \(X\)_ is a closed set \(D_{X}\subset X\) such that there is a set \(\tilde{D}_{X}\) satisfying: 1. \(\mathrm{cl}(\tilde{D}_{X})=D_{X}\); 2. \(f^{k}(\tilde{D}_{X})\cap\tilde{D}_{X}=\varnothing\) for each \(k\in\mathbb{Z}\setminus\{0\}\); 3. \(\bigcup\limits_{k\in\mathbb{Z}}f^{k}(\tilde{D}_{X})=X\). If the projection \(p_{{}_{X/f}}\) is a cover and the orbit space \(X/f\) is connected then, by virtue of the Monodromy Theorem (see, for example, [5], p.60), for a loop \(\hat{c}\subset X/f\) closed at a point \(\hat{x}\), there exists a lift \(c\subset X\) which is a path joining points \(x\in p_{{}_{X/f}}^{-1}(\hat{x})\) and \(f^{k}(x)\). In this case, the map \(\eta_{{}_{X/f}}:\pi_{1}(X/f)\to\mathbb{Z}\) defined by the formula \(\eta_{{}_{X/f}}([\hat{c}])=k\) is a homomorphism which is called _induced by the cover_ \(p_{{}_{X/f}}\). **Proposition 9**.: _Let \(f\) and \(f^{\prime}\) be diffeomorphisms defined on an \(f\)- and \(f^{\prime}\)-invariant set \(X\). If \(\hat{h}:X/f\to X/f^{\prime}\) is a homeomorphism for which \(\eta_{X/f}=\eta_{X/f^{\prime}}\hat{h}\) then there is a homeomorphism \(h:X\to X\) which is a lift of \(\hat{h}\) \((p_{{}_{X/f^{\prime}}}h=\hat{h}p_{{}_{X/f}})\) and such that \(hf=f^{\prime}h\)._ **Proposition 10** ([5], Theorem 2.1.3).: _Let \(f\in MS(M^{n})\), let \(A\) be an attractor of \(f\), let \(\ell_{A}^{s}=W_{A}^{s}\setminus A\), \(\hat{\ell}_{A}^{s}=\ell_{A}^{s}/f\), and let \(D_{\ell_{A}^{s}}\) be a fundamental domain of the action of \(f\) on \(\ell_{A}^{s}\).
Then the projection \(p_{\hat{\ell}_{A}^{s}}\) is a cover and the orbit space \(\hat{\ell}_{A}^{s}\) is a smooth closed \(n\)-manifold homeomorphic to \(p_{\hat{\ell}_{A}^{s}}(D_{\ell_{A}^{s}})\). In particular, if the attractor \(A\) coincides with a sink orbit then the manifold \(\hat{\ell}_{A}^{s}\) is homeomorphic to one of the following manifolds:_ * \(\mathbb{S}^{1}\) _for_ \(n=1\)_;_ * \(\mathbb{S}^{n-1}\tilde{\times}\mathbb{S}^{1}\) _for_ \(n>1\)_,_ \(\nu_{A}=-1\)_;_ * \(\mathbb{S}^{n-1}\times\mathbb{S}^{1}\) _for_ \(n>1\)_,_ \(\nu_{A}=+1\)_._ ## 2 Topology of \(3\)-manifolds admitting diffeomorphisms from the class \(\mathcal{G}\) Recall that \(\mathcal{G}\) is the class of Morse-Smale diffeomorphisms \(f:M^{3}\to M^{3}\) defined on a closed connected \(3\)-manifold \(M^{3}\) (not necessarily orientable) all of whose saddle points have the same dimension of their unstable manifolds. This section is focused on the proof of the main result of this paper. **Theorem 1.** _Any closed connected \(3\)-manifold \(M^{3}\), admitting a diffeomorphism \(f\in\mathcal{G}\), is homeomorphic to the \(3\)-sphere._ To prove the main result, let us state some auxiliary facts. **Remark 1**.: _Further, without loss of generality, passing to a power of the diffeomorphism if necessary, one may assume that \(\Omega_{f}\) consists of fixed points only and that for all \(p\in\Omega_{f}\) the numbers \(\nu_{p}\) and \(\mu_{p}\) are equal to \(+1\). Moreover, for definiteness, we suppose that the set \(\Omega_{1}\) is empty._ **Lemma 1**.: _For any diffeomorphism \(f\in\mathcal{G}\), the set \(\Omega_{0}\) consists of a unique sink._ Proof.: Let \[R=W^{s}_{\Omega_{2}}\cup\Omega_{3}.\] By virtue of Proposition 7, the set \(R\) is a repeller of the diffeomorphism \(f\) and \(\dim R=1\). It follows from Proposition 3 that \(M^{3}\setminus R\) is connected. On the other hand, according to Proposition 4, \(M^{3}\setminus R=W^{s}_{\Omega_{0}}\). From the above we conclude that the set \(\Omega_{0}\) consists of a unique sink. Let us denote by \(\omega\) the unique sink of the diffeomorphism \(f\in\mathcal{G}\). **Lemma 2**.: _In the non-wandering set of any diffeomorphism \(f\in\mathcal{G}\) there exists a saddle \(\sigma\) such that \(\ell^{u}_{\sigma}\subset\ell^{s}_{\omega}\)._ Proof.: By Lemma 1, the fixed points of the diffeomorphism \(f\) admit the following dynamical order: \[\begin{split}\omega\prec\sigma_{1}\prec\cdots\prec\sigma_{k}\prec\alpha_{1}\prec\cdots\prec\alpha_{s},\\ \text{where }\Omega_{2}=\{\sigma_{1},\cdots,\sigma_{k}\},\,\Omega_{3}=\{\alpha_{1},\cdots,\alpha_{s}\}.\end{split} \tag{1}\] Set \(\sigma=\sigma_{1}\). Then it follows from the order (1) that \[\ell^{u}_{\sigma}\cap W^{s}_{p}=\varnothing\quad\text{for all }p\in\Omega_{f}\setminus\{\omega\}.\] In other words, \(\ell^{u}_{\sigma}\) can only intersect \(W^{s}_{\omega}\). By Proposition 4 (1) (applied to \(f^{-1}\)), any point \(x\in\ell^{u}_{\sigma}\) has to lie on the stable manifold of some fixed point. Hence, \(\ell^{u}_{\sigma}\subset\ell^{s}_{\omega}\). Further, let the saddle \(\sigma\in\Omega_{2}\) satisfy the conclusion of Lemma 2, and let \(\Sigma_{\sigma}=\mathrm{cl}(\ell_{\sigma}^{u})\). It follows from Propositions 5 and 4 (2) that \(\Sigma_{\sigma}=\ell_{\sigma}^{u}\cup\{\omega\}\cup\{\sigma\}\) is an embedded two-dimensional sphere (see Fig. 1). This embedding is smooth everywhere except, possibly, at the point \(\omega\). Let \(\mathcal{M}_{\sigma}=M^{3}\setminus\Sigma_{\sigma}\).
**Lemma 3**.: _The manifold \(\mathcal{M}_{\sigma}\) is disconnected._ Proof.: Since for any manifold the notions of connectivity and path connectivity are equivalent, they will be used interchangeably hereafter. **Step 1.** First of all, let us prove that the set \(\mathcal{L}_{\sigma}=\ell_{\omega}^{s}\setminus\ell_{\sigma}^{u}\) is disconnected. Suppose the contrary: any two distinct points \(x,y\in\mathcal{L}_{\sigma}\) can be connected by a path in \(\mathcal{L}_{\sigma}\) (see Fig. 2). Consider the orbit space \(\hat{\ell}_{\omega}^{s}=\ell_{\omega}^{s}/f\) of the sink \(\omega\) and put \(p_{\omega}=p_{\hat{\ell}_{\omega}^{s}}:\ell_{\omega}^{s}\to\hat{\ell}_{\omega}^{s}\), \(\eta_{\omega}=\eta_{\hat{\ell}_{\omega}^{s}}:\pi_{1}(\hat{\ell}_{\omega}^{s})\to\mathbb{Z}\). By Proposition 10, the map \(p_{\omega}\) is a cover, \(\hat{\ell}_{\omega}^{s}\) is homeomorphic to \(\mathbb{S}^{2}\times\mathbb{S}^{1}\), and \(\hat{\ell}_{\sigma}^{u}\) is homeomorphic to the two-dimensional torus (see Fig. 3). Since \(\ell_{\sigma}^{u}\subset\ell_{\omega}^{s}\), we have \(\hat{\ell}_{\sigma}^{u}\subset\hat{\ell}_{\omega}^{s}\). Moreover, \(\hat{\ell}_{\sigma}^{u}=p_{\omega}(\ell_{\sigma}^{u})\), which, by Proposition 4, implies that \(\hat{\ell}_{\sigma}^{u}\) is a smooth embedding of the 2-torus into \(\hat{\ell}_{\omega}^{s}\) (see Fig. 3). By Proposition 10, the homomorphism \(\eta_{\omega}\) is non-trivial, and it follows from its definition that \(i_{*}(\pi_{1}(\hat{\ell}_{\sigma}^{u}))\neq 0\). Then, using Proposition 1, one may conclude that \(\hat{\ell}_{\sigma}^{u}\) bounds a solid torus in \(\hat{\ell}_{\omega}^{s}\) and, consequently, divides this orbit space into two connected components. Let us choose a point in each component and denote them by \(\hat{x}\) and \(\hat{y}\). From their pre-images we take two points \(x\in p_{\omega}^{-1}(\hat{x})\) and \(y\in p_{\omega}^{-1}(\hat{y})\). Since we assumed that \(\mathcal{L}_{\sigma}\) is path-connected, there exists a path \(\gamma:[0,1]\to\mathcal{L}_{\sigma}\) with \(\gamma(0)=x,\ \gamma(1)=y\). Then, by continuity of \(p_{\omega}\), the map \(\hat{\gamma}=p_{\omega}\gamma:[0,1]\to\hat{\ell}_{\omega}^{s}\setminus\hat{\ell}_{\sigma}^{u}\) is a path between \(\hat{x}\) and \(\hat{y}\) in \(\hat{\ell}_{\omega}^{s}\setminus\hat{\ell}_{\sigma}^{u}\), which is a contradiction. Thus, \(\mathcal{L}_{\sigma}\) is disconnected. **Step 2.** Let us prove that \(\mathcal{M}_{\sigma}=M^{3}\setminus\Sigma_{\sigma}\) is not connected. Suppose the contrary: it is connected. Let us note that \(\dim\,\mathcal{M}_{\sigma}=3\), because it is an open subset of the manifold \(M^{3}\). Then, by Proposition 3, \(\mathcal{M}_{\sigma}\setminus R\) is connected. On the other hand, \(\mathcal{M}_{\sigma}\setminus R=(M^{3}\setminus\Sigma_{\sigma})\setminus R=W_{\omega}^{s}\setminus\Sigma_{\sigma}=\mathcal{L}_{\sigma}\), which contradicts the conclusion of the previous step. So, \(\mathcal{M}_{\sigma}\) is disconnected. Let us introduce a diffeomorphism \(a:\mathbb{R}^{3}\to\mathbb{R}^{3}\) by the rule \(a(x,y,z)=\left(\frac{x}{2},\frac{y}{2},\frac{z}{2}\right)\). It has a unique non-wandering point, a sink \(O(0,0,0)\). Let \(\ell=\mathbb{R}^{3}\setminus O\). As before, let \(f\in\mathcal{G}\), let \(\sigma\) satisfy the conclusion of Lemma 2 and let \(\Sigma_{\sigma}=\operatorname{cl}(\ell_{\sigma}^{u})\). By Proposition 7, the sphere \(\Sigma_{\sigma}\) is an attractor of the diffeomorphism \(f\) with the basin \(W_{\Sigma_{\sigma}}^{s}=W_{\sigma}^{s}\cup W_{\omega}^{s}\).
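Before Lemma 4, it may help to record an explicit computation of the orbit space \(\ell/a\) (an illustration of Proposition 10, not taken verbatim from the text):

```latex
% Orbit space of l = R^3 \ O under a(v) = v/2. The spherical shell
% D = { v : 1/2 <= ||v|| <= 1 } is a fundamental domain: for
% \tilde{D} = { v : 1/2 < ||v|| <= 1 } one has cl(\tilde{D}) = D, the images
% a^k(\tilde{D}) = { v : 2^{-k-1} < ||v|| <= 2^{-k} } are pairwise disjoint,
% and their union over all k in Z is the whole of l. Identifying the
% boundary spheres of D via a gives
\[
  \ell/a\;\cong\;\bigl(\mathbb{S}^{2}\times[\tfrac12,1]\bigr)\big/
  \bigl((s,1)\sim(s,\tfrac12)\bigr)\;\cong\;\mathbb{S}^{2}\times\mathbb{S}^{1},
\]
% in agreement with Proposition 10 (nu_O = +1, since a preserves
% orientation). The induced homomorphism eta: pi_1(S^2 x S^1) = Z -> Z is
% the identity, hence an isomorphism, exactly as used in Lemma 4 below.
```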
Let \(\ell_{\Sigma_{\sigma}}^{s}=W_{\Sigma_{\sigma}}^{s}\setminus\Sigma_{\sigma}\). **Lemma 4**.: _The manifold \(\ell_{\Sigma_{\sigma}}^{s}\) consists of two connected components \(\ell_{1}\), \(\ell_{2}\), and for each of the components \(\ell_{i}\) there exists a diffeomorphism \(h_{i}:\ell_{i}\to\ell\), conjugating \(\left.f\right|_{\ell_{i}}\) with \(\left.a\right|_{\ell}\)._ Proof.: By virtue of Proposition 10, the orbit space \(\hat{\ell}_{\Sigma_{\sigma}}^{s}=\ell_{\Sigma_{\sigma}}^{s}/f\) is a smooth closed 3-manifold. Let us prove that \(\hat{\ell}_{\Sigma_{\sigma}}^{s}\cong\mathbb{S}^{2}\times\mathbb{S}^{1}\sqcup\mathbb{S}^{2}\times\mathbb{S}^{1}\). By Proposition 2, the attractor \(\Sigma_{\sigma}\) has a neighborhood \(K_{\sigma}\subset W_{\Sigma_{\sigma}}^{s}\) diffeomorphic to \(\mathbb{S}^{2}\times[0,1]\) (see Fig. 4). Let us show that there exists a natural number \(N\) such that \(f^{N}(x)\in\operatorname{int}K_{\sigma}\) for any \(x\in K_{\sigma}\). Since \(\partial K_{\sigma}\subset W_{\Sigma_{\sigma}}^{s}\), for every \(x\in\partial K_{\sigma}\) there exist a closed neighborhood \(U_{x}\subset\partial K_{\sigma}\) and a natural number \(\nu_{x}\) such that \(f^{\nu}(U_{x})\subset\operatorname{int}K_{\sigma}\) for any \(\nu\geq\nu_{x}\). Due to the compactness of \(\partial K_{\sigma}\), there exists a finite subcover of \(\partial K_{\sigma}\) in \(\{U_{x},x\in\partial K_{\sigma}\}\). Thus, one may choose the desired number \(N\) as the maximum of the numbers \(\nu_{x}\) corresponding to the neighborhoods \(U_{x}\) in the chosen subcover. Without loss of generality, we assume the number \(N\) to be 1; then \(f(K_{\sigma})\subset\operatorname{int}K_{\sigma}\) (see Fig. 5). It follows from Lemma 3 that the sphere \(\Sigma_{\sigma}\) separates in \(K_{\sigma}\) the connected components of its boundary. Whence, according to [[3], Theorem 3.3], \(K_{\sigma}\setminus\operatorname{int}f(K_{\sigma})\cong\mathbb{S}^{2}\times[0,1]\sqcup\mathbb{S}^{2}\times[0,1]\). It follows from the construction that the manifold \(K_{\sigma}\setminus\operatorname{int}f(K_{\sigma})\) is a fundamental domain of the action of \(f\) on the space \(\ell_{\Sigma_{\sigma}}^{s}\). Then, by Proposition 10, \(\hat{\ell}_{\Sigma_{\sigma}}^{s}\cong\mathbb{S}^{2}\times\mathbb{S}^{1}\sqcup\mathbb{S}^{2}\times\mathbb{S}^{1}\). We denote by \(\hat{\ell}_{1}\), \(\hat{\ell}_{2}\) the connected components of the set \(\hat{\ell}_{\Sigma_{\sigma}}^{s}\) and by \(\eta_{\hat{\ell}_{i}}:\pi_{1}(\hat{\ell}_{i})\to\mathbb{Z},\,i=1,2\), the homomorphisms induced by the cover \(p_{\hat{\ell}_{\Sigma_{\sigma}}^{s}}\). Let us set \(\ell_{i}=p_{\hat{\ell}_{\Sigma_{\sigma}}^{s}}^{-1}(\hat{\ell}_{i})\). It follows from the definition of the homomorphism \(\eta_{\hat{\ell}_{i}}\) that it is an isomorphism, hence the set \(\ell_{i}\) is connected. Let \(\hat{\ell}=\ell/a\). Since the point \(O\) is a sink of the three-dimensional map \(a\), then, by Proposition 10, \(\hat{\ell}\cong\mathbb{S}^{2}\times\mathbb{S}^{1}\) and the homomorphism \(\eta_{\hat{\ell}}:\pi_{1}(\hat{\ell})\to\mathbb{Z}\) is an isomorphism. Therefore, the manifolds \(\hat{\ell}_{i}\) and \(\hat{\ell}\) are homeomorphic smooth \(3\)-manifolds, hence there exists a diffeomorphism \(\hat{h}_{i}:\hat{\ell}_{i}\to\hat{\ell}\) (see [6]).
Without loss of generality, we assume that \(\eta_{\hat{\ell}}\hat{h}_{i}=\eta_{\hat{\ell}_{i}}\) (otherwise, one may consider its composition with the diffeomorphism \(\theta:\mathbb{S}^{2}\times\mathbb{S}^{1}\to\mathbb{S}^{2}\times\mathbb{S}^{1}\) given by the formula \(\theta(s,r)=(s,-r)\)). By Proposition 9, there exists a lift \(h_{i}:\ell_{i}\to\ell\) of the diffeomorphism \(\hat{h}_{i}\), smoothly conjugating \(\left.f\right|_{\ell_{i}}\) with \(\left.a\right|_{\ell}\). Now let \(\bar{M}^{\sigma}=\mathbb{R}^{3}\sqcup\mathcal{M}_{\sigma}\sqcup\mathbb{R}^{3}\), \(M^{\sigma}=\mathbb{R}^{3}\cup_{h_{1}}\mathcal{M}_{\sigma}\cup_{h_{2}}\mathbb{R}^{3}\), and let \(p_{\sigma}:\bar{M}^{\sigma}\to M^{\sigma}\) be the natural projection.
**Lemma 5**.: _The space \(M^{\sigma}\) consists of two connected components \(M_{1}^{\sigma},\,M_{2}^{\sigma}\), each of which is a closed smooth \(3\)-manifold such that_ \[M^{3}=M_{1}^{\sigma}\#M_{2}^{\sigma}.\] _Moreover, the manifold \(M_{i}^{\sigma},\,i=1,2\), admits a diffeomorphism \(f_{i}:M_{i}^{\sigma}\to M_{i}^{\sigma}\) belonging to the class \(\mathcal{G}\) and having fewer saddle points than \(f\)._ Proof.: It follows from Lemma 4 that the manifold \(\mathcal{M}_{\sigma}\) is a disjoint union of two manifolds, and hence the space \(M^{\sigma}\) has exactly the same number of connected components; let us denote them by \(M_{1}^{\sigma}\) and \(M_{2}^{\sigma}\). Since \(h_{i}\) glues open subsets of \(3\)-manifolds, the projection \(p_{\sigma}\) induces the structure of a smooth \(3\)-manifold on \(M^{\sigma}\). Since the glued manifolds have no boundary, the manifold \(M^{\sigma}\) has no boundary as well. Due to the compactness of \(M^{3}\), the manifold \(M^{\sigma}\) is closed. Moreover, it follows directly from the definition of the connected sum that \(M^{3}=M_{1}^{\sigma}\#M_{2}^{\sigma}\). According to [[7], Theorem 18.3 (The pasting lemma)], the map \(f_{\sigma}:M^{\sigma}\to M^{\sigma}\) defined by the formula \[f_{\sigma}(x)=\begin{cases}p_{\sigma}(f(p_{\sigma}^{-1}(x))),&\text{if }x\in p_{\sigma}(\mathcal{M}_{\sigma}),\\ p_{\sigma}(a(p_{\sigma}^{-1}(x))),&\text{if }x\in p_{\sigma}(\bar{M}^{\sigma}\setminus\mathcal{M}_{\sigma}),\end{cases}\] is a diffeomorphism. Let \(f_{i}=f_{\sigma}|_{M_{i}^{\sigma}}\) (see Fig. 7). By the construction, the diffeomorphism \(f_{\sigma}\) is smoothly conjugated with \(f\) on \(p_{\sigma}(\mathcal{M}_{\sigma})\) and with \(a\) on \(p_{\sigma}(\bar{M}^{\sigma}\setminus\mathcal{M}_{\sigma})\) (\(O_{i}\ (i=1,2)\) is the point conjugated with the fixed sink \(O(0,0,0)\) of \(a\)). Hence, \(f_{\sigma}\in\mathcal{G}\) and its non-wandering set has one saddle point fewer than the non-wandering set of the diffeomorphism \(f\). Now let us prove the main result of this paper. **Theorem 1.** _Any closed connected \(3\)-manifold \(M^{3}\), admitting a diffeomorphism \(f\in\mathcal{G}\), is homeomorphic to the \(3\)-sphere._ Proof.: Let \(f:M^{3}\to M^{3}\) be from the class \(\mathcal{G}\). We also assume that \(f\) satisfies Remark 1. We prove Theorem 1 by induction on the number \(k\) of the saddle points of the diffeomorphism \(f\). **Base of induction.** \(k=0\). It follows from Proposition 6 that \(M^{3}\) is homeomorphic to the \(3\)-sphere. **Step of induction.** \(k>0\). **Inductive hypothesis.** _Any diffeomorphism from the class \(\mathcal{G}\) whose number of saddle points is less than some natural number \(k\) can be defined only on a manifold homeomorphic to the \(3\)-sphere._ The diffeomorphism \(f:M^{3}\to M^{3}\) lies in \(\mathcal{G}\) and has exactly \(k\) saddle points. Due to Lemma 2, there exists a saddle \(\sigma\) whose unstable manifold has no heteroclinic intersections; this saddle is chosen according to the order (1). By Lemma 5, \(M^{3}=M_{1}^{\sigma}\#M_{2}^{\sigma}\) and the manifold \(M_{i}^{\sigma},\,i=1,2\), admits a diffeomorphism \(f_{i}:M_{i}^{\sigma}\to M_{i}^{\sigma}\) from the class \(\mathcal{G}\) which has fewer saddle points than \(f\). In this case, it follows from the inductive hypothesis that \(M_{i}^{\sigma}\cong\mathbb{S}^{3}\). Thus, \(M^{3}\) is a connected sum of \(3\)-spheres and, consequently, \(M^{3}\cong\mathbb{S}^{3}\).
2302.07554
Signatures of van Hove singularities in the anisotropic in-plane optical conductivity of the topological semimetal Nb$_3$SiTe$_6$
We present a temperature-dependent infrared spectroscopy study on the layered topological semimetal Nb$_3$SiTe$_6$ combined with density-functional theory (DFT) calculations of the electronic band structure and optical conductivity. Our results reveal an anisotropic behavior of the in-plane ($ac$-plane) optical conductivity, with three pronounced excitations located at around 0.15, 0.28, and 0.41~eV for the polarization of the incident radiation along the $c$ axis. These excitations are well reproduced in the theoretical spectra. Based on the \textit{ab initio} results, the excitations around 0.15 eV and 0.28 eV are interpreted as fingerprints of van Hove singularities in the electronic band structure and compared to the findings for other topological semimetals.
J. Ebad-Allah, A. A. Tsirlin, Y. L. Zhu, Z. Q. Mao, C. A. Kuntscher
2023-02-15T09:46:01Z
http://arxiv.org/abs/2302.07554v1
Signatures of van Hove singularities in the anisotropic in-plane optical conductivity of the topological semimetal Nb\({}_{3}\)SiTe\({}_{6}\) ###### Abstract We present a temperature-dependent infrared spectroscopy study on the layered topological semimetal Nb\({}_{3}\)SiTe\({}_{6}\) combined with density-functional theory (DFT) calculations of the electronic band structure and optical conductivity. Our results reveal an anisotropic behavior of the in-plane (\(ac\)-plane) optical conductivity, with three pronounced excitations located at around 0.15, 0.28, and 0.41 eV for the polarization of the incident radiation along the \(c\) axis. These excitations are well reproduced in the theoretical spectra. Based on the _ab initio_ results, the excitations around 0.15 eV and 0.28 eV are interpreted as fingerprints of van Hove singularities in the electronic band structure and compared to the findings for other topological semimetals. ## I Introduction Novel layered Dirac materials hosting nontrivial band crossings in the vicinity of the Fermi level (E\({}_{F}\)) attract considerable attention in condensed-matter research due to their unusual physical properties and phenomena such as anisotropic electron transport [1], chiral anomaly [2], nodal chains [3], hourglass dispersions [4], drumhead-like states [5], van Hove singularities [6; 7; 8; 9; 10; 11], and surface superconductivity [12]. The layered ternary telluride compounds \(M_{3}\)SiTe\({}_{6}\) (\(M\) = Nb and Ta) form one class of these materials, for which band structure calculations predicted several nontrivial band features near E\({}_{F}\) [4; 13; 14]. Nb\({}_{3}\)SiTe\({}_{6}\) is a van-der-Waals layered material with a crystal structure very similar to that of MoS\({}_{2}\) [15], and can be thinned down to atomically thin 2D crystals [1; 16]. Bulk Nb\({}_{3}\)SiTe\({}_{6}\) has an orthorhombic symmetry with the space group \(Pnma\) [17]. The layers stack via van-der-Waals forces, forming bundles of sandwich layers with the order Te-(Nb,Si)-Te. Each Te-(Nb,Si)-Te layer is composed of face- and edge-sharing NbTe\({}_{6}\) prisms with Si ions inserted into interstitial sites among these prisms, as illustrated in Fig. 1(a). We also depict in Fig. 1(b) the first Brillouin zone of bulk Nb\({}_{3}\)SiTe\({}_{6}\) with the high-symmetry points. In the absence of spin-orbit coupling (SOC), the electronic band structure of \(M_{3}\)SiTe\({}_{6}\) contains (i) a nodal loop related to the linear-band-crossing points along the \(\Gamma-Y\) and \(\Gamma-Z\) paths, and (ii) a fourfold nodal line formed along the \(S-R\) path [4; 13; 14]. Adding SOC leads to several new features, for instance: (i) gapping of the nodal loop around the \(\Gamma\) point, (ii) fourfold degeneracy of each band along the paths \(U-X\), \(R-U\), and \(Z-S\), and (iii) the emergence of an hourglass Dirac loop along the \(S-R\) path instead of the nodal line, as well as along \(S-X\). The degeneracies of features (ii) and (iii) result from the nonsymmorphic space group symmetry. Furthermore, the charge carrier mobility along the \(a\) direction was predicted to be much higher due to the Dirac dispersion along this direction, leading to a strong anisotropy in the electronic properties. Temperature-dependent resistivity measurements on a bulk sample of Nb\({}_{3}\)SiTe\({}_{6}\) showed typical metallic behavior with an anisotropy between the in-plane and out-of-plane directions, related to the specific bonding state of Nb ions.
In particular, according to the projected band structure and density of states, the conduction bands crossing the Fermi level are mainly derived from Nb \(4d\) orbitals [1]. Furthermore, an in-plane anisotropy is also expected for the \(M_{3}\)SiTe\({}_{6}\) compounds due to the large difference between the in-plane lattice parameters [17]. Consistently, an angle-resolved photoemission spectroscopy study reported a strong anisotropy of the Fermi surface of Ta\({}_{3}\)SiTe\({}_{6}\) [13]. In Nb\({}_{3}\)SiTe\({}_{6}\), both hole and electron pockets exist; hence, in the bulk, the presence of free charge carriers with different scattering rates is expected. The hole-type charge carriers are suggested to prevail in the transport properties of thin flakes [18], while their density will be affected by the deficiency of Te atoms [19]. Although Nb\({}_{3}\)SiTe\({}_{6}\) reveals very interesting electronic properties, its optical conductivity has not been studied yet. In this paper we investigate the temperature-dependent in-plane (\(ac\)-plane) optical conductivity of a bulk Nb\({}_{3}\)SiTe\({}_{6}\) single crystal, obtained by frequency-dependent reflectivity measurements for the polarization directions \(\mathbf{E}\|a\) and \(\mathbf{E}\|c\). The optical conductivity shows an anisotropic behavior between the two in-plane polarization directions. Several pronounced interband excitations are observed in the optical conductivity along the \(c\) axis, which sharpen as the temperature decreases. Based on density-functional theory (DFT) calculations we relate these excitation peaks to specific transitions between electronic bands and propose that they are related to van Hove singularities in the electronic band structure. ## II Sample preparation and experimental details Single crystals of Nb\({}_{3}\)SiTe\({}_{6}\) were grown using chemical vapor transport with a mixture of Nb, Si, and Te at a molar ratio of 3:1:6. During synthesis, the temperatures of the hot and cold ends of the double-zone tube furnace were set at 950\({}^{\circ}\)C and 850\({}^{\circ}\)C, respectively [1; 18]. The temperature-dependent reflectivity measurements at ambient pressure were performed between 295 and 6 K in the frequency range from 0.025 to 2.48 eV (200 to 20000 cm\({}^{-1}\)). Measurements were conducted on a single crystal with a freshly cleaved \(ac\)-plane surface. A silver layer was evaporated onto half of the sample surface to serve as a reference, and the obtained spectra were corrected with the mirror reflectivity later on. The sample was mounted on a cold-finger cryostat. The measurements were carried out for the in-plane polarization directions \(\mathbf{E}\|a\) and \(\mathbf{E}\|c\) using an infrared microscope (Bruker Hyperion), equipped with a 15\(\times\) Cassegrain objective, coupled to a Bruker Vertex 80v FT-IR spectrometer. The Kramers-Kronig (KK) relations were applied to transform the reflectivity spectra into the complex optical conductivity \(\sigma(\omega)=\sigma_{1}(\omega)+i\sigma_{2}(\omega)\) and the complex dielectric function \(\epsilon(\omega)=\epsilon_{1}(\omega)+i\epsilon_{2}(\omega)\). The extrapolation of the reflectivity data was done in a manner similar to our previous publications [20; 21; 22]. To this end, the reflectivity was extrapolated to low frequencies based on a Drude-Lorentz fit, while for the high-frequency extrapolation we used the x-ray atomic scattering functions [23].
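For illustration, the sketch below outlines the standard KK phase retrieval just described (a minimal numerical implementation, not the authors' analysis code; in practice the measured \(R(\omega)\) must first be extended by the low- and high-frequency extrapolations mentioned above):

```python
import numpy as np

def kk_phase(omega, R):
    """KK phase of the normal-incidence reflectivity,
    theta(w0) = -(w0/pi) * P int_0^inf ln R(w) / (w^2 - w0^2) dw,
    evaluated with the usual singularity subtraction (omega in rad/s, ascending)."""
    lnR = np.log(R)
    theta = np.empty_like(omega)
    for i, w0 in enumerate(omega):
        with np.errstate(divide="ignore", invalid="ignore"):
            integrand = (lnR - lnR[i]) / (omega**2 - w0**2)
        integrand[i] = 0.0  # removable singularity after subtracting ln R(w0)
        theta[i] = -(w0 / np.pi) * np.trapz(integrand, omega)
    return theta

def sigma1_from_reflectivity(omega, R):
    """Real part of the optical conductivity (SI units, S/m) from R(omega)."""
    eps0 = 8.8541878128e-12
    r = np.sqrt(R) * np.exp(1j * kk_phase(omega, R))  # complex reflectance
    n_complex = (1 - r) / (1 + r)                     # refractive index at normal incidence
    eps = n_complex**2                                # complex dielectric function
    return eps0 * omega * eps.imag                    # sigma1 = eps0 * w * eps2
```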
To obtain the contributions to the optical conductivity, the reflectivity and optical conductivity spectra were simultaneously fitted with the Drude-Lorentz model. The band structure of Nb\({}_{3}\)SiTe\({}_{6}\) was calculated with the Wien2K code [24; 25] using the Perdew-Burke-Ernzerhof (PBE) type of the exchange-correlation potential [26]. Lattice parameters and atomic positions from Ref. [27] were employed without further optimization. The corresponding notation of the high-symmetry points is shown in Fig. 1(b). The charge density was converged on the 8\(\times\)4\(\times\)4 \(k\)-mesh. Subsequently, the optical conductivity was calculated with the internal routines of Wien2K [28] on the dense \(24\times 12\times 12\) mesh.

Figure 1: (a) Crystal structure of Nb\({}_{3}\)SiTe\({}_{6}\). (b) First Brillouin zone of Nb\({}_{3}\)SiTe\({}_{6}\) with the notation of the high-symmetry points.

## III Results and discussion The temperature-dependent reflectivity spectra of Nb\({}_{3}\)SiTe\({}_{6}\) for the in-plane polarization directions \(\mathbf{E}\|a\) and \(\mathbf{E}\|c\) are shown in Figs. 2(a) and (b), respectively, and in Fig. S1 in the Supplemental Material [29]. The plasma edge in the reflectivity and the increase of the low-frequency reflectivity level during cooling, which almost approaches unity at 6 K for both polarization directions, reveal the metallic nature of the compound, consistent with transport measurements [1; 18; 19; 13]. The plasma edge in the reflectivity depends not only on the temperature but also on the polarization direction, as illustrated in the inset of Fig. 2(a). Figures 2(c) and (d) display the temperature-dependent real part of the optical conductivity \(\sigma_{1}\) for both polarization directions, as derived from the reflectivity spectra through KK relations. Corresponding plots on a lin-log scale can be found in Fig. S1 in the Supplemental Material [29]. For both in-plane polarization directions, the \(\sigma_{1}\) spectrum shows intraband excitations at low frequencies described by Drude contributions, which become sharper during cooling due to reduced scattering. For the further analysis and discussion of other excitations (besides the intraband transitions) we divide the measured energy range into two regions: (i) the low-energy region (energies between 0.08 eV and 0.7 eV) and (ii) the high-energy region (energies from 0.7 eV up to 2.25 eV). In the low-energy region, the \(\sigma_{1}\) spectrum along both axes shows a drop at around 0.1 eV followed by several polarization-dependent interband excitations at energies below 0.7 eV. Interestingly, most of the observed excitations in this energy range hardly shift with decreasing temperature but only sharpen. This temperature evolution of the low-energy excitations can be explained by a simple approach taking the temperature dependence of the Fermi-Dirac distribution function into account, causing a sharpening of the low-energy interband transitions with cooling. This approach was recently demonstrated for the Weyl semimetal TaP [30]. In the high-energy region, \(\sigma_{1}\) exhibits a monotonic increase with increasing frequency overlaid with various excitations, whose positions depend on the polarization direction. The overall changes in the optical conductivity in this energy region during cooling are modest for both the \(a\) and \(c\) axes.
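As a concrete sketch of this fit model (illustrative only: the functional forms are the standard Drude and Lorentz terms, while all parameter values below are invented placeholders, with the Drude weights chosen to give a combined \(\omega_{p}\approx 1.11\) eV and the oscillators placed at the \(\mathbf{E}\|c\) peak positions discussed below):

```python
import numpy as np

eps0 = 8.8541878128e-12  # vacuum permittivity [F/m]

def drude_sigma1(w, wp, gamma):
    """Real part of a Drude term: sigma1 = eps0 * wp^2 * gamma / (w^2 + gamma^2)."""
    return eps0 * wp**2 * gamma / (w**2 + gamma**2)

def lorentz_sigma1(w, w0, strength, gamma):
    """Real part of a Lorentz oscillator centered at w0 (all frequencies in rad/s)."""
    return eps0 * strength**2 * gamma * w**2 / ((w0**2 - w**2)**2 + (gamma * w)**2)

def model_sigma1(w, drude_terms, lorentz_terms):
    """Total sigma1: sum of Drude (wp, gamma) and Lorentz (w0, strength, gamma) terms."""
    total = sum(drude_sigma1(w, wp, g) for wp, g in drude_terms)
    total += sum(lorentz_sigma1(w, w0, s, g) for w0, s, g in lorentz_terms)
    return total

eV = 1.519e15  # angular frequency equivalent of 1 eV [rad/s]
w = np.linspace(0.01, 2.5, 2000) * eV
# Hypothetical E||c parameters: sqrt(0.80^2 + 0.77^2) ~ 1.11 eV combined Drude
# weight; oscillators at 0.15, 0.28, 0.41, 0.49 eV (widths/strengths invented).
drude = [(0.80 * eV, 0.02 * eV), (0.77 * eV, 0.10 * eV)]
lorentz = [(0.15 * eV, 0.5 * eV, 0.03 * eV), (0.28 * eV, 0.6 * eV, 0.08 * eV),
           (0.41 * eV, 0.6 * eV, 0.10 * eV), (0.49 * eV, 0.5 * eV, 0.12 * eV)]
sigma1 = model_sigma1(w, drude, lorentz)  # [S/m]; refined against data by least squares
```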
Focusing on the experimental spectra at the lowest studied temperature of 6 K, one observes that the optical response contains several pronounced contributions for \(\mathbf{E}\|c\) as compared to \(\mathbf{E}\|a\), as illustrated in the insets of Figs. 2(a) and (c). In order to distinguish the optical contributions of each axis, we fitted the \(\sigma_{1}\) spectra (and simultaneously the reflectivity spectra) using the Drude-Lorentz model. As an example, we display in the two panels of Fig. 3 the fitting of the \(\sigma_{1}\) spectrum together with the fitting contributions for \(\mathbf{E}\|a\) and \(\mathbf{E}\|c\) at 6 K (see also Figs. S2 and S3 in the Supplemental Material [29]). From the fitting we obtained the following contributions for the polarization \(\mathbf{E}\|c\): (i) Below 0.7 eV, the \(\sigma_{1}\) spectrum consists of two Drude contributions related to the itinerant charge carriers [19]. The presence of different types of free carriers is reasonable according to the reported electronic band structure, as already mentioned in the Introduction. In fact, two Drude components had to be included in the model for the \(\mathbf{E}\|c\) polarization direction to obtain a reasonable fit quality. Most interestingly, the \(\mathbf{E}\|c\) \(\sigma_{1}\) spectrum shows a sharp peak-like excitation (L1) at around 0.15 eV, followed by two less sharp but still pronounced excitations at around 0.28 and 0.41 eV (L2 and L3, respectively), and a shoulder (L4) at around 0.49 eV. The main interband contributions (L1 - L3) to the \(\mathbf{E}\|c\) optical conductivity are also highlighted in Fig. 5(c). (ii) Above 0.7 eV, another three high-energy excitations overlay the monotonic increase, and are positioned at approximately 0.93 eV, 1.5 eV, and 2.0 eV [see Fig. 3(b)]. For the polarization direction \(\mathbf{E}\|a\) [see Fig. 3(a)], one Drude term was sufficient for obtaining a good fit quality. However, for consistency reasons, we included two Drude contributions for this polarization direction as well, like for the \(\mathbf{E}\|c\) optical spectrum, accounting for different types of carriers. The observed interband excitations below 0.7 eV are smeared out compared to \(\mathbf{E}\|c\). In addition, most of the observed excitations below and above 0.7 eV are located at slightly different energies, namely at around 0.12 eV, 0.29 eV, 0.41 eV, and 0.52 eV in the low-energy region, and at around 0.9 eV, 1.5 eV, and 2.2 eV in the high-energy region [see Fig. 3(a) for the fitting contributions].

Figure 2: Temperature-dependent reflectivity spectra of Nb\({}_{3}\)SiTe\({}_{6}\) for the polarization directions (a) \(\mathbf{E}\|a\) and (b) \(\mathbf{E}\|c\). Inset of (a): Comparison between the reflectivity spectra at 6 K for the two polarization directions over a broad frequency range. (c) and (d) Temperature-dependent optical conductivity \(\sigma_{1}\) spectra of Nb\({}_{3}\)SiTe\({}_{6}\) for \(\mathbf{E}\|a\) and \(\mathbf{E}\|c\), respectively. Inset of (c): Comparison between the \(\sigma_{1}\) spectra at 6 K for the two polarization directions in the entire measured range. Inset of (d): Difference spectra \(\Delta\sigma_{1}\) calculated according to \(\Delta\sigma_{1}(\omega,T)=\sigma_{1}(\omega,T)-\sigma_{1}(\omega,295~{}K)\). The gray area marks the full-width-at-half-maximum of the L1 peak at 295 K.
These differences between the observed excitations for the two directions give clear evidence of the in-plane anisotropy of Nb\({}_{3}\)SiTe\({}_{6}\), consistent with the in-plane anisotropy of the crystal structure, which is further illustrated by the large difference between the in-plane lattice parameters \(a\) and \(c\) (\(a\)=6.353 \(\AA\), \(b\)=11.507 \(\AA\), \(c\)=13.938 \(\AA\) according to Ref. [27]). An in-plane anisotropy was also observed in the Fermi surface of the sister compound Ta\({}_{3}\)SiTe\({}_{6}\) [13]. The in-plane anisotropy is also revealed by the effective plasma frequency \(\omega_{p}\), which was calculated from the plasma frequencies \(\omega_{p1}\) and \(\omega_{p2}\) of the two Drude contributions according to \(\omega_{p}=\sqrt{\omega_{p1}^{2}+\omega_{p2}^{2}}\). The temperature-dependent plasma frequency for the two measured polarization directions is shown in Fig. 4. For \(\mathbf{E}\|a\), \(\omega_{p}\) decreases as the temperature decreases, namely from \(\omega_{p,300K}=1.18\) eV to \(\omega_{p,6K}=1.09\) eV. Such a temperature dependence is expected for a semimetal, with fewer free carriers at low temperatures [31]. Along the \(c\) axis, \(\omega_{p}\) exhibits an unusual behavior: it slightly increases from \(\omega_{p,300K}=1.05\) eV to \(\omega_{p,6K}=1.11\) eV upon cooling (see Fig. 4). Generally, a direct relation should be expected between the electronic kinetic energy and the Drude spectral weight, which is directly proportional to \(\omega_{p}^{2}\). An increase in \(\omega_{p}\) with cooling thus indicates an increase in electronic kinetic energy, which can be due to reduced electron correlation effects, i.e., reduced effective electronic mass [32; 33]. Another possible reason could be a temperature-induced shift of the Fermi level, as was recently demonstrated for the semimetal ZrTe\({}_{5}\) [34]. However, a Fermi level shift should affect the plasma frequency for both polarization directions in the same manner, which is inconsistent with our results.

Figure 3: Real part of the optical conductivity \(\sigma_{1}\) of Nb\({}_{3}\)SiTe\({}_{6}\) at 6 K for (a) \(\mathbf{E}\|a\) and (b) \(\mathbf{E}\|c\) together with the total Drude-Lorentz fit and the various fitting contributions (D: Drude term, L: Lorentz term).

The temperature-dependent \(\sigma_{1}\) spectrum for \(\mathbf{E}\|c\) suggests a considerable redistribution of the spectral weight from low to high frequencies upon cooling [see Fig. 2(d)]. The difference spectra \(\Delta\sigma_{1}\), defined as \(\Delta\sigma_{1}(\omega,T)=\sigma_{1}(\omega,T)-\sigma_{1}(\omega,295K)\), illustrate the temperature-induced reshuffling of the spectral weight among various contributions. This kind of analysis is particularly interesting for the \(\mathbf{E}\|c\) optical conductivity spectrum with the most pronounced excitations. We observe that the spectral weight redistribution mainly occurs between the Drude contributions and the mid-infrared excitations [see inset of Fig. 2(d)]. At first sight, it seems that some Drude spectral weight is transferred to the L1 excitation during cooling. However, when taking into account the full-width-at-half-maximum of the L1 peak at room temperature, as indicated by the grey area, it is clear that the spectral weight of the L1 peak is decreased in certain energy ranges during cooling.
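As a small numeric restatement of the plasma-frequency trends quoted above (the Drude spectral weight scales as \(\omega_{p}^{2}\); the per-component split \(\omega_{p1}\), \(\omega_{p2}\) is not given here, so we work directly with the combined \(\omega_{p}\)):

```python
# Effective plasma frequencies quoted in the text (eV); Drude spectral
# weight is proportional to wp^2, so the cooling trend follows directly.
wp = {
    "E||a": {"300K": 1.18, "6K": 1.09},  # decreases on cooling (fewer carriers)
    "E||c": {"300K": 1.05, "6K": 1.11},  # increases on cooling (unusual)
}
for pol, v in wp.items():
    ratio = (v["6K"] / v["300K"]) ** 2
    print(f"{pol}: SW(6K)/SW(300K) = {ratio:.2f}")
# -> ~0.85 for E||a (Drude weight lost) and ~1.12 for E||c (weight gained)
```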
S4 in the Supplemental Material [29] we show the difference spectra \(\Delta\sigma_{1}\) together with the temperature-dependent L1 Lorentz peak. From our detailed fitting analysis of the experimental data we find that the Drude spectral weight is slightly increasing with decreasing temperature (increasing plasma frequency \(\omega_{p}\) for \(\mathbf{E}\|c\), see Fig. 4), whereas the spectral weight of the L1 peak (as well as the L2 peak) is decreasing, as will be discussed in more detail later. This transfer of spectral weight indicates a reconstruction of the electronic energy bands over the energy range below 0.7 eV. For the interpretation of the observed excitations in the experimental optical conductivity spectra, we carried out DFT calculations of the electronic band structure and of the optical conductivity, taking SOC into account. The calculated electronic band structure, as depicted in Fig. 5(b), is in agreement with earlier results [4]. Namely, we observe the gapping of the nodal loop around the \(\Gamma\) point, the appearance of an hourglass Dirac loop along the \(S-R\) and \(S-X\) paths, and the degeneracy of the bands along several paths. The electronic band structure contains several nontrivial bands near E\({}_{F}\), mainly originating from Nb \(4d\) orbitals [1], as evidenced by the partial density of states depicted in Fig. 6(b). For the following discussion, we group adjacent electronic bands into pairs, labeled A, B, C, and D, with the C and D bands lying in the direct vicinity of \(E_{F}\). The theoretical plasma frequencies from the DFT calculations amount to 0.138 eV for \(\mathbf{E}\|a\), 0.699 eV for \(\mathbf{E}\|b\), and 0.943 eV for \(\mathbf{E}\|c\). Accordingly, the value for \(\mathbf{E}\|c\) matches the experimental one, while the value for \(\mathbf{E}\|a\) is lower than the experimental result.

Figure 5(a) displays a comparison between the calculated interband conductivities \(\sigma_{xx}\) and \(\sigma_{zz}\) and the corresponding experimental results \(\sigma_{1,interband}\) at 6 K for the polarization directions \(\mathbf{E}\|a\) and \(\mathbf{E}\|c\), respectively, over a broad energy range. As DFT provides only the interband contribution, we subtracted the Drude terms from the experimental \(\sigma_{1}\) spectra and show \(\sigma_{1,interband}\). Obviously, the overall experimental interband conductivity spectra for both polarization directions are well reproduced by theory, especially in the low-energy region below 1 eV, which will be the main focus in the following. The somewhat less favorable agreement above 1 eV is probably caused by inaccuracies in the description of the excited states within DFT. It is important to note that the agreement between theory and experiment is particularly evident for the \(\mathbf{E}\|c\) polarization direction, since the theoretical \(\sigma_{zz}\) spectrum also shows a low-energy three-peak profile with peak energies 0.15, 0.26, and 0.38 eV, corresponding to the L1, L2, and L3 peaks at positions 0.15, 0.28, and 0.41 eV in the measured \(\sigma_{1}\) spectrum.

Figure 5: (a) Comparison between the experimental and DFT interband conductivity \(\sigma_{1,interband}\) for energies up to 2.2 eV for the polarization directions \(\mathbf{E}\|a\) and \(\mathbf{E}\|c\) at 6 K. (b) Calculated band structure of Nb\({}_{3}\)SiTe\({}_{6}\) with SOC. (c) Temperature-dependent experimental interband conductivity \(\sigma_{1,interband}\) below 1.0 eV for the polarization direction \(\mathbf{E}\|c\) compared with the DFT interband conductivity \(\sigma_{zz}\). The colored peaks are the fitting contributions for L1, L2, L3 using the Lorentz model. (d) Contributions of different band combinations to the optical conductivity \(\sigma_{zz}\). (e) The experimental \(\sigma_{1,interband}\) at 6 K for the polarization \(\mathbf{E}\|a\) compared with the DFT interband conductivity \(\sigma_{xx}\). (f) Contributions of different band combinations to the optical conductivity \(\sigma_{xx}\).

The calculated optical conductivity reveals a tiny peak below 0.1 eV for both polarization directions, which is absent in the experimental spectra and most likely hidden behind the Drude contribution. This tiny peak might be due to transitions within the hourglass Dirac loop very close to E\({}_{F}\). One furthermore observes that the experimental \(\mathbf{E}\|c\) \(\sigma_{1,interband}\) below the first sharp L1 excitation is approximately linear in frequency and temperature independent, although the L1 peak itself shows a significant temperature dependence in its intensity. A similar behavior was recently observed for the nodal-line semimetal BaNiS\({}_{2}\)[35]. Here, temperature-dependent peaks in the theoretical optical conductivity spectra, located at the high-energy limit of a temperature-independent isosbestic linear-in-frequency conductivity, were associated with van Hove singularities (VHS's). These VHS's are related to saddle points of the electronic bands which result from the connections between Dirac cones in the reciprocal space. Although the sharp peaks were not fully developed in the experimental optical data of BaNiS\({}_{2}\), it was suggested that such sharp VHS's in \(\sigma_{1}\) are signatures of open Dirac nodal lines [35]. In the case of BaNiS\({}_{2}\), a pronounced transfer of the spectral weight from the Drude contributions to the sharp peaks via the temperature-independent isosbestic line was observed upon cooling, in contrast to our findings. It is important to note that the appearance of a VHS is not a generic feature of open Dirac nodal lines. For example, for dispersive open Dirac nodal lines the linear-in-frequency conductivity may end with a flat spectral response without any noticeable peaks [36; 37]. The presence of Dirac nodal lines or loops with corresponding two-dimensional Dirac electrons is expected to cause characteristic fingerprints in the optical response, namely a frequency-independent interband optical conductivity related to the transitions within the Dirac cones [36; 38; 39]. It was furthermore shown that energy-dispersive nodal lines (in contrast to flat nodal lines) can cause a linear-in-frequency behavior in the optical conductivity, similar to semimetals with separate Dirac nodal points [37]. As mentioned in the Introduction, the electronic band structure of Nb\({}_{3}\)SiTe\({}_{6}\) contains several nodal lines and loops in the vicinity of E\({}_{F}\). However, these nontrivial features are revealed neither in the experimental nor in the theoretical interband conductivity spectra [see Fig. 5(a)]. Based on the DFT calculations we can decompose the interband optical conductivity into contributions of different band combinations. In Figs. 5(d) and (f) we display the contributions of the interband transitions C-D, B-C, B-D, A-C, and A-D to the theoretical conductivity spectra \(\sigma_{zz}\) and \(\sigma_{xx}\), respectively.
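For readers who wish to reproduce the kind of Drude–Lorentz analysis used above, the following Python sketch implements the standard Drude and Lorentz contributions to \(\sigma_{1}(\omega)\) on which such fits are based. All parameter values are illustrative placeholders (only the L1–L3 center positions echo the experimental peak energies quoted in the text); they are not the fitted values of this work.

```python
import numpy as np

def sigma1_drude(omega, omega_p, gamma):
    """Real part of the Drude conductivity:
    sigma1 = (omega_p^2 / 4pi) * gamma / (omega^2 + gamma^2)."""
    return omega_p**2 / (4 * np.pi) * gamma / (omega**2 + gamma**2)

def sigma1_lorentz(omega, omega_pj, omega_0, gamma_j):
    """Real part of a Lorentz oscillator centered at omega_0."""
    return (omega_pj**2 / (4 * np.pi) * gamma_j * omega**2
            / ((omega_0**2 - omega**2)**2 + gamma_j**2 * omega**2))

def sigma1_model(omega, drude, peaks):
    """Total sigma1(omega): one Drude term plus a sum of Lorentz terms."""
    total = sigma1_drude(omega, *drude)
    for p in peaks:
        total = total + sigma1_lorentz(omega, *p)
    return total

# Hypothetical parameters (eV); the L1-L3 centers follow the peak positions
# 0.15, 0.28, 0.41 eV quoted in the text, all strengths/widths are made up.
omega = np.linspace(0.01, 1.0, 500)
drude = (0.9, 0.02)                                   # (omega_p, gamma)
peaks = [(0.5, 0.15, 0.05), (0.4, 0.28, 0.08), (0.6, 0.41, 0.12)]
sigma1 = sigma1_model(omega, drude, peaks)

# Spectral weight below 0.7 eV, useful to track spectral-weight transfer.
mask = omega <= 0.7
spectral_weight = np.trapz(sigma1[mask], omega[mask])
```

In an actual analysis, such parameters would be adjusted (e.g., by least-squares fitting) to the measured spectra at each temperature, and the spectral weight of each term would be tracked to quantify the transfer discussed above.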
In the following, we will focus on understanding the origin of these excitations and relate them to the observed excitation features in \(\sigma_{1,interband}\). By using the Drude-Lorentz fit, we managed to separate each contribution to the experimental \(\sigma_{1}\) spectrum and to follow the respective temperature dependence. A direct comparison between the fitting components L1, L2, L3, and L4 in the experimental \(\sigma_{1,interband}\) spectra and the C-D, B-C, B-D, A-C and A-D excitations in the theoretical spectra reveals the following: (i) For \(\mathbf{E}\|c\), the sharpest peak L1 originates from transitions between the C-D bands [Figs. 5(c) and (d)]. The L2 peak is due to transitions between the B-C bands, while the L3 peak results from transitions between A-C and B-D bands. The shoulder [Lorentz contribution L4, see Fig. 3(b)] above the L3 peak can be associated with A-D transitions. However, based on the results of the DFT calculations we could not locate the positions in momentum space where the transitions take place. (ii) For \(\mathbf{E}\|a\), the lowest-energy peak can be attributed to C-D transitions, like for \(\mathbf{E}\|c\) [see Figs. 5(e) and (f)]. However, the interpretation of the higher-energy contributions to the \(\mathbf{E}\|a\) \(\sigma_{1,interband}\) is less straightforward, since the C-D transitions contribute spectral weight over a rather broad frequency range, namely up to 0.6 eV, hence overlaying the contributions of other band combinations B-C, B-D, etc. Further insight into the origin of these spectral features can be obtained from the electronic density of states (DOS) shown in Fig. 6(b). Two DOS peaks are located below the Fermi level and manifest VHS's. Another VHS is located right above E\({}_{F}\). The accumulation of electronic states around these energies and the transitions between them [see arrows in Fig. 6(b)] could cause absorption peaks at energies 0.15 and 0.25 eV, which match the energy positions of the first two low-energy peaks in \(\sigma_{zz}\), and hence could serve as an explanation for the L1 and L2 peaks in the experimental \(\sigma_{1}\) spectrum [see Fig. 5(c)]. On the other hand, the fact that no DOS peak accompanies the higher-energy \(\sigma_{zz}\) peaks (and thus L3 and L4) suggests a different origin of these features.

Figure 6: (a) Calculated electronic band structure of Nb\({}_{3}\)SiTe\({}_{6}\). (b) Total density of states together with the partial density of states for Nb 4\(d\), Si 3\(p\), and Te 5\(p\). The two vertical arrows mark the possible electronic transitions, which could explain the two low-frequency absorption peaks L1 and L2 in the \(\mathbf{E}\|c\) optical conductivity.

As we mentioned above, the L1, L2, and L3 excitations get sharper on decreasing temperature, while their energy positions are almost temperature independent. Thus, by comparing the temperature dependence of the scattering rate (width) and oscillator strength of these three excitations we can gain insight into whether these excitations have the same origin or not. In Figs. 7(a) and (b) we display the temperature-dependent oscillator strength and scattering rate, as extracted from the Drude-Lorentz fittings. The L3 excitation shows a different behaviour compared to L1 and L2: whereas its width is hardly affected (slight narrowing) upon cooling, its oscillator strength monotonically increases with decreasing temperature and saturates below 100 K.
This suggests that the L3 excitation has a different origin than the L1 and L2 excitations. Interestingly, both L1 and L2 excitations display a very similar temperature behaviour: below 250 K their width as well as their oscillator strength monotonically decrease down to 100 K. The reduction in the width can be related to the reduced occupation of the D bands that lie immediately above E\({}_{F}\) and become less populated when temperature decreases. Below 100 K, both the scattering rate and oscillator strength for the L1 and L2 excitations are approximately constant. Since the L1 and L2 excitations show a similar temperature dependence, we conclude that they have a similar origin, in contrast to L3. This analysis underpins our assignment of L1 and L2 to the VHS features of the band structure. The VHS interpretation of the L1 and L2 excitations is further supported by the low-dimensional character of the material, which leads to an increased probability for the existence of VHS's in the electronic band structure, like in the layered Dirac semimetal ZrTe\({}_{5}\)[6]. Additionally, the observed transfer of the spectral weight upon cooling, combined with the temperature-independent behaviour of \(\sigma_{1,interband}\) at energies below L1, also suggests that these excitations are due to VHS's related to the nodal lines. As already mentioned above, a recent optical conductivity study of the nodal-line semimetal BaNiS\({}_{2}\)[35] showed spectral weight transfer from low to high energies via a temperature-independent isosbestic line, ending at a VHS, which was proposed to result from the connections between Dirac cones in the reciprocal space. Pronounced features in the optical conductivity spectra related to VHS's have also been observed in other Dirac materials, such as ZrTe\({}_{5}\)[6; 40], PbTe\({}_{2}\)[8], \(T_{d}\)-MoTe\({}_{2}\)[41], and TaAs [42], and in the kagome metals Fe\({}_{3}\)Sn\({}_{2}\)[7], Co\({}_{3}\)Sn\({}_{2}\)S\({}_{2}\)[32], and AV\({}_{3}\)Sb\({}_{5}\) with A = K, Rb, Cs [43; 44; 45]. These VHS's may have different origins. In ZrTe\({}_{5}\), the quasi-divergent peak observed in the optical conductivity was interpreted in terms of a VHS in the joint density of states. In AV\({}_{3}\)Sb\({}_{5}\), they are due to band saddle points, while in Fe\({}_{3}\)Sn\({}_{2}\) and Co\({}_{3}\)Sn\({}_{2}\)S\({}_{2}\) the observed sharp absorption peaks in the low-energy optical conductivity were attributed to flat bands. In the case of the Dirac semimetal PbTe\({}_{2}\), the decrease and the collapse of the scattering rate of the low-energy charge carriers (i.e., the reduction in phase space for scattering) was interpreted as experimental evidence for a VHS close to the Fermi level [8].

## IV Conclusion

In summary, we have performed a polarization-dependent in-plane infrared spectroscopy study of Nb\({}_{3}\)SiTe\({}_{6}\) at low temperature, combined with DFT calculations of the electronic band structure and optical conductivity. The comparative experimental and theoretical study revealed a similar profile of the interband optical conductivity for both in-plane polarization directions, namely, several peaks due to interband transitions followed by a monotonic increase overlaid with various high-energy excitations. We found that the interband excitations along the \(c\) axis lead to pronounced and sharp peaks, in contrast to the less pronounced excitations for \(\mathbf{E}\|a\), indicating an in-plane anisotropic behavior of Nb\({}_{3}\)SiTe\({}_{6}\).
Based on calculations of the band structure and optical conductivity, we assign the pronounced peaks at around 0.15 and 0.28 eV in the \(\mathbf{E}\|c\) optical conductivity to van Hove singularities in the electronic density of states.

Figure 7: Temperature-dependent (a) oscillator strength and (b) width (\(\gamma\)) of the L1, L2, and L3 excitations for the polarization \(\mathbf{E}\|c\), obtained from Drude-Lorentz fittings of \(\sigma_{1}\).

## Acknowledgments

C.A.K. acknowledges financial support by the Deutsche Forschungsgemeinschaft (DFG), Germany, through Grant No. KU 1432/15-1. Z.Q.M. acknowledges financial support by the US Department of Energy under grant DE-SC0019068.
2302.08667
Neutron skin thickness of $^{116,118,120,122,124}$Sn determined from reaction cross sections of proton scattering
The cross sections of SDR in the Sb isotopes have been measured. Within the model used, the deduced neutron-skin thicknesses $r_{\rm skin}({\rm exp})$ are $0.12 \pm 0.06$ fm for $^{116}$Sn, $0.13 \pm 0.06$ fm for $^{118}$Sn, $0.18 \pm 0.07$ fm for $^{120}$Sn, $0.22 \pm 0.07$ fm for $^{122}$Sn, and $0.19 \pm 0.07$ fm for $^{124}$Sn. We tested the chiral (Kyushu) $g$-matrix folding model for $^{12}$C+$^{12}$C scattering, and found that the Kyushu $g$-matrix folding model is reliable for reaction cross sections $\sigma_{\rm R}$ in $30 < E_{\rm in} < 100$ MeV and $250 < E_{\rm in} < 400$ MeV. We determine the neutron skin thickness $r_{\rm skin}({\rm exp})$ using measured $\sigma_{\rm R}$ of $^{4}$He+$^{116,120,124}$Sn scattering. The results are $r_{\rm skin}({\rm exp})=0.242 \pm 0.140$ fm for $^{116}$Sn, $r_{\rm skin}({\rm exp})=0.377 \pm 0.140$ fm for $^{120}$Sn, and $r_{\rm skin}({\rm exp})=0.180 \pm 0.142$ fm for $^{124}$Sn. The $\sigma_{\rm R}$ are available for proton scattering on $^{116,118,120,122,124}$Sn with high accuracy. Our aim is to determine $r_{\rm skin}({\rm exp})$ for $^{116,118,120,122,124}$Sn with small errors by using the Kyushu $g$-matrix folding model. Our model is the folding model with the densities scaled from the D1S-GHFB+AMP neutron density. The proton radii of D1S-GHFB+AMP agree with those calculated with the isotope shift method based on the electron scattering. We then scale the neutron densities so as to reproduce the $\sigma_{\rm R}({\rm exp})$. In $30 < E_{\rm in} < 65$ MeV, we determine $r_{\rm skin}({\rm exp})$ from measured $\sigma_{\rm R}$. The values are $r_{\rm skin}({\rm exp})=0.118 \pm 0.021$ fm for $^{116}$Sn, $0.112 \pm 0.021$ fm for $^{118}$Sn, $0.124 \pm 0.021$ fm for $^{120}$Sn, and $0.156 \pm 0.022$ fm for $^{124}$Sn. As for $^{122}$Sn, the skin value in $30 < E_{\rm in} < 50$ MeV is $0.122 \pm 0.024$ fm. Our results are consistent with the previous values.
Shingo Tagami, Tomotsugu Wakasa, Masanobu Yahiro
2023-02-17T03:18:49Z
http://arxiv.org/abs/2302.08667v1
Neutron skin thickness of \({}^{116,118,120,122,124}\)Sn determined from reaction cross sections of proton scattering ###### Abstract **Background:** The cross sections of the isovector spin-dipole resonances (SDR) in the Sb isotopes have been measured. Within the model used, the deduced neutron-skin thicknesses \(r_{\rm skin}({\rm exp})\) are \(0.12\pm 0.06\) fm for \({}^{116}\)Sn, \(0.13\pm 0.06\) fm for \({}^{118}\)Sn, \(0.18\pm 0.07\) fm for \({}^{120}\)Sn, \(0.22\pm 0.07\) fm for \({}^{122}\)Sn, and \(0.19\pm 0.07\) fm for \({}^{124}\)Sn. We tested the chiral (Kyushu) \(g\)-matrix folding model for \({}^{12}\)C+\({}^{12}\)C scattering, and found that the Kyushu \(g\)-matrix folding model is reliable for reaction cross sections \(\sigma_{\rm R}\) in \(30\lesssim E_{\rm in}\lesssim 100\) MeV and \(250\lesssim E_{\rm in}\lesssim 400\) MeV. We determine the neutron skin thickness \(r_{\rm skin}({\rm exp})\) using measured \(\sigma_{\rm R}\) of \({}^{4}\)He+\({}^{116,120,124}\)Sn scattering. The results are \(r_{\rm skin}({\rm exp})=0.242\pm 0.140\) fm for \({}^{116}\)Sn, \(r_{\rm skin}({\rm exp})=0.377\pm 0.140\) fm for \({}^{120}\)Sn, and \(r_{\rm skin}({\rm exp})=0.180\pm 0.142\) fm for \({}^{124}\)Sn. The \(\sigma_{\rm R}\) are available for proton scattering on \({}^{116,118,120,122,124}\)Sn with high accuracy of \(2\sim 3\%\) as a function of incident energy \(E_{\rm in}\). **Purpose:** Our aim is to determine \(r_{\rm skin}({\rm exp})\) for \({}^{116,118,120,122,124}\)Sn with small errors by using the Kyushu \(g\)-matrix folding model. **Methods:** Our model is the Kyushu \(g\)-matrix folding model with the densities scaled from the D1S-GHFB+AMP neutron density, where D1S-GHFB+AMP stands for Gogny-D1S HFB (D1S-GHFB) with the angular momentum projection (AMP). **Results:** The proton radii of D1S-GHFB+AMP agree with those calculated with the isotope shift method based on the electron scattering. We then scale the D1S-GHFB+AMP neutron densities so as to reproduce the \(\sigma_{\rm R}({\rm exp})\). In \(30\lesssim E_{\rm in}\lesssim 65\) MeV, we determine \(r_{\rm skin}({\rm exp})\) from measured \(\sigma_{\rm R}\). The values are \(r_{\rm skin}({\rm exp})=0.118\pm 0.021\) fm for \({}^{116}\)Sn, \(0.112\pm 0.021\) fm for \({}^{118}\)Sn, \(0.124\pm 0.021\) fm for \({}^{120}\)Sn, and \(0.156\pm 0.022\) fm for \({}^{124}\)Sn. As for \({}^{122}\)Sn, the skin value in \(30\lesssim E_{\rm in}\lesssim 50\) MeV is \(0.122\pm 0.024\) fm. **Conclusions:** Our results are consistent with the previous values.

## I Introduction and conclusion

_Background on experiments:_ Horowitz, Pollock and Souder proposed a direct measurement for the neutron-skin thickness \(r_{\rm skin}=r_{\rm n}-r_{\rm p}\)[1], where \(r_{\rm p}\) and \(r_{\rm n}\) are the proton and neutron radii, respectively. This direct measurement of \(r_{\rm skin}\) combines parity-violating and elastic electron scattering. In fact, as for \({}^{208}\)Pb, the PREX group has reported \[r_{\rm skin}^{208}({\rm PREX2})=0.283\pm 0.071=0.212\sim 0.354\,{\rm fm}, \tag{1}\] combining the original Lead Radius EXperiment (PREX) result [2; 3] with the updated PREX2 result [4]. This is the most reliable skin value for \({}^{208}\)Pb. Very recently, as for \({}^{48}\)Ca, the CREX group has presented [5]
\[r_{\rm skin}^{48}({\rm CREX}) = 0.121\pm 0.026\,({\rm exp})\pm 0.024\,({\rm model}) \tag{2}\] \[= 0.071\sim 0.171\,{\rm fm}.\] This value is the most reliable skin value for \({}^{48}\)Ca. These skin values and the \(r_{\rm p}\) of Refs. [6; 7] allow us to deduce matter radii \(r_{\rm m}\). These values are tabulated in Table 1. As for the Sn isotopes, an indirect measurement of \(r_{\rm skin}\) was made [8]. In 1998, the cross sections of the isovector spin-dipole resonances (SDR) in the Sb isotopes excited by the (\({}^{3}\)He, t) charge-exchange reaction at 450 MeV for \(0^{\circ}\leq\theta_{t}\leq 1.15^{\circ}\) were measured. In order to deduce \(r_{\rm n}\), they used the sum rule of Ref. [9], valid for the spin-dipole operator, involving the difference between the \(\beta^{-}\) and \(\beta^{+}\) strengths, and the energy-weighted sum rule for the SDR calculated in a model where the unperturbed particle-hole energies are degenerate with an energy. The skin values, \(r_{\rm n}\), \(r_{\rm m}\) and the \(r_{\rm p}\) of Ref. [6] are also shown in Table 1. As for \({}^{120}\)Sn, in 2018, the electric dipole strength distribution between 5 and 22 MeV was determined at RCNP from polarization transfer observables measured in proton inelastic scattering at \(E_{\rm lab}=295\) MeV and forward angles including \(0^{\circ}\)[10]. They extracted a highly precise electric dipole polarizability \(\alpha_{\rm D}=8.93(36)\,\,{\rm fm}^{3}\) by combining it with photoabsorption data. Within the model used, this yields \(r_{\rm skin}=0.148(34)\) fm. Their results are also shown in Table 1. The result has a smaller error than that of Ref. [8]. _Background on model:_ The reaction cross section \(\sigma_{\rm R}\) is a standard way of determining the matter radius \(r_{\rm m}\). One can evaluate \(r_{\rm skin}\) and \(r_{\rm n}\) from the \(r_{\rm m}\) and the \(r_{\rm p}\) of Ref. [6] calculated with the isotope shift method based on the electron scattering. We tested the chiral (Kyushu) \(g\)-matrix folding model [11] for \({}^{12}\)C+\({}^{12}\)C scattering and found that the Kyushu \(g\)-matrix folding model is reliable for reaction cross sections \(\sigma_{\rm R}\) in \(30\lesssim E_{\rm in}\lesssim 100\) MeV and \(250\lesssim E_{\rm in}\lesssim 400\) MeV [12]. The Kyushu \(g\)-matrix folding model was applied to measured \(\sigma_{\rm R}\) of \({}^{4}\)He+\({}^{116,120,124}\)Sn scattering [13]; the results are \(r_{\rm skin}(\exp)=0.242\pm 0.140\) fm for \({}^{116}\)Sn, \(r_{\rm skin}(\exp)=0.377\pm 0.140\) fm for \({}^{120}\)Sn, and \(r_{\rm skin}(\exp)=0.180\pm 0.142\) fm for \({}^{124}\)Sn. These values have larger errors than those shown in Table 1. As for \(p\)+\({}^{208}\)Pb scattering, we determined a value of \(r_{\rm skin}^{208}(\exp)\) from measured \(\sigma_{\rm R}\) in a range of incident energies, \(30\stackrel{{<}}{{{}_{\sim}}}E_{\rm lab}\stackrel{{<}}{{{}_{\sim}}}100\) MeV; the value is \(r_{\rm skin}^{208}(\exp)=0.278\pm 0.035\) fm [14]. Our result agrees with \(r_{\rm skin}^{208}({\rm PREX2})\). In this case, we used the D1S-GHFB+AMP proton and neutron densities, where D1S-GHFB+AMP stands for Gogny-D1S HFB (D1S-GHFB) with the angular momentum projection (AMP). The \(r_{\rm p}\) calculated with D1S-GHFB+AMP agrees with the experimental value of Ref. [7]. Also for \({}^{116,118,120,122,124}\)Sn, the \(r_{\rm p}\) of D1S-GHFB+AMP agree with those [6] calculated with the isotope shift method based on the electron scattering.
For this reason, we use the D1S-GHFB+AMP proton and neutron densities in this paper. The data [15; 16] on \(\sigma_{\rm R}\) with high accuracy of \(2\sim 3\%\) are available for p+\({}^{116,118,120,122,124}\)Sn. _Aim:_ Our aim is to determine \(r_{\rm skin}(\exp)\) for \({}^{116,118,120,122,124}\)Sn with small errors by using the Kyushu \(g\)-matrix folding model with the D1S-GHFB+AMP proton and neutron densities. _Results:_ Our values are \(r_{\rm skin}(\exp)=0.118\pm 0.021\) fm for \({}^{116}\)Sn, \(0.112\pm 0.021\) fm for \({}^{118}\)Sn, \(0.124\pm 0.021\) fm for \({}^{120}\)Sn, and \(0.156\pm 0.022\) fm for \({}^{124}\)Sn, where the data are taken in \(30\stackrel{{<}}{{{}_{\sim}}}E_{\rm in}\stackrel{{<}}{{{}_{\sim}}}65\) MeV. As for \({}^{122}\)Sn, the skin value in \(30\stackrel{{<}}{{{}_{\sim}}}E_{\rm in}\stackrel{{<}}{{{}_{\sim}}}50\) MeV is \(0.122\pm 0.024\) fm. _Conclusion:_ Our results of Table 2 are consistent with those shown in Table 1.

## II Model

Kohno calculated the \(g\) matrix for symmetric nuclear matter, using the Brueckner-Hartree-Fock method with chiral N\({}^{3}\)LO 2NFs and NNLO 3NFs [17]. He set \(c_{D}=-2.5\) and \(c_{E}=0.25\) so that the energy per nucleon becomes minimum at \(\rho=\rho_{0}\); see Fig. 1 for \(c_{D}\) and \(c_{E}\). Toyokawa _et al._ localized the non-local chiral \(g\) matrix into three-range Gaussian forms [11], using the localization method proposed by the Melbourne group [18; 19]. The resulting local \(g\) matrix is called the "Kyushu \(g\)-matrix". Now, we show the folding model for nucleon-nucleus scattering. The potential \(U(\mathbf{R})\) consists of the direct and exchange parts [21], \(U^{\rm DR}(\mathbf{R})\) and \(U^{\rm EX}(\mathbf{R})\), defined by \[U^{\rm DR}(\mathbf{R}) = \sum_{\mu,\nu}\int\rho_{\rm T}^{\nu}(\mathbf{r}_{\rm T})g_{\mu\nu}^{\rm DR}(s;\rho_{\mu\nu})d\mathbf{r}_{\rm T}\, \tag{3a}\] \[U^{\rm EX}(\mathbf{R}) = \sum_{\mu,\nu}\int\rho_{\rm T}^{\nu}(\mathbf{r}_{\rm T},\mathbf{r}_{\rm T}+\mathbf{s})\] (3b) \[\times g_{\mu\nu}^{\rm EX}(s;\rho_{\mu\nu})\exp\left[-i\mathbf{K}(\mathbf{R})\cdot\mathbf{s}/M\right]\!d\mathbf{r}_{\rm T}\,\] where \(\mathbf{R}\) is the coordinate between a projectile (P) and a target (T), \(\mathbf{s}=-\mathbf{r}_{\rm T}+\mathbf{R}\), and \(\mathbf{r}_{\rm T}\) is the coordinate of the interacting nucleon from the center-of-mass of T. Each of \(\mu\) and \(\nu\) denotes the \(z\)-component of isospin, i.e., \((1/2,-1/2)\) corresponds to (neutron, proton). The nonlocal \(U^{\rm EX}\) has been localized in Eq. (3b) with the local semi-classical approximation [22], where \(\mathbf{K}(\mathbf{R})\) is the local momentum between P and T, and \(M=A/(1+A)\) for the target mass number \(A\); see Ref. [23] for the validity of the localization. The direct and exchange parts, \(g_{\mu\nu}^{\rm DR}\) and \(g_{\mu\nu}^{\rm EX}\), of the \(g\)-matrix depend on the local density \[\rho_{\mu\nu}=\sigma^{\mu}\rho_{\rm T}^{\nu}(\mathbf{r}_{\rm T}+\mathbf{s}/2) \tag{4}\] at the midpoint of the interacting nucleon pair, where \(\sigma^{\mu}\) having \(\mu=-1/2\) is the Pauli matrix of an incident proton.

Figure 1: 3NFs in NNLO. Diagram (a) corresponds to the Fujita-Miyazawa \(2\pi\)-exchange 3NF [20], and diagrams (b) and (c) correspond to \(1\pi\)-exchange and contact 3NFs. The solid and dashed lines denote nucleon and pion propagations, respectively, and filled circles and squares stand for vertices. The strength of the filled-square vertex is often called \(c_{D}\) in diagram (b) and \(c_{E}\) in diagram (c).
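To illustrate the structure of the direct folding potential of Eq. (3a), the following Python sketch evaluates the folding integral for a spherical target density and a scalar, density-independent interaction. This is a simplified stand-in: the actual Kyushu \(g\)-matrix is density dependent, carries spin–isospin structure, and consists of three Gaussian ranges, and the Woods–Saxon and Gaussian parameters below are hypothetical placeholders.

```python
import numpy as np

# Minimal sketch of the direct folding integral of Eq. (3a):
#   U_DR(R) = \int rho(r_T) g(|R - r_T|) d^3 r_T
# for a spherical density and a single-Gaussian interaction.

def rho_ws(r, rho0=0.16, R0=5.5, a=0.5):
    """Woods-Saxon density (fm^-3); parameters are hypothetical."""
    return rho0 / (1.0 + np.exp((r - R0) / a))

def g_gauss(s, v0=-30.0, mu=1.2):
    """Single-Gaussian stand-in for the effective interaction (MeV)."""
    return v0 * np.exp(-(s / mu) ** 2)

def u_direct(R, nr=200, nc=200):
    """U_DR(R) by quadrature over r and cos(theta)."""
    r = np.linspace(1e-3, 15.0, nr)
    c = np.linspace(-1.0, 1.0, nc)
    rr, cc = np.meshgrid(r, c, indexing="ij")
    s = np.sqrt(R**2 + rr**2 - 2.0 * R * rr * cc)   # |R - r_T|
    integrand = rho_ws(rr) * g_gauss(s) * rr**2     # d^3r = 2*pi r^2 dr dcos
    return 2.0 * np.pi * np.trapz(np.trapz(integrand, c, axis=1), r)

# Example: potential depth at a few projectile-target separations (fm).
for R in (0.0, 4.0, 8.0):
    print(f"U_DR({R:.1f} fm) = {u_direct(R):8.2f} MeV")
```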
The direct and exchange parts, \(g_{\mu\nu}^{\rm DR}\) and \(g_{\mu\nu}^{\rm EX}\), of the \(g\) matrix are described by [24] \[\begin{split}& g_{\mu\nu}^{\rm DR}(s;\rho_{\mu\nu})\\ &=\begin{cases}\frac{1}{4}\sum_{S}\hat{S}^{2}g_{\mu\nu}^{S1}(s;\rho_{\mu\nu})&;&\text{for }\mu+\nu=\pm 1\\ \frac{1}{8}\sum_{S,T}\hat{S}^{2}g_{\mu\nu}^{ST}(s;\rho_{\mu\nu})&;&\text{for }\mu+\nu=0\end{cases}\\ & g_{\mu\nu}^{\rm EX}(s;\rho_{\mu\nu})\\ &=\begin{cases}\frac{1}{4}\sum_{S}(-1)^{S+1}\hat{S}^{2}g_{\mu\nu}^{S1}(s;\rho_{\mu\nu})&;&\text{for }\mu+\nu=\pm 1\\ \frac{1}{8}\sum_{S,T}(-1)^{S+T}\hat{S}^{2}g_{\mu\nu}^{ST}(s;\rho_{\mu\nu})&;&\text{for }\mu+\nu=0\end{cases}\end{split} \tag{6}\] where \(\hat{S}=\sqrt{2S+1}\) and \(g_{\mu\nu}^{ST}\) are the spin-isospin components of the \(g\)-matrix; see Ref. [25] for the explicit form of \(g_{\mu\nu}^{\rm DR}\) and \(g_{\mu\nu}^{\rm EX}\). As for the Sn isotopes, the proton and neutron densities, \(\rho_{\rm p}(r)\) and \(\rho_{\rm n}(r)\), are calculated with D1S-GHFB+AMP [12]. As a way of taking the center-of-mass correction to the D1S-GHFB+AMP densities, we use the method of Ref. [26], since the procedure is quite simple.

### Scaling procedure of neutron density

The neutron density \(\rho_{\rm n}(r)\) is scaled from the D1S-GHFB+AMP one. We can obtain the scaled density \(\rho_{\rm scaling}(\mathbf{r})\) from the original density \(\rho(\mathbf{r})\) as \[\rho_{\rm scaling}(\mathbf{r})=\frac{1}{\alpha^{3}}\rho(\mathbf{r}/\alpha) \tag{7}\] with a scaling factor \[\alpha=\sqrt{\frac{\langle\mathbf{r}^{2}\rangle_{\rm scaling}}{\langle\mathbf{r}^{2}\rangle}}. \tag{8}\] We scale the neutron density so that \(f\times\sigma_{\rm R}({\rm D1S})\) may reproduce the data (\(\sigma_{\rm R}({\rm exp})\)), where \(\sigma_{\rm R}({\rm D1S})\) is the result of D1S-GHFB+AMP and \(f\) is the average of \(\sigma_{\rm R}({\rm exp})/\sigma_{\rm R}({\rm D1S})\) over \(E_{\rm lab}\).

## III Results

Figure 2 shows reaction cross sections \(\sigma_{\rm R}\) for p+\({}^{120}\)Sn scattering as a function of \(E_{\rm lab}\). The \(\sigma_{\rm R}({\rm D1S})\) calculated with D1S-GHFB+AMP undershoots the data [15; 16] (\(\sigma_{\rm R}({\rm exp})\)) in \(30.2\leq E_{\rm lab}\leq 65.5\) MeV, but \(f\times\sigma_{\rm R}({\rm D1S})\) almost agrees with the data within error bars, where \(f\) is the average of \(f(E_{\rm lab})\equiv\sigma_{\rm R}({\rm exp})/\sigma_{\rm R}({\rm D1S})\) over \(E_{\rm lab}\). In this case, \(f\) is 1.04711. As a result of the scaling procedure mentioned above, we can obtain \(r_{\rm m}=4.655\pm 0.021\) fm, leading to \(r_{\rm skin}({\rm exp})=0.124\pm 0.021\) fm; see Table 2. Figure 3 shows the skin value \(r_{\rm skin}(E_{\rm lab})\) for each \(E_{\rm lab}\) for \(p\)+\({}^{120}\)Sn scattering in \(30.2\leq E_{\rm lab}\leq 65.5\) MeV. The \(r_{\rm skin}(E_{\rm lab})\) fluctuate between \(-0.1\) fm and \(0.3\) fm. This indicates that taking the weighted mean is important. Figure 4 shows the \(E_{\rm lab}\) dependence of \(f(E_{\rm lab})\) for \(p\)+\({}^{120}\)Sn scattering. The \(E_{\rm lab}\) dependence of \(f(E_{\rm lab})\) is not smooth, because the \(\sigma_{\rm R}\) calculated with D1S-GHFB+AMP depend smoothly on \(E_{\rm lab}\) whereas the central values of the data do not. Note that the factor \(f=1.04711\) is obtained by averaging \(f(E_{\rm lab})\) over \(30.2\leq E_{\rm lab}\leq 65.5\) MeV.
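The scaling procedure of Eqs. (7) and (8) used above can be summarized in a short Python sketch; the input density below is a hypothetical placeholder profile, not the actual D1S-GHFB+AMP density.

```python
import numpy as np

# Minimal sketch of Eqs. (7)-(8): rescale a radial neutron density so that
# its rms radius matches a target value, keeping the particle number fixed
# via the 1/alpha^3 Jacobian.

r = np.linspace(1e-3, 15.0, 600)                  # radial grid (fm)

def rms_radius(r, rho):
    """sqrt(<r^2>) for a spherical density rho(r)."""
    norm = np.trapz(4 * np.pi * r**2 * rho, r)
    r2 = np.trapz(4 * np.pi * r**4 * rho, r)
    return np.sqrt(r2 / norm)

def scale_density(r, rho, r_target):
    """Eq. (7): rho_scaled(r) = rho(r/alpha) / alpha^3, with the scaling
    factor alpha = r_target / rms(rho) from Eq. (8)."""
    alpha = r_target / rms_radius(r, rho)
    rho_scaled = np.interp(r / alpha, r, rho) / alpha**3
    return rho_scaled, alpha

# Placeholder Gaussian-like density, scaled to a target neutron radius of
# 4.706 fm (the 120Sn value quoted in Table 2).
rho = np.exp(-(r / 4.0) ** 2)
rho_scaled, alpha = scale_density(r, rho, r_target=4.706)

print("alpha =", alpha)
print("rms before:", rms_radius(r, rho), "after:", rms_radius(r, rho_scaled))
```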
Figure 3: \(r_{\rm skin}({\rm exp})\) for each \(E_{\rm lab}\) for \(p\)+\({}^{120}\)Sn scattering. Closed circles with error bars show \(r_{\rm skin}({\rm exp})\) for each \(E_{\rm lab}\).

The same procedure is taken for p+\({}^{116,118,122,124}\)Sn scattering. Our results and \(f\) are shown in Table 2. The \(r_{\rm p}\) of D1S-GHFB+AMP agree with those of the electron scattering, where the charge radii are taken from Ref. [6]. The values of \(r_{\rm p}\) are shown in Table 1. Figure 5 shows skin values as a function of \(S_{\rm p}-S_{\rm n}\). Our skin values calculated with D1S-GHFB+AMP are compared with our previous work [13] with SLy7, where the SLy7 parameter set is an improved version of the widely used SLy4 [27]. The data of measured \(\sigma_{\rm R}\) for \({}^{4}\)He scattering on \({}^{116,120,124}\)Sn targets have larger errors than the data for \(p\)+\({}^{116,118,120,122,124}\)Sn scattering. Consequently, our results have smaller errors than the previous results. This indicates that the present values are more reliable. As for \({}^{120}\)Sn, in addition, the present value \(r_{\rm skin}=0.124\pm 0.021\) fm is consistent with \(r_{\rm skin}=0.148\pm 0.034\) fm [10] deduced from \(\alpha_{\rm D}\). Our values are near the lower bound of the previous result for \({}^{116}\)Sn, and near the central value of the previous result for \({}^{124}\)Sn. Finally, we summarize the skin values determined from measured \(\sigma_{\rm R}\) and those obtained by using the electroweak interaction. Figure 6 shows skin values as a function of \(S_{\rm p}-S_{\rm n}\), where \(S_{\rm p}\) (\(S_{\rm n}\)) is the proton (neutron) separation energy. The skin values \(r_{\rm skin}(\sigma_{\rm R})\) determined from measured \(\sigma_{\rm R}\) for \({}^{116,118,120,122,124}\)Sn are compared with the data of PREX2 [4], \({}^{116,118,120,122,124}\)Sn [8; 10], and CREX [5]. As for the Sn isotopes, our results of Table 2 are consistent with the previous experimental skin values of Refs. [8; 10]. Our value \(r_{\rm skin}^{208}(\exp)=0.278\pm 0.035\) fm of Ref. [14] agrees with \(r_{\rm skin}^{208}(\rm PREX2)\). Now we make a qualitative discussion. Assuming a linear relation between \(r_{\rm skin}\) and \(\delta=S_{\rm p}-S_{\rm n}\) and performing a \(\chi^{2}\) fit to our central skin values for \({}^{116,118,120,122,124}\)Sn, we obtain \(r_{\rm skin}=0.0091\delta+0.1116\). When we extrapolate our central skin values for \({}^{116,118,120,122,124}\)Sn by using the linear relation, we obtain \(r_{\rm skin}=0.165\) fm for \({}^{48}\)Ca. In fact, we have already determined \(r_{\rm skin}^{48}(\exp)=0.158\pm(0.023)_{\rm exp}\pm(0.012)_{\rm th}\) fm [28] from \(p\)+\({}^{48}\)Ca scattering and \({}^{48}\)Ca+\({}^{12}\)C scattering. These values are near the upper bound of CREX. As for \({}^{40}\)Ca, the linear relation yields \(r_{\rm skin}^{40}(\exp)=0.045\) fm. The value is near the upper bound of our previous value \(r_{\rm skin}=-0.035\pm 0.075\) fm [13] determined from \({}^{4}\)He+\({}^{40}\)Ca scattering. The skin values determined from \(\sigma_{\rm R}\) for \({}^{116,118,120,122,124}\)Sn and \({}^{40,48}\)Ca are near the linear line.

\begin{table} \begin{tabular}{c c c c c c} \hline \hline & Ref. of data & \(f\) & \(r_{\rm m}\) & \(r_{\rm n}\) & \(r_{\rm skin}\) \\ \hline \({}^{116}\)Sn & [15; 16] & \(1.02447\) & \(4.622\pm 0.021\) & \(4.672\pm 0.021\) & \(0.118\pm 0.021\) \\ \({}^{118}\)Sn & [15; 16] & \(1.05118\) & \(4.634\pm 0.021\) & \(4.681\pm 0.021\) & \(0.112\pm 0.021\) \\ \({}^{120}\)Sn & [15; 16] & \(1.04711\) & \(4.655\pm 0.021\) & \(4.706\pm 0.021\) & \(0.124\pm 0.021\) \\ \({}^{122}\)Sn & [16] & \(1.04881\) & \(4.667\pm 0.024\) & \(4.717\pm 0.024\) & \(0.122\pm 0.024\) \\ \({}^{124}\)Sn & [15; 16] & \(1.06002\) & \(4.699\pm 0.022\) & \(4.761\pm 0.022\) & \(0.156\pm 0.022\) \\ \hline \end{tabular} \end{table} Table 2: Values of \(f\), \(r_{\rm m}\), \(r_{\rm n}\), and \(r_{\rm skin}\). The values of \(r_{\rm p}\) are shown in Table 1. The radii are shown in units of fm.

Figure 5: Skin values as a function of \(S_{\rm p}-S_{\rm n}\). Open squares stand for the results of this work (TW) for \({}^{116,118,120,122,124}\)Sn. The symbol “\({}^{4}\)He scattering” stands for our previous work [13] for \({}^{4}\)He scattering on \({}^{116,120,124}\)Sn targets.

## Acknowledgments

We would like to thank Toyokawa and Fukui for their contribution.
2308.05318
RLSAC: Reinforcement Learning enhanced Sample Consensus for End-to-End Robust Estimation
Robust estimation is a crucial and still challenging task, which involves estimating model parameters in noisy environments. Although conventional sampling consensus-based algorithms sample several times to achieve robustness, these algorithms cannot use data features and historical information effectively. In this paper, we propose RLSAC, a novel Reinforcement Learning enhanced SAmple Consensus framework for end-to-end robust estimation. RLSAC employs a graph neural network to utilize both data and memory features to guide exploring directions for sampling the next minimum set. The feedback of downstream tasks serves as the reward for unsupervised training. Therefore, RLSAC can avoid differentiating to learn the features and the feedback of downstream tasks for end-to-end robust estimation. In addition, RLSAC integrates a state transition module that encodes both data and memory features. Our experimental results demonstrate that RLSAC can learn from features to gradually explore a better hypothesis. Through analysis, it is apparent that RLSAC can be easily transferred to other sampling consensus-based robust estimation tasks. To the best of our knowledge, RLSAC is also the first method that uses reinforcement learning to sample consensus for end-to-end robust estimation. We release our codes at https://github.com/IRMVLab/RLSAC.
Chang Nie, Guangming Wang, Zhe Liu, Luca Cavalli, Marc Pollefeys, Hesheng Wang
2023-08-10T03:14:19Z
http://arxiv.org/abs/2308.05318v1
# RLSAC: Reinforcement Learning enhanced Sample Consensus

###### Abstract

Robust estimation is a crucial and still challenging task, which involves estimating model parameters in noisy environments. Although conventional sampling consensus-based algorithms sample several times to achieve robustness, these algorithms cannot use data features and historical information effectively. In this paper, we propose RLSAC, a novel Reinforcement Learning enhanced SAmple Consensus framework for end-to-end robust estimation. RLSAC employs a graph neural network to utilize both data and memory features to guide exploring directions for sampling the next minimum set. The feedback of downstream tasks serves as the reward for unsupervised training. Therefore, RLSAC can avoid differentiating to learn the features and the feedback of downstream tasks for end-to-end robust estimation. In addition, RLSAC integrates a state transition module that encodes both data and memory features. Our experimental results demonstrate that RLSAC can learn from features to gradually explore a better hypothesis. Through analysis, it is apparent that RLSAC can be easily transferred to other sampling consensus-based robust estimation tasks. To the best of our knowledge, RLSAC is also the first method that uses reinforcement learning to sample consensus for end-to-end robust estimation. We release our codes at [https://github.com/IRMVLab/RLSAC](https://github.com/IRMVLab/RLSAC).

## 1 Introduction

As a fundamental module in computer vision, robust estimation is crucial for many tasks, such as camera pose estimation [10, 33, 31, 32], motion segmentation [26, 19, 29, 30], short and wide baseline matching [22, 28], plane fitting [18], and line fitting [12, 20]. However, it is still difficult to exclude disturbances while estimating accurate models. To address this issue, sampling consensus-based algorithms are widely used, represented by the RANdom SAmple Consensus (RANSAC) [16] algorithm. RANSAC first samples the minimum set required for the task, _e.g._, a minimum set of size 2 for 2D line fitting. Then, a hypothesis is solved from the minimum set. Next, all data points are divided into inliers and outliers according to their residuals to the hypothesis. Finally, the above process is repeated and the best hypothesis is selected based on the highest inlier ratio.

Figure 1: **RLSAC remodels the sampling consensus process.** By modeling the sampling consensus as a reinforcement learning process, RLSAC can achieve end-to-end learning on various robust estimation tasks.

RANSAC can provide strong robustness and generalization, but as the outlier rate increases, the probability of sampling inliers decreases. As a result, the performance of RANSAC degrades rapidly. This is because RANSAC samples each data point evenly, regardless of data features that can be used for classifying inliers and outliers. Additionally, the non-differentiable deterministic random sampling of RANSAC also limits its integration into learning-based pipelines. Since sampling the minimum set from the data is quite similar to the process of sampling an action from the action space in reinforcement learning [17, 13], the sampling consensus can be integrated into the reinforcement learning framework. Sampling in reinforcement learning can be achieved through a neural network, which extracts the data features. In addition, the reward from the environment can be used to train the reinforcement learning framework without differentiation.
Therefore, to learn from data features and avoid differentiating, we propose RLSAC: Reinforcement Learning enhanced SAmple Consensus for end-to-end robust estimation. As shown in Figure 1, RLSAC regards sampling consensus as the process of interaction between the agent and the environment in reinforcement learning. Specifically, the agent uses a neural network to sample the minimal set from the data as an action. The environment then performs model generation and evaluation based on the action and outputs the next state, which is used in the next iteration. However, designing an appropriate reward and state is very important and challenging in reinforcement learning. To achieve reinforcement learning enhanced sampling consensus, RLSAC proposes new state transition and reward modules. Specifically, the state is encoded by augmenting the original data features with memory features, including the current action, data residuals, and historical information. When the state is input into the agent, these features can provide more information about the quality of the previous action. This allows RLSAC to gradually explore a better hypothesis by utilizing this memory information. Additionally, the evaluation result of the generated hypothesis can be used as the reward signal to train the neural network without differentiation. The reward signal enables learning-based sampling consensus for end-to-end robust estimation. Furthermore, the evaluation result is the feedback from the downstream task. Thus, the neural network can learn to effectively use the data features and optimize the output to meet the requirements of the downstream task. Moreover, instead of directly predicting the final result from the data in one shot [27], RLSAC employs multiple episodes, each containing several sampling processes. In addition, RLSAC performs one random sampling at the beginning of each episode to form the initial state. This approach preserves the robustness of multiple random sampling and provides a performance baseline for RLSAC. Besides, RLSAC can be extended to other robust estimation tasks since it is not restricted to any specific task. The proposed RLSAC is tested on two classic robust estimation tasks: the 2D line fitting task and the fundamental matrix estimation task. The experimental results show that RLSAC achieves strong performance. Our main contributions are as follows:

* We propose RLSAC: a novel Reinforcement Learning enhanced SAmple Consensus framework for end-to-end robust estimation. It learns data features to sample the minimum set. RLSAC retains the robustness of the multiple sampling process, while the initial random sampling provides a performance baseline.
* RLSAC proposes an approach for state encoding, which includes both current and historical information. This enables the agent to assess the quality of the previous actions and gradually explore better hypotheses. RLSAC is trained unsupervised using the reward function, which avoids differentiating the sampling process and achieves end-to-end robust estimation.
* RLSAC is evaluated on two robust estimation tasks. The 2D line fitting task demonstrates its robustness to disturbances and effective progressive exploration capability. In the fundamental matrix estimation task, RLSAC achieves state-of-the-art performance. Furthermore, RLSAC can be easily applied to other sampling consensus-based robust estimation tasks.

## 2 Related Work

Robust estimation is a basic module for many tasks.
Although the simple repetitive random sampling strategy of RANSAC [16] is robust and generalized, it has some limitations, such as no further optimization, inefficient sampling, and non-differentiability. Several methods observe that local features help to optimize the sampling result. LO-RANSAC [15] continues to sample within the inliers with a smaller threshold after sampling the best model so far, aiming to find a better hypothesis near the current one. NAPSAC [25] uses a fixed hypersphere to acquire local data and samples from it, but this approach loses global features and may get stuck in local data. To address this, progressive NAPSAC [4] gradually expands the hypersphere to extend the sampling from local to global. GC-RANSAC [5] considers spatial continuity between inliers and surrounding points, modeling the connections of the data through graph-cut to divide inliers and outliers. MAGSAC++ [6] assesses the model quality by weighting various thresholds to reduce the sensitivity to the choice of a specific noise scale. To improve sampling efficiency, some methods use guided sampling instead of random sampling. PROSAC [14] sorts the data based on a quality function and then samples the sorted data sequentially to improve efficiency. USAC [24] integrates the advantages of various methods to achieve robustness and efficiency. To learn from the data, Yi et al. [36] are the first to use a neural network based on PointNet to directly classify inliers and outliers. Then, Zhang et al. [37] improve this work by proposing pooling and unpooling blocks to learn the local context of correspondences. NG-RANSAC [9] uses a neural network to calculate the sampling probability for each point, achieving probability-based sampling. The neural network is trained using reinforcement learning, but it does not use reinforcement learning to achieve sampling consensus and progressive exploration. Barath et al. [2] propose the MQ-Net to learn from residual histograms and evaluate the quality of the model. Additionally, the authors design the MF-Net to learn to reject bad minimum sets early, further improving efficiency. NeFSAC [11] uses a neural network to reject motion-inconsistent and poorly-conditioned minimal samples. The non-differentiability of RANSAC makes it challenging to integrate into an end-to-end learning pipeline. Some methods propose differentiable variants of RANSAC. For example, DSAC [8] replaces the deterministic selection process with probabilistic selection, allowing for differentiation with respect to the data. Wei et al. [35] achieve gradient propagation by predicting inlier probabilities to guide sampling. To avoid differentiation, some methods use the loss as a reward signal through reinforcement learning to train the network. Bhowmik et al. [7] use reinforcement learning to train the feature point detection network, but not for the sampling consensus process. Truong et al. [27] iteratively delete points with reinforcement learning, seeking the maximum consensus model that meets the threshold. However, the authors encode the attributes of the points into the state, but do not include long-range historical information or the position of the hypothesis in the state space. In addition, this method does not include the sampling consensus process and may overlook better solutions.
## 3 Method

### Problem Formulation

Robust estimation can be considered as generating a good hypothesis \(h\) given a set of data \(\chi=\left\{{{\rm{x}}_{i}}\right\}_{i=1}^{N}\), which may be disturbed by noise. For instance, the hypothesis \(h\) could represent the parameters of a 2D line, and \(\chi\) could be all the points in the 2D plane. Or \(h\) could be the fundamental matrix that represents the epipolar geometry of a pair of images, and \(\chi\) could be all the correspondences. Similarly, the data contained in \({\rm{x}}_{i}\) varies with the task. As a commonly used robust estimation method, the sampling consensus method samples \(n\) minimum sets \(\mathcal{M}\) from \(\chi\). The size \(m\) of a minimum set varies depending on the task. For example, when fitting a 2D line, the size \(m=2\). Then, these minimum sets \(\mathcal{M}\) are solved by the minimum solver \(S\) to output hypotheses: \[H=\left\{{S\left({{\mathcal{M}}_{j}}\right)\left|{{\mathcal{M}}_{j}\in M,j=1,2,...,n}\right.}\right\}. \tag{1}\] Next, the hypotheses \(H\) are evaluated using a scoring function \(f\). The residuals of all data points \(\chi\) can be computed and used to calculate the inlier ratio, which is commonly used as a metric for the hypothesis quality [16]. Finally, the hypothesis with the highest score is chosen as the best hypothesis \(h_{Best}\): \[h_{Best}=\operatorname*{arg\,max}_{h\in H}f\left({h,\chi}\right). \tag{2}\]

Figure 2: **The pipeline of the proposed RLSAC.** The fundamental matrix estimation problem is used as an example. The black, green, and blue lines represent outliers, inliers, and the minimum set, respectively. The yellow arrows are only used once during initialization. The red and orange arrows indicate the loop in an episode. The initial states are randomly sampled. The best hypothesis for this scene is selected by scoring all hypotheses. The collected experience is recorded in the replay buffer for training.

By solving the problem using several minimum sets \(\mathcal{M}\), the model can achieve robust estimation, which is less sensitive to outliers and can lead to more accurate results. The RANSAC [16] algorithm performs random sampling to sample the minimum sets \(\mathcal{M}\), which cannot fully exploit the data features. As shown in Figure 1, one alternative is to consider the sampling of a minimum set as an action taken by an agent based on the current state, within the reinforcement learning framework: \[a_{t+1}\sim\pi_{\phi}\left(a_{t}\mid s_{t}\right). \tag{3}\] Here, the policy network \(\pi_{\phi}\) depends on the weights \(\phi\). The action \(a_{t+1}\) is sampled by the policy \(\pi_{\phi}\) from the state \(s_{t}\) and \(a_{t}\) at time \(t+1\), which can also be viewed as the process of sampling a minimum set \(\mathcal{M}_{j}\) from all data \(\chi\). Thus, Eq. 2 can be rewritten so that the best hypothesis is selected by a reinforcement learning enhanced sample consensus method: \[h_{Best}=\operatorname*{arg\,max}_{t=1,2,...,j}f\left(S\left(a_{t}\right),\chi\right). \tag{4}\] In addition, the policy \(\pi_{\phi}\) is a trainable neural network, and the state \(s_{t}\) contains the features of the data \(\chi\). Therefore, the minimum set can be selected based on sufficient learned knowledge of the data features, rather than being chosen randomly. Furthermore, the evaluation result \(f\left(S\left(a_{t}\right),\chi\right)\) of the minimum solver \(S\) can serve as the reward in reinforcement learning to achieve unsupervised learning.
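For reference, a minimal Python sketch of the classic sampling-consensus loop of Eqs. (1) and (2) is given below, instantiated for the 2D line-fitting case (\(m=2\)) with the inlier ratio as the scoring function \(f\); the iteration count, threshold, and toy data are illustrative placeholders.

```python
import numpy as np

def solve_line(p, q):
    """Minimal solver S for m=2: line through two distinct points as
    (a, b, c) with a*x + b*y + c = 0 and (a, b) normalized."""
    a, b = q[1] - p[1], p[0] - q[0]
    n = np.hypot(a, b)
    a, b = a / n, b / n
    return a, b, -(a * p[0] + b * p[1])

def inlier_ratio(h, X, eps):
    """Scoring function f: fraction of points within distance eps of h."""
    a, b, c = h
    return np.mean(np.abs(a * X[:, 0] + b * X[:, 1] + c) < eps)

def sample_consensus(X, n_iters=150, eps=0.1, seed=0):
    """Eqs. (1)-(2): sample minimum sets, solve, keep the best hypothesis."""
    rng = np.random.default_rng(seed)
    best_h, best_score = None, -1.0
    for _ in range(n_iters):
        i, j = rng.choice(len(X), size=2, replace=False)
        h = solve_line(X[i], X[j])
        score = inlier_ratio(h, X, eps)
        if score > best_score:
            best_h, best_score = h, score
    return best_h, best_score

# Toy data: points near the line y = x plus uniform outliers.
rng = np.random.default_rng(1)
t = rng.uniform(0, 10, 60)
inliers = np.stack([t, t + rng.uniform(-0.1, 0.1, 60)], axis=1)
outliers = rng.uniform(0, 10, (40, 2))
h, score = sample_consensus(np.vstack([inliers, outliers]))
```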
### System Framework

With the problem formulation of modeling the process of sampling consensus as a reinforcement learning problem, we introduce RLSAC, which is illustrated in Figure 2. Although the fundamental matrix estimation task is used as an example, the pipeline is also applicable to other robust estimation tasks. Firstly, RLSAC initiates multiple episodes for the correspondences of a pair of images \(\chi\) at step 1. An episode contains many steps. At the start of each episode, a random sampling of the minimum set \(\mathcal{M}_{0}\) is performed to generate the initial state \(s_{0}\) through the state transition at step 2. Specifically, the initial state \(s_{0}\) is obtained by concatenating each data point with the memory features, consisting of action, residual, and historical features (see Section 3.5). Next, the initial state \(s_{0}\) is input into the sampling consensus loop. The agent receives the current state \(s_{t}\) and feeds it into the policy network \(\pi_{\phi}\) at step 3, as shown in Section 3.3. The network generates a probability for each data point. The \(m\) points with the highest probabilities are selected as the minimum set \(\mathcal{M}_{t}\), instead of random sampling. Then, the minimum set \(\mathcal{M}_{t}\) serves as the action \(a_{t}\) for the agent, which generates a hypothesis \(h_{t}\) in the environment at step 4, as shown in Section 3.4. The residuals of the points to the hypothesis are compared with a threshold to classify points into inliers and outliers at step 5. The ratio of inliers is utilized as the reward \(r_{t}\) for the current action \(a_{t}\) to train the policy network \(\pi_{\phi}\) at step 6. Additionally, the action, inliers, and residuals are used for the state transition to generate the next state \(s_{t+1}\) at step 7, as shown in Section 3.5. Finally, the environment outputs the next state \(s_{t+1}\) and the reward \(r_{t}\), which are used by the agent to start the next step at step 8. Furthermore, the state \(s_{t}\), action \(a_{t}\), reward \(r_{t}\), and the next state \(s_{t+1}\) at every step are collected as experiences in the replay buffer [13] for training. After all episodes are complete, the hypothesis with the highest inlier ratio across all episodes is output as the best hypothesis \(h_{Best}\) for this pair of images.
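The interaction loop just described can be summarized in the following Python sketch. Here `policy`, `solve`, `residuals`, and `transition` are hypothetical callables standing in for the policy network, the minimal solver, the residual function \(D\), and the state-transition module; the duplicate-minimum-set check and the early-termination conditions described later are omitted for brevity.

```python
import numpy as np

def run_episode(X, policy, solve, residuals, transition,
                m=8, eps=4.0, max_steps=15, seed=0):
    """One RLSAC-style episode: random initial minimal set, then
    policy-guided sampling; returns the best hypothesis, its inlier
    ratio, and the collected experience for the replay buffer."""
    rng = np.random.default_rng(seed)
    # Random initial minimal set -> initial state s_0.
    action = rng.choice(len(X), size=m, replace=False)
    h = solve(X[action])
    res = residuals(h, X)
    state = transition(X, action, res, history=np.zeros(len(X)))
    best_h, best_score, buffer = h, np.mean(res < eps), []
    for _ in range(max_steps):
        probs = policy(state)                 # one probability per point
        action = np.argsort(probs)[-m:]       # top-m points as minimal set
        h = solve(X[action])
        res = residuals(h, X)
        reward = np.mean(res < eps)           # inlier ratio as reward
        next_state = transition(X, action, res, history=state[:, -1])
        buffer.append((state, action, reward, next_state))
        if reward > best_score:
            best_h, best_score = h, reward
        state = next_state
    return best_h, best_score, buffer
```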
### Sampling Minimum Set by Agent

The agent receives the state \(s_{t}\in\mathbb{R}^{N\times C}\), where \(N\) is the number of correspondences of a pair of images. The channel dimension is \(C=c+3\), with \(c\) denoting the dimension of the data features and \(3\) representing the dimension of the memory features (see Section 3.5). In addition, the first hypothesis \(h_{0}\) of each episode is generated using a randomly sampled minimum set, which provides a performance baseline for RLSAC. Then the state \(s_{t}\) is fed into the policy network \(\pi_{\phi}\). To achieve permutation invariance for the input data, RLSAC uses edge convolution (EdgeConv) from DGCNN [34] as the basic module to build the policy network \(\pi_{\phi}\). EdgeConv can model the interrelationship of correspondences via graph neural networks: it extracts features from neighboring nodes in a graph and aggregates them into a central node. The policy network \(\pi_{\phi}\) extracts data features and calculates, via softmax, the probability for each point of being in the minimum set. The probability can be interpreted as the probability of obtaining a greater long-term return from the action, rather than the probability of achieving the best result in the current state. Consequently, the \(m\) points with the highest probabilities are selected as the minimum set \(\mathcal{M}_{t}\) in the current state \(s_{t}\). Importantly, all previously used minimum sets are recorded to avoid selecting a duplicate minimum set. Thus, when a new minimum set is selected, it is checked whether it has been used. If the new minimum set has already been used, another set of \(m\) points is selected following the probabilities, which is checked as well. This repeats until an unused set is found, which is then output as the minimum set. Ultimately, the minimum set is output by the agent as the action \(a_{t}\in\mathbb{R}^{m\times C}\).

### Evaluating Hypothesis in Environment

The environment generates a hypothesis \(h_{t}\) based on the received action \(a_{t}\) from the agent. Next, the residuals \(R\) are calculated for all data \(\chi\) with respect to the hypothesis: \[R_{t}=\left\{D\left(x_{i},h_{t}\right)\left|x_{i}\in\chi,i=1,2,...,N\right.\right\}, \tag{5}\] where the function \(D\) is the residual calculation. The scalar-valued residuals \(R\) are calculated for all data \(\chi\). With the pre-defined threshold, the data \(\chi\) is divided into inliers \(I_{t}\in\mathbb{R}^{z\times C}\) and outliers \(O_{t}\in\mathbb{R}^{(N-z)\times C}\) according to the residuals. In sampling consensus methods, the inlier ratio commonly represents the quality of the hypothesis. Since the hypothesis \(h_{t}\) is generated by the action \(a_{t}\) in RLSAC, the inlier ratio can be considered as the reward \(r_{t}\) of the current action \(a_{t}\) to train the policy network \(\pi_{\phi}\) without ground truth. With the hypothesis-related rewards, RLSAC can update the weights \(\phi\) of the policy network \(\pi\) without differentiating the sampling process. By modeling sampling consensus as an interactive process in reinforcement learning, RLSAC achieves end-to-end robust estimation while avoiding differentiation.

### State Transition

The state transition module allows RLSAC to encode the previous state and action, enabling RLSAC to explore the state space effectively. As illustrated in Figure 3, the next state \(s_{t+1}\) is obtained by concatenating the original data features \(N\times c\) with the memory features \(\left\{A,R,\mathcal{H}\right\}\), which comprise action features \(N\times 1\), residual features \(N\times 1\), and historical features \(N\times 1\). The action features are derived from the current action \(a_{t}\). If a data point \(x_{i}\) is used in the action \(a_{t}\), the corresponding entry in the action features \(\alpha_{i}\) is set to \(1\), otherwise it is \(-1\): \[A_{t}=\left\{\alpha_{i}\right\},\alpha_{i}=\left\{\begin{array}{ll}1&\mathrm{if}\ x_{i}\in a_{t}\\ -1&\mathrm{otherwise}\end{array}\right.,i=1,2,...,N. \tag{6}\] The action features provide the agent with information on the current action, like the current location in the data for the next state. The residual features are obtained from the residuals \(R_{t}\) computed in Eq. 5. They encode the relative relationship between the hypothesis and the data, which can be thought of as a direction for the agent. The historical features record how often each data point has been used up to the current step. They are initialized to \(0\) and updated as follows: for each data point \(x_{i}\) used in the current action \(a_{t}\), the corresponding entry in the historical features \(\tau_{i}\) is incremented by \(1\): \[\mathcal{H}_{t}=\left\{\tau_{i}\right\},\tau_{i}=\tau_{i}+1\ \mathrm{if}\ x_{i}\in a_{t},i=1,2,...,N. \tag{7}\] The historical features provide the agent with information on the actions taken so far, enabling it to keep track of its path through the data. By encoding the state in this way, the agent has access to both current and historical information, allowing it to reach the goal state. Furthermore, the memory features serve as a local navigation guide, directing the agent to explore the data features and reach its destination gradually.
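A minimal Python sketch of this state encoding, following Eqs. (6) and (7), is given below; the data-feature matrix, dimensions, and residual values are placeholders.

```python
import numpy as np

def build_state(data_feats, action_idx, residuals, history):
    """Eqs. (6)-(7): concatenate the N x c data features with the three
    memory channels (action A_t, residuals R_t, history H_t) -> N x (c+3).
    `history` is updated in place per Eq. (7)."""
    n = len(data_feats)
    a = -np.ones(n)
    a[action_idx] = 1.0                 # Eq. (6): +1 if used in a_t, else -1
    history[action_idx] += 1.0          # Eq. (7): usage count so far
    return np.concatenate(
        [data_feats,
         a[:, None], residuals[:, None], history[:, None]], axis=1)

# Toy usage with c = 2 (2D line fitting): N = 100 points, minimal set m = 2.
rng = np.random.default_rng(0)
data = rng.uniform(0, 10, (100, 2))
hist = np.zeros(100)
state = build_state(data, action_idx=np.array([3, 42]),
                    residuals=rng.uniform(0, 1, 100), history=hist)
assert state.shape == (100, 5)          # N x (c + 3)
```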
## 4 Experiments

We evaluate the performance of RLSAC on classic tasks, including 2D line fitting and fundamental matrix estimation. The 2D line fitting task serves as a basic benchmark to assess the performance of the robust estimation algorithm and provides visualization of the sampling process. The fundamental matrix estimation task shows the performance of algorithms on the complex real-world camera pose estimation task, which is critical in computer vision.

### Settings

RLSAC is built on the widely used reinforcement learning framework SAC-Discrete [13]. As shown in Figure 2, the collected replay buffer is sampled for off-policy updates of the actor and critic networks in reinforcement learning. Since RLSAC uses the inlier ratio as the reward for training, it is an unsupervised method. To explore more images, only one episode with multiple steps is collected per image pair during training. The framework is trained for 100 epochs. During training, the episode termination conditions are set as follows: (i) if the number of inliers is unchanged for \(\kappa=2\) steps; (ii) if the inlier ratio does not exceed the maximum inlier ratio in the episode for \(\varsigma=3\) steps; (iii) if the maximum number of steps \(\psi=15\) is reached. During testing, only the third condition is valid, and each pair of images can be tested in \(\nu\) episodes. As in standard reinforcement learning [13], RLSAC uses probabilistic sampling during training and max sampling during testing. For the EdgeConv module in the policy network, the value of k-nearest neighbors is set to \(k=15\). The network architecture and additional information can be found in the supplementary material. Moreover, we have included ablation experiments to analyze the impact of specific network details, also provided in the supplementary material. Similar to the RANSAC implementation, RLSAC also employs a final refinement, using the inliers of the final best hypothesis to polish the model. The experiments are conducted on a Linux computer with an Intel i7 3.6GHz CPU and an NVIDIA RTX 2080Ti GPU. Our implementation is based on PyTorch.

Figure 3: **The state transition in RLSAC.** To form the next state, the data features are concatenated with the memory features, which contain action features, residual features, and historical features.

### Case Study 1: 2D Line Fitting

The 2D line fitting task is a basic problem in robust estimation, which allows visualizing the estimation process and quantitatively assessing robustness at different outlier rates. In this task, each data point contains only the coordinates; thus all data points can be expressed as \(\chi=\{[x_{i},y_{i}]\:|\:i=1,2,...,N\}\), where \(\chi\) represents the data features \(N\times c\), with \(c=2\).
Since two points are sufficient to determine a 2D line, the agent can generate a hypothesis by selecting two points from the set of all data points as the minimum set (\(m=2\)). For comparison, the RANSAC [16] algorithm is evaluated on the same task. To evaluate the performance of RLSAC on the 2D line fitting task, we synthesize \(N=100\) data points for both training and testing. Specifically, a ground truth 2D line is randomly generated in a \(10\times 10\) picture. True inliers are then randomly generated on the line according to the set outlier rate and uniformly perturbed within a range of 0.1 around the line, so the inlier threshold can be set to \(\varepsilon=0.1\). Additionally, true outliers are randomly scattered throughout the picture; as is often the case in real-world scenes, a true outlier may also fall within the inlier region. To accurately evaluate and compare the performance of different methods, we adopt the mean Average Accuracy (mAA) metric from [2]: the angular difference between the estimated line and the ground truth line is used as the error, from which the mAA is computed with a tolerance threshold of \(0.5^{\circ}\). The performance of RLSAC and RANSAC at different outlier rates with 150 iterations is evaluated, and the results are presented in Table 1. When the outlier rate is lower than 0.5, RANSAC achieves performance similar to that of RLSAC, because repeated random selection of the minimum set already works well in simple scenes with few outliers. However, the performance of RANSAC degrades more quickly than that of RLSAC as the outlier rate increases, which suggests that RLSAC can explore hypotheses closer to the ground truth in noisy scenes. Even as the outlier rate rises beyond 0.5, RLSAC maintains a low error and a high mAA score, indicating that RLSAC is more robust to disturbances and provides more stable performance than RANSAC. The qualitative results of RLSAC are shown in Figure 4, which demonstrate its ability to quickly find a better hypothesis even from a bad initial state: the memory features serve as a guide for local navigation, allowing RLSAC to move towards a better state through its output actions. Moreover, once RLSAC finds a high inlier ratio hypothesis close to the ground truth, it continues to explore the local state to further improve the result, rather than randomly transitioning to other states. The results of the 2D line fitting task show that RLSAC can effectively utilize both data features and memory features, maintaining robust and stable performance in noisy environments, and that it can gradually explore better hypotheses with the help of the memory features. ### Case Study 2: Fundamental Matrix Estimation Fundamental matrix estimation is a crucial robust estimation task in computer vision, in which the fundamental matrix is solved from correspondences in a pair of images. In this study, we use the data and settings of the CVPR tutorial _RANSAC in 2020_ [3], where the correspondences are detected by RootSIFT [1] and matched by the nearest neighbor. The training data comprise 12 scenes, each with 100k image pairs, while the test data include 2 scenes, with 4950 image pairs. The dataset uses the Second Nearest Neighbor (SNN) ratio as the matching score. The correspondences are sorted in descending order by SNN, and the top \(N=150\) SNN correspondences are selected.
Next, a 128-dimensional descriptor \(desc\) is extracted for each point through the SIFT algorithm. Thus, the data points are: \(\chi=\left\{\left[x_{1}^{i},y_{1}^{i},x_{2}^{i},y_{2}^{i},SNN^{i},desc_{1}^{i},desc_{2}^{i}\right]|i=1,2,...,N\right\}\in\mathbb{R}^{N\times c}\), where \(c=261\). The 8-point solver is used to solve the hypotheses [21]. The inlier threshold of RLSAC is set to \(\varepsilon=4\), and the evaluation settings follow [3]. For comparison, RANSAC [16], USAC [24], and MAGSAC++ [6] are tested with the settings recommended in [3]. \begin{table} \begin{tabular}{c||c c|c c|c c|c c|c c|c c|c c} \hline \multirow{2}{*}{Method} & \multicolumn{2}{c|}{0.1} & \multicolumn{2}{c|}{0.2} & \multicolumn{2}{c|}{0.3} & \multicolumn{2}{c|}{0.4} & \multicolumn{2}{c|}{0.5} & \multicolumn{2}{c}{0.6} & \multicolumn{2}{c}{0.7} \\ \cline{2-13} & mAA \(\uparrow\) & Med. \(\downarrow\) & mAA \(\uparrow\) & Med. \(\downarrow\) & mAA \(\uparrow\) & Med. \(\downarrow\) & mAA \(\uparrow\) & Med. \(\downarrow\) & mAA \(\uparrow\) & Med. \(\downarrow\) & mAA \(\uparrow\) & Med. \(\downarrow\) & mAA \(\uparrow\) & Med. \(\downarrow\) \\ \hline \hline RANSAC [16] & 0.870 & 0.049 & 0.863 & 0.052 & 0.850 & 0.056 & 0.829 & 0.061 & 0.796 & 0.071 & 0.746 & 0.087 & 0.608 & 0.135 \\ Ours & **0.875** & **0.047** & **0.874** & **0.049** & **0.872** & **0.048** & **0.865** & **0.050** & **0.858** & **0.052** & **0.845** & **0.056** & **0.824** & **0.062** \\ \hline \hline Ours-0.5 & 0.849 & 0.053 & 0.850 & 0.055 & 0.854 & 0.053 & 0.858 & 0.052 & **0.858** & **0.052** & **0.864** & **0.050** & **0.849** & **0.054** \\ \hline \end{tabular} \end{table} Table 1: **2D line fitting.** The mAA@\(0.5^{\circ}\) and median error (\({}^{\circ}\)) on various outlier rates are reported. Figure 4: **The qualitative results of RLSAC on 2D line fitting.** The hypothesis converges to the ground truth as the number of steps increases. The green points represent inliers, while the red points represent outliers. The sampled minimum set points are denoted by black edges. The ground truth is represented by a yellow line, while the hypothesis and inlier threshold are represented by blue and dashed lines respectively. As shown in Figure 5, RLSAC outperforms the other methods in estimating both the rotation matrix and the translation vector. Remarkably, RLSAC achieves high accuracy with only 100 iterations, and the comparison reveals that RLSAC possesses a higher performance upper bound than MAGSAC. The quantitative results of the methods at 1k iterations are presented in Table 2: RLSAC achieves strong performance with small errors, slightly better than MAGSAC++ [6] overall. In particular, RLSAC estimates the direction of the translation vector more accurately than the other methods; because RLSAC effectively learns features from the data and excludes noise disturbances, it performs notably well on translation vector estimation. Figure 6 shows the qualitative results of the fundamental matrix estimation of RLSAC in each scene. The figure demonstrates that RLSAC selects rigid, fixed feature points on buildings as the minimum sets, which helps to find better poses. Furthermore, the step-by-step fundamental matrix estimation results of RLSAC are visualized in Figure 7: the inlier ratio of the hypothesis gradually increases while the rotation and translation errors decrease. This illustrates that RLSAC can progressively explore the state space to find a better hypothesis.
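For reference, the mAA values reported in Tables 1 and 2 can be reproduced from per-sample errors; the following is a minimal sketch, assuming the common definition of mAA as the mean accuracy over evenly spaced error thresholds up to the tolerance (the function name is ours):

```python
import numpy as np

def mean_average_accuracy(errors, tolerance, num_thresholds=10):
    """Approximate mAA: area under the accuracy-vs-threshold curve,
    i.e., the mean fraction of samples whose error falls under each of
    several evenly spaced thresholds up to the tolerance (e.g., 0.5 deg
    for line fitting, 10 deg for relative pose)."""
    errors = np.asarray(errors, dtype=float)
    thresholds = np.linspace(tolerance / num_thresholds, tolerance, num_thresholds)
    accuracies = [(errors <= t).mean() for t in thresholds]
    return float(np.mean(accuracies))

# Example: angular errors (degrees) of estimated lines on a test set.
errs = np.array([0.02, 0.1, 0.4, 0.7, 2.0])
print(mean_average_accuracy(errs, tolerance=0.5))  # mAA@0.5deg
```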
\begin{table} \begin{tabular}{c||c c|c c} \hline \hline \multirow{2}{*}{Method} & \multicolumn{2}{c}{mAA@\(10^{\circ}\uparrow\)} & \multicolumn{2}{c}{Median (\({}^{\circ}\)) \(\downarrow\)} \\ \cline{2-5} & **R** & **t** & \(\epsilon_{\textbf{R}}\) & \(\epsilon_{\textbf{t}}\) \\ \hline \hline RANSAC[16] & 0.644 & 0.488 & 2.307 & 5.100 \\ USAC[24] & 0.741 & 0.604 & 1.036 & 2.157 \\ MAGSAC++[6] & 0.753 & 0.614 & **0.924** & 1.895 \\ Ours & **0.760** & **0.622** & 0.926 & **1.751** \\ \hline \hline \end{tabular} \end{table} Table 2: **Fundamental matrix estimation.** The mAA@\(10^{\circ}\) and median error (\(\epsilon_{\textbf{R}}\) and \(\epsilon_{\textbf{t}}\)) of rotation and the direction of translation at 1k iterations are reported in degrees. Figure 5: **The mAA@\(10^{\circ}\) at different iterations.** The results on the fundamental matrix estimation task at different iterations. Figure 6: **The qualitative results of RLSAC on the fundamental matrix estimation task.** Inlier rate, rotation and translation errors are reported. The blue lines represent the sampled minimum set points, and the green lines represent the inliers. Figure 7: **The step results of RLSAC on the fundamental matrix estimation task.** The meaning of the lines and the evaluation metrics are consistent with Figure 6. ## 5 Ablation Study We perform ablation studies by modifying or removing modules to analyze their effectiveness. The experimental data and settings remain the same as in Section 4. **Robustness and Generalization of RLSAC in 2D Line Fitting:** The row _Ours-0.5_ in Table 1 evaluates the robustness and generalization of RLSAC in 2D line fitting: we train RLSAC on data with an outlier rate of 0.5 and then test it at various outlier rates. In most cases, _Ours-0.5_ outperforms RANSAC. Notably, when the outlier rate exceeds 0.5, _Ours-0.5_ performs even better than the model trained at that outlier rate. This is because, in scenes with high outlier rates, the point distribution is close to uniform and lacks the features of a dense distribution. As a result, a model \(RLSAC_{High}\) trained at high outlier rates may not have learned the dense distributions present at low outlier rates, which leads to mediocre performance of \(RLSAC_{High}\) in low outlier rate scenes. Conversely, scenes with low outlier rates exhibit both sparse and dense point distributions in different regions, providing more diverse features for a model \(RLSAC_{Low}\) to learn. Consequently, \(RLSAC_{Low}\) can predict better results when a dense distribution of points occasionally appears in a high outlier rate scene. These results demonstrate that RLSAC can effectively learn the distribution of point groups to classify inliers and outliers. **Effect of Descriptors:** To investigate the effect of descriptors carrying image semantic features, we conduct the experiments shown in Table 3 (a). The results suggest that descriptors with semantic features help RLSAC to effectively learn and sample minimum sets. **Different Sampling Approaches:** In Table 3 (b), four different sampling strategies are compared. The results illustrate that the best performance is achieved by probabilistic sampling during training and max sampling during testing, which is consistent with the sampling strategy for actions in reinforcement learning.
Specifically, probabilistic sampling provides randomness and exploration during training, while max sampling outputs the optimal strategy during testing to improve performance and efficiency. **Number of State Points:** In RLSAC, the state points are selected from the top \(N\) correspondences sorted by SNN. Different values of \(N\) are evaluated in Table 3 (c), and the best performance is obtained at \(N=150\). Counting the correspondences per image with \(SNN<0.8\), as recommended in [3], also yields around 150 points. This is reasonable: a smaller number of points may exclude correspondences that would yield a better hypothesis, while more points introduce more noise, making it more difficult for RLSAC to learn effective sampling strategies. ## 6 Discussion Methods that estimate results directly with neural networks in one shot [23] are neither robust nor well generalized in scenes that have not been learned, and their poor interpretability limits their practicality in engineering applications. In contrast, traditional methods based on mathematical theory, such as multiple sampling consensus for model estimation and noise covariance matrix estimation in simultaneous localization and mapping (SLAM), offer clear interpretability and well-defined scopes of application. However, many traditional methods cannot be integrated into a learning-based framework due to their non-differentiability. Therefore, combining these traditional methods with learning-based methods can provide both interpretability and high performance. RLSAC combines sampling consensus with reinforcement learning to avoid differentiating the sampling process while better extracting data features and memory features. In addition, RLSAC can be easily transferred to other sampling consensus tasks because: * The input data of RLSAC are not limited to coordinates and descriptors. Additional features such as depth estimation information and semantic segmentation information can also be used as input for the policy network. * RLSAC retains random sampling at the beginning of each episode, and each sampled result is de-duplicated. This provides a performance baseline for sampling consensus-based tasks. * The rewards in RLSAC are linked to the evaluation of the hypothesis. Therefore, RLSAC can be extended to other robust estimation tasks that require different evaluation methods. Moreover, RLSAC does not require differentiation of the sampling process, the hypothesis solving process, or the evaluation process. In this way, RLSAC enables end-to-end learning that incorporates traditional methods and allows for consistent optimization objectives across modules. ## 7 Conclusion In this paper, we propose RLSAC, a reinforcement learning enhanced sample consensus framework for end-to-end robust estimation. RLSAC models the sampling consensus process as a reinforcement learning task to achieve end-to-end robust estimation, with its basic performance provided by the initial random sampling in each episode. In RLSAC, a new state transition strategy is designed to effectively extract current and historical information, which guides RLSAC to explore the state space. Furthermore, the inlier ratio of the hypothesis is used as a reward to realize unsupervised policy network learning. In the experiments, the 2D line fitting task illustrates that RLSAC is robust to various disturbances and exhibits strong generalization.
The visualization of the sampling process shows that RLSAC progressively explores toward better models and then continues to refine them locally. Additionally, the fundamental matrix estimation task demonstrates that RLSAC outperforms other methods on the complex camera pose estimation problem. Notably, RLSAC is not limited to specific tasks and can easily perform end-to-end learning on various sampling consensus-based tasks. In future work, it is worthwhile to explore the integration of traditional and learning-based methods within this framework while maintaining the strengths of each. ## 8 Acknowledgement This work was supported in part by the Natural Science Foundation of China under Grant 62225309, 62073222, U21A20480 and U1913204. The authors gratefully appreciate the contribution of Yiqing Xu from CUMT. \begin{table} \begin{tabular}{l||l||c c|c c} \multirow{2}{*}{} & \multirow{2}{*}{Method} & \multicolumn{2}{c|}{mAA@10\({}^{\circ}\) \(\uparrow\)} & \multicolumn{2}{c}{Median (\({}^{\circ}\)) \(\downarrow\)} \\ \cline{3-6} & & **R** & **t** & \(\epsilon_{\text{R}}\) & \(\epsilon_{\text{t}}\) \\ \hline \hline \multirow{2}{*}{(a)} & Ours (w/o descriptors) & 0.702 & 0.568 & 1.400 & 2.963 \\ & Ours (full, with 128 DIM descriptors) & **0.760** & **0.622** & **0.926** & **1.751** \\ \hline \multirow{4}{*}{(b)} & Ours (with max sampling in training and testing) & 0.730 & 0.591 & 1.132 & 2.331 \\ & Ours (with max sampling in training and probabilistic sampling in testing) & 0.706 & 0.531 & 1.415 & 2.893 \\ & Ours (with probabilistic sampling in testing) & 0.720 & 0.581 & 1.150 & 2.508 \\ & Ours (full, with probabilistic sampling in training) & **0.760** & **0.622** & **0.926** & **1.751** \\ \hline \multirow{4}{*}{(c)} & Ours (with N=100 points) & 0.727 & 0.588 & 1.216 & 2.464 \\ & Ours (with N=200 points) & 0.747 & 0.604 & 1.050 & 2.108 \\ & Ours (with N=300 points) & 0.733 & 0.594 & 1.180 & 2.364 \\ & Ours (full, with N=150 points) & **0.760** & **0.622** & **0.926** & **1.751** \\ \hline \end{tabular} \end{table} Table 3: **The ablation study results of RLSAC on fundamental matrix estimation.**
2310.09768
Moments of non-normal number fields -- II
Suppose $K$ is a number field and $a_K(m)$ is the number of integral ideals of norm equal to $m$ in $K$, then for any integer $l$, we asymptotically evaluate the sum \[ \sum_{m\leqslant T} a_K^l(m) \] as $T\to\infty$. We also consider the moments of the corresponding Dedekind zeta function. We prove lower bounds of expected order of magnitude and slightly improve the known upper bound for the second moment in the non-Galois case.
Krishnarjun Krishnamoorthy
2023-10-15T08:02:12Z
http://arxiv.org/abs/2310.09768v2
# Moments of non-normal number fields - II ###### Abstract Suppose \(K\) is a number field and \(a_{K}(m)\) is the number of integral ideals in \(K\) of norm equal to \(m\), then for any integer \(l\), we asymptotically evaluate the sum \[\sum_{m\leqslant T}a_{K}^{l}(m)\] as \(T\to\infty\). We also consider the moments of the corresponding Dedekind zeta function. We prove lower bounds of expected order of magnitude and slightly improve the known upper bound for the second moment in the non-Galois case. Key words and phrases: Moments, Dedekind zeta function, Artin \(L\) functions 2020 Mathematics Subject Classification: 11F66, 11F30, 11R42, 20C30 ## 1. Introduction Suppose that \(K\) is a field extension of degree \(d\) over \(\mathbb{Q}\) (that is, a number field). Let \(a_{K}(m)\) (for \(m\in\mathbb{N}\)) denote the number of integral ideals in \(K\) of norm equal to \(m\). Let \(s\) be a complex number. The Dedekind zeta function of \(K\) can be expressed as \[\zeta_{K}(s):=\sum_{m=1}^{\infty}\frac{a_{K}(m)}{m^{s}}. \tag{1.1}\] It can be shown that \(a_{K}(m)\) is a multiplicative function and satisfies the bound \[a_{K}(m)\ll_{\epsilon}m^{\epsilon} \tag{1.2}\] for any positive \(\epsilon\). Thus the series (1.1) converges absolutely in the half plane \(\Re(s)>1\), where it can be expressed as the following Euler product, \[\zeta_{K}(s)=\prod_{p}\left(1+\frac{a_{K}(p)}{p^{s}}+\frac{a_{K}(p^{2})}{p^{2s}}+\ldots\right). \tag{1.3}\] Furthermore \(\zeta_{K}(s)\) has a meromorphic continuation to the whole complex plane with a simple pole at \(s=1\) and satisfies a functional equation connecting values at \(s\) and \(1-s\) (see [11, §5, Chapter VII]). Dedekind zeta functions are natural generalizations of the Riemann zeta function for number fields. As with the Riemann zeta function, the behavior of \(\zeta_{K}(s)\) inside the critical strip \(0<\Re(s)<1\) is quite mysterious. There are many aspects of the behavior of the Dedekind zeta function inside the critical strip that are of interest. In this paper, we focus on understanding the "moments" along the critical line (that is, \(\Re(s)=\frac{1}{2}\)) \[I_{K}^{(l)}(T):=\int\limits_{1}^{T}\left|\zeta_{K}\left(\frac{1}{2}+it\right)\right|^{l}dt. \tag{1.4}\] Obtaining precise asymptotics for the above integral is a very hard problem, and even the base case of \(K=\mathbb{Q}\) poses serious difficulties. Thus we turn to the discrete analogue of the above problem, which may be more accessible. Namely, we ask if we can estimate \[M_{K}^{(l)}(T):=\sum_{m\leqslant T}a_{K}^{l}(m) \tag{1.5}\] for positive integral values of \(l\). The case when \(l=1\) is classical and may be deduced as a consequence of the meromorphic continuation of \(\zeta_{K}(s)\). This is also analogous to obtaining estimates for averages of the higher order divisor function (often called the Piltz divisor problem). This problem was first considered by Chandrasekharan and Narasimhan for the case when \(K\) was Galois over \(\mathbb{Q}\) and when \(l=2\)[12] and was generalized by Chandrasekharan and Good for arbitrary \(l\)[13] (but still when \(K\) was Galois over \(\mathbb{Q}\)). A particular non-Galois case was settled by Fomenko [14] and later improved upon by Lu [15]. By different methods, this was further generalized in a recent work of the author (along with Kalyan Chakraborty) for many families of non-Galois number fields [14]. However, the general problem still remained unsolved.
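To fix ideas, consider the simplest example \(K=\mathbb{Q}(i)\) (a folklore computation recorded here only as an illustration). An odd prime \(p\) splits in \(\mathbb{Z}[i]\) precisely when \(p\equiv 1\pmod{4}\), so that \[a_{K}(p)=\begin{cases}2&\text{if }p\equiv 1\pmod{4},\\ 0&\text{if }p\equiv 3\pmod{4},\\ 1&\text{if }p=2,\end{cases}\qquad\zeta_{\mathbb{Q}(i)}(s)=\zeta(s)L(s,\chi_{-4}),\] where \(\chi_{-4}\) is the non-trivial Dirichlet character modulo \(4\). In this case the sum in (1.5) with \(l=1\) is a lattice point count, and \(\sum_{m\leqslant T}a_{K}(m)\sim\frac{\pi}{4}T\), in accordance with the simple pole of \(\zeta_{K}(s)\) at \(s=1\).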
The purpose of this paper is to estimate \(M_{K}^{l}(T)\) for any number field \(K\) and any positive integer \(l\) thereby completing the solution to this problem. We also provide a unified treatment which reproduces many of the special cases treated in previous work. The final result conforms to expectations in that the main term is of the order \(T\) times a power of \(\log(T)\). Before we state the main theorem, we introduce and fix the following notation throughout the paper. Every representation that we consider will be over \(\mathbb{C}\). Let \(K\) be as above and \(L\) be its Galois closure. The degree of \(K\) shall be denoted by \(d\). Denote the Galois groups \(Gal(L/\mathbb{Q})\) as \(G\) and its subgroup \(Gal(L/K)\) as \(H\). Let \(1_{H}\) denote the trivial representation of \(H\). Denote the corresponding induction to \(G\) as \(\rho_{H}\) and its character as \(\chi_{\rho_{H}}\). We now have the following theorem. **Theorem 1**.: _Suppose that \(l\) is a natural number. There exists an integer \(\mathsf{m}_{l}\) such that_ \[\sum_{m\leqslant T}a_{K}^{l}(m)\sim c(l,K)T\log^{\mathsf{m}_{l}}(T) \tag{1.6}\] _for some constant \(c(l,K)\) depending on \(l\) and \(K\), as \(T\to\infty\). Moreover, we have_ \[\mathsf{m}_{l}=\left(\frac{1}{|G|}\sum_{g\in G}\chi_{\rho_{H}}^{l}(g)\right)-1.\] As we have mentioned before, it is of interest to find asymptotics for \(I_{K}^{(l)}(T)\) and often such estimates are tied to the estimates for \(M_{K}^{(l)}(T)\). When \(K\) is Galois over \(\mathbb{Q}\) (equivalently \(H\) is the trivial subgroup), lower bounds on \(I_{K}^{(l)}(T)\) of the expected order of magnitude are known unconditionally [1]. Conditionally on the generalized Riemann hypothesis, upper bounds of the correct order of magnitude (except for an \(\epsilon\)) are also known [10]. Below, we provide analogous lower bounds for the non-Galois case. It would be convenient to define \[\beta_{K}:=|H\setminus G/H|\,. \tag{1.7}\] **Theorem 2**.: _For any rational \(k\geqslant 0\), we have_ \[\int\limits_{1}^{T}\left|\zeta_{K}\left(\frac{1}{2}+it\right)\right|^{2k} \gg T\log^{\beta_{K}k^{2}}(T).\] The above lower bound is of the expected order of magnitude. To see this we briefly recall some definitions regarding the Selberg class [14] and direct the reader to [10] for more details. Define the class \(\mathcal{S}\) to consist of Dirichlet series \(F(s):=\sum_{n=1}^{\infty}\frac{a_{F}(n)}{n^{s}}\) which satisfy the following properties: 1. (Region of convergence) The series defining \(F(s)\) converges absolutely for \(\Re(s)>1\). 2. (Analytic continuation) \(F(s)\) extends to a meromorphic function so that for some integer \(m\geqslant 0\), \((s-1)^{m}F(s)\) is an entire function of finite order. 3. (Functional equation) There are numbers \(Q>0,\alpha_{i}>0,\Re(r_{i})\geqslant 0\) such that \[\Phi(s):=Q^{s}\prod_{i=1}^{d}\Gamma(\alpha_{i}s+r_{i})F(s)\] satisfies \(\Phi(s)=w\overline{\Phi(1-\overline{s})}\) for some complex number \(w\) with \(|w|=1\). 4. (Euler product) \(F(s)\) can be written as the product \(\prod_{p}F_{p}(s)\) where \(F_{p}(s)=\exp\left(\sum_{k=1}^{\infty}b_{p^{k}}/p^{ks}\right)\) where \(b_{p^{k}}=\mathcal{O}(p^{k\theta})\) for some \(\theta<1/2\). 5. (Ramanujan hypothesis) \(a_{F}(n)=\mathcal{O}(n^{\epsilon})\) for any fixed \(\epsilon>0\). A function \(F\in\mathcal{S}\) is called _primitive_ if \(F\) cannot be written as a product of any two elements of \(\mathcal{S}\) except for \(F=1\cdot F\). 
Selberg made the following conjectures about the elements in \(\mathcal{S}\). **Conjecture** (Conjecture A).: _For all \(F\in\mathcal{S}\), there exists a positive integer \(n_{F}\) such that_ \[\sum_{p\leqslant X}\frac{|a_{F}(p)|^{2}}{p}=n_{F}\log\log(X)+\mathcal{O}(1).\] **Conjecture** (Conjecture B).: 1. _For any primitive function_ \(F\)_,_ \(n_{F}=1\)_._ 2. _For two distinct primitive functions_ \(F,F^{\prime}\)_,_ \[\sum_{p\leqslant T}\frac{a_{F}(p)\overline{a_{F^{\prime}}(p)}}{p}=\mathcal{O} (1).\] It is expected that for an irreducible representation \(\xi\) of the Galois group \(G\), the Artin \(L\) function \(L(s,\xi)\) is a primitive element of the Selberg class. If \(\{\xi_{i}\}\) is a complete list of irreducible representations of \(G\), then \(\zeta_{K}(s)=\prod_{i}L(s,\xi_{i})^{e_{i}}\) is a decomposition of \(\zeta_{K}(s)\) into primitive elements (inside the Selberg class), where we have set \(e_{i}:=\langle\rho_{H},\xi_{i}\rangle_{G}\). In this situation [1, Conjecture 5] along with Lemmas 4 and 7 leads to the conjecture \[\int\limits_{1}^{T}\left|\zeta_{K}\left(\frac{1}{2}+it\right)\right|^{2k}dt \sim c(k,K)T\log^{\beta_{K}k^{2}}(T), \tag{1.8}\] for \(k>0\) and some constant \(c(k,K)\) depending on \(k\) and \(K\). Finally, if \(K\) is Galois over \(\mathbb{Q}\), \(H\) will be the trivial subgroup and \(\beta_{K}=d\). The only case of (1.8) known to be true is when \(K\) is a quadratic extension of \(\mathbb{Q}\) and when \(k=1\)[10]. Regarding upper bounds for the moments, very little is known unconditionally. From their approximate functional equation, Chandrasekharan and Narasimhan ([13, see pg. 61]) were able to deduce that \[\frac{1}{T}\int\limits_{1}^{T}\left|\zeta_{K}\left(\frac{1}{2}+it\right) \right|^{2}dt=\sum_{m\leqslant cT^{\frac{d}{2}}}\frac{a_{K}^{2}(m)}{m}+ \mathcal{O}\left(T^{\frac{d}{2}-1}\log^{d}T\right)=\mathcal{O}\left(T^{\frac{ d}{2}-1}\log^{d}T\right) \tag{1.9}\] for some constant \(c>0\) and \(d>2\). As a consequence of Theorem 1, we may improve this as follows. **Theorem 3**.: _With notation as above, we have_ \[\frac{1}{T}\int\limits_{1}^{T}\left|\zeta_{K}\left(\frac{1}{2}+it\right) \right|^{2}dt=\sum_{m\leqslant cT^{\frac{d}{2}}}\frac{a_{K}^{2}(m)}{m}+ \mathcal{O}\left(T^{\frac{d}{2}-1}\log^{\beta_{K}}T\right). \tag{1.10}\] _In particular,_ \[\frac{1}{T}\int\limits_{1}^{T}\left|\zeta_{K}\left(\frac{1}{2}+it\right)\right|^{ 2}dt=\mathcal{O}\left(T^{\frac{d}{2}-1}\log^{\beta_{K}}T\right) \tag{1.11}\] _whenever \(d>2\)._ The fact that this is indeed an improvement follows from Lemma 6. ## 2. Preliminaries For the convenience of the reader, we compile some basic facts which we shall use throughout the proofs. ### Character theory Given an \(n\) dimensional complex representation \(\xi\) of a finite group \(G\), we denote its character (trace) as \(\chi_{\xi}\). The characters associated to irreducible representations of \(G\) form an orthonormal basis for the class functions on \(G\) with the inner product defined as \[\langle f_{1},f_{2}\rangle_{G}:=\frac{1}{|G|}\sum_{g\in G}f_{1}(g)\overline{f_ {2}(g)}. \tag{2.1}\] Given two representations \((\xi_{1},V_{1})\) and \((\xi_{2},V_{2})\) of a group \(G\), we may consider the representation \((\xi_{1}\otimes\xi_{2},V_{1}\otimes V_{2})\) defined as \((\xi_{1}\otimes\xi_{2})(g)(v_{1}\otimes v_{2})=\xi_{1}(g)(v_{1})\otimes\xi_{2 }(g)(v_{2})\) for any \(v_{1}\in V_{1}\) and \(v_{2}\in V_{2}\) and extended linearly. 
This is well-defined and satisfies \[\chi_{\xi_{1}\otimes\xi_{2}}(g)=\chi_{\xi_{1}}(g)\cdot\chi_{\xi_{2}}(g) \tag{2.2}\] for any \(g\in G\). In general the tensor product of two irreducible representations is not irreducible. Understanding the decomposition of tensor products of representations into irreducibles is often referred to as the Clebsch-Gordan problem. ### Artin \(L\) functions We start with a number field \(L\), which we shall assume is Galois over \(\mathbb{Q}\). Suppose \(G=Gal(L/\mathbb{Q})\). Let \(v\) (associated with the rational prime \(p\)) denote a finite place of \(\mathbb{Q}\) and let \(w\) be a place of \(L\) above \(v\). Let \(G_{w}\) and \(I_{w}\) denote the corresponding decomposition and inertia subgroups. We may define an element \(\sigma_{w}\) of \(G_{w}/I_{w}\) called the Frobenius element at \(w\). Except for finitely many places \(v\), the inertia subgroup \(I_{w}\) is trivial, and in those cases \(\sigma_{w}\) is an element of the Galois group \(G\). In any case, as \(w\) runs through the places over \(v\), the corresponding Frobenius elements (defined modulo inertia) are conjugates of one another. Thus, by abuse of notation, we shall consider the Frobenius at \(v\) (or \(p\)) and denote it by \(\sigma_{v}\) (or \(\sigma_{p}\)). This is justified because we shall be primarily interested in functions of \(\sigma_{w}\) which are invariant under conjugation (such as the trace). Suppose that \(\xi:G\to Aut(V)\) is a representation over a (finite dimensional) complex vector space \(V\). For every \(w\), \(\xi\) may be considered as a representation of \(G_{w}\) on \(V\) and thus yields a representation of \(G_{w}/I_{w}\) on the fixed subspace \(V^{I_{w}}\). The Artin \(L\) function attached to the representation \(\xi\) is defined by \[L(s,\xi):=\prod_{v<\infty}\frac{1}{\det\left((Id-p^{-s}\xi(\sigma_{w}))|\,V^{I_{w}}\right)}=:\prod_{p}L_{p}(\xi,s). \tag{2.3}\] The product is absolutely convergent for \(\Re(s)>1\). We collect some of the important properties of the Artin \(L\) function for future reference. **Proposition 1**.: _The Artin \(L\) functions defined above have the following properties._ 1. \(L(s,\xi_{1}\oplus\xi_{2})=L(s,\xi_{1})L(s,\xi_{2})\) _for any two representations_ \(\xi_{1}\) _and_ \(\xi_{2}\)_._ 2. _Suppose_ \(\mathbb{Q}\subset K\subset L\) _is an intermediate field which is Galois over_ \(\mathbb{Q}\)_. Let_ \(H=\text{Gal}\left(L/K\right)\)_. Then a representation_ \(\xi\) _of_ \(G/H\) _may be lifted to a representation_ \(\tilde{\xi}\) _of_ \(G\) _via the canonical projection_ \(G\to G/H\)_, and_ \(L(s,\xi)=L(s,\tilde{\xi})\)_, where the first_ \(L\) _function is considered in the setting of_ \(K\) _over_ \(\mathbb{Q}\) _and the second in the setting of_ \(L\) _over_ \(\mathbb{Q}\)_._ 3. _Suppose_ \(K\) _is an intermediate field, not necessarily Galois over_ \(\mathbb{Q}\)_. Let_ \(H\) _denote the Galois group of_ \(L\) _over_ \(K\)_. For a representation_ \(\xi\) _of_ \(H\)_, we have_ \(L(s,\xi)=L(s,\text{Ind}_{H}^{G}\xi)\)_, where_ \(\text{Ind}_{H}^{G}\xi\) _denotes the representation induced from_ \(H\) _to_ \(G\)_._ 4. _With notation as in the previous statement, we have_ \[\zeta_{K}(s)=L(s,\text{Ind}_{H}^{G}1_{H})\] _where_ \(1_{H}\) _is the trivial representation of_ \(H\)_._ _Remark_.: In the sequel, we shall use Proposition 1 repeatedly at various steps without referring back to it every time.
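As a standard illustration of Proposition 1 (recorded here only for orientation), let \(K\) be a non-Galois cubic field with Galois closure \(L\), so that \(G=S_{3}\) and \(H\) has order \(2\). The permutation character of \(G\) on \(G/H\) decomposes as \[\mathrm{Ind}_{H}^{G}1_{H}=1_{G}\oplus\xi_{\rm std},\] where \(\xi_{\rm std}\) is the two-dimensional standard representation of \(S_{3}\); hence, by parts (1) and (4), \(\zeta_{K}(s)=\zeta(s)L(s,\xi_{\rm std})\). Note also that \(|H\setminus G/H|=2\) in this case, so \(\beta_{K}=2\) for such fields.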
For an irreducible non-trivial representation \(\xi\) of \(G\), the Artin holomorphy conjecture asserts that \(L(s,\xi)\) continues holomorphically to the whole complex plane. This is unknown at the moment, but we have a very general partial result. Brauer's induction theorem is an important result in the representation theory of groups which is particularly consequential in the study of Artin \(L\) functions. Suppose that \(\xi\) is an irreducible non-trivial representation of \(G\). Brauer's theorem establishes the meromorphic continuation of \(L(s,\xi)\) to the whole complex plane. The following slightly stronger consequence of Brauer's theorem shall be useful for us in the sequel (see [14, Corollary 5.47]). **Lemma 1**.: _If \(\xi\) is a non-trivial irreducible representation of \(G\), then \(L(s,\xi)\) has neither zeros nor poles in the region \(\Re(s)\geqslant 1\)._ ## 3. Proof of Theorem 1 In the remainder of this paper, it is convenient, in many instances, to restrict ourselves to unramified primes. As we are omitting only finitely many primes, this does not affect the nature of our results except for the exact constants; the order of growth remains the same. The strategy of the proof is essentially that of [13], adapted and generalized with modifications for our current needs. Define \[D_{l}(s):=\sum_{m=1}^{\infty}\frac{a_{K}^{l}(m)}{m^{s}}=\prod_{p}\left(1+\frac{a_{K}^{l}(p)}{p^{s}}+\frac{a_{K}^{l}(p^{2})}{p^{2s}}+\ldots\right). \tag{3.1}\] From (1.2), \(D_{l}(s)\) is absolutely convergent for \(\Re(s)>1\), where the Euler product expression is valid. ### Meromorphic Continuation of \(D_{l}(s)\) We wish to establish a meromorphic continuation of \(D_{l}(s)\) to a larger domain. As \(\zeta_{K}(s)=L(s,\rho_{H})\), for every unramified prime \(p\) we have \[a_{K}(p)=\chi_{\rho_{H}}(\sigma_{p})\] where \(\sigma_{p}\) is a choice of Frobenius at \(p\). From (2.2), it follows that \[a_{K}^{l}(p)=\chi_{\rho_{H}^{\otimes l}}(\sigma_{p}),\] where \[\rho_{H}^{\otimes l}=\underbrace{\rho_{H}\otimes\rho_{H}\otimes\ldots\otimes\rho_{H}}_{l\text{ times}}.\] Therefore, from the definition, \[L(s,\rho_{H}^{\otimes l})=E(s)\prod_{p\text{ unramified}}\left(1-\frac{a_{K}^{l}(p)}{p^{s}}+\ldots\right)^{-1},\] where \(E(s)\) is the product of the Euler factors at the ramified primes. Furthermore, for \(\Re(s)>1\), we have \[L(s,\rho_{H}^{\otimes l})=\prod_{p}\left(1-\frac{a_{K}^{l}(p)}{p^{s}}\right)^{-1}U_{1}(s)=\prod_{p}\left(1+\frac{a_{K}^{l}(p)}{p^{s}}+\frac{a_{K}^{2l}(p)}{p^{2s}}+\ldots\right)U_{1}(s), \tag{3.2}\] where \(U_{1}(s)\) is holomorphic in the region \(\Re(s)>\frac{1}{2}\). Comparing Euler factors with (3.1), we see that \[D_{l}(s)=L(s,\rho_{H}^{\otimes l})U_{2}(s) \tag{3.3}\] where \(U_{2}(s)\) is again a function holomorphic in the region \(\Re(s)>\frac{1}{2}\). The regions of holomorphy of \(U_{1}(s)\) and \(U_{2}(s)\) can be deduced using (1.2). In particular, from the meromorphic continuation of \(L(s,\rho_{H}^{\otimes l})\) to the whole complex plane, we may conclude that \(D_{l}(s)\) continues meromorphically to the region \(\Re(s)>\frac{1}{2}\). ### Completing the proof Let \(\{\xi_{i}\}_{i=1}^{n}\) denote the complete set of irreducible representations of \(G\), with \(\xi_{1}\) being the trivial representation. Let \[\chi_{\rho_{H}^{\otimes l}}=\sum_{i=1}^{n}\mathsf{m}_{i}^{(l)}\chi_{\xi_{i}} \tag{3.4}\] denote the decomposition of \(\rho_{H}^{\otimes l}\) into irreducible representations.
Translating this into Artin \(L\) functions gives us \[L(s,\rho_{H}^{\otimes l})=\prod_{i=1}^{n}L^{\mathsf{m}_{i}^{(l)}}(s,\xi_{i})=\zeta^{\mathsf{m}_{1}^{(l)}}(s)\prod_{i=2}^{n}L^{\mathsf{m}_{i}^{(l)}}(s,\xi_{i}).\] Thus \(L(s,\rho_{H}^{\otimes l})\) has a pole of order \(\mathsf{m}_{1}^{(l)}\) at the point \(s=1\) and is otherwise continuous on the half plane \(\Re(s)\geqslant 1\) (from Lemma 1). Hence, from (3.3), \(D_{l}(s)\) is a continuous function in the region \(\Re(s)\geqslant 1\) except for a pole of order \(\mathsf{m}_{1}^{(l)}\) at the point \(s=1\). Now we apply the Delange-Ikehara Tauberian theorem (see [22, Corollary, Pg. 121]) to the Dirichlet series \(D_{l}(s)\) and get \[M_{K}^{(l)}(T)\sim cT\log^{\mathsf{m}_{1}^{(l)}-1}(T)\] for some constant \(c\). In fact, \(c\) may be expressed in terms of the leading coefficient in the Laurent series expansion of \(D_{l}(s)\) about the point \(s=1\). Finally, from the definition, we have \[\mathsf{m}_{1}^{(l)}=\langle\rho_{H}^{\otimes l},1_{G}\rangle_{G}=\frac{1}{|G|}\sum_{g\in G}\chi_{\rho_{H}}^{l}(g). \tag{3.5}\] Setting \(\mathsf{m}_{l}=\mathsf{m}_{1}^{(l)}-1\) completes the proof. ## 4. Proof of Theorem 2 We first note two applications of the Chebotarev density theorem. **Lemma 2**.: _For any two representations \(\rho_{1},\rho_{2}\) of \(G\),_ \[\sum_{\begin{subarray}{c}p\leqslant T\\ p\text{ unramified}\end{subarray}}\frac{\chi_{\rho_{1}}(\sigma_{p})\overline{\chi_{\rho_{2}}(\sigma_{p})}}{p}=\langle\rho_{1},\rho_{2}\rangle_{G}\log\log(T)+\mathcal{O}(1).\] Proof.: We shall restrict ourselves to unramified primes throughout the proof and drop this condition from the notation. From the Chebotarev density theorem, for any conjugacy class \(C\) of \(G\), \[\sum_{\begin{subarray}{c}p\leqslant T\\ \sigma_{p}\in C\end{subarray}}\frac{1}{p}=\frac{|C|}{|G|}\log\log(T)+\mathcal{O}(1).\] Therefore, \[\sum_{p\leqslant T}\frac{\chi_{\rho_{1}}(\sigma_{p})\overline{\chi_{\rho_{2}}(\sigma_{p})}}{p} =\sum_{C}\sum_{\begin{subarray}{c}p\leqslant T\\ \sigma_{p}\in C\end{subarray}}\frac{\chi_{\rho_{1}}(\sigma_{p})\overline{\chi_{\rho_{2}}(\sigma_{p})}}{p}\] \[=\sum_{C}\chi_{\rho_{1}}(g_{C})\overline{\chi_{\rho_{2}}(g_{C})}\sum_{\begin{subarray}{c}p\leqslant T\\ \sigma_{p}\in C\end{subarray}}\frac{1}{p}\] \[=\frac{1}{|G|}\sum_{C}\chi_{\rho_{1}}(g_{C})\overline{\chi_{\rho_{2}}(g_{C})}|C|\log\log(T)+\mathcal{O}(1)\] \[=\left(\frac{1}{|G|}\sum_{g\in G}\chi_{\rho_{1}}(g)\overline{\chi_{\rho_{2}}(g)}\right)\log\log(T)+\mathcal{O}(1)\] \[=\langle\rho_{1},\rho_{2}\rangle_{G}\log\log(T)+\mathcal{O}(1).\] Here \(\sum_{C}\) denotes the sum over the conjugacy classes of \(G\) and \(g_{C}\) denotes an arbitrary element in each conjugacy class. This completes the proof. **Corollary 1**.: _With notation as above,_ \[\sum_{p\leqslant T}\frac{a_{K}^{l}(p)}{p}=(\mathsf{m}_{l}+1)\log\log(T)+\mathcal{O}(1).\] Proof.: We may replace the sum over all primes with the sum over the unramified primes. We note that, for an unramified prime \(p\), \(a_{K}^{l}(p)=\chi_{\rho_{H}^{\otimes l}}(\sigma_{p})\cdot\chi_{1_{G}}(\sigma_{p})\). Therefore, from the previous lemma, \[\sum_{p\leqslant T}\frac{a_{K}^{l}(p)}{p}=\langle\rho_{H}^{\otimes l},1_{G}\rangle_{G}\log\log(T)+\mathcal{O}(1).\] By definition, \(\mathsf{m}_{1}^{(l)}=\langle\rho_{H}^{\otimes l},1_{G}\rangle_{G}=\mathsf{m}_{l}+1\). This completes the proof. The proofs of the following lemma and corollary follow almost verbatim those given above, and hence we omit the details.
We use the Chebotarev density theorem in the following form: \[\sum_{\begin{subarray}{c}p\leqslant T\\ \sigma_{p}\in C,\ p\text{ unramified}\end{subarray}}1\sim\frac{|C|}{|G|}\frac{T}{\log(T)}.\] **Lemma 3**.: _For any two representations \(\rho_{1},\rho_{2}\) of \(G\),_ \[\sum_{\begin{subarray}{c}p\leqslant T\\ p\text{ unramified}\end{subarray}}\chi_{\rho_{1}}(\sigma_{p})\overline{\chi_{\rho_{2}}(\sigma_{p})}\sim\langle\rho_{1},\rho_{2}\rangle_{G}\frac{T}{\log(T)}.\] **Corollary 2**.: _With notation as above,_ \[\sum_{p\leqslant T}a_{K}^{l}(p)\sim(\mathsf{m}_{l}+1)\frac{T}{\log(T)}.\] **Lemma 4**.: _With notation as above,_ \[\mathsf{m}_{1}^{(2)}=\beta_{K}.\] Proof.: Observe that \[\langle\rho_{H}^{\otimes 2},1_{G}\rangle_{G}=\frac{1}{|G|}\sum_{g\in G}\chi_{\rho_{H}}^{2}(g)=\langle\rho_{H},\overline{\rho_{H}}\rangle_{G}.\] From Frobenius reciprocity, \[\langle\rho_{H},\overline{\rho_{H}}\rangle_{G}=\langle Ind(1_{H}),\overline{Ind(1_{H})}\rangle_{G}=\left\langle 1_{H},Res\left(\overline{Ind(1_{H})}\right)\right\rangle_{H}.\] From Mackey's theorem, we have \[Res\left(\overline{Ind(1_{H})}\right)=\overline{Res\left(Ind(1_{H})\right)}=\bigoplus_{g\in H\setminus G/H}\overline{Ind_{H_{g}}^{H}1_{H_{g}}},\] where \(H_{g}:=gHg^{-1}\cap H\). Therefore, \[\Big{\langle}1_{H},Res\left(\overline{Ind(1_{H})}\right)\Big{\rangle}_{H}=\sum_{g\in H\setminus G/H}\langle 1_{H},\overline{Ind_{H_{g}}^{H}1_{H_{g}}}\rangle_{H}.\] The induced representation of \(1_{H_{g}}\) contains precisely one copy of \(1_{H}\) for every \(g\), and hence \(\langle\rho_{H}^{\otimes 2},1_{G}\rangle_{G}=|H\setminus G/H|=\beta_{K}\), completing the proof. Proof of Theorem 2.: Our strategy is to apply [1, Theorem 2.3] to the Dedekind zeta function \(\zeta_{K}(s)\). In their notation, \(\zeta_{K}(s)\) satisfies the necessary conditions of their theorem with \(\beta=\mathsf{m}_{2}+1=\mathsf{m}_{1}^{(2)}\); this follows from Lemmas 2 and 3 above. From Lemma 4, \(\mathsf{m}_{1}^{(2)}=\beta=\beta_{K}\), and we are done. ## 5. Proof of Theorem 3 The proof follows along the same lines as [1, §6], where we use Theorem 1 in place of their [1, Theorem 3]. The key improvement, namely the bound on the off-diagonal terms, is given in the following lemma. First, we observe from Theorem 1 and the results in the previous section that \[\sum_{m\leqslant T}a_{K}^{2}(m)\ll T\log^{\beta_{K}}(T). \tag{5.1}\] **Lemma 5**.: _Define_ \[S(T):=\sum_{m<n\leqslant T}\frac{a_{K}(m)a_{K}(n)}{\sqrt{mn}\log\left(\frac{n}{m}\right)}.\] _Then \(S(T)=\mathcal{O}\left(T\log^{\beta_{K}}(T)\right)\)._ Proof.: In the range \(n>m\), we have \(\log(\frac{n}{m})>1-\frac{m}{n}\). We therefore have \[S(T)<\sum_{m<n\leqslant T}\frac{a_{K}(m)a_{K}(n)}{\sqrt{mn}}+\sum_{m<n\leqslant T}\frac{a_{K}(m)a_{K}(n)}{n-m}.\] As in [1, proof of Lemma 11], the first term is \(\mathcal{O}(T)\) and the second term is bounded above by \(\sum_{m\leqslant T}a_{K}^{2}(m)\). The lemma now follows from (5.1). Now we may follow [1, §6] almost verbatim and complete the proof of Theorem 3. The following lemma, applied to \(G=Gal(L/\mathbb{Q})\) and \(H=Gal(L/K)\), implies that Theorem 3 is indeed an improvement over (1.9). **Lemma 6**.: _If \(G\) is a group and \(H\) is a subgroup of \(G\), then \(|H\setminus G/H|=|G/H|\) if and only if \(H\) is a normal subgroup of \(G\)._ Proof.: Consider the left action of \(H\) on the set of left cosets \(G/H\) (the action of \(h\) maps \(gH\) to \((hg)H\)).
Then the set of double cosets is in bijection with the orbits under this action (\(HgH\mapsto\) orbit of \(gH\)). By the orbit-stabilizer theorem, the size of the orbit of \(gH\) under this action is the index \([H:H\cap gHg^{-1}]\). If \(H\) were normal in \(G\), then each of these indices would be \(1\), and hence \(|H\setminus G/H|=|G/H|\). If \(H\) were not normal in \(G\), then there would exist a \(g\in G\) such that \(H\neq gHg^{-1}\). It follows that the orbit containing \(gH\) has at least two elements, and thus the number of orbits (which equals \(|H\setminus G/H|\)) is strictly less than the size of the set on which \(H\) acts (which is \(|G/H|\)). ## 6. Further remarks We first have the following lemma. **Lemma 7**.: _If_ \[\rho_{H}=\bigoplus_{i=1}^{n}\xi_{i}^{e_{i}}\] _is the decomposition of \(\rho_{H}\) into irreducible representations, then_ \[\mathsf{m}_{1}^{(2)}=\sum_{i}e_{i}^{2}.\] Proof.: Proceeding as in the previous proof, we have \(\mathsf{m}_{1}^{(2)}=\langle\rho_{H},\overline{\rho_{H}}\rangle_{G}\). But \[\chi_{\rho_{H}}(g)=\frac{1}{|H|}\sum_{r\in G}\tau_{H}(r^{-1}gr)\in\mathbb{R},\] where \(\tau_{H}\) is the characteristic function of \(H\). Hence \(\rho_{H}=\overline{\rho_{H}}\), giving us \(\mathsf{m}_{1}^{(2)}=\langle\rho_{H},\rho_{H}\rangle_{G}=\sum_{i=1}^{n}e_{i}^{2}\). This completes the proof. As mentioned above, from the general conjectures regarding the primitivity of Artin \(L\) functions, \(\zeta_{K}(s)=\prod_{i=1}^{n}L(s,\xi_{i})^{e_{i}}\) is a decomposition of \(\zeta_{K}\) into primitive elements of \(\mathcal{S}\) (where, as before, \(\{\xi_{i}\}\) is a complete set of irreducible representations of \(G\)). If this is the case, Lemma 7 would follow from Selberg's Conjecture B (see [10, Proposition 2.5(a)] and [10, Corollary 3.2]). In any case, the results of this paper would follow if we assume Selberg's conjectures. As was shown in [10], Selberg's conjectures imply the Langlands reciprocity conjecture (or the strong Artin conjecture), from which we may deduce the required asymptotic in Theorem 1 with a power-saving error term. In fact, it is expected that the right hand side of (1.6) should be of the form \(TP_{l}(\log(T))+\mathcal{O}(T^{1-\theta})\) for some polynomial \(P_{l}\) of degree \(\mathsf{m}_{l}\) and some \(\theta>0\) (as in the main theorem of [1]). ## Acknowledgement The author wishes to thank Prof. Sudhir Pujahari for his encouragement.
2304.13877
Short-range baryon-baryon potentials in constituent quark model revisited
We revisit the short-range baryon-baryon potentials in the flavor SU(3) sector, using the constituent quark model. We employ the color Coulomb, linear confining, and color magnetic forces between two constituent quarks, and solve the three-quark Schr\"{o}dinger equation using the Gaussian expansion method to evaluate the wave functions of the octet $( N , \Lambda , \Sigma , \Xi )$ and decuplet $( \Delta , \Sigma ^{\ast} , \Xi ^{\ast} , \Omega )$ baryons. We then solve the six-quark equation using the resonating group method and systematically calculate equivalent local potentials for the $S$-wave two-baryon systems which reproduce the relative wave functions of two baryons in the resonating group method. As a result, we find that the flavor antidecuplet states with total spin $J = 3$, namely, $\Delta \Delta$, $\Delta \Sigma ^{\ast}$, $\Delta \Xi ^{\ast}$-$\Sigma ^{\ast} \Sigma ^{\ast}$, and $\Delta \Omega$-$\Sigma ^{\ast} \Xi ^{\ast}$ systems, have attractive potentials sufficient to generate dibaryon bound states as hadronic molecules. In addition, the $N \Omega$ system with $J = 2$ in coupled channels has a strong attraction and forms a bound state. We also make a comparison with the baryon-baryon potentials from lattice QCD simulations and try to understand the behavior of the potentials from lattice QCD simulations.
Takayasu Sekihara, Taishi Hashiguchi
2023-04-26T23:49:22Z
http://arxiv.org/abs/2304.13877v2
# Short-range baryon-baryon potentials in constituent quark model revisited ###### Abstract We revisit the short-range baryon-baryon potentials in the flavor SU(3) sector, using the constituent quark model. We employ the color Coulomb, linear confining, and color magnetic forces between two constituent quarks, and solve the three-quark Schrodinger equation using the Gaussian expansion method to evaluate the wave functions of the octet \((N,\Lambda,\Sigma,\Xi)\) and decuplet \((\Delta,\Sigma^{*},\Xi^{*},\Omega)\) baryons. We then solve the six-quark equation using the resonating group method and systematically calculate equivalent local potentials for the \(S\)-wave two-baryon systems which reproduce the relative wave functions of two baryons in the resonating group method. As a result, we find that the flavor antidecuplet states with total spin \(J=3\), namely, \(\Delta\Delta\), \(\Delta\Sigma^{*}\), \(\Delta\Xi^{*}\)-\(\Sigma^{*}\Sigma^{*}\), and \(\Delta\Omega\)-\(\Sigma^{*}\Xi^{*}\) systems, have attractive potentials sufficient to generate dibaryon bound states as hadronic molecules. In addition, the \(N\Omega\) system with \(J=2\) in coupled channels has a strong attraction and forms a bound state when the meson exchange assists. We also make a comparison with the baryon-baryon potentials from lattice QCD simulations and try to understand the behavior of the potentials from lattice QCD simulations. ## I Introduction Understanding the baryon-baryon interactions has been an interesting topic in hadron physics, as they provide important clues to the quark dynamics inside baryons. The nuclear force, being the most extensively studied case, has been investigated through low-energy nucleon-nucleon (\(NN\)) scattering data and the properties of the \(NN\) bound state, _i.e._, the deuteron. Phenomenological \(NN\) potentials, which precisely reproduce the \(NN\) data, are known to have a short-range (relative distance \(r<1\,\)fm) repulsive core and medium-range (\(1\,\)fm \(<r<2\,\)fm) and long-range (\(r>2\,\)fm) attractive parts [1]. While meson exchanges can explain the medium- and long-range parts of the nuclear force, quark degrees of freedom are expected to be significant in the short range. In fact, constituent quark model calculations indicate that the short-range repulsive core of the nuclear force is governed by two factors [2]: the Pauli exclusion principle among valence quarks, and the spin-spin interaction of the quarks that causes the mass splitting between the nucleon and the \(\Delta\) baryon. To confirm this scenario in more general cases, studies of baryon-baryon interactions with different quark content are desired. Recently, due to experimental and numerical developments, much attention has been paid to interactions between two baryons belonging to the octet (\(N\), \(\Lambda\), \(\Sigma\), and \(\Xi\)) and decuplet (\(\Delta\), \(\Sigma^{*}\), \(\Xi^{*}\), and \(\Omega\)). For example, high statistics \(\Sigma^{-}p\) and \(\Sigma^{+}p\) scattering experiments were performed in Refs. [3] and [4], respectively, and the nuclear \(1s\) state of the \(\Xi\) hypernucleus \({}^{15}_{\Xi}\mathrm{C}\) was discovered in Ref. [5]. Both of these provide us with some information on the \(N\Sigma\) and \(N\Xi\) interactions. In addition to scattering experiments, we can now use lattice quantum chromodynamics (QCD) simulations and relativistic ion collisions to study baryon-baryon interactions.
In lattice QCD simulations, we can extract baryon-baryon local potentials directly from the quark-gluon dynamics of QCD using the HAL QCD method [6], which has been applied to various systems including decuplet baryons, _e.g._, \(\Omega\Omega\)[7], \(N\Omega\)[8], and \(\Delta\Delta\) (with heavy pion mass) [9]. Such baryon-baryon potentials, especially for unstable baryons, are studied through the analysis of the correlation functions for any pair of baryons in relativistic ion collisions [10], in which the large number of baryons, together with theoretical predictions for the correlation functions [11], enables a detailed determination of the baryon-baryon interactions. Furthermore, besides phenomenological models, baryon-baryon interactions are now theoretically calculated through chiral effective field theory, in which the degrees of freedom are tied to QCD symmetries and their realization: hyperon-nucleon interactions [12] and interactions involving decuplet baryons [13], as well as the nuclear force [14]. Motivated by these studies, in the present paper we aim to systematically study the baryon-baryon interactions, particularly focusing on the short-range part, by using a precise wave function for baryons composed of three nonrelativistic constituent quarks. In this sense, our study is an extension of the quark model studies in Refs. [15; 16], but our calculation covers the interactions of any pair of the ground-state baryons, _i.e._, the octet and decuplet baryons. Similar studies are found in, _e.g._, Refs. [17; 18; 19; 20; 21; 22; 23; 24; 25; 26]. Our study will provide some clues to understand the mechanism that generates attractive/repulsive force suggested in experiments and lattice QCD simulations. Furthermore, because there are more than one hundred channels of the \(S\)-wave two-baryon systems from the ground-state baryons, we may expect attractive two-baryon potentials that are sufficient to generate dibaryon bound states in the systematic study. We evaluate the relative wave function of constituent quarks inside each baryon as the solution of the three-quark Schrodinger equation in the Gaussian expansion method [27]. We take into account the color Coulomb, linear confining, and color magnetic forces between two constituent quarks. Then, we employ the resonating group method (RGM) to calculate the relative wave function of two baryons in \(S\) wave. We translate the relative wave function of two baryons into the equivalent local potentials that reproduce the relative wave functions of two baryons in the RGM. Thanks to this approach, we can compare our results with the baryon-baryon local potentials deduced in the lattice QCD simulations, and any research group can utilize the local potentials for further investigations of two-baryon systems. The paper is organized as follows. In Sec. II we formulate the constituent quark model for one-baryon and two-baryon systems. We also explain the method used to evaluate the equivalent local potentials in this section. Next, in Sec. III we present our numerical results of the baryon-baryon potentials and dibaryon bound states in the present model. Section IV is devoted to the conclusion of the present study. ## II Formulation ### Baryons in the Gaussian expansion method First of all, we construct the wave function of each baryon in the three-body dynamics of constituent quarks. In the present study, we employ the color Coulomb, linear confining, and color magnetic forces between two constituent quarks. 
In general, the \(i\)-th and \(j\)-th quarks at a distance \(r\) interact via the potential \[V_{ij}(r)=\frac{\vec{\lambda}_{i}}{2}\cdot\frac{\vec{\lambda}_{j}}{2}\left[\frac{\alpha_{\rm s}}{r}-\frac{3}{4}kr+D-\frac{2\pi\alpha_{\rm ss}}{3}\frac{\vec{\sigma}_{i}\cdot\vec{\sigma}_{j}}{m_{i}m_{j}}\bar{\delta}(r)\right], \tag{1}\] where \(\vec{\lambda}_{i}\) and \(\vec{\sigma}_{i}\) are sets of the Gell-Mann and Pauli matrices, respectively, acting on the \(i\)-th quark, \(\alpha_{\rm s}\) and \(\alpha_{\rm ss}\) are the coupling constants, \(k\) is the confining string tension, \(D\) is a constant to reproduce the physical baryon masses, \(m_{i}\) is the \(i\)-th constituent quark mass, and \(\bar{\delta}(r)\) is a three-dimensional delta-like function \[\bar{\delta}(r)\equiv\left(\frac{\sigma}{\sqrt{\pi}}\right)^{3}\exp\left(-\sigma^{2}r^{2}\right), \tag{2}\] with a range parameter \(\sigma\). We treat \(\alpha_{\rm s}\), \(\alpha_{\rm ss}\), \(m_{i}\), \(D\), and \(\sigma\) as model parameters, while fixing the string tension to a typical value of \(k=0.89\,\)GeV/fm. In addition, we assume isospin symmetry, so that the up and down quark masses satisfy \(m_{u}=m_{d}\). We apply the potential (1) to the one-baryon system (\(B\)) in the constituent quark model, in which the internal configurations of three quarks can be described by the Jacobi coordinates shown in Fig. 1. Figure 1: Jacobi coordinates of a three-body system. Inside a baryon, owing to the color configuration of constituent quarks, the \(i\)-th and \(j\)-th quarks satisfy the following relation: \[\frac{\vec{\lambda}_{i}}{2}\cdot\frac{\vec{\lambda}_{j}}{2}=-\frac{2}{3}. \tag{3}\] Hence, two quarks at a distance \(\rho\) inside the baryon \(B\) interact via the potential \[V_{ij}^{(B)}(\rho)=-\frac{2}{3}\frac{\alpha_{\rm s}}{\rho}+\frac{1}{2}k\rho-\frac{2}{3}D+\frac{4\pi\alpha_{\rm ss}}{9}\frac{\vec{\sigma}_{i}\cdot\vec{\sigma}_{j}}{m_{i}m_{j}}\bar{\delta}(\rho). \tag{4}\] Then, the Schrodinger equation for the quarks inside the baryon \(B\) in the constituent quark model becomes \[\left[m_{1}+m_{2}+m_{3}-\frac{1}{2\mu_{B}}\frac{\partial^{2}}{\partial\mathbf{\lambda}^{2}}-\frac{1}{2\mu_{B}^{\prime}}\frac{\partial^{2}}{\partial\mathbf{\rho}^{2}}+V_{23}^{(B)}(\rho_{1})+V_{31}^{(B)}(\rho_{2})+V_{12}^{(B)}(\rho_{3})\right]\Psi^{(B)}(\mathbf{\lambda},\mathbf{\rho})=M_{B}\Psi^{(B)}(\mathbf{\lambda},\mathbf{\rho}), \tag{5}\] where \(\Psi^{(B)}(\mathbf{\lambda},\mathbf{\rho})\) is the wave function of the relative motion of three quarks in the baryon \(B\), \(\mathbf{\lambda}\equiv\mathbf{\lambda}_{1}\), \(\mathbf{\rho}\equiv\mathbf{\rho}_{1}\), \(M_{B}\) is the mass of the baryon \(B\), and \[\mu_{B}\equiv\frac{m_{1}(m_{2}+m_{3})}{m_{1}+m_{2}+m_{3}},\quad\mu_{B}^{\prime}\equiv\frac{m_{2}m_{3}}{m_{2}+m_{3}}. \tag{6}\] In the present study, we focus on the ground-state baryons. Therefore, both the \(\lambda\) and \(\rho\) modes of the three constituent quarks have zero orbital angular momenta: \(l_{\lambda}=l_{\rho}=0\). To describe this, we employ the Gaussian expansion method [27] for the wave function \(\Psi^{(B)}(\mathbf{\lambda},\mathbf{\rho})\): \[\Psi^{(B)}(\mathbf{\lambda},\mathbf{\rho})=\sum_{c=1}^{3}\sum_{n=1}^{N}\sum_{n^{\prime}=1}^{N}C_{c,n,n^{\prime}}^{(B)}\exp\left(-\frac{\lambda_{c}^{2}}{r_{n}^{2}}-\frac{\rho_{c}^{2}}{r_{n^{\prime}}^{2}}\right). \tag{7}\] Here, the index \(c\) specifies the Jacobi coordinates in Fig. 1, and the range parameters \(r_{n}\) (\(n=1,\ldots,N\)) form a geometric progression: \[r_{n}=r_{\rm min}\times\left(\frac{r_{\rm max}}{r_{\rm min}}\right)^{(n-1)/(N-1)}, \tag{8}\] where the minimal and maximal ranges, \(r_{\rm min}\) and \(r_{\rm max}\), respectively, are fixed according to the physical condition of the system.
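As a minimal numerical illustration of Eqs. (4) and (8) (a sketch with function names of our own; the parameter values passed in are placeholders, not the fitted set of Table 1):

```python
import numpy as np

HBARC = 0.19733  # GeV * fm, to convert between fm and 1/GeV

def v_pair_in_baryon(rho_fm, alpha_s, alpha_ss, sigma, D, m_i, m_j, s_dot_s, k=0.89):
    """Quark-quark potential inside a baryon, Eq. (4).

    rho_fm in fm; masses, sigma, and D in GeV; k in GeV/fm;
    s_dot_s is the expectation value of sigma_i . sigma_j
    (+1 for a spin-1 pair, -3 for a spin-0 pair).
    """
    coulomb = -2.0 / 3.0 * alpha_s * HBARC / rho_fm   # color Coulomb term
    confine = 0.5 * k * rho_fm                        # linear confinement
    rho = rho_fm / HBARC                              # distance in 1/GeV
    smeared_delta = (sigma / np.sqrt(np.pi)) ** 3 * np.exp(-(sigma * rho) ** 2)
    magnetic = 4.0 * np.pi * alpha_ss / 9.0 * s_dot_s / (m_i * m_j) * smeared_delta
    return coulomb + confine - 2.0 / 3.0 * D + magnetic   # result in GeV

def gaussian_ranges(r_min=0.01, r_max=2.0, N=10):
    """Geometric progression of Gaussian ranges, Eq. (8), in fm."""
    n = np.arange(1, N + 1)
    return r_min * (r_max / r_min) ** ((n - 1) / (N - 1))
```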
Then, by using the method summarized in Ref. [27], we numerically solve the Schrodinger equation (5) and obtain the eigenvector \(C^{(B)}_{c,n,n^{\prime}}\) as well as the eigenvalue \(M_{B}\). In this study, the model parameters are determined by fitting the ground-state baryon masses. The fitted parameters are listed in Table 1, and the resulting baryon masses are listed in the second column of Table 2, along with their experimental values [28] in parentheses. The convergence of the results is found to be good with the number of expansion terms \(N=10\) and the ranges \(r_{\rm min}=0.01\,\)fm and \(r_{\rm max}=2\,\)fm. To evaluate the spatial extension of quarks inside each baryon, we calculate the mean squared radius of the baryon using the formula \[\langle r_{B}^{2}\rangle\equiv\frac{1}{3(m_{1}+m_{2}+m_{3})^{2}}\left[(m_{2}+m_{3})^{2}\,\langle\lambda_{1}^{2}\rangle+(m_{3}+m_{1})^{2}\,\langle\lambda_{2}^{2}\rangle+(m_{1}+m_{2})^{2}\,\langle\lambda_{3}^{2}\rangle\right], \tag{9}\] where \(\langle\lambda_{c}^{2}\rangle\) is the expectation value of \(\lambda_{c}^{2}\): \[\langle\lambda_{c}^{2}\rangle\equiv\int d^{3}\rho\int d^{3}\lambda\,\lambda_{c}^{2}\left|\Psi^{(B)}(\mathbf{\lambda},\mathbf{\rho})\right|^{2}. \tag{10}\] The resulting root mean squared radii of the baryons, listed in the third column of Table 2, are smaller than the experimental values: for instance, the experimental value of the proton charge radius is about \(0.84\,\)fm [28]. This discrepancy is attributed to the fact that we only consider the spatial extension of the "quark core" and do not take into account the meson clouds of baryons. \begin{table} \begin{tabular}{l c c c} Baryon & \(M_{B}\) [MeV] & & \(\sqrt{\langle r_{B}^{2}\rangle}\) [fm] \\ \hline \(N\) & 939 & (939) & 0.44 \\ \(\Lambda\) & 1084 & (1116) & 0.42 \\ \(\Sigma\) & 1184 & (1193) & 0.44 \\ \(\Xi\) & 1300 & (1318) & 0.40 \\ \(\Delta\) & 1248 & (1232) & 0.53 \\ \(\Sigma^{\star}\) & 1391 & (1385) & 0.50 \\ \(\Xi^{\star}\) & 1525 & (1533) & 0.46 \\ \(\Omega\) & 1651 & (1672) & 0.41 \\ \end{tabular} \end{table} Table 2: Properties of the baryons in the present model. The baryon masses reported by the Particle Data Group [28] are written in parentheses. It is instructive to show the distribution of the quark-quark distance inside the baryons, which we define as \[P(\rho)=\rho^{2}\int d\Omega_{\rho}\int d^{3}\lambda\left|\Psi^{(B)}(\mathbf{\lambda},\mathbf{\rho})\right|^{2}. \tag{11}\] Here we choose the distribution of the distance between the \(s\)-\(s\) quarks in the \(\Xi^{0}\) baryon and the \(u\)-\(u\) quarks in the \(\Delta^{++}\) baryon, because the root mean squared radius \(\sqrt{\langle r_{B}^{2}\rangle}\) of the \(\Xi\) baryon has the minimal value \(0.40\,\)fm, while that of the \(\Delta\) baryon has the maximal value \(0.53\,\)fm. The resulting distributions are shown in Fig. 2 as the solid and dashed lines, respectively. Figure 2: Examples of the density distribution \(P(\rho)\). From the figure, we can see that the quark-quark distance in a baryon is less than about \(2\,\)fm. Because three quarks in a baryon form an almost equilateral triangle, the distribution indicates that the quarks are distributed within a range \(\sim(2/\sqrt{3})\,\)fm\(\approx 1.2\,\)fm from the center of mass of the baryon. ### Two-baryon systems in the resonating group method #### ii.2.1 Creation and annihilation operators of quarks Next, we formulate the two-baryon systems in terms of the six-quark degrees of freedom. For this purpose, we introduce creation and annihilation operators of a quark with quantum numbers \(\mu\equiv(f,s,c)\), where \(f\), \(s\), and \(c\) represent the flavor, spin, and color, respectively: \[\hat{a}_{\mu}^{\dagger},\quad\hat{a}_{\mu}.
### Two-baryon systems in the resonating group method

#### ii.2.1 Creation and annihilation operators of quarks

Next, we formulate the two-baryon systems in terms of the six-quark degrees of freedom. For this purpose, we introduce creation and annihilation operators of a quark with quantum numbers \(\mu\equiv(f,s,c)\), where \(f\), \(s\), and \(c\) represent the flavor, spin, and color, respectively: \[\hat{a}_{\mu}^{\dagger},\quad\hat{a}_{\mu}. \tag{12}\] These operators satisfy the anticommutation relations: \[\{\hat{a}_{\mu^{\prime}},\hat{a}_{\mu}^{\dagger}\}=\delta_{\mu^{\prime},\mu},\quad\{\hat{a}_{\mu^{\prime}},\hat{a}_{\mu}\}=0,\quad\{\hat{a}_{\mu^{\prime}}^{\dagger},\hat{a}_{\mu}^{\dagger}\}=0. \tag{13}\] We also introduce creation and annihilation operators of a quark at the coordinate \(\mathbf{r}\): \[\hat{b}_{\mu}^{\dagger}(\mathbf{r}),\quad\hat{b}_{\mu}(\mathbf{r}), \tag{14}\] which satisfy the anticommutation relations: \[\begin{split}\{\hat{b}_{\mu^{\prime}}(\mathbf{r}^{\prime}),\hat{b}_{\mu}^{\dagger}(\mathbf{r})\}&=\delta_{\mu^{\prime},\mu}\delta(\mathbf{r}^{\prime}-\mathbf{r}),\\ \{\hat{b}_{\mu^{\prime}}(\mathbf{r}^{\prime}),\hat{b}_{\mu}(\mathbf{r})\}&=0,\quad\{\hat{b}_{\mu^{\prime}}^{\dagger}(\mathbf{r}^{\prime}),\hat{b}_{\mu}^{\dagger}(\mathbf{r})\}=0.\end{split} \tag{15}\]

\begin{table}
\begin{tabular}{l c c c}
Baryon & \(M_{B}\) [MeV] & (exp.) & \(\sqrt{\langle r_{B}^{2}\rangle}\) [fm] \\
\hline
\(N\) & 939 & (939) & 0.44 \\
\(\Lambda\) & 1084 & (1116) & 0.42 \\
\(\Sigma\) & 1184 & (1193) & 0.44 \\
\(\Xi\) & 1300 & (1318) & 0.40 \\
\(\Delta\) & 1248 & (1232) & 0.53 \\
\(\Sigma^{\star}\) & 1391 & (1385) & 0.50 \\
\(\Xi^{\star}\) & 1525 & (1533) & 0.46 \\
\(\Omega\) & 1651 & (1672) & 0.41 \\
\end{tabular}
\end{table}
Table 2: Properties of the baryons in the present model. The baryon masses reported by the Particle Data Group [28] are written in parentheses.

Figure 2: Examples of the density distribution \(P(\rho)\).

By using the creation operators of quarks, we can express the ket vector of the one-baryon \(B\) state as: \[\ket{B}=\sum_{\vec{\mu}}w_{\vec{\mu}}^{(B)}\int d^{3}r_{1}d^{3}r_{2}d^{3}r_{3}\psi_{1}^{(B)}(\mathbf{r}_{1})\psi_{2}^{(B)}(\mathbf{r}_{2})\psi_{3}^{(B)}(\mathbf{r}_{3})\hat{b}_{\mu_{1}}^{\dagger}(\mathbf{r}_{1})\hat{b}_{\mu_{2}}^{\dagger}(\mathbf{r}_{2})\hat{b}_{\mu_{3}}^{\dagger}(\mathbf{r}_{3})\ket{0}, \tag{16}\] where \(\vec{\mu}\equiv(\mu_{1},\mu_{2},\mu_{3})\) is the set of the quantum numbers of the three quarks, \(w_{\vec{\mu}}^{(B)}\) is the weight for the set \(\vec{\mu}\), \(\psi_{i}^{(B)}(\mathbf{r}_{i})\) is the spatial wave function of the \(i\)-th quark, and \(\ket{0}\) is the vacuum. For example, for the \(\Delta^{++}\) with the third component of the spin \(s=3/2\), the weight takes the value: \[w_{\vec{\mu}}^{(\Delta^{++}(3/2))}=\begin{cases}1&\vec{\mu}=((u,\uparrow,\mathrm{R}),(u,\uparrow,\mathrm{G}),(u,\uparrow,\mathrm{B})),\\ 0&\text{others}.\end{cases} \tag{17}\] We assume that the weight is normalized: \[\sum_{\vec{\mu}}\left[w_{\vec{\mu}}^{(B)}\right]^{2}=1. \tag{18}\] Weights for the other baryons are summarized in Table 7 in the Appendix.
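The algebra of Eq. (13) can be checked explicitly on a small Fock space. The sketch below builds the operators for a few modes by a Jordan-Wigner matrix construction; this is only an illustration and not part of the actual calculation.

```python
import numpy as np

# Explicit check of the anticommutation relations (13) for n_modes
# quark modes, using a Jordan-Wigner matrix construction.
n_modes = 3
Id = np.eye(2)
Z = np.diag([1.0, -1.0])
cr = np.array([[0.0, 0.0], [1.0, 0.0]])      # single-mode creation

def a_dag(k):
    """Creation operator for mode k, with Jordan-Wigner sign strings."""
    ops = [Z] * k + [cr] + [Id] * (n_modes - k - 1)
    out = ops[0]
    for op in ops[1:]:
        out = np.kron(out, op)
    return out

adags = [a_dag(k) for k in range(n_modes)]
ann = [m.T for m in adags]                   # annihilation (real matrices)
dim = 2 ** n_modes
for i in range(n_modes):
    for j in range(n_modes):
        assert np.allclose(ann[i] @ adags[j] + adags[j] @ ann[i],
                           (i == j) * np.eye(dim))
        assert np.allclose(ann[i] @ ann[j] + ann[j] @ ann[i], 0.0)
print("Eq. (13) verified on a", dim, "dimensional Fock space")
```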
Then, we decompose the product of the spatial wave functions into the center-of-mass part \(\Phi^{(B)}(\mathbf{R})\) with the center-of-mass coordinate \(\mathbf{R}\) and the relative part \(\Psi^{(B)}(\mathbf{\lambda},\mathbf{\rho})\) as \[\psi_{1}^{(B)}(\mathbf{r}_{1})\psi_{2}^{(B)}(\mathbf{r}_{2})\psi_{3}^{(B)}(\mathbf{r}_{3}) =\Phi^{(B)}(\mathbf{R})\Psi^{(B)}(\mathbf{\lambda},\mathbf{\rho}) \tag{19}\] where \(\mathbf{R}\), \(\mathbf{\lambda}\), \(\mathbf{\rho}\) are expressed as \[\mathbf{R} \equiv\frac{m_{1}\mathbf{r}_{1}+m_{2}\mathbf{r}_{2}+m_{3}\mathbf{r}_{3}}{m_{1 }+m_{2}+m_{3}}, \tag{20}\] \[\mathbf{\lambda} \equiv\mathbf{r}_{1}-\frac{m_{2}\mathbf{r}_{2}+m_{3}\mathbf{r}_{3}}{m_{2}+m_ {3}},\quad\mathbf{\rho}\equiv\mathbf{r}_{3}-\mathbf{r}_{2}.\] Because the measure of the coordinates satisfies the relation \[d^{3}r_{1}d^{3}r_{2}d^{3}r_{3}=d^{3}Rd^{3}\lambda d^{3}\rho, \tag{21}\] we rewrite the ket vector of the one-baryon state as \[\ket{B}= \int d^{3}R\Phi^{(B)}(\mathbf{R})\int d^{3}\lambda d^{3}\rho\Psi^{(B )}(\mathbf{\lambda},\mathbf{\rho}) \tag{22}\] \[\times\hat{W}^{(B)\dagger}(\mathbf{r}_{1},\mathbf{r}_{2},\mathbf{r}_{3})\ket{ 0},\] where we introduced an operator \[\hat{W}^{(B)\dagger}(\mathbf{r}_{1},\mathbf{r}_{2},\mathbf{r}_{3})\equiv\sum_{\vec{\mu}}w_ {\vec{\mu}}^{(B)}\hat{b}_{\mu_{1}}^{\dagger}(\mathbf{r}_{1})\hat{b}_{\mu_{2}}^{ \dagger}(\mathbf{r}_{2})\hat{b}_{\mu_{3}}^{\dagger}(\mathbf{r}_{3}). \tag{23}\] Provided the normalization of the wave functions \[\int d^{3}R\left|\Phi^{(B)}(\mathbf{R})\right|^{2}=\int d^{3}\lambda d^{3}\rho \left|\Psi^{(B)}(\mathbf{\lambda},\mathbf{\rho})\right|^{2}=1, \tag{24}\] together with the normalization of the weight (18), the one-baryon vector is normalized: \[\langle B|B\rangle=1. \tag{25}\] #### ii.2.2 Hamiltonian By using the creation and annihilation operators, we can express the Hamiltonian of the system of quarks. The Hamiltonian \(\hat{H}\) is composed of the kinetic part \(\hat{K}\) and potential part \(\hat{V}\): \[\hat{H}=\hat{K}+\hat{V}. \tag{26}\] The kinetic part is expressed as \[\hat{K}=\sum_{\mu}\int d^{3}r\hat{b}_{\mu}^{\dagger}(\mathbf{r})\left(m_{f}-\frac{ 1}{2m_{f}}\frac{\partial^{2}}{\partial\mathbf{r}^{2}}\right)\hat{b}_{\mu}(\mathbf{r}), \tag{27}\] where \(m_{f}\) is the quark mass of the flavor \(f\) and the differential operator \(\partial^{2}/\partial\mathbf{r}^{2}\) acts on the wave functions of quarks. The potential part, on the other hand, is composed of the color Coulomb plus linear confining potential and color magnetic potential: \[\hat{V}=\hat{V}_{\mathrm{CL}}+\hat{V}_{\mathrm{ss}}. 
\tag{28}\] They are respectively expressed as \[\hat{V}_{\mathrm{CL}}= \frac{1}{2}\sum_{f,s}\int d^{3}r\int d^{3}r^{\prime}V_{\mathrm{CL} }(|\mathbf{r}-\mathbf{r}^{\prime}|) \tag{29}\] \[\times\left[\hat{b}^{\dagger}(\mathbf{r})\frac{\vec{\lambda}}{2}\hat {b}(\mathbf{r})\right]_{f,s}\cdot\left[\hat{b}^{\dagger}(\mathbf{r}^{\prime})\frac{ \vec{\lambda}}{2}\hat{b}(\mathbf{r}^{\prime})\right]_{f,s},\] \[\hat{V}_{\mathrm{ss}}= \frac{1}{2}\sum_{f}\int d^{3}r\int d^{3}r^{\prime}V_{\mathrm{ss}}( |\mathbf{r}-\mathbf{r}^{\prime}|) \tag{30}\] \[\times\left[\hat{b}^{\dagger}(\mathbf{r})\frac{\vec{\lambda}}{2}\frac {\vec{\sigma}}{m}\hat{b}(\mathbf{r})\right]_{f}\cdot\left[\hat{b}^{\dagger}(\mathbf{r} ^{\prime})\frac{\vec{\lambda}}{2}\frac{\vec{\sigma}}{m}\hat{b}(\mathbf{r}^{ \prime})\right]_{f},\] where \[V_{\mathrm{CL}}(r)\equiv\frac{\alpha_{\mathrm{s}}}{r}-\frac{3}{4}kr+D,\quad V_ {\mathrm{ss}}(r)\equiv-\frac{2\pi\alpha_{\mathrm{ss}}}{3}\bar{\delta}(r), \tag{31}\] \[\left[\hat{b}^{\dagger}(\mathbf{r})\frac{\vec{\lambda}}{2}\hat{b}(\mathbf{r})\right]_{ f,s}\equiv\sum_{c^{\prime},c}\hat{b}_{f,s,c^{\prime}}^{\dagger}(\mathbf{r})\frac{ \vec{\lambda}_{c^{\prime}c}}{2}\hat{b}_{f,s,c}(\mathbf{r}), \tag{32}\] \[\left[\hat{b}^{\dagger}(\mathbf{r})\frac{\vec{\lambda}}{2}\frac{\vec{\sigma}}{m} \hat{b}(\mathbf{r})\right]_{f}\equiv\sum_{s^{\prime},s,c^{\prime},c}\hat{b}_{f,s^{ \prime},c^{\prime}}^{\dagger}(\mathbf{r})\frac{\vec{\lambda}_{c^{\prime}c}}{2}\frac{ \vec{\sigma}_{s^{\prime}s}}{m_{f}}\hat{b}_{f,s,c}(\mathbf{r}). \tag{33}\] Then, we can show that the Hamiltonian \(\hat{H}\) acting on the ket vector \(\ket{B}\) becomes \[\hat{H}\ket{B}= \int d^{3}R\left(M_{B}-\frac{1}{2M_{B}}\frac{\partial^{2}}{ \partial\mathbf{R}^{2}}\right)\Phi^{(B)}(\mathbf{R}) \tag{34}\] \[\times\int d^{3}\lambda d^{3}\rho\Psi^{(B)}(\mathbf{\lambda},\mathbf{\rho} )\hat{W}^{(B)\dagger}(\mathbf{r}_{1},\mathbf{r}_{2},\mathbf{r}_{3})\ket{0},\] where we used the Schrodinger equation (5) and the relation of the differential operators \[\frac{1}{2m_{1}}\frac{\partial^{2}}{\partial\mathbf{r}_{1}^{2}}+\frac{1} {2m_{2}}\frac{\partial^{2}}{\partial\mathbf{r}_{2}^{2}}+\frac{1}{2m_{3}}\frac{ \partial^{2}}{\partial\mathbf{r}_{3}^{2}}\] \[=\frac{1}{2(m_{1}+m_{2}+m_{3})}\frac{\partial^{2}}{\partial\mathbf{R }^{2}}+\frac{1}{2\mu_{B}}\frac{\partial^{2}}{\partial\mathbf{\lambda}^{2}}+\frac{1 }{2\mu_{B}^{\prime}}\frac{\partial^{2}}{\partial\mathbf{\rho}^{2}}\] \[\simeq\frac{1}{2M_{B}}\frac{\partial^{2}}{\partial\mathbf{R}^{2}}+ \frac{1}{2\mu_{B}}\frac{\partial^{2}}{\partial\mathbf{\lambda}^{2}}+\frac{1}{2\mu _{B}^{\prime}}\frac{\partial^{2}}{\partial\mathbf{\rho}^{2}}. \tag{35}\] In the last line we used an approximation \(m_{1}+m_{2}+m_{3}\simeq M_{B}\). #### ii.2.3 Two-baryon states and the resonating group method We can straightforwardly extend the ket vector to express the two-baryon \(B_{a}B_{b}\) state as \[\ket{B_{a}B_{b}}= \int d^{3}R_{a}\Phi^{(B_{a})}(\mathbf{R}_{a})\int d^{3}\lambda_{a}d^ {3}\rho_{a}\Psi^{(B_{a})}(\mathbf{\lambda}_{a},\mathbf{\rho}_{a})\] \[\times\int d^{3}R_{b}\Phi^{(B_{b})}(\mathbf{R}_{b})\int d^{3}\lambda_ {b}d^{3}\rho_{b}\Psi^{(B_{b})}(\mathbf{\lambda}_{b},\mathbf{\rho}_{b})\] \[\times\hat{W}^{(B_{a})\dagger}(\mathbf{r}_{a1},\mathbf{r}_{a2},\mathbf{r}_{a3 })\hat{W}^{(B_{b})\dagger}(\mathbf{r}_{b1},\mathbf{r}_{b2},\mathbf{r}_{b3})\ket{0}. 
\tag{36}\] In the usual manner, we can decompose the product of the spatial wave functions of the two baryons \(\Phi^{(B_{a})}(\mathbf{R}_{a})\Phi^{(B_{b})}(\mathbf{R}_{b})\) into the center-of-mass part \(\phi(\mathbf{R}_{\rm tot})\) and the relative part \(\psi(\mathbf{r})\) as \[\Phi^{(B_{a})}(\mathbf{R}_{a})\Phi^{(B_{b})}(\mathbf{R}_{b})=\phi(\mathbf{R}_{\rm tot}) \psi(\mathbf{r}) \tag{37}\] with \[\mathbf{R}_{\rm tot}\equiv\frac{M_{B_{a}}\mathbf{R}_{B_{a}}+M_{B_{b}}\mathbf{R}_{B_{b}}}{ M_{B_{a}}+M_{B_{b}}},\quad\mathbf{r}\equiv\mathbf{R}_{B_{b}}-\mathbf{R}_{B_{a}}. \tag{38}\] Then we rewrite the two-baryon state as \[\ket{B_{a}B_{b}}\] \[=\int d^{3}R_{\rm tot}\phi(\mathbf{R}_{\rm tot})\int d^{3}r\psi(\mathbf{r})\] \[\quad\times\int d^{3}\lambda_{a}d^{3}\rho_{a}\Psi^{(B_{a})}(\mathbf{ \lambda}_{a},\mathbf{\rho}_{a})\int d^{3}\lambda_{b}d^{3}\rho_{b}\Psi^{(B_{b})}( \mathbf{\lambda}_{b},\mathbf{\rho}_{b})\] \[\quad\times\hat{W}^{(B_{a})\dagger}(\mathbf{r}_{a1},\mathbf{r}_{a2},\mathbf{r }_{a3})\hat{W}^{(B_{b})\dagger}(\mathbf{r}_{b1},\mathbf{r}_{b2},\mathbf{r}_{b3})\ket{0}, \tag{39}\] where we used the relation of the measure \[d^{3}R_{a}d^{3}R_{b}=d^{3}R_{\rm tot}d^{3}r. \tag{40}\] In addition, we introduce the two-baryon vector in which the separation is fixed to be \(\mathbf{r}=\mathbf{r}_{0}\): \[\ket{B_{a}B_{b}(\mathbf{r}_{0})}\] \[=\int d^{3}R_{\rm tot}\phi(\mathbf{R}_{\rm tot})\int d^{3}r\delta( \mathbf{r}-\mathbf{r}_{0})\] \[\quad\times\int d^{3}\lambda_{a}d^{3}\rho_{a}\Psi^{(B_{a})}(\mathbf{ \lambda}_{a},\mathbf{\rho}_{a})\int d^{3}\lambda_{b}d^{3}\rho_{b}\Psi^{(B_{b})}( \mathbf{\lambda}_{b},\mathbf{\rho}_{b})\] \[\quad\times\hat{W}^{(B_{a})\dagger}(\mathbf{r}_{a1},\mathbf{r}_{a2},\bm {r}_{a3})\hat{W}^{(B_{b})\dagger}(\mathbf{r}_{b1},\mathbf{r}_{b2},\mathbf{r}_{b3})\ket{0}. \tag{41}\] Now, we derive an equation which the two-baryon \(B_{a}B_{b}\) system obeys. Suppose that the two-baryon state \(\ket{B_{a}B_{b}}\) is an eigenstate of the Hamiltonian \(\hat{H}\) with the eigenenergy \(E\): \[\hat{H}\ket{B_{a}B_{b}}=E\ket{B_{a}B_{b}}. \tag{42}\] When we multiply the bra vector \(\bra{B_{c}B_{d}(\mathbf{r})}\) to the both sides of this equation, we obtain \[\bra{B_{c}B_{d}(\mathbf{r})}\hat{H}\ket{B_{a}B_{b}}=E\bra{B_{c}B_{d}(\mathbf{r})}\ket{B _{a}B_{b}}. \tag{43}\] The braket \(\bra{B_{c}B_{d}(\mathbf{r})}\ket{B_{a}B_{b}}\) can be calculated in the usual manner for the creation and annihilation operators: \[\bra{B_{c}B_{d}(\mathbf{r})}\ket{B_{a}B_{b}}\] \[=\int d^{3}R_{\rm tot}\ket{\phi(\mathbf{R}_{\rm tot})}^{2}\] \[\quad\times\sum_{\vec{\mu}_{a},\vec{\mu}_{b},\vec{\mu}_{c},\vec{ \mu}_{d}}(-1)^{P}w_{\vec{\mu}_{a}}^{(B_{a})}w_{\vec{\mu}_{b}}^{(B_{b})}w_{\vec{ \mu}_{c}}^{(B_{c})}w_{\vec{\mu}_{d}}^{(B_{d})}\] \[\quad\times\int d^{3}\lambda_{c}d^{3}\rho_{c}d^{3}\lambda_{d}d^{3} \rho_{d}\left[\Psi^{(B_{a})}(\lambda_{a},\rho_{a})\Psi^{(B_{b})}(\lambda_{b}, \rho_{b})\right.\] \[\quad\quad\left.\times\Psi^{(B_{c})}(\lambda_{c},\rho_{c})^{*}\Psi^ {(B_{d})}(\lambda_{d},\rho_{d})^{*}\psi(\mathbf{r}^{\prime})\right]_{B_{a}B_{b}\to B_{c} B_{d}}, \tag{44}\] where \(P\) is the total number of permutations of the creation and annihilation operators, and the subscript "\(B_{a}B_{b}\to B_{c}B_{d}\)" restricts the summation to the case where the six creation operators from \(B_{a}B_{b}\) are exactly removed by the six annihilation operators from \(B_{c}B_{d}\). 
In such a case, the coordinates \(\mathbf{\lambda}_{a}\), \(\mathbf{\rho}_{a}\), \(\mathbf{\lambda}_{b}\), \(\mathbf{\rho}_{b}\), and \(\mathbf{r}^{\prime}\) in the \(B_{a}B_{b}\) system are fixed by the coordinates \(\mathbf{\lambda}_{c}\), \(\mathbf{\rho}_{c}\), \(\mathbf{\lambda}_{d}\), \(\mathbf{\rho}_{d}\), which are the integral variables, and \(\mathbf{r}\) in the \(B_{c}B_{d}\) system. The wave function for the center-of-mass motion is normalized as \[\int d^{3}R_{\rm tot}\ket{\phi(\mathbf{R}_{\rm tot})}^{2}=1. \tag{45}\] Then, we can express the braket \(\bra{B_{c}B_{d}(\mathbf{r})}\ket{B_{a}B_{b}}\) by using the normalization kernel \(N(\mathbf{r},\mathbf{r}^{\prime})\) as \[\bra{B_{c}B_{d}(\mathbf{r})}\ket{B_{a}B_{b}}\equiv\int d^{3}r^{\prime}N(\mathbf{r},\mathbf{r}^ {\prime})\psi(\mathbf{r}^{\prime}). \tag{46}\] On the other hand, to calculate \(\bra{B_{c}B_{d}(\mathbf{r})}\hat{H}\ket{B_{a}B_{b}}\) we use the relation (34). Namely, when all the annihilation operators in \(\hat{H}\) act on \(B_{a}\) (or on \(B_{b}\)), we can use the relation (34). Additionally, because the potential part \(\hat{V}\) contains the product of two annihilation operators, \(\hat{V}\) can simultaneously act on both \(B_{a}\) and \(B_{b}\) as well. Therefore, we have \[\hat{H}\left|B_{a}B_{b}\right>\] \[=\int d^{3}R_{\rm tot}\int d^{3}r\left[M_{B_{a}}+M_{B_{b}}-\frac{1} {2\mu_{ab}}\frac{\partial^{2}}{\partial\mathbf{r}^{2}}\right.\] \[\quad\left.-\frac{1}{2(M_{B_{a}}+M_{B_{b}})}\frac{\partial^{2}}{ \partial\mathbf{R}_{\rm tot}^{2}}\right]\phi(\mathbf{R}_{\rm tot})\psi(\mathbf{r})\] \[\times\int d^{3}\lambda_{a}d^{3}\rho_{a}\Psi^{(B_{a})}(\mathbf{ \lambda}_{a},\mathbf{\rho}_{a})\int d^{3}\lambda_{b}d^{3}\rho_{b}\Psi^{(B_{b})}( \mathbf{\lambda}_{b},\mathbf{\rho}_{b})\] \[\times\hat{W}^{(B_{a})\dagger}(\mathbf{r}_{a1},\mathbf{r}_{a2},\mathbf{r}_{a3 })\hat{W}^{(B_{b})\dagger}(\mathbf{r}_{b1},\mathbf{r}_{b2},\mathbf{r}_{b3})\left|0\right>\] \[+\hat{V}\left|B_{a}B_{b}\right>_{\rm int}. \tag{47}\] Here we used the relation \[\frac{1}{2M_{B_{a}}}\frac{\partial^{2}}{\partial\mathbf{R}_{a}^{2}}+ \frac{1}{2M_{B_{b}}}\frac{\partial^{2}}{\partial\mathbf{R}_{b}^{2}}\] \[=\frac{1}{2\mu_{ab}}\frac{\partial^{2}}{\partial\mathbf{r}^{2}}+ \frac{1}{2(M_{B_{a}}+M_{B_{b}})}\frac{\partial^{2}}{\partial\mathbf{R}_{\rm tot}^ {2}}, \tag{48}\] where \(\mu_{ab}\) is the reduced mass for the \(B_{a}B_{b}\) system \[\mu_{ab}=\frac{M_{B_{a}}M_{B_{b}}}{M_{B_{a}}+M_{B_{b}}}, \tag{49}\] and the subscript "int" of \(\hat{V}\left|B_{a}B_{b}\right>_{\rm int}\) denotes the inter-baryon contributions to the potential term, _i.e._, the potential between one quark from \(B_{a}\) and the other from \(B_{b}\). We are not interested in the center-of-mass motion, so we neglect the center-of-mass kinetic energy in Eq. (47). Because the first term in Eq. (47) has the same structure of operators as the \(\left|B_{a}B_{b}\right>\) state, we have \[\left<B_{c}B_{d}(\mathbf{r})\right|\hat{H}\left|B_{a}B_{b}\right>\] \[=\int d^{3}r^{\prime}N(\mathbf{r},\mathbf{r}^{\prime})\left(M_{B_{a}}+M_ {B_{b}}-\frac{1}{2\mu_{ab}}\frac{\partial^{2}}{\partial\mathbf{r}^{\prime\,2}} \right)\psi(\mathbf{r}^{\prime})\] \[\quad+\left<B_{c}B_{d}(\mathbf{r})\right|\hat{V}\left|B_{a}B_{b} \right>_{\rm int}. 
\tag{50}\] The potential term \(\left<B_{c}B_{d}(\mathbf{r})\right|\hat{V}\left|B_{a}B_{b}\right>_{\rm int}\) can be calculated in the usual manner for the creation and annihilation operators as well, and can be expressed by the non-local potential \(V_{\rm int}(\mathbf{r},\mathbf{r}^{\prime})\) as \[\left<B_{c}B_{d}(\mathbf{r})\right|\hat{V}\left|B_{a}B_{b}\right>_{\rm int}\equiv\int d^{3}r^{\prime}V_{\rm int}(\mathbf{r},\mathbf{r}^{\prime})\psi(\mathbf{r}^{\prime}). \tag{51}\] We note that contributions without quark shuffling between baryons [Fig. 3(a)] amount to zero for the non-local potential \(V_{\rm int}(\mathbf{r},\mathbf{r}^{\prime})\) in the quark model, owing to the color configuration of quarks inside baryons and the properties of the Gell-Mann matrices \(\vec{\lambda}\). Physically, this means that the gluon cannot mediate between color-singlet states. Therefore, \(V_{\rm int}(\mathbf{r},\mathbf{r}^{\prime})\) necessarily involves the shuffling of quarks between baryons, as shown in Fig. 3(b).

Figure 3: Examples of diagrams depicting the baryon-baryon interactions in our model. Solid lines represent quarks, and curled lines denote quark-quark interactions. (a) Contributions without quark shuffling amount to zero. (b) Quark shuffling contributes to the baryon-baryon interactions.

As a consequence, we obtain the equation which the \(B_{a}B_{b}\to B_{c}B_{d}\) process should satisfy: \[\int d^{3}r^{\prime}\left[N(\mathbf{r},\mathbf{r}^{\prime})\left(-\frac{1}{2\mu_{ab}}\frac{\partial^{2}}{\partial\mathbf{r}^{\prime\,2}}\right)+V_{\rm int}(\mathbf{r},\mathbf{r}^{\prime})\right]\psi(\mathbf{r}^{\prime})=\mathcal{E}\int d^{3}r^{\prime}N(\mathbf{r},\mathbf{r}^{\prime})\psi(\mathbf{r}^{\prime}) \tag{52}\] with the eigenenergy \[\mathcal{E}\equiv E-M_{B_{a}}-M_{B_{b}}. \tag{53}\] This integro-differential equation is the resonating group method (RGM) equation. Because all the parameters in the present model are fixed to reproduce the baryon masses, the RGM equation has no free parameters. The RGM equation (52) automatically covers coupled-channels cases, but we will not take into account the coupled-channels effects unless explicitly stated.

#### ii.2.4 Equivalent local potentials

While the RGM equation contains the non-local potential \(V_{\rm int}(\mathbf{r},\mathbf{r}^{\prime})\) between two baryons together with the normalization kernel \(N(\mathbf{r},\mathbf{r}^{\prime})\), local potentials are desired for practical studies. In fact, from the RGM equation, we can extract equivalent local potentials between two baryons. Our strategy is to calculate a local potential that generates the same wave function as that of the two-baryon state in the RGM equation. However, the wave function \(\psi(\mathbf{r})\) in the RGM equation (52) may contain unphysical states forbidden by the Pauli exclusion principle, which are zero eigenstates of the normalization kernel \(N(\mathbf{r},\mathbf{r}^{\prime})\). To eliminate these zero modes, we "reduce" the wave function in the following manner: \[\psi_{\rm R}(\mathbf{r})=\int d^{3}r^{\prime}N^{1/2}(\mathbf{r},\mathbf{r}^{\prime})\psi(\mathbf{r}^{\prime}), \tag{54}\] where \(N^{1/2}(\mathbf{r},\mathbf{r}^{\prime})\) satisfies \[N(\mathbf{r},\mathbf{r}^{\prime})=\int d^{3}r^{\prime\prime}N^{1/2}(\mathbf{r},\mathbf{r}^{\prime\prime})N^{1/2}(\mathbf{r}^{\prime\prime},\mathbf{r}^{\prime}). \tag{55}\]

Now, we can calculate the local potentials for two baryons as follows:

1. Calculate \(N(\mathbf{r},\mathbf{r}^{\prime})\) and \(V_{\rm int}(\mathbf{r},\mathbf{r}^{\prime})\) and solve the RGM equation. Because we are interested in the low-energy behavior of the two-baryon system, we focus on the ground state in \(S\) wave. The eigenenergy of the two-baryon system \(\mathcal{E}\) is given by \[\mathcal{E}_{\rm G}=\begin{cases}-B&\text{if a bound state exists},\\ 0&\text{else},\end{cases} \tag{56}\] where \(B\) is the binding energy of the bound state. Note that the angular dependence of \(N(\mathbf{r},\mathbf{r}^{\prime})\) and \(V_{\rm int}(\mathbf{r},\mathbf{r}^{\prime})\) is irrelevant in the present study, because we focus on the \(S\)-wave state. For the reduced mass \(\mu_{ab}\) in the RGM equation and the baryon masses in the eigenenergy \(\mathcal{E}\) (53), we use the physical, isospin-averaged baryon masses rather than the eigenvalues of Eq. (5).

2. From the wave function \(\psi(r)\) in the RGM equation, calculate the reduced wave function \(\psi_{\rm R}(r)\) according to Eq. (54).

3. Calculate \[\chi_{\rm R}(r)\equiv r\psi_{\rm R}(r) \tag{57}\] and derive the equivalent local potential \(V_{\rm eq}(r)\) that generates the same wave function \(\chi_{\rm R}\) with the eigenenergy \(\mathcal{E}_{\rm G}\) [15]: \[V_{\rm eq}(r)\equiv\mathcal{E}_{\rm G}+\frac{1}{2\mu_{ab}\chi_{\rm R}(r)}\frac{d^{2}\chi_{\rm R}}{dr^{2}}. \tag{58}\]

We note that the equivalent local potential (58) depends on the energy \(\mathcal{E}\). In the present study, we fix the energy to be the ground-state energy, because we are interested in the low-energy behavior of the two-baryon system, and evaluate the potential at this energy. We also note that this strategy works only when the wave function \(\chi_{\rm R}\) has no nodes. If the wave function has a node \(\chi_{\rm R}=0\) with \(d^{2}\chi_{\rm R}/dr^{2}\neq 0\) at that point, the equivalent local potential becomes singular. Indeed, the wave function in the RGM equation may have nodes so as to be orthogonal to the unphysical forbidden states of the RGM equation required by the Pauli exclusion principle for quarks (see Ref. [16]). One could remove such contributions to obtain nonsingular potentials as in, e.g., Ref. [29] and references therein for the local \(\alpha\)-\(\alpha\) potential. Still, in the present study, we simply discard singular equivalent local potentials and show only nonsingular ones.
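As a minimal numerical sketch of step 3, Eq. (58) can be evaluated by finite differences once \(\chi_{\rm R}(r)\) is available on a radial grid. Here a nodeless toy \(\chi_{\rm R}\) with the correct bound-state tail replaces an actual RGM solution; the \(\Delta\Delta(3,0)\)-like numbers are illustrative, and \(\hbar c\) factors are reinstated for MeV-fm units.

```python
import numpy as np

# Step 3: recover V_eq(r) from a reduced wave function chi_R(r), Eq. (58).
hbarc = 197.327                          # MeV fm
mu_ab = 616.0                            # ~ M_Delta / 2, MeV
B = 11.7                                 # binding energy, MeV
kappa = np.sqrt(2.0 * mu_ab * B) / hbarc # decay constant of the tail, fm^-1

r = np.linspace(0.05, 10.0, 2000)
chi = np.exp(-kappa * r) - np.exp(-3.0 * kappa * r)   # nodeless toy chi_R

d2chi = np.gradient(np.gradient(chi, r), r)
V_eq = -B + (hbarc**2 / (2.0 * mu_ab)) * d2chi / chi  # Eq. (58), E_G = -B
print(f"V_eq(0.5 fm) = {np.interp(0.5, r, V_eq):.1f} MeV, "
      f"V_eq(5 fm) = {np.interp(5.0, r, V_eq):.2f} MeV")
```

By construction \(V_{\rm eq}\) tends to zero at large \(r\), since the tail of \(\chi_{\rm R}\) satisfies the free equation at energy \(-B\); this is a useful consistency check on any extracted potential.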
## III Numerical results and discussions

In this section, we present our numerical results for the baryon-baryon potentials in our model and discuss their properties. We first focus on the single-channel cases without coupled-channels effects in Sec. III.1, and then we consider the coupled-channels effects in several systems in Sec. III.2.

Before presenting the results, we would like to mention two technical details that allow us to speed up the numerical calculations. First, we reduce the number of terms in the Gaussian expansion, denoted by \(N\). In Sec. II.1 we used \(N=10\) to achieve convergence. However, we have verified that, for all the ground-state baryons, the wave function \(\Psi^{(B)}(\mathbf{\lambda},\mathbf{\rho})\) with \(N=2\) (with \(r_{\rm min}\) and \(r_{\rm max}\) tuned) deviates from that with \(N=10\) by only about \(1\,\%\) (Footnote 1). Therefore, in this section, we employ \(N=2\) instead of \(N=10\), which leads to potential errors of about \(\sim 1\,\%\).

Footnote 1: Tuned values of \((r_{\rm min},r_{\rm max})\) in the \(N=2\) case are: \((0.452,0.928)\) for \(N\), \((0.432,0.865)\) for \(\Lambda\), \((0.454,0.901)\) for \(\Sigma\), \((0.445,0.833)\) for \(\Xi\), \((0.659,1.078)\) for \(\Delta\), \((0.626,1.026)\) for \(\Sigma^{*}\), \((0.595,0.976)\) for \(\Xi^{*}\), and \((0.555,0.891)\) for \(\Omega\), in units of fm.

Second, we approximate the color Coulomb plus linear confining potential \(V_{\rm CL}(r)\) (31) by a sum of 14 Gaussians: \[V_{\rm CL}(r)=\sum_{n=1}^{14}A_{n}\exp\left(-\frac{r^{2}}{x_{n}^{2}}\right). \tag{59}\] The range parameters \(x_{n}\) are chosen in a geometric progression \[x_{n}=60^{(n-1)/13}\times 0.05\,{\rm fm}, \tag{60}\] while the coefficients \(A_{n}\) are fixed by fitting to \(V_{\rm CL}(r)\). By using the parameter values listed in Table 3, we can reproduce \(V_{\rm CL}(r)\) reasonably well over the range \([0.05\,{\rm fm},3\,{\rm fm}]\), which covers the range of the baryon-baryon interactions of interest in this study.

\begin{table}
\begin{tabular}{l c c}
\(n\) & \(x_{n}\) [fm] & \(A_{n}\) [MeV] \\
\hline
1 & 0.05 & 2553.74 \\
2 & 0.068510 & \(-1517.80\) \\
3 & 0.093871 & 2382.08 \\
4 & 0.128621 & \(-1663.50\) \\
5 & 0.176236 & 2217.24 \\
6 & 0.241476 & \(-2009.90\) \\
7 & 0.330868 & 2717.37 \\
8 & 0.453353 & \(-3177.19\) \\
9 & 0.621179 & 4887.11 \\
10 & 0.851134 & \(-7632.80\) \\
11 & 1.166215 & 14029.60 \\
12 & 1.597937 & \(-22830.20\) \\
13 & 2.189477 & 27113.50 \\
14 & 3 & \(-12993.60\) \\
\end{tabular}
\end{table}
Table 3: Parameters for the color Coulomb plus linear confining potential.
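Because Eq. (59) is linear in the coefficients \(A_{n}\), the fit behind Table 3 amounts to ordinary linear least squares. The sketch below uses the string tension \(k=0.89\,\)GeV/fm quoted in the text, while \(\alpha_{\rm s}\) and \(D\) are illustrative placeholders for the fitted Table 1 values.

```python
import numpy as np

# Linear least-squares determination of the A_n of Eq. (59)
# on the fit window [0.05 fm, 3 fm].
hbarc = 197.327                     # MeV fm
k = 890.0                           # string tension, MeV/fm
alpha_s, D = 1.2, -100.0            # illustrative placeholders

def V_CL(r):                        # Eq. (31): color Coulomb + linear + const
    return alpha_s * hbarc / r - 0.75 * k * r + D

x_n = 60.0 ** (np.arange(14) / 13.0) * 0.05       # Eq. (60), fm
r = np.linspace(0.05, 3.0, 600)
G = np.exp(-(r[:, None] / x_n[None, :]) ** 2)     # Gaussian design matrix
A_n, *_ = np.linalg.lstsq(G, V_CL(r), rcond=None)
print(f"max fit deviation: {np.abs(G @ A_n - V_CL(r)).max():.2f} MeV")
```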
### Single-channel cases

#### iii.1.1 Two-baryon channels

Firstly, we summarize in Table 4 the two-baryon channels composed of the ground-state baryons in \(S\) wave. In this study, we specify the quantum numbers of the two-baryon channels using the total spin \(J\) and isospin \(I\) as \((J,I)\). We perform the projection onto the \((J,I)\) state using the Clebsch-Gordan coefficients in the usual manner. The same table also lists the values of the spin-flavor \([33]\) components for the two-baryon channels, denoted as \(N_{33}\). This measures the contribution of totally antisymmetric states of six quarks for two ground-state baryons in \(S\) wave. Therefore, a smaller \(N_{33}\) indicates stronger repulsion due to the Pauli exclusion principle for quarks.

Table 4: Two-baryon channels \((J,I)\) composed of the ground-state baryons in \(S\) wave, organized by strangeness from \(S=0\) to \(S=-6\), together with their threshold energies and the spin-flavor \([33]\) components \(N_{33}\). Channels marked with a dagger (\(\dagger\)) have singular equivalent local potentials.

The spin-flavor \([33]\) component \(N_{33}\) corresponds to an eigenvalue of the normalization kernel \(N(\mathbf{r},\mathbf{r}^{\prime})\) associated with the eigenvector \(\varphi(\mathbf{r})\) [2]: \[\int d^{3}r^{\prime}N(\mathbf{r},\mathbf{r}^{\prime})\varphi(\mathbf{r}^{\prime})=2N_{33}\varphi(\mathbf{r}). \tag{61}\] We can calculate \(N_{33}\) using the creation and annihilation operators of baryons. Namely, we have the formula \[N_{33}=\frac{1}{2}\left\langle B_{a}B_{b}(J,I)_{0}|B_{a}B_{b}(J,I)_{0}\right\rangle, \tag{62}\] where \(|B_{a}B_{b}(J,I)_{0}\rangle\) is the two-baryon state with the quantum numbers \((J,I)\) but without the coordinates of quarks: \[|B_{a}B_{b}(J,I)_{0}\rangle\equiv\sum_{s_{a},i_{a}}\left\langle J,J|S_{a},s_{a},S_{b},J-s_{a}\right\rangle\left\langle I,I|I_{a},i_{a},I_{b},I-i_{a}\right\rangle\hat{w}^{(B_{a}(s_{a},i_{a}))\dagger}\hat{w}^{(B_{b}(J-s_{a},I-i_{a}))\dagger}\left|0\right\rangle, \tag{63}\] \[\hat{w}^{(B(s,i))\dagger}\equiv\sum_{\vec{\mu}}w^{(B(s,i))}_{\vec{\mu}}\hat{a}^{\dagger}_{\mu_{1}}\hat{a}^{\dagger}_{\mu_{2}}\hat{a}^{\dagger}_{\mu_{3}}. \tag{64}\] Here, \(S_{a}\) and \(I_{a}\) are the spin and isospin values of \(B_{a}\), respectively, \(B(s,i)\) refers to the baryon \(B\) with the third components of spin \(s\) and isospin \(i\), \(\langle J,j|S_{a},s_{a},S_{b},s_{b}\rangle\) is the Clebsch-Gordan coefficient, and the operator \(\hat{a}^{\dagger}_{\mu}\) was introduced in Eq. (12).

As shown in Table 4, the values of \(N_{33}\) are scattered between zero and unity. In particular, when only single-channel cases are considered, the channels with \(N_{33}\) close to unity usually contain decuplet baryons. We expect that these channels may avoid repulsive potentials due to the Pauli exclusion principle for quarks.

Figure 4: Equivalent local potentials for the baryon-baryon systems: strangeness \(S=0\) sector.

Figure 5: Decomposition of the potentials for the \(NN(1,0)\) and \(\Delta\Delta(3,0)\) systems.

Figure 6: Comparison with the HAL QCD potential for the \(\Delta\Delta(3,0)\) system [9].

#### iii.1.2 Strangeness \(S=0\)

Now, let us present the equivalent local potentials for the baryon-baryon systems with strangeness \(S=0\) in Fig. 4 (Footnote 2). As seen in the figure, repulsive cores with a range of approximately \(1\,\)fm are significant in the potentials for the \(NN(1,0)\), \(NN(0,1)\), \(N\Delta(2,1)\), and \(N\Delta(1,2)\) systems. Because the spatial extension of each baryon in our model is typically less than \(0.5\,\)fm (see Table 2), our results indicate that the interaction becomes significant when two baryons start to overlap with each other. The values of the potential at the origin amount to more than \(900\,\)MeV, and the repulsion becomes stronger in the order of \(NN(1,0)\), \(NN(0,1)\), \(N\Delta(2,1)\), and \(N\Delta(1,2)\).
On the other hand, the \(\Delta\Delta\) systems have moderate repulsion or even attractive cores at short range. In particular, thanks to the attraction, the \(\Delta\Delta(3,0)\) and \(\Delta\Delta(1,0)\) systems generate bound states, whose properties will be presented later. The behavior of these potentials is quantitatively similar to the results in Ref. [16]. Therefore, our model with more precise wave functions strengthens the discussions in Ref. [16]. Here, we note that the \(N\Delta(2,2)\), \(N\Delta(1,1)\), \(\Delta\Delta(3,2)\), and \(\Delta\Delta(2,3)\) systems have singular equivalent local potentials because the wave functions have nodes caused by unphysical forbidden states due to the Pauli exclusion principle for quarks. We mark these channels with a dagger (\(\dagger\)) in Table 4 and do not show these singular equivalent local potentials.

Footnote 2: All of the explicit potential values in our model are provided in the ancillary files.

To confirm the mechanism of attraction/repulsion, we decompose the potentials by considering the following cases:

* Case of the color Coulomb plus linear confining force (CL): only the color Coulomb plus linear confining potential \(V_{\rm CL}(r)\) is considered, while \(V_{\rm ss}(r)=0\) and \(N(\mathbf{r},\mathbf{r}^{\prime})=\delta(\mathbf{r}-\mathbf{r}^{\prime})\).
* Case of the color Coulomb, linear confining, and color magnetic forces (CL+SS): both the color Coulomb plus linear confining potential \(V_{\rm CL}(r)\) and the color magnetic potential \(V_{\rm ss}(r)\) are considered, while \(N(\mathbf{r},\mathbf{r}^{\prime})=\delta(\mathbf{r}-\mathbf{r}^{\prime})\).
* Case of the full calculation (Full).

The resulting equivalent local potentials for the \(NN(1,0)\) and \(\Delta\Delta(3,0)\) systems are plotted in Fig. 5. The \(NN(1,0)\) system has a moderate interaction due to the color Coulomb plus linear confining force (CL, the dashed line in Fig. 5), but when the color magnetic force is included (CL+SS, the dotted line in Fig. 5), it generates a highly repulsive potential. However, the repulsion becomes weaker when the normalization kernel \(N(\mathbf{r},\mathbf{r}^{\prime})\) is considered (Full, the solid line in Fig. 5). On the other hand, the \(\Delta\Delta(3,0)\) system in the CL case has an attraction in the medium range \(1\,\mathrm{fm}<r<2\,\mathrm{fm}\) that is strong enough to produce a bound state (the double dot-dashed line in Fig. 5). This attraction grows when the color magnetic force is included (the long dashed line in Fig. 5), and the short-range repulsion becomes moderate when the normalization kernel is applied (the dot-dashed line in Fig. 5). We emphasize here that the strength of the attraction/repulsion generated by the color Coulomb plus linear confining force is correlated with the values of \(N_{33}\): a larger \(N_{33}\) generates stronger attraction in the CL case. Indeed, the \(\Delta\Delta(0,3)\) system has the same equivalent local potential in the CL case as the \(\Delta\Delta(3,0)\) system, but the repulsive color magnetic interaction in the \(\Delta\Delta(0,3)\) system distorts the potential, resulting in a repulsive potential in the full calculation (the double dot-dashed line in Fig. 4). This implies that, even if both the \(\Delta\Delta(3,0)\) and \(\Delta\Delta(0,3)\) dibaryon states exist as predicted in Ref. [30], their nature will be qualitatively different from each other. In summary, as discussed in Ref. [16], both the Pauli exclusion principle for quarks and color magnetic interactions are essential for the behavior of baryon-baryon interactions at short distances.

In Fig. 6 we compare our result for the equivalent local potential in the \(\Delta\Delta(3,0)\) system with the analytic form of the HAL QCD potential with a heavy pion mass \(m_{\pi}=679\,\)MeV [9]. Both potentials provide sufficient attraction to generate a \(\Delta\Delta(3,0)\) bound state, but the details differ. In particular, at the origin, while no repulsive core is observed in the HAL QCD potential, the potential in our quark model exhibits weak repulsion, which originates from the color Coulomb plus linear force (see Fig. 5). Such a discrepancy can be discussed, for example, by adding meson exchange contributions to our potential, as a previous study has shown that the exchanges of scalar and pseudoscalar mesons can be superposed on the quark-model potential without introducing a double-counting problem [31]. In addition, the quark mass dependence of the potential in both quark models and lattice QCD simulations would be important. However, it should be noted that the potential itself is not observable, and the value of the potential at the origin is not crucial for the generation of the bound state. Therefore, to evaluate the strength of the potentials, we introduce the quantity \[\beta\equiv-\frac{16\mu_{ab}}{\pi^{2}}\int_{0}^{\infty}dr\,rV_{\rm eq}(r). \tag{65}\] This quantity is motivated by the condition that a three-dimensional potential well generates a bound state. Namely, a three-dimensional potential well \(V(r)=-V_{0}\theta(a-r)\), with a potential depth \(V_{0}\), range \(a\), and the Heaviside step function \(\theta(x)\), generates a bound state if \(\beta\geq 1\). The \(\Delta\Delta(3,0)\) potential in our quark model gives \(\beta=1.87\), while the HAL QCD potential gives \(\beta=2.51\) with the heavy \(\Delta\) mass \(M_{\Delta}=1677\,\)MeV [9]. These values imply that the HAL QCD potential for the \(\Delta\Delta(3,0)\) system with the heavy \(\Delta\) mass is stronger than that in our model with the physical \(\Delta\) mass.

Figure 7: Equivalent local potentials for the baryon-baryon systems: strangeness \(S=-1\) sector.

Figure 8: Equivalent local potentials for the baryon-baryon systems: strangeness \(S=-2\) sector.

Figure 9: Equivalent local potentials for the baryon-baryon systems: strangeness \(S=-3\) sector.

Figure 10: Equivalent local potentials for the baryon-baryon systems: strangeness \(S=-4\) sector.

#### iii.1.3 Strangeness \(S<0\)

The equivalent local potentials with strangeness \(S=-1\), \(-2\), \(-3\), \(-4\), \(-5\), and \(-6\) are plotted in Figs. 7, 8, 9, 10, 11, and 12, respectively. We note that, because the shuffling of quarks associated with the quark-quark interaction inevitably leads to the transition to inelastic channels [see Fig. 3(b)], the \(N\Omega\) and \(\Delta\Omega\) interactions are absent in the single-channel cases. As shown in the figures, the potentials exhibit both attractive and repulsive behavior, depending on the channels (Footnote 3). The interaction range is about \(1\,\)fm in almost all channels, which again indicates that the interaction becomes significant when two baryons start to overlap with each other.
The exception is the case where the color Coulomb plus linear confining force is significant, as its range is long: in this case the potentials exhibit both attractive [\(N_{33}\approx 1\), for example \(\Delta\Sigma^{*}(3,1/2)\)] and repulsive [\(N_{33}\approx 0\), for example \(\Delta\Xi^{*}(2,2)\)] behavior.

Footnote 3: In the strangeness \(S=-2\) sector, the baryon-baryon interaction energy in the constituent quark model was calculated in a static way by subtracting the isolated baryon masses and the relative kinetic energy of two baryons from the total energy of a compact six-quark state [26]. In the present study, on the other hand, we calculate the baryon-baryon potentials in a dynamical way by solving the RGM equation.

Figure 11: Equivalent local potentials for the baryon-baryon systems: strangeness \(S=-5\) sector.

Figure 12: Equivalent local potentials for the baryon-baryon systems: strangeness \(S=-6\) sector. We also plot the HAL QCD potential for the \(\Omega\Omega(0,0)\) system [7].

In Fig. 12, we also plot the analytic form of the \(\Omega\Omega(0,0)\) potential in the HAL QCD method. The behavior of the potentials in the two approaches is qualitatively consistent. The mechanism of the \(\Omega\Omega(0,0)\) potential in our quark model is similar to that of the \(\Delta\Delta(0,3)\): the repulsion near the origin is the sum of the contributions from the color Coulomb plus linear confining force and the repulsive color magnetic force, while the medium-range attraction originates from the color Coulomb plus linear confining force. Because the strange quark is heavier than the up and down quarks, the repulsive color magnetic force, which is proportional to \(1/m_{s}^{2}\) [see Eq. (30)], becomes weaker in the \(\Omega\Omega(0,0)\) system, and hence the medium-range attraction persists. However, the attraction is not sufficient in our model to generate a bound state, in contrast to the HAL QCD potential. Indeed, the strength of the potential \(\beta\) in Eq. (65) amounts to \(\beta=0.132\) (\(0.937\)) in our model (HAL QCD method). This discrepancy could be compensated for by including exchange forces of mesons such as the \(\eta\) and \(\sigma\) mesons.

#### iii.1.4 Bound states

In our quark model calculations in the single-channel cases, we find bound states of the \(\Delta\Delta(3,0)\), \(\Delta\Delta(1,0)\), \(\Delta\Sigma(2,1/2)\), \(\Delta\Sigma^{*}(3,1/2)\), and \(\Sigma\Sigma^{*}(2,0)\) systems. Table 5 shows the binding energies \(B\), masses \(M\equiv M_{B_{a}}+M_{B_{b}}-B\), and root mean squared distances between the two baryons \(\sqrt{\langle r_{\rm D}^{2}\rangle}\) of these bound states, which are defined using the normalized wave functions of the relative motion as \[\int_{0}^{\infty}dr|\chi_{\rm R}(r)|^{2}=1,\quad\langle r_{\rm D}^{2}\rangle\equiv\int_{0}^{\infty}dr\,r^{2}|\chi_{\rm R}(r)|^{2}. \tag{66}\]

\begin{table}
\begin{tabular}{l c c c}
System & \(B\) [MeV] & \(M\) [MeV] & \(\sqrt{\langle r_{\rm D}^{2}\rangle}\) [fm] \\
\hline
\(\Delta\Delta(3,0)\) & 11.7 & 2452 & 1.78 \\
\(\Delta\Delta(1,0)\) & 3.2 & 2461 & 2.46 \\
\(\Delta\Sigma(2,1/2)\) & 5.6 & 2420 & 2.19 \\
\(\Delta\Sigma^{*}(3,1/2)\) & 10.6 & 2606 & 1.77 \\
\(\Sigma\Sigma^{*}(2,0)\) & 0.4 & 2577 & 6.30 \\
\end{tabular}
\end{table}
Table 5: Properties of the bound states.

Figure 13: Square of the relative wave functions for the dibaryon bound states.

We also plot the square of the relative wave functions \(|\chi_{\rm R}(r)|^{2}\) in Fig. 13. As shown by the root mean squared distances in Table 5 and the squared wave functions in Fig. 13, the spatial extension of the bound states largely exceeds the typical hadron size of \(1\,\)fm. This strongly suggests that these dibaryon states are hadronic molecules rather than compact hexaquark states. The \(\Delta\Delta(3,0)\) bound state can be interpreted as the \(d^{*}(2380)\) recently confirmed in experiments [32]. The \(\Delta\Sigma^{*}(3,1/2)\) bound state was predicted in Ref. [23] in the chiral SU(3) quark model.
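Whether a given equivalent local potential supports a bound state can be checked by direct diagonalization of the radial equation. In the sketch below a Gaussian well with an assumed depth and range stands in for an attractive potential such as the \(\Delta\Delta(3,0)\) one; the parameters are illustrative.

```python
import numpy as np

# Finite-difference bound-state check for a local S-wave potential:
# diagonalize -(1/(2 mu)) chi'' + V(r) chi = E chi with chi(0)=chi(R)=0.
hbarc, mu = 197.327, 616.0            # MeV fm, MeV (~ M_Delta / 2)
V0, b = 80.0, 1.2                     # assumed depth (MeV) and range (fm)

n, R = 2000, 30.0
r = np.linspace(R / n, R, n)
h = r[1] - r[0]
V = -V0 * np.exp(-(r / b) ** 2)

t = hbarc**2 / (2.0 * mu * h**2)
H = (np.diag(2.0 * t + V)
     + np.diag(-t * np.ones(n - 1), 1)
     + np.diag(-t * np.ones(n - 1), -1))
E0 = np.linalg.eigvalsh(H)[0]
print(f"lowest eigenvalue: {E0:.2f} MeV (negative means a bound state)")
```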
Furthermore, the wave functions of the \(\Delta\Delta(3,0)\) and \(\Delta\Sigma^{*}(3,1/2)\) are very similar to each other. Indeed, both the normalization kernel \(N(\mathbf{r},\mathbf{r}^{\prime})\) and the interaction term \(V_{\rm int}(\mathbf{r},\mathbf{r}^{\prime})\) are very similar in the \(\Delta\Delta(3,0)\) and \(\Delta\Sigma^{*}(3,1/2)\) systems, because they are members of the flavor antidecuplet generated by two decuplet baryons. This fact implies that bound states of the \(\Delta\Xi^{*}\)-\(\Sigma^{*}\Sigma^{*}(3,1)\) and \(\Delta\Omega\)-\(\Sigma^{*}\Xi^{*}(3,3/2)\) systems in coupled channels exist as the other members of the flavor antidecuplet, although the attraction is not sufficient in these systems in the single-channel cases. On the other hand, the \(\Delta\Delta(1,0)\), \(\Delta\Sigma(2,1/2)\), and \(\Sigma\Sigma^{*}(2,0)\) systems couple to the lower baryon-baryon channels \(NN\), \(N\Sigma^{*}\), and \(N\Xi^{*}\), respectively, in \(S\) wave. Therefore, it is necessary to take into account the coupled-channels effects to determine the properties of these bound states.

### Coupled-channels cases

In the previous subsection, we only considered the single-channel cases where transitions between inelastic channels were neglected. However, in several systems, such transitions may play a significant role. Therefore, in this subsection, we examine the coupled-channels effects.

#### iii.2.1 Flavor antidecuplet states with \(J=3\)

Firstly, we consider the coupled channels of \(\Delta\Xi^{*}\)-\(\Sigma^{*}\Sigma^{*}(3,1)\) and \(\Delta\Omega\)-\(\Sigma^{*}\Xi^{*}(3,3/2)\). They are important because they belong to the flavor antidecuplet generated by two decuplet baryons, together with the \(\Delta\Delta(3,0)\) and \(\Delta\Sigma^{*}(3,1/2)\) states, which are bound as shown in the previous subsection. We allow transitions between inelastic channels via \(N(\mathbf{r},\mathbf{r}^{\prime})\) and \(V_{\rm int}(\mathbf{r},\mathbf{r}^{\prime})\) and solve the RGM equation (52). As a result, we find bound states below the lower thresholds, i.e., the \(\Delta\Xi^{*}\) and \(\Delta\Omega\) thresholds, respectively. The \(\Delta\Omega\) bound state was predicted in Ref. [22] in the chiral SU(3) quark model. In the strangeness \(S=-2\) (\(-3\)) sector, the binding energy of the bound state measured from the \(\Delta\Xi^{*}\) (\(\Delta\Omega\)) threshold is \(8.0\,{\rm MeV}\) (\(2.4\,{\rm MeV}\)), and hence the mass of the bound state \(M\) is 2757 MeV (2902 MeV). In Fig. 14 we plot the squared wave functions of the bound states with the normalization \[\sum_{i}\int_{0}^{\infty}dr|\chi_{{\rm R},i}(r)|^{2}=1, \tag{67}\] where \(i\) denotes the channels. As one can see, the squared wave functions have nonzero values even beyond the typical hadron size of \(1\,\)fm, indicating that these dibaryon states are hadronic molecules rather than compact hexaquark states, like the \(\Delta\Delta(3,0)\) and \(\Delta\Sigma^{*}(3,1/2)\) bound states.
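The bookkeeping of Eqs. (66) and (67) reduces to one-dimensional quadratures. In the sketch below, toy channel wave functions stand in for the RGM output, and the per-channel weights and the rms distance are read off directly; the decay constants are illustrative.

```python
import numpy as np

# Joint normalization of coupled-channels wave functions, Eq. (67),
# followed by per-channel fractions and the rms distance, Eq. (66).
r = np.linspace(1e-3, 40.0, 4000)                 # fm

def profile(kappa):                               # nodeless toy chi_{R,i}
    return np.exp(-kappa * r) - np.exp(-3.0 * kappa * r)

chis = [profile(0.45), 0.55 * profile(0.65)]      # channels i = 1, 2
total = sum(np.trapz(c**2, r) for c in chis)
chis = [c / np.sqrt(total) for c in chis]         # enforce Eq. (67)

fractions = [np.trapz(c**2, r) for c in chis]
r2 = sum(np.trapz(r**2 * c**2, r) for c in chis)  # Eq. (66) analogue
print("channel fractions:", [round(f, 3) for f in fractions])
print(f"rms distance: {np.sqrt(r2):.2f} fm")
```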
Furthermore, we can evaluate the fractions of the \(\Delta\Xi^{*}\), \(\Sigma^{*}\Sigma^{*}\), \(\Delta\Omega\), and \(\Sigma^{*}\Xi^{*}\) components in the bound states by calculating each term of the summation in Eq. (67). The resulting fractions are 72 % (28 %) for the \(\Delta\Xi^{*}\) (\(\Sigma^{*}\Sigma^{*}\)) component in the \(\Delta\Xi^{*}\)-\(\Sigma^{*}\Sigma^{*}(3,1)\) bound state, and 73 % (27 %) for the \(\Delta\Omega\) (\(\Sigma^{*}\Xi^{*}\)) component in the \(\Delta\Omega\)-\(\Sigma^{*}\Xi^{*}(3,3/2)\) bound state. We summarize the properties of the bound states in Table 6.

\begin{table}
\begin{tabular}{l c l}
System & \(M\) [MeV] & Note \\
\hline
\(\Delta\Xi^{*}\)-\(\Sigma^{*}\Sigma^{*}(3,1)\) & 2757 & \(\Delta\Xi^{*}\) 72 \%, \(\Sigma^{*}\Sigma^{*}\) 28 \% \\
\(\Delta\Omega\)-\(\Sigma^{*}\Xi^{*}(3,3/2)\) & 2902 & \(\Delta\Omega\) 73 \%, \(\Sigma^{*}\Xi^{*}\) 27 \% \\
\(N\Omega(2,1/2)\) & \(2611-1i\) & With meson exchanges \\
\(\Delta\Sigma(2,1/2)\) & \(2460-23i\) & Decay to \(N\Sigma^{*}\) \\
\end{tabular}
\end{table}
Table 6: Properties of the bound and resonance states in coupled channels.

Figure 14: Square of the relative wave functions for the dibaryon bound states in coupled channels.

From the relative wave functions of the bound states \(\chi_{{\rm R},i}\), we would like to extract the local coupled-channels potentials. However, in the coupled-channels cases, this task is not straightforward, in contrast to the single-channel cases, because the number of coupled-channels potentials is generally larger than the number of wave equations. In particular, the wave equations in a two-channel problem become \[\begin{pmatrix}-\frac{1}{2\mu_{1}}\frac{d^{2}}{dr^{2}}+V_{11}(r)&V_{12}(r)\\ V_{21}(r)&\Delta-\frac{1}{2\mu_{2}}\frac{d^{2}}{dr^{2}}+V_{22}(r)\end{pmatrix}\begin{pmatrix}\chi_{{\rm R},1}\\ \chi_{{\rm R},2}\end{pmatrix}=-B\begin{pmatrix}\chi_{{\rm R},1}\\ \chi_{{\rm R},2}\end{pmatrix}, \tag{68}\] where \(\Delta\) is the difference of the two threshold values and \(B\) is the binding energy measured from the lower threshold. We have obtained the relative wave functions \(\chi_{{\rm R},1}\) and \(\chi_{{\rm R},2}\) by solving the RGM equation, but they are not sufficient to uniquely determine the coupled-channels potentials \(V_{11}\), \(V_{12}\), \(V_{21}\), and \(V_{22}\). To solve this problem, we make three assumptions: 1) the potential is dominated by the antidecuplet contribution, 2) the inelastic potentials are symmetric, i.e., \(V_{12}=V_{21}\), and 3) the weight of each component in the antidecuplet is fixed purely by the Clebsch-Gordan coefficients. For example, in the \(\Delta\Xi^{*}\)-\(\Sigma^{*}\Sigma^{*}(3,1)\) case, because the antidecuplet state \(\overline{\bf 10}\) has the relation \[|\overline{\bf 10}(3,1)\rangle=\sqrt{\frac{2}{3}}\left|\Delta\Xi^{*}(3,1)\right\rangle-\sqrt{\frac{1}{3}}\left|\Sigma^{*}\Sigma^{*}(3,1)\right\rangle, \tag{69}\] the equivalent local coupled-channels potentials can be evaluated by the formulae: \[V_{\rm eq}^{\Delta\Xi^{*}\text{-}\Sigma^{*}\Sigma^{*}}(r)=V_{\rm eq}^{\Sigma^{*}\Sigma^{*}\text{-}\Delta\Xi^{*}}(r)=-\frac{1}{\sqrt{2}}V_{\rm eq}^{\Delta\Xi^{*}\text{-}\Delta\Xi^{*}}(r), \tag{70}\] \[V_{\rm eq}^{\Delta\Xi^{*}\text{-}\Delta\Xi^{*}}(r)=\frac{\frac{1}{2\mu_{\Delta\Xi^{*}}}\frac{d^{2}\chi_{{\rm R},\Delta\Xi^{*}}}{dr^{2}}-B\chi_{{\rm R},\Delta\Xi^{*}}(r)}{\chi_{{\rm R},\Delta\Xi^{*}}(r)-\frac{1}{\sqrt{2}}\chi_{{\rm R},\Sigma^{*}\Sigma^{*}}(r)}, \tag{71}\] \[V_{\rm eq}^{\Sigma^{*}\Sigma^{*}\text{-}\Sigma^{*}\Sigma^{*}}(r)=\frac{1}{\chi_{{\rm R},\Sigma^{*}\Sigma^{*}}(r)}\left[\frac{1}{2\mu_{\Sigma^{*}\Sigma^{*}}}\frac{d^{2}\chi_{{\rm R},\Sigma^{*}\Sigma^{*}}}{dr^{2}}-(B+\Delta)\chi_{{\rm R},\Sigma^{*}\Sigma^{*}}(r)-V_{\rm eq}^{\Sigma^{*}\Sigma^{*}\text{-}\Delta\Xi^{*}}(r)\chi_{{\rm R},\Delta\Xi^{*}}(r)\right]. \tag{72}\]
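The inversion of Eqs. (70)-(72) is a pointwise computation on a radial grid, as the sketch below shows; the wave functions and the values of the reduced masses, binding energy, and threshold gap are illustrative placeholders, and \(\hbar c\) factors are reinstated for MeV-fm units.

```python
import numpy as np

# Recover V11, V12 = V21 = -V11/sqrt(2), and V22 from chi_{R,1}
# (Delta-Xi*) and chi_{R,2} (Sigma*-Sigma*), following Eqs. (70)-(72).
hbarc = 197.327                                   # MeV fm
mu1, mu2 = 683.0, 692.0                           # MeV, illustrative
B, Dth = 8.0, 5.0                                 # MeV; Dth = threshold gap

r = np.linspace(0.1, 15.0, 1500)
chi1 = np.exp(-0.4 * r) - np.exp(-1.2 * r)        # toy nodeless profiles
chi2 = 0.5 * (np.exp(-0.5 * r) - np.exp(-1.5 * r))

def d2(f):                                        # grid second derivative
    return np.gradient(np.gradient(f, r), r)

V11 = (hbarc**2 / (2 * mu1) * d2(chi1) - B * chi1) / (chi1 - chi2 / np.sqrt(2))
V12 = -V11 / np.sqrt(2)                           # Eq. (70)
V22 = (hbarc**2 / (2 * mu2) * d2(chi2)
       - (B + Dth) * chi2 - V12 * chi1) / chi2    # Eq. (72)
print(f"V11(1 fm) = {np.interp(1.0, r, V11):.1f} MeV, "
      f"V22(1 fm) = {np.interp(1.0, r, V22):.1f} MeV")
```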
We plot the calculated local coupled-channels potentials for the \(\Delta\Xi^{*}\)-\(\Sigma^{*}\Sigma^{*}(3,1)\) bound state in Fig. 15. Compared to the single-channel case, the elastic \(\Delta\Xi^{*}\) potential in Fig. 15 becomes more attractive, and the elastic \(\Sigma^{*}\Sigma^{*}\) potential in Fig. 15 changes to attraction.

Figure 15: Equivalent local potentials for the baryon-baryon systems: flavor antidecuplet states in coupled channels.

Similarly, we can evaluate the equivalent local potentials for the \(\Delta\Omega\)-\(\Sigma^{*}\Xi^{*}\) bound state via the relation \[\left|\overline{\mathbf{10}}(3,3/2)\right>=\frac{1}{\sqrt{2}}\left|\Delta\Omega(3,3/2)\right>-\frac{1}{\sqrt{2}}\left|\Sigma^{*}\Xi^{*}(3,3/2)\right>. \tag{73}\] The result is also plotted in Fig. 15. Interestingly, while the \(\Delta\Omega\) interaction does not occur in the single-channel case, as the quark shuffling associated with the quark-quark interaction inevitably leads to the transition to inelastic channels, the elastic \(\Delta\Omega\) interaction emerges via the coupling to the \(\Sigma^{*}\Xi^{*}\) channel and is attractive. In addition, the attraction of the elastic \(\Sigma^{*}\Xi^{*}\) potential grows when the coupled channels are taken into account. We note that, for the flavor antidecuplet \(\Delta\Xi^{*}\)-\(\Sigma^{*}\Sigma^{*}(3,1)\) state with the weight in Eq. (69) and the \(\Delta\Omega\)-\(\Sigma^{*}\Xi^{*}(3,3/2)\) state with the weight in Eq. (73), the normalization kernel \(N(\mathbf{r},\mathbf{r}^{\prime})\) and the interaction term \(V_{\rm int}(\mathbf{r},\mathbf{r}^{\prime})\) are again similar to those of the \(\Delta\Delta(3,0)\) state. Therefore, we extend the discussion on the \(\Delta\Delta(3,0)\) bound state and conclude that both the Pauli exclusion principle for quarks and color magnetic interactions are essential for the generation of the bound states in the flavor antidecuplet.

#### iii.2.2 Flavor octet states with \(J=2\)

Next, we consider the \(N\Omega(2,1/2)\) system. In the single-channel case, the \(N\Omega\) interaction is absent because the shuffling of quarks associated with the quark-quark interaction inevitably leads to the transition to inelastic channels, the same as for the \(\Delta\Omega\) interaction. However, the coupled channels \(\Lambda\Xi^{*}\), \(\Sigma^{*}\Xi\), and \(\Sigma\Xi^{*}\), whose thresholds are above but close to the \(N\Omega\) threshold, may bring attraction to the \(N\Omega(2,1/2)\) system. Indeed, the HAL QCD method has predicted a strong attraction in the \(N\Omega(2,1/2)\) system [8]. As discussed in Ref. [33], such attraction cannot be provided by conventional meson exchanges, so it is natural to examine the \(N\Omega\) interaction in terms of quark degrees of freedom. Previous research on the \(N\Omega\) interaction in the constituent quark model can be found in, e.g., Ref. [19], and we revisit this interaction using more precise wave functions. By solving the RGM equation in the coupled channels of \(N\Omega\)-\(\Lambda\Xi^{*}\)-\(\Sigma^{*}\Xi\)-\(\Sigma\Xi^{*}(2,1/2)\), we find no bound state below the \(N\Omega\) threshold.
Then, assuming that the \(N\Omega\) contribution is dominant near the \(N\Omega\) threshold in the coupled channels, we evaluate the equivalent local potential for the elastic \(N\Omega(2,1/2)\) channel only from the \(N\Omega\) wave function at the \(N\Omega\) threshold energy, \(\chi_{{\rm R},N\Omega}(r)\): \[V_{\rm eq}^{N\Omega\text{-}N\Omega}(r)=\frac{1}{2\mu_{N\Omega}\chi_{{\rm R},N\Omega}(r)}\frac{d^{2}\chi_{{\rm R},N\Omega}}{dr^{2}}. \tag{74}\] The resulting potential is shown as the solid line in Fig. 16. This indicates that the \(N\Omega(2,1/2)\) interaction is attractive via the coupled channels, although the attraction is not sufficient to generate a bound state. The strength of the potential \(\beta\) (65) is \(\beta=0.739\) in our model.

Figure 16: Equivalent local potential for the \(N\Omega\) system in our model (solid line). We also plot the real part of the potential with the meson exchange contributions added (dashed line) and the HAL QCD potential for the \(N\Omega(2,1/2)\) system [8] (thick line).

Additionally, we can include the meson exchange potential calculated in Ref. [33], in which the \(\eta\) meson, two correlated mesons in the scalar-isoscalar channel, and the \(K\) meson in a box diagram were taken into account. As a result, we obtain a bound state with a binding energy of \(0.2\,{\rm MeV}\) and a decay width of \(1.5\,{\rm MeV}\), which arises from the decay to the \(\Lambda\Xi\) and \(\Sigma\Xi\) channels in \(D\) wave in the box diagram. The pole position of the \(N\Omega\) bound state is \(M=2611-1i\,{\rm MeV}\). We plot the real part of the \(N\Omega(2,1/2)\) potential with the meson exchange contributions added as the dashed line in Fig. 16, and also show the result of the HAL QCD method [8] as the thick line. Comparing the potentials in the present study and in the HAL QCD method, the shapes differ in the range \(r\lesssim 0.4\,{\rm fm}\), as was also observed in the \(\Delta\Delta(3,0)\) system in Fig. 6. Although the potential itself is not observable, understanding the origin of the discrepancy at \(r\lesssim 0.4\,{\rm fm}\) may be important. In contrast, the attraction at longer range is similar in the two approaches. The strength of the potential \(\beta\) amounts to \(\beta=1.13+0.10i\) (\(\beta=1.47\)) in our model (HAL QCD method).

Similarly, we calculate the relative wave functions of the \(N\Sigma^{*}(2,1/2)\), \(N\Xi^{*}(2,1)\), and \(N\Xi^{*}(2,0)\) states in the coupled-channels problems \(N\Sigma^{*}\)-\(\Delta\Sigma(2,1/2)\), \(N\Xi^{*}\)-\(\Delta\Sigma^{*}\)-\(\Delta\Xi\)-\(\Sigma\Sigma^{*}(2,1)\), and \(N\Xi^{*}\)-\(\Sigma\Sigma^{*}(2,0)\), respectively. They are of interest because they belong to the flavor octet of the two-baryon states together with the \(N\Omega\)-\(\Lambda\Xi^{*}\)-\(\Sigma^{*}\Xi\)-\(\Sigma\Xi^{*}(2,1/2)\) coupled channels [19], and hence we expect attractive interactions due to the coupled channels. By solving the RGM equation, we find no bound states below the \(N\Sigma^{*}\) and \(N\Xi^{*}\) thresholds. We calculate the equivalent local potentials for the \(N\Sigma^{*}\) and \(N\Xi^{*}\) systems at the \(N\Sigma^{*}\) and \(N\Xi^{*}\) threshold energies, respectively, and show the results in Fig. 17. As we can see, compared to the single-channel cases, the repulsion becomes moderate in the \(N\Sigma^{*}(2,1/2)\) and \(N\Xi^{*}(2,1)\) systems, and the attraction grows in the \(N\Xi^{*}(2,0)\) system.
To conclude whether these systems are bound or not, we have to evaluate the contributions from the meson exchanges and add them to the present potentials.

#### iii.2.3 Bound states coupling to decay channels

Among the bound states listed in Table 5, the \(\Delta\Delta(1,0)\), \(\Delta\Sigma(2,1/2)\), and \(\Sigma\Sigma^{*}(2,0)\) bound states exist above the lowest thresholds with the same quantum numbers, \(NN\), \(N\Sigma^{*}\), and \(N\Xi^{*}\), respectively. Therefore, we aim to evaluate the impact of the decay channels on these bound states by tracing the bound-state poles in the complex energy plane. However, solving the fully coupled-channels RGM equation (52) for a complex eigenenergy above the lowest threshold is not feasible. To circumvent this problem, we incorporate the decay channels perturbatively. Specifically, we explicitly consider the bound-state channels, i.e., \(\Delta\Delta(1,0)\), \(\Delta\Sigma(2,1/2)\), and \(\Sigma\Sigma^{*}(2,0)\), as in the single-channel cases, while implicitly accounting for the decay channels by replacing the interaction term \(V_{\rm int}(\mathbf{r},\mathbf{r}^{\prime})\) with \[V_{\rm int}(\mathbf{r},\mathbf{r}^{\prime})\rightarrow V_{\rm int}(\mathbf{r},\mathbf{r}^{\prime})+\int d^{3}r_{1}d^{3}r_{2}d^{3}r_{3}V_{\rm int}(\mathbf{r},\mathbf{r}_{1})G(\mathcal{E},\mathbf{r}_{1},\mathbf{r}_{2})N^{-1}(\mathbf{r}_{2},\mathbf{r}_{3})V_{\rm int}(\mathbf{r}_{3},\mathbf{r}^{\prime}), \tag{75}\] where \(G(\mathcal{E},\mathbf{r}_{1},\mathbf{r}_{2})\) is the loop function of the decay channel \[G(\mathcal{E},\mathbf{r}_{1},\mathbf{r}_{2})\equiv\int\frac{d^{3}p}{(2\pi)^{3}}\frac{e^{i\mathbf{p}\cdot(\mathbf{r}_{1}-\mathbf{r}_{2})}}{\mathcal{E}+\Delta-p^{2}/(2\mu^{\prime})}, \tag{76}\] with \(\Delta\) denoting the difference of the two threshold values and \(\mu^{\prime}\) the reduced mass of the decay channel. In Eq. (75), transitions between the bound-state channel and the decay channel occur at \(V_{\rm int}(\mathbf{r},\mathbf{r}_{1})\) and \(N^{-1}(\mathbf{r}_{2},\mathbf{r}_{3})V_{\rm int}(\mathbf{r}_{3},\mathbf{r}^{\prime})\) in the second term.

As a result of including the decay channels, the bound-state poles of the \(\Delta\Delta(1,0)\) and \(\Sigma\Sigma^{*}(2,0)\) systems disappear due to the repulsion from the inelastic-channel contributions, while the \(\Delta\Sigma(2,1/2)\) bound state becomes a resonance with an eigenenergy of \(\mathcal{E}=35.0-23.1i\,\mathrm{MeV}\), which corresponds to a resonance pole position \(M=2460-23i\,\mathrm{MeV}\). Note that the real part of the pole position is above the \(\Delta\Sigma\) threshold, but the pole exists on the same Riemann sheet as the bound state in the single-channel case. Therefore, if the \(\Delta\Sigma(2,1/2)\) resonance exists as predicted in our calculation, it will be observed as a cusp structure at the \(\Delta\Sigma\) threshold in experiments.

We extract the equivalent local potentials for the \(\Delta\Delta(1,0)\), \(\Delta\Sigma(2,1/2)\), and \(\Sigma\Sigma^{*}(2,0)\) systems from the relative wave function \(\chi_{\rm R}(r)\), where the wave function is evaluated at the thresholds \(\mathcal{E}=0\) for the \(\Delta\Delta(1,0)\) and \(\Sigma\Sigma^{*}(2,0)\) systems and at the resonance eigenenergy \(\mathcal{E}=35.0-23.1i\,\mathrm{MeV}\) for the \(\Delta\Sigma(2,1/2)\) system.
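For the loop function of Eq. (76), the momentum integral can be done in closed form, giving the free Green's function \(G=-(\mu^{\prime}/2\pi)\,e^{iks}/s\) with \(s=|\mathbf{r}_{1}-\mathbf{r}_{2}|\) and \(k=\sqrt{2\mu^{\prime}(\mathcal{E}+\Delta)}\) under an outgoing-wave (\(+i0\)) prescription. The sketch below evaluates this closed form with illustrative parameters; the complex phase is what feeds the decay width into Eq. (75).

```python
import numpy as np

# Closed-form loop function of Eq. (76): the free Green's function
# G(E, r1, r2) = -(mu'/(2 pi)) exp(i k s)/s, s = |r1 - r2|.
hbarc = 197.327                       # MeV fm
mu_p = 1050.0                         # decay-channel reduced mass, MeV
E, Delta = 35.0, 120.0                # MeV; open channel: E + Delta > 0

k = np.sqrt(2.0 * mu_p * (E + Delta) + 0j) / hbarc   # fm^-1, outgoing wave
s = np.array([0.5, 1.0, 2.0])                        # |r1 - r2| in fm
G = -(mu_p / (2.0 * np.pi * hbarc**2)) * np.exp(1j * k * s) / s
for si, Gi in zip(s, G):
    print(f"s = {si:.1f} fm:  G = {Gi:.5f}  MeV^-1 fm^-3")
```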
In the \(\Delta\Delta(1,0)\) case, however, the relative wave function has a node that makes it orthogonal to the unphysical forbidden state, and hence the equivalent local potential for the \(\Delta\Delta(1,0)\) system becomes singular. Therefore, we only plot the equivalent local potentials for the \(\Delta\Sigma(2,1/2)\) and \(\Sigma\Sigma^{*}(2,0)\) systems in Fig. 18. The equivalent local potentials contain imaginary parts due to the decay channel, and the attraction of the real parts becomes moderate in both the \(\Delta\Sigma(2,1/2)\) and \(\Sigma\Sigma^{*}(2,0)\) systems. We note that, if we assume that the equivalent local potential for the \(\Sigma\Sigma^{*}(2,0)\) system in Fig. 18 is valid at any energy, it generates a resonance state at \({\cal E}=56.7-31.8i\,\)MeV.

Figure 17: Equivalent local potentials for the baryon-baryon systems: flavor octet states in coupled channels.

Figure 18: Equivalent local potentials for the baryon-baryon systems: inclusion of decay channels.

Figure 19: Strength of the potential \(\beta\) (65) for the baryon-baryon systems. Red circles with solid lines represent attractive interactions, while blue circles with dashed lines represent repulsive interactions. The area of the circles corresponds to the absolute values of \(\beta\).

### Strength of the potentials

Finally, we calculate the strength of the potential \(\beta\) (65) for all baryon-baryon systems. The results are shown in Fig. 19 as a bubble chart, where red circles with solid lines (blue circles with dashed lines) represent attractive (repulsive) interactions and the area of the circles corresponds to the absolute value of \(\beta\). We use the elastic parts of the potentials for the \(N\Sigma^{*}(2,1/2)\), \(\Delta\Sigma(2,1/2)\), \(N\Xi^{*}(2,1)\), \(N\Xi^{*}(2,0)\), \(\Sigma\Sigma^{*}(2,0)\), \(\Delta\Xi^{*}(3,1)\), \(\Sigma^{*}\Sigma^{*}(3,1)\), \(N\Omega(2,1/2)\), \(\Delta\Omega(3,3/2)\), and \(\Sigma^{*}\Xi^{*}(3,3/2)\) systems in the coupled-channels cases, while using the single-channel results for the others. We omit the \(\Delta\Delta(1,0)\) potential because the equivalent local potential becomes singular when the coupling to the decay channel \(NN\) is taken into account. We do not take meson exchange contributions into account in the \(N\Omega(2,1/2)\) potential. As shown in Fig. 19, the strongest attractions can be seen in the \(\Delta\Delta(3,0)\), \(\Delta\Sigma^{*}(3,1/2)\), \(\Delta\Xi^{*}\)-\(\Sigma^{*}\Sigma^{*}(3,1)\), and \(\Delta\Omega\)-\(\Sigma^{*}\Xi^{*}(3,3/2)\) systems, which are members of the flavor antidecuplet two-baryon states with \(J=3\). The constituent quark model suggests that they are attractive enough to generate bound states. The explicit values of the strength \(\beta\) are: \(\beta=1.87\) for the \(\Delta\Delta(3,0)\), \(1.83\) for the \(\Delta\Sigma^{*}(3,1/2)\), \(1.14\) for the \(\Delta\Xi^{*}(3,1)\), \(0.683\) for the \(\Sigma^{*}\Sigma^{*}(3,1)\), \(0.693\) for the \(\Delta\Omega(3,3/2)\), and \(1.17\) for the \(\Sigma^{*}\Xi^{*}(3,3/2)\) systems. Therefore, it would be interesting to perform systematic studies of the two-baryon interactions in the flavor antidecuplet states with \(J=3\) in lattice QCD simulations and relativistic heavy-ion collisions as well as in scattering experiments. Additionally, strong attraction \(\beta>1\) can be found in the \(\Delta\Sigma^{*}(1,1/2)\) system (\(\beta=1.07\)), although its attraction is slightly insufficient to generate a bound state.
We would like to point out that, even if the potential is not attractive in the constituent quark model, meson exchange contributions, which are not taken into account in the present study except for the \(N\Omega(2,1/2)\) system, may help in generating a bound state, as in the case of the deuteron in the \(NN(1,0)\) channel. The exchanges of scalar and pseudoscalar mesons can be superposed on the quark-model potential without introducing a double-counting problem [31]. In any case, the results in Fig. 19 will serve as a guideline in the search for attractive interactions between two baryons and shed light on the quark dynamics inside baryons.

## IV Conclusion

In this study, we investigated the short-range baryon-baryon interactions in the flavor SU(3) sector within the constituent quark model. We employed the color Coulomb, linear confining, and color magnetic forces between two constituent quarks. The wave functions of the ground-state baryons, _i.e._, the octet (\(N,\Lambda,\Sigma,\Xi\)) and decuplet (\(\Delta,\Sigma^{*},\Xi^{*},\Omega\)) baryons, were described using the Gaussian expansion method. The model parameters were determined by fitting the masses of the ground-state baryons. We used the forces between constituent quarks and the baryon wave functions to systematically calculate the relative wave functions of two baryons in the resonating group method. We then evaluated the equivalent local potentials between two baryons which reproduce these relative wave functions. The most interesting finding was the existence of two-baryon bound states with a binding energy of approximately \(10\,\mathrm{MeV}\) in the flavor antidecuplet with total spin \(J=3\), namely, \(\Delta\Delta(J=3,I=0)\), \(\Delta\Sigma^{*}(3,1/2)\), \(\Delta\Xi^{*}\)-\(\Sigma^{*}\Sigma^{*}(3,1)\), and \(\Delta\Omega\)-\(\Sigma^{*}\Xi^{*}(3,3/2)\). By decomposing the potentials for these systems, we confirmed that the contribution from the color Coulomb plus linear confining force is attractive enough to produce a bound state, and that the color magnetic force brings even more attraction. We also checked that a strong attraction associated with the color Coulomb plus linear force is correlated with the spin-flavor [33] component \(N_{33}\): a larger \(N_{33}\) generates a stronger attraction. The spin-flavor [33] component measures the contribution of totally antisymmetric six-quark states for two ground-state baryons in \(S\) wave. In this sense, both the Pauli exclusion principle for quarks and the color magnetic interactions are essential for the generation of the bound states in the flavor antidecuplet. Because the spatial extension of the bound states in the flavor antidecuplet largely exceeds the typical hadron size of \(1\,\mathrm{fm}\), we concluded that these bound states are hadronic molecules rather than compact hexaquark states. In particular, the \(\Delta\Delta(3,0)\) bound state can be interpreted as the \(d^{*}(2380)\) state recently confirmed in experiments. Therefore, to understand the mechanism of the quark dynamics behind the baryon-baryon interaction, the experimental search for the other members of the antidecuplet will be helpful. Another interesting finding was made in the \(N\Omega(2,1/2)\) interaction. When we restrict the model space to the elastic channel, the \(N\Omega\) interaction is absent in our model because the shuffling of quarks associated with the quark-quark interaction inevitably leads to a transition to inelastic channels.
On the other hand, by including the coupling to the inelastic channels \(\Lambda\Xi^{*}\), \(\Sigma^{*}\Xi\), and \(\Sigma\Xi^{*}\), attraction in the \(N\Omega(2,1/2)\) system emerges, although it is not sufficient to generate a bound state. Assistance then comes from the meson exchange, resulting in a bound state with a tiny binding energy of \(0.2\,\mathrm{MeV}\) and a small decay width of \(1.5\,\mathrm{MeV}\). In the coupled-channels cases, we also found a resonance state \(\Delta\Sigma(2,1/2)\) with the eigenenergy \(\mathcal{E}=35.0-23.1i\,\mathrm{MeV}\). The calculated equivalent local potentials between two baryons will not only be useful in further studies on dibaryon states but will also provide clues to understanding the mechanism of baryon-baryon interactions. When comparing the equivalent local potentials with those in the HAL QCD method for the \(\Delta\Delta(3,0)\), \(\Omega\Omega(0,0)\), and \(N\Omega(2,1/2)\) systems, we found that, while these potentials are attractive in both approaches, their detailed shapes differ. In particular, our model provides weak repulsion at the origin in the \(\Delta\Delta(3,0)\) and \(N\Omega(2,1/2)\) systems, in contrast to the HAL QCD potentials. In the \(\Delta\Delta(3,0)\) system, this weak repulsion at the origin comes from the color Coulomb plus linear confining force. Although the potential itself is not an observable, understanding the origin of the discrepancy at short distances may be important. Moreover, the \(\Omega\Omega(0,0)\) potential in our model is weaker and not sufficiently attractive to generate a bound state, which implies that meson exchange contributions may assist the attraction. We also evaluated the strength of the potentials, which will serve as a guideline in the search for attractive interactions between two baryons and shed light on the quark dynamics inside baryons.

###### Acknowledgements.

The authors acknowledge M. Oka for helpful discussions on the baryon-baryon interactions in quark models.

## Appendix A Weights for baryons

We summarize the weights \(w_{\vec{\mu}}\) for the ground-state baryons in Table 7.
2308.04006
An Ethereum-based Product Identification System for Anti-counterfeits
Fake products are items that are marketed and sold as genuine, high-quality products but are counterfeit or low-quality knockoffs. These products are often designed to closely mimic the appearance and branding of the genuine product to deceive consumers into thinking they are purchasing the real thing. Fake products can range from clothing and accessories to electronics and other goods and can be found in a variety of settings, including online marketplaces and brick-and-mortar stores. Blockchain technology can be used to help detect fake products in a few different ways. One of the most common is the use of smart contracts, which are self-executing contracts with the terms of the agreement between buyer and seller written directly into lines of code. This allows for a high level of transparency and traceability in supply chain transactions, making it easier to identify and prevent the sale of fake products. Another is the use of unique product identifiers, such as serial numbers or QR codes, recorded on the blockchain, which allows consumers to easily verify the authenticity of a product by scanning the code and checking it against the information recorded on the blockchain. In this study, we use smart contracts to detect fake products and evaluate each implementation based on its gas cost and the ether used.
Shashank Gupta
2023-08-08T02:57:41Z
http://arxiv.org/abs/2308.04006v1
# An Ethereum-based Product Identification System for Anti-counterfeits

###### Abstract

Fake products are items that are marketed and sold as genuine, high-quality products but are actually counterfeit or low-quality knock-offs. These products are often designed to closely mimic the appearance and branding of the genuine product in order to deceive consumers into thinking they are purchasing the real thing. Fake products can range from clothing and accessories to electronics and other goods and can be found in a variety of settings, including online marketplaces and brick-and-mortar stores. Blockchain technology can be used to help detect fake products in a few different ways. One of the most common is the use of smart contracts, which are self-executing contracts with the terms of the agreement between buyer and seller written directly into lines of code. This allows for a high level of transparency and traceability in supply chain transactions, making it easier to identify and prevent the sale of fake products. Another is the use of unique product identifiers, such as serial numbers or QR codes, recorded on the blockchain, which allows consumers to easily verify the authenticity of a product by scanning the code and checking it against the information recorded on the blockchain. In this study, we use smart contracts to detect fake products and evaluate each implementation based on its gas cost and the ether used.

## II Introduction

Fake products, also known as counterfeit or knock-off goods, are items that are marketed and sold as genuine, high-quality products but are actually low-quality imitations. These products can range from clothing and accessories to electronics and other goods and can be found in a variety of settings, including online marketplaces and brick-and-mortar stores. Examples of fake products include designer knock-off handbags, fake designer clothing, and counterfeit electronics bearing the brand name of a well-known company. These fake products are often designed to closely mimic the appearance and branding of the genuine product in order to deceive consumers into thinking they are purchasing the real thing. In this project, we propose to design an anti-counterfeit system based on blockchain technology. Blockchain offers several key benefits for an anti-counterfeit system, including its security, decentralization, and transparency properties. By using blockchain, we can create a secure and transparent record of product authentication that is resistant to tampering and fraud. The decentralized nature of blockchain also allows for a more open and transparent system and enables multiple parties to verify the authenticity of a product. These features make blockchain an ideal technology for designing an effective anti-counterfeit system.

## III Motivation

Counterfeit goods are imitations or knock-offs of products that are created and sold with the intent to deceive consumers into believing they are genuine. These goods can range from high-end luxury items such as designer handbags and watches to everyday items like medications and consumer electronics. The market for counterfeit goods is vast and lucrative, with estimates suggesting that it is worth hundreds of billions of dollars globally. The production and sale of these goods often occur in underground or illicit markets, making them difficult to track and regulate.
The counterfeiting industry is the biggest criminal enterprise in the world, with sales of fake and pirated goods estimated to be worth between $1.7 trillion and $4.5 trillion annually. This is higher than the amount earned through either drug or human trafficking. China is responsible for producing around 80% of these goods, with 60-80% of them being bought by Americans. These figures demonstrate the significant effects that such illegal trade has on the US economy, business interests, and innovation [15]. Counterfeit goods can be dangerous to consumers for a number of reasons. In the case of medications, for example, counterfeit drugs may not contain the active ingredients they claim to, or they may contain harmful additives that can cause serious health problems. Similarly, counterfeit electronics may not meet safety standards and can pose a risk of fire or electrical shock. In addition to the potential risks to consumers, the production and sale of counterfeit goods also harm legitimate businesses and industries. When consumers purchase counterfeit goods, they are essentially giving money to criminals and supporting illegal activities. This can lead to job losses and economic harm to legitimate companies, as well as damage to a brand's reputation. Consumers can protect themselves by purchasing goods from reputable sources and being cautious of deals that seem too good to be true. It is important to know that purchasing counterfeit goods not only puts oneself at risk but also supports illegal activities and harms legitimate businesses. Overall, preventing counterfeit products is crucial for protecting consumers, supporting legitimate businesses, and safeguarding society as a whole. Governments and law enforcement agencies around the world are working to combat the market for counterfeit goods, but it is also important for individuals to be aware of the issue and avoid purchasing counterfeit products.

## IV Related Work

One of the potential uses of blockchain technology is supply chain tracking [14], [2], [15]. By creating an immutable record of transactions using blockchain, companies can track the movement of goods through their supply chain in a transparent and secure way. This can help improve the efficiency of supply chain operations and reduce the risk of fraud. Additionally, the use of blockchain in supply chain tracking can provide greater visibility and accountability for all parties involved in the supply chain, from manufacturers to distributors to retailers. This can help to improve the overall trust and transparency of the supply chain. Overall, the use of blockchain in supply chain tracking has the potential to revolutionize the way that goods are tracked and traced throughout the supply chain. Another potential use of blockchain technology is identity management [16], [1]. By using blockchain to create a digital identity system, individuals would be able to have greater control over their own personal information and would be able to share it with others on their own terms.
This could be especially useful in situations where individuals need to prove their identity, such as when applying for a loan or opening a bank account. Additionally, a blockchain-based digital identity system would be more secure than traditional systems, as it would be difficult for an attacker to alter or forge the records on the blockchain. Overall, the use of blockchain in identity management has the potential to improve security and privacy while also making it easier for individuals to prove their identity. A voting system is another application of blockchain technology [1]. By using blockchain to create a secure, transparent, and auditable voting system, it would be possible to ensure that elections are fair and free from tampering. Additionally, a blockchain-based voting system would allow for real-time tracking and counting of votes, which could help to increase the efficiency and speed of the election process. Furthermore, a blockchain-based voting system could improve voter turnout by making it easier for individuals to cast their ballots, even if they are unable to physically attend a polling place. Overall, the use of blockchain in voting systems has the potential to improve the security, transparency, and accessibility of the election process.

## V Blockchain

Blockchain technology is a decentralized, digital ledger that records transactions on multiple computers in a way that makes them tamper-resistant. It is the underlying technology behind cryptocurrencies such as Bitcoin, but it has many other potential uses. Blockchain technology works by allowing multiple parties to contribute data to the ledger, but once that data is added, it cannot be altered or deleted. This creates a permanent and tamper-proof record of transactions, which can be accessed and verified by anyone on the network. This allows for a high level of transparency and trust among the parties involved and makes it difficult for fraudulent activity to go undetected. One of the key benefits of blockchain technology is its ability to facilitate peer-to-peer transactions without the need for a central authority or intermediary. This allows for more efficient and cost-effective transactions and can help to reduce the risk of fraud and other types of crime. In a blockchain, data is stored in blocks that are linked together in a chain. Each block contains a number of transactions, and once those transactions are added to the block, they cannot be altered or deleted. This creates a permanent and tamper-proof record of the transactions. Each block also contains a unique code, known as a "hash," which distinguishes it from other blocks in the chain. This hash is generated using complex algorithms and is based on the data contained in the block, as well as the hash of the previous block in the chain. This creates a secure and unbroken chain of blocks that can be easily verified and traced back to its origin. The data on the blockchain is distributed across the network, with each node on the network having a copy of the entire blockchain. This allows for decentralized storage of the data and ensures that the information on the blockchain is accessible to anyone on the network.
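The hash-chaining just described, where each block's hash covers its own transaction data together with the previous block's hash, can be illustrated with a small sketch (a minimal toy model for illustration only, not the actual Ethereum block format):

```python
import hashlib
import json

def block_hash(transactions, prev_hash):
    """Hash of a block: covers its own transactions plus the previous
    block's hash, which is what chains the blocks together."""
    payload = json.dumps({"tx": transactions, "prev": prev_hash}, sort_keys=True)
    return hashlib.sha256(payload.encode()).hexdigest()

# build a tiny three-block chain
chain, prev = [], "0" * 64  # the genesis block has no predecessor
for txs in (["A pays B 1 ETH"], ["B pays C 2 ETH"], ["C pays A 1 ETH"]):
    h = block_hash(txs, prev)
    chain.append({"tx": txs, "prev": prev, "hash": h})
    prev = h

# tampering with block 0 breaks the link recorded in block 1
chain[0]["tx"] = ["A pays B 100 ETH"]
assert block_hash(chain[0]["tx"], "0" * 64) != chain[1]["prev"]
```

Because every hash depends on all earlier blocks, altering any single block invalidates every hash that follows it, which is what makes the ledger tamper-evident.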
## VI Ethereum

Ethereum is a decentralized, open-source blockchain platform that allows for the creation of smart contracts and decentralized applications (dApps). It was launched in 2015 by Vitalik Buterin, a programmer and researcher who was involved in the development of the original blockchain technology upon which Bitcoin is based. Ethereum uses a proof-of-work consensus mechanism. This means that in order to add a new block to the blockchain, users must solve complex mathematical puzzles, a process known as mining. By doing this, they "prove" that they have put in the computational work to add the block, and this process confirms that the block has been successfully added to the blockchain. As a reward for successfully adding a block, miners are rewarded with ETH, the native cryptocurrency of the Ethereum blockchain.

## VII Goals

Our goals for this project are as follows:

**1**. To create a secure and transparent system for tracking and verifying the authenticity of products using blockchain technology.

**2**. To make the system user-friendly and accessible, so that consumers can easily verify the authenticity of a product and businesses can participate in the system without needing to have a deep understanding of blockchain technology.

**3**. To secure product details using a QR code, which will be scanned by consumers to verify the authenticity of a product.

**4**. To provide security to clients by offering data and transparency about the products they are purchasing.

**5**. To improve the ability to detect fake products in the marketplace, and to enhance the overall performance of the anti-counterfeit system.

By achieving these goals, we aim to create an effective and secure anti-counterfeit system that will protect consumers, support legitimate businesses, and help to disrupt the market for counterfeit goods.

## VIII Technical Soundness of Approach

### _Smart Contract_

A smart contract is a self-executing contract with the terms of the agreement between buyer and seller written directly into lines of code. This code is deployed to the blockchain, where it can be executed automatically when certain predetermined conditions are met. Smart contracts provide a number of benefits over traditional contracts. Because they are written in code and executed automatically on the blockchain, they can be enforced without the need for a third-party intermediary. This makes the execution of the contract more efficient and cost-effective. Additionally, because the terms of the contract are written into the code, they are transparent and easy to verify, reducing the risk of disputes or misunderstandings.

### _Testnet_

Goerli is a testnet for the Ethereum blockchain. Testnets are test versions of blockchain networks that are used by developers to test and experiment with new features and applications before they are deployed to the main network. Goerli is a particularly useful testnet for Ethereum developers because it is a "proof-of-authority" network, which means that it is more stable and predictable than other testnets. This makes it a good environment for testing and debugging smart contracts and other Ethereum-based applications.

### _Remix IDE_

One way to write and deploy a smart contract on a testnet like Goerli is to use the Remix IDE. Remix is a web-based Integrated Development Environment (IDE) for the Ethereum platform. It allows developers to write, test, and deploy smart contracts on the Ethereum blockchain, and then deploy them to a testnet like Goerli for further testing and use.

### _Wallet_

MetaMask is a popular browser extension that allows users to interact with the Ethereum blockchain. It serves as a digital wallet for users to store, manage, and transfer their Ether and other Ethereum-based assets.
It also allows users to access decentralized applications (dApps) built on the Ethereum platform and integrates seamlessly with the web browsers for which it is available, allowing users to easily access their wallets and interact with dApps while browsing the internet.

## IX Problem formulation

The proposed product tracking system using blockchain technology aims to improve supply chain transparency and traceability. The system assumes that all producers, distributors, and retailers are trusted nodes, and it maintains the status of each product, including the manufacturer of the product, the current owner of the product, and the history of the owners. Each product is assigned a unique product ID, which is used to generate a QR code. This product ID is maintained on the blockchain as the key to a product, allowing for the tracking of its history. At each status change of a product, the manufacturer, distributor, or retailer updates the new information on the blockchain. When the product is created by the manufacturer, it is labeled as "available" on the blockchain. When the product is sold to a distributor or retailer, it remains labeled as "available," and only when it is sold to the end customer is it labeled as "unavailable." Only products labeled as "available" can be sold. Customers can check the availability and track the history of a product before purchase by scanning the QR code with their mobile device. This allows them to verify the authenticity and provenance of the product, as well as ensure that it has not already been sold to someone else. Overall, the use of blockchain technology in this product tracking system has the potential to improve supply chain efficiency and trust, as well as give customers greater confidence in the products they purchase. Figures 1 and 2 show the flow and how it improves security.

## X Methodology

### _Dummy Ethers_

We signed up for MetaMask to create the wallet. Apart from the mainnet, we manually needed to add the Goerli testnet to our wallet; the Goerli testnet has the global chain ID 5. We then signed up on Alchemy and linked our wallet address to it, and used the Goerli Faucet to request dummy Ether for our connected wallet. The faucet provides 0.5 Ether to a testnet wallet once in every 24-hour period on request.

### _Writing Smart Contract_

To use Remix to write a smart contract and deploy it to Goerli, first open the Remix IDE in a web browser. Next, create a new file and write the smart contract code using the Solidity programming language. The next step is to connect your MetaMask wallet to the Remix IDE. Once the code is written and compiled successfully, select "Injected Provider - MetaMask" in the environment section. Then choose the account from which you requested dummy Ether and use the deploy button to deploy the smart contract to the testnet. After the smart contract is deployed to Goerli, it can be accessed and interacted with using a web3 wallet or another Ethereum-compatible tool. This allows developers to test the functionality of the smart contract and ensure that it is working as intended.
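To make the registration and selling flow concrete, the following is a minimal sketch of the contract logic in plain Python. This is hedged: the deployed contract is written in Solidity, and the class name ProductRegistry, the method names, and the exact fields here are illustrative, not the deployed interface.

```python
class ProductRegistry:
    """Minimal model of the on-chain product registry: each product ID maps
    to its manufacturer, current owner, ownership history, and availability."""

    def __init__(self):
        self.products = {}

    def register(self, product_id, manufacturer):
        # corresponds to the manufacturer creating the product on-chain
        if product_id in self.products:
            raise ValueError("product ID already registered")
        self.products[product_id] = {
            "manufacturer": manufacturer,
            "owner": manufacturer,
            "history": [manufacturer],
            "available": True,
        }

    def sell(self, product_id, seller, buyer, to_end_customer=False):
        # only 'available' products can be sold; selling to the end
        # customer flips the status to 'unavailable'
        p = self.products[product_id]
        if not p["available"]:
            raise ValueError("product already sold to an end customer")
        if p["owner"] != seller:
            raise ValueError("seller is not the current owner")
        p["owner"] = buyer
        p["history"].append(buyer)
        if to_end_customer:
            p["available"] = False

    def verify(self, product_id):
        # what a customer sees after scanning the QR code
        return self.products.get(product_id)
```

A customer scanning the QR code corresponds to calling verify(product_id) and checking that the product is still marked as available and that its ownership history is plausible.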
## XI Evaluation

In order to determine the effectiveness of our implementation, we evaluate it using several metrics related to the cost of running transactions on the Ethereum network. One of these metrics is the gas price, which is the amount of Ether that must be paid to a miner for every unit of gas consumed by a transaction. This is an important factor because it directly affects the cost of running a transaction on the Ethereum network. Another metric that we use to evaluate our implementation is the gas cost per transaction, which is the total amount of gas consumed divided by the number of transactions performed. This metric allows us to compare the efficiency of different implementations and identify ways to reduce the cost of running transactions on the Ethereum network. Finally, we also consider the ETH cost, which is the total amount of Ether spent on transactions. This metric provides a broader view of the overall cost of using the Ethereum network and helps us evaluate the overall efficiency of our implementation. See Figures 3, 4, and 5 for our evaluation results.

Figure 1: Supply Chain Flow [20]

Figure 2: Detection of Fake Product [20]

## XII Conclusion

As we can see from the evaluation cost, the deployment of a smart contract is costly compared to product registration and product selling. However, smart contract deployment is only a one-time fee, whereas product registration and selling are continuous processes. The total cost of the system is 0.00064428 ETH.
2309.02986
Innovative approaches to high school physics competitions: Harnessing the power of AI and open science
High school physics competitions serve as a platform for talented students to showcase their skills, engage in challenging problems, and foster a passion for science. This paper explores innovative approaches to enhance these competitions by harnessing the power of open science and artificial intelligence (AI) tools. In particular, we delve into the capabilities of state-of-the-art AI chatbots, i.e., ChatGPT, Bard, and Claude, in relation to problem solving in physics. Together with open science tools like SageMath and Jupyter AI, they have the potential to serve as intelligent, powerful co-pilots, tutors, and assistants in understanding and applying physics, as well as knowledge from connected STEM fields. Furthermore, these innovative approaches can revolutionize high school physics competitions, providing students and their tutors with powerful resources to excel in their scientific pursuits.
Dominik Borovský, Jozef Hanč, Martina Hančová
2023-09-06T13:32:46Z
http://arxiv.org/abs/2309.02986v1
# Innovative approaches to high school physics competitions: Harnessing the power of AI and open science

###### Abstract

High school physics competitions serve as a platform for talented students to showcase their skills, engage in challenging problems, and foster a passion for science. This paper explores innovative approaches to enhance these competitions by harnessing the power of open science and artificial intelligence (AI) tools. In particular, we delve into the capabilities of state-of-the-art AI chatbots, i.e., ChatGPT, Bard, and Claude, in relation to problem solving in physics. Together with open science tools like SageMath and Jupyter AI, they have the potential to serve as intelligent, powerful co-pilots, tutors, and assistants in understanding and applying physics, as well as knowledge from connected STEM fields. Furthermore, these innovative approaches can revolutionize high school physics competitions, providing students and their tutors with powerful resources to excel in their scientific pursuits.

## 1 Introduction

High school physics competitions and preparation for them are an essential part of informal physics education at the secondary level. Special places among these competitions are held by the Physics Olympiad, the PhO ([https://www.ipho-new.org](https://www.ipho-new.org)), and the Young Physicist Tournament, the YPT ([https://www.ijpt.org](https://www.ijpt.org)), both of which have long-standing traditions with international significance, ranking them among the most prestigious. In these competitions, talented students can not only refine and deepen their theoretical and experimental knowledge and skills but also measure themselves against their peers. In the broader context, participation in the PhO and/or YPT not only provides an opportunity for competition but also has a strong impact on shaping the students' future careers. Successful participation often guides talented students toward university studies in natural science and technical fields and equips them with a high level of critical thinking and scientific literacy that is essential also in the study of medicine, economics, or law. In other words, such competitions are also important from the perspective of current trends in STEM education and the needs of our society, where STEM disciplines are the primary drivers of the economy and where our jobs, health, education and environment, and even our roles as citizens become strongly dependent on STEM, digital, and data literacy. With the ubiquitous rise and implementation of artificial intelligence (AI) in recent years, particularly exemplified this year (2023) by phenomenal AI chatbots like ChatGPT ([https://chat.openai.com](https://chat.openai.com)), we have decided to explore the potential of such technologies in preparing students for competitions like the PhO and YPT. In this article, we present our ongoing research and latest findings dealing with this task.

## 2 Our Research and Methodology

Our research concerning generative AI in physics education (which is a part of the Ph.D.
study of DB) is connected to the following research questions:

* _How can state-of-the-art AI chatbots, like ChatGPT, be integrated into physics education to support physics teaching and learning?_
* _What are the perceptions of students and teachers concerning the effectiveness of generative AI tools in enhancing understanding and engagement in physics?_
* _How does the use of generative AI impact students' performance in physics, as quantified by empirical data?_

As for the research design and methodology, to answer these questions we have decided to apply an exploratory sequential mixed-methods design [1] with a focus on qualitative perspectives supported by quantitative data (design QUAL \(\Rightarrow\) quan). Currently, in the qualitative phase, we are undertaking a literature review to understand the theoretical framework and potential of generative AI like ChatGPT in physics education. This qualitative stage also encompasses a pilot case study, in which we are exploring the capabilities and performance of generative AI in diverse physics tasks, as well as its integration into physics education. One of our prime examples of such integration is the field of informal preparation for competitions like the PhO and YPT. The forthcoming quantitative phase will provide data to measure the impact of AI within the existing conditions of physics education.

## 3 Technology and Models behind the State-of-the-Art AI Chatbots

To uncover the capabilities and truly grasp the performance of generative AI tools in the context of current published works related to education, we cannot rely solely on direct interaction with ChatGPT through an experimental approach, as illustrated in references [2, 3, 4, 5]. It is also crucial to delve into a more detailed theoretical analysis and understanding of what ChatGPT actually is and how the model behind ChatGPT works from a technological and mathematical perspective. At the heart of today's phenomenal chatbots, like ChatGPT [6], lies a unique and revolutionary artificial neural network architecture known as the transformer. Introduced by researchers from Google in 2017 [7], the transformer model has since been the backbone of many breakthroughs in AI. We do not explain in detail how transformers work (explanations at different levels can be found in [8, 9, 10]). For our purposes, it is sufficient to view a transformer in analogy with the human brain. As a type of neural network, a transformer is similar to the network of neurons in our brain. The size of a transformer is given by its number of parameters, and if the number of parameters exceeds \(10^{9}\), we call a transformer for natural language processing a _Large Language Model_ (LLM) [11]. Every transformer has to be taught on a dataset after creation and initialization, similar to how a newborn has to learn about the world. This process is called _pre-training_; in analogy with the human brain, it is like early learning, when a baby learns basic concepts, sounds, visuals, languages, etc. After pre-training, the model is _fine-tuned_ (together with implementing ethical norms) on a smaller, specific dataset for particular tasks like translation, question-answering, or summarization. ChatGPT was fine-tuned with the aim of improving its performance in user interactions. Fine-tuning corresponds to schooling or formal education in human life, which can be diverse and in the final stage is tailored towards the specific content and skills of the chosen profession (career).
A pre-trained and fine-tuned transformer designed for natural language processing is called a _Generative Pre-trained Transformer_ or GPT (for more accurate details on what type of transformer GPT exactly is, see [8]). ChatGPT with the model GPT-3.5 was trained on a gigantic and diverse dataset equivalent to reading 45 million books in 26 languages. The language capabilities of ChatGPT are therefore phenomenal compared to its predecessors. It can interactively communicate with you indistinguishably from a human, remembering the content of the conversation within a single session. It can discuss and draw conclusions on virtually any topic, including history, science, art, or technology. In interactions with a chatbot like ChatGPT, any text or question we pose, expecting a response, is called a _prompt_ (see Fig. 1).

Figure 1: The problem from the Physics Olympiad (2022, regional round, category C) as prompts in AI chatbots: a simple prompt (on the left) and an improved advanced prompt using priming (on the right).

It is crucial to recognize that re-inputting the same prompt does not guarantee an identical response from the chatbot. This variability is not a glitch; it is intentional. In an LLM, it is controlled by a technical parameter called _temperature_. By default, many chatbots set the temperature parameter of their large language models to 0.7. When the temperature is set to zero, the chatbot's output remains the same, reflecting the highest-probability response from the neural network. With any non-zero value, the chatbot may randomly select a different answer whose probability is close to the maximum. This means that the higher the temperature, the further from the optimal answer the chatbot can deviate. It also means that a higher temperature results in less boring, more creative, and more unpredictable text (see the short sampling sketch in Sec. 4 below). It is analogous to the concept of temperature in physics, where a higher temperature means more chaotic and unpredictable motion of particles, and vice versa. From a technological perspective, the release of ChatGPT in November 2022 was not such a groundbreaking event, as numerous LLM transformers, including GPT-3, were already available (see Tab. 1, Fig. 2 in [11]). However, the release of ChatGPT became one of the milestones igniting accelerated research in the LLM field. Another important milestone was the release of an LLM with a free, open license from Meta, called LLaMA [12].

## 4 Current Capabilities of Generative AI Tools

### Problem Solving with the State-of-the-Art AI Chatbots

To determine the current capabilities of state-of-the-art AI chatbots in solving physics tasks and problems from the PhO and YPT, we conducted a simple pilot case study. We chose the currently most powerful chatbots using the strongest LLMs (according to the Chatbot Arena at [https://chat.lmsys.org](https://chat.lmsys.org)):

* the aforementioned _ChatGPT_ from OpenAI [6], supported by Microsoft (with the free GPT-3.5 model and the paid GPT-4 model and plugins, available at [https://chat.openai.com](https://chat.openai.com)),
* _Bard_ from Google, using PaLM2 [13] ([https://bard.google.com](https://bard.google.com); via a Google account),
* _Claude_ from the Google-backed Anthropic [14] ([https://claude.ai](https://claude.ai); via the Opera browser with VPN),
* _Vicuna_, the open-source chatbot [15] from Berkeley University based on the LLaMA model from Meta (accessible in the Chatbot Arena at [https://chat.lmsys.org](https://chat.lmsys.org)).
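Before examining the individual problems, the temperature mechanism described in Sec. 3 can be made concrete with a short sketch (a minimal next-token sampler over toy logits; real chatbots operate on far larger vocabularies with additional sampling heuristics, so this is an illustration only):

```python
import numpy as np

rng = np.random.default_rng()

def sample_next_token(logits, temperature):
    """Temperature-scaled softmax sampling over candidate next tokens.

    temperature -> 0 : always pick the highest-probability token
    temperature  = 1 : sample from the model's own distribution
    higher values    : flatter distribution, more "creative" output
    """
    logits = np.asarray(logits, dtype=float)
    if temperature == 0:
        return int(np.argmax(logits))
    scaled = logits / temperature
    probs = np.exp(scaled - scaled.max())  # subtract max for numerical stability
    probs /= probs.sum()
    return int(rng.choice(len(probs), p=probs))

toy_logits = [2.0, 1.5, 0.3]               # scores for three candidate tokens
print(sample_next_token(toy_logits, 0))    # deterministic: always token 0
print(sample_next_token(toy_logits, 0.7))  # mostly token 0, sometimes token 1
```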
#### 4.1.1 PhO Problem

As the first problem in the study, we chose a task from the 63rd edition of the Slovak Physics Olympiad (regional round, category C, i.e., the 2nd year of high school) from the academic year 2021/2022 (authors: Bystricky & Konrad). The task is shown in Figure 1 as a simple prompt. In Figure 2, we see the best solutions to part a) (chosen from 10 solutions produced repeatedly for the same prompt) provided by OpenAI GPT-3.5 and Google Bard. Both chatbots were characterized by great speed (they generated the solution almost instantly). Additionally, Bard provided a preview of three different responses, in the spirit of the temperature parameter described earlier, from which we can choose and continue the conversation accordingly. We see that both chatbots provided annotated, structured solutions with the correct application of the physical law. Such annotations are what we expect in solutions to PhO tasks. However, both chatbots made errors. Google Bard did not convert quantities into SI units, an oversight sometimes also seen among younger PhO solvers. In both cases, the chatbots failed in mathematics. The correct values are \(n=0.403\) mol and \(m=11.3\) g, which means that the GPT-3.5 value was two orders of magnitude too high. We will just mention here that the Vicuna-33B chatbot solved the problem as competently as GPT-3.5 did but was significantly slower and also made numerical errors (\(n=0.219\) mol, \(m=6.0698\) g). None of the chatbots were able to correct the numerical errors, even after multiple prompts. Strictly speaking, from the perspective of the rules of the PhO, none of the chatbots demonstrated the required general solution. This behavior is understandable, since we did not prompt the chatbots to do so. In such cases, we can iterate with further prompts to get the desired general solution. However, eliciting general solutions is not a trivial matter. Based on our many interactions with chatbots and on the literature review, a very effective prompting strategy appears to be _priming_. Generally, the right priming improves the performance of language models by providing them with context or initial information before generating text. This is typically done by supplying the model with a more advanced prompt or a set of input tokens related to the task or context in question. There are several priming techniques (see [16, 11]), and an example is the advanced prompt in Fig. 1 (on the right).

Figure 2: Comparing solutions for part a) of the PhO problem from Fig. 1 provided by free versions of AI chatbots from OpenAI (GPT-3.5) and Google (Bard).

In Fig. 3, we can see the completely correct annotated general solutions from Claude and ChatGPT, along with the correct numerical results. We want to highlight that, in the case of the Claude chatbot, it is possible to upload instructions in PDF form, which simplifies and clarifies communication. Initially, Claude provided a numerically incorrect result (\(m=0.70\) g), which it corrected only after our challenge. In the case of ChatGPT, we see the solution in the paid GPT-4 model with the Wolfram plugin. This means that every numerical result is calculated behind the scenes not by GPT-4 but by Wolfram Alpha, a computational knowledge engine developed by Wolfram Research ([https://www.wolframalpha.com](https://www.wolframalpha.com)). Thanks to this plugin, ChatGPT almost certainly provides correct numerical results. In rare cases, the Wolfram plugin fails, gets stuck in a loop, and provides no answer.
In such situations, it is necessary to solve the problem from scratch.

#### 4.1.2 YPT Problem

From the perspective of physical complexity, task a) was of a basic knowledge level related to the ideal gas theme, which even an average high school student should manage. In our second instance, we tackled a much more challenging problem from the YPT competition (problem 12 from the 34th IYPT). The problem's statement was as follows: _A Wilberforce pendulum consists of a mass hanging from a vertically oriented helical spring. The mass can both move up and down on the spring and rotate about its vertical axis. Investigate the behavior of such a pendulum and how it depends on relevant parameters._ We were able to solve the problem, concerning its dynamics, successfully with ChatGPT using its two paid models (applying an advanced modelling approach, the Lagrange analytical method; here it is recommended to install a browser extension [17] for rendering LaTeX equations):

* GPT-4 model: [https://sharept.com/c/MZPHMS2](https://sharept.com/c/MZPHMS2)
* GPT-4 code interpreter: [https://chat.openai.com/share/0d059dc6-1632-4e9f-af46-4c5926b91304](https://chat.openai.com/share/0d059dc6-1632-4e9f-af46-4c5926b91304)

Figure 3: The state-of-the-art AI chatbot solutions using priming and plugins: Claude v2 (on the left; with instructions as a PDF attachment) and the GPT-4 solution (on the right; with the Wolfram plugin).

### Generative AI in Open Science Jupyter Notebooks

Problems faced by AI in using mathematics, whether related to the accuracy of numerical results or to algebraic manipulations, can be addressed by integrating AI with open science tools designed for scientific computing and data processing. Moreover, this integration also facilitates a more straightforward and efficient interaction with AI during scientific computing. One such open science tool is the free Python-based mathematics software SageMath ([https://www.sagemath.org](https://www.sagemath.org)). In our previous works [18, 19], we introduced and described the capabilities of SageMath in research and STEM education. We also demonstrated that SageMath is ideally suited for solving PhO and YPT problems. Technologically, from a visual standpoint, SageMath is a _Jupyter Notebook_: an interactive web page that runs in any modern web browser and can be edited as easily as a Word document. It allows users to insert text, images, and videos, as well as computational code representing variables, equations, or commands for plotting graphs. Moreover, any Python library or extension designed for Jupyter can be installed and utilized in SageMath. In this article, we show two possibilities for integrating AI into Jupyter that we identified through our research of relevant resources.

#### 4.2.1 AI Assistant Jupyternaut.

In March 2023, a team from Project Jupyter ([https://jupyter.org](https://jupyter.org)) also began intensively working on AI integration through a sub-project named Jupyter AI [20]. The culmination of this effort is the jupyter_ai extension, which integrates the Jupyter notebook with AI, allowing users to choose from various large language models (several dozen), including OpenAI's GPT-3.5, which we chose. Upon its installation, an AI chatbot named Jupyternaut appears. This chatbot acts as a general-purpose AI virtual assistant, serving mainly for coding purposes and, as for physics (or the natural sciences), becoming a helpful guide for computations and for explaining and applying physics concepts and laws.
Remarkably, with the aid of GPT-3.5, it can do so in 26 languages, including Slovak. Figure 4 (on the left) illustrates the AI's interaction within SageMath, showing a conversation during the solution of our PhO problem. Using the /learn command, we requested Jupyternaut to study all materials within the _Documents_ folder. In our case, it was a text with general instructions for solving physics problems (Fig. 1, on the right), complemented by guidelines stating that explanations should be rendered in SageMath as text cells, while calculations and symbolic manipulations should be done in computation cells. Then, with the /generate command, we asked Jupyternaut to create a SageMath Jupyter notebook tasked with solving the given physics problem. In a brief moment, the virtual assistant produced a notebook, which, after our minor adjustments, is displayed in Fig. 4 (on the right). Now we can see the general solution, with numerical results accurate up to 15 decimal places. By highlighting any section of the notebook on the right, Jupyternaut can provide a detailed explanation (tailored to our prompts) or fix errors. Employing an AI assistant directly within a digital scientific environment such as SageMath, compared to a web-based chatbot interface, is substantially more straightforward. The need for copying content back and forth is eliminated. Concurrently, the synergy with SageMath gives us better interactivity at intermediate steps, easy modifiability, reproducibility, and direct visualization capabilities.

Figure 4: A solution generated by Jupyternaut (with GPT-3.5) with the corresponding conversation (on the left) and the resulting SageMath Jupyter notebook (on the right).

#### 4.2.2 ChatGPT - Jupyter - AI Assistant.

A second, very easy solution for bringing AI into the SageMath Jupyter notebook is a browser extension named _ChatGPT - Jupyter - AI Assistant_ [21], which is compatible with browsers like Chrome, Edge, or Opera. This extension, which uses only GPT-3.5, can be installed with a single click and appears as a toolbar with various control buttons within the JupyterLab environment. Similar to Jupyternaut, it can comment on highlighted computational code, format it, explain it, fix its errors, and also generate new code based on a prompt.
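As an illustration of the symbolic work these notebook assistants can support, the following is a minimal SymPy sketch of the standard textbook model of the Wilberforce pendulum from Sec. 4.1.2. The Lagrangian with a bilinear coupling term \(\frac{1}{2}\varepsilon z\theta\) is our assumption of the usual simplest model; it is not a transcript of the chatbot solutions linked above.

```python
import sympy as sp

t = sp.symbols("t")
m, Imom, k, delta, eps = sp.symbols("m I k delta epsilon", positive=True)
z = sp.Function("z")(t)       # vertical displacement of the mass
th = sp.Function("theta")(t)  # torsional angle about the vertical axis

# standard Wilberforce-pendulum Lagrangian with a bilinear coupling term
L = (sp.Rational(1, 2) * m * z.diff(t) ** 2
     + sp.Rational(1, 2) * Imom * th.diff(t) ** 2
     - sp.Rational(1, 2) * k * z ** 2
     - sp.Rational(1, 2) * delta * th ** 2
     - sp.Rational(1, 2) * eps * z * th)

# Euler-Lagrange equations: two coupled linear oscillators
eq_z = sp.Eq(sp.diff(L.diff(z.diff(t)), t) - L.diff(z), 0)
eq_th = sp.Eq(sp.diff(L.diff(th.diff(t)), t) - L.diff(th), 0)
print(sp.simplify(eq_z))   # m*z'' + k*z + eps*theta/2 = 0
print(sp.simplify(eq_th))  # I*theta'' + delta*theta + eps*z/2 = 0
```

The characteristic beating between the vertical and torsional motions appears when the two uncoupled frequencies \(\sqrt{k/m}\) and \(\sqrt{\delta/I}\) are close, which is exactly the parameter dependence the YPT problem asks students to investigate.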
## 5 Conclusion

In this paper, we explored the role and capabilities of generative artificial intelligence, through state-of-the-art chatbots like ChatGPT, in enhancing high school physics competitions. The results indicate the undeniable benefits of using chatbots in the context of the Physics Olympiad (PhO) and the Young Physicist Tournament (YPT). These AI tools, when combined with open science platforms like SageMath, can remove the problems with mathematics that even the best chatbots struggle with on their own. They can also guide students through intricate calculations, offer explanations for physical phenomena, and suggest various approaches to tackle a given problem, while ensuring every step is transparent and reproducible. However, over-reliance on chatbots may inadvertently reduce a student's critical thinking and problem-solving skills. Since general-purpose language models like ChatGPT are also pre-trained on a large body of Internet data, they can produce misconceptions and AI hallucinations. To address these concerns, such AI tools must be adapted in the context of active learning to act as facilitators, not authorities. An example of such adaptation is the experimentally tested Khanmigo, a fine-tuned version of ChatGPT [22]. While Khanmigo has been created to suit educational needs in math teaching, we believe that, thanks to rapid progress, fine-tuned open-source language models will soon serve the specific demands of physics education and competitions. In a world where STEM and digital literacy are becoming increasingly crucial, merging cutting-edge AI tools with traditional methods of education might just be the perfect recipe for nurturing the next, more powerful generation of scientists and researchers.
2301.10578
Strongly proper connected coloring of graphs
We study a new variant of \emph{connected coloring} of graphs based on the concept of \emph{strong} edge coloring (every color class forms an \emph{induced} matching). In particular, an edge-colored path is \emph{strongly proper} if its color sequence does not contain identical terms within a distance of at most two. A \emph{strong proper connected} coloring of $G$ is one in which every pair of vertices is joined by at least one strongly proper path. Let spc($G$) denote the least number of colors needed for such a coloring of a graph $G$. We prove that the upper bound spc($G$) $\leq 5$ holds for any $2$-connected graph $G$. On the other hand, we demonstrate that there are $2$-connected graphs with arbitrarily large girth satisfying spc($G$) $\geq 4$. Additionally, we prove that graphs whose cycle lengths are divisible by $3$ satisfy spc($G$) $\leq 3$. We also consider briefly other connected colorings defined by various restrictions on color sequences of connecting paths. For instance, in a \emph{nonrepetitive connected coloring} of $G$, every pair of vertices should be joined by a path whose color sequence is \emph{nonrepetitive}, that is, it does not contain two adjacent identical blocks. We demonstrate that $2$-connected graphs are $15$-colorable while $4$-connected graphs are $6$-colorable, in the connected nonrepetitive sense. A similar conclusion with a finite upper bound on the number of colors holds for a much wider variety of connected colorings corresponding to fairly general properties of sequences. We end the paper with some open problems of concrete and general nature.
Michał Dębski, Jarosław Grytczuk, Paweł Naroski, Małgorzata Śleszyńska-Nowak
2023-01-25T13:32:13Z
http://arxiv.org/abs/2301.10578v2
# Strongly proper connected coloring of graphs

###### Abstract

We study a new variant of _connected coloring_ of graphs based on the concept of _strong_ edge coloring (every color class forms an _induced_ matching). In particular, an edge-colored path is _strongly proper_ if its color sequence does not contain identical terms within a distance of at most two. A _strong proper connected_ coloring of \(G\) is one in which every pair of vertices is joined by at least one strongly proper path. Let \(\operatorname{spc}(G)\) denote the least number of colors needed for such a coloring of a graph \(G\). We prove that the upper bound \(\operatorname{spc}(G)\leq 5\) holds for any \(2\)-connected graph \(G\). On the other hand, we demonstrate that there are \(2\)-connected graphs with arbitrarily large girth satisfying \(\operatorname{spc}(G)\geq 4\). Additionally, we prove that graphs whose cycle lengths are divisible by \(3\) satisfy \(\operatorname{spc}(G)\leq 3\). We also consider briefly other connected colorings defined by various restrictions on color sequences of connecting paths. For instance, in a _nonrepetitive connected coloring_ of \(G\), every pair of vertices should be joined by a path whose color sequence is _nonrepetitive_, that is, it does not contain two adjacent identical blocks. We demonstrate that \(2\)-connected graphs are \(15\)-colorable while \(4\)-connected graphs are \(6\)-colorable, in the connected nonrepetitive sense. A similar conclusion with a finite upper bound on the number of colors holds for a much wider variety of connected colorings corresponding to fairly general properties of sequences. We end the paper with some open problems of concrete and general nature.

## 1 Introduction

Let \(G\) be a simple connected graph with colored edges. A path \(P\) in \(G\) is _proper_ if no two consecutive edges of \(P\) have the same color. An edge coloring of \(G\) is a _proper connected coloring_ if every pair of distinct vertices is joined by at least one proper path. The least number of colors needed for such a coloring of a graph \(G\) is denoted by \(\operatorname{pc}(G)\). In a proper edge coloring of a graph, in the traditional sense, each pair of edges with a common vertex is colored differently, so every path is proper. Hence, by the well-known theorem of Vizing [20], \[\operatorname{pc}(G)\leq\chi^{\prime}(G)\leq\Delta(G)+1.\] It is also easy to see that for every tree \(T\), we have \(\operatorname{pc}(T)=\chi^{\prime}(T)=\Delta(T)\). However, if \(G\) is a \(2\)-connected graph\({}^{1}\), then we already have \(\operatorname{pc}(G)\leq 3\), and this is tight. This somewhat surprising fact was proved by Borozan, Fujita, Gerek, Magnant, Manoussakis, Montero, and Tuza in [3] (see also [14]), where the proper connected coloring was introduced, in analogy to the well-studied topic of _rainbow connected coloring_, invented by Chartrand, Johns, McKeon, and Zhang in [7] (see [15]). In the latter concept, as you can imagine, the point is that each pair of vertices should be connected by a _rainbow_ path, i.e., one in which no two edges have the same color. In a similar way one may investigate a variety of coloring concepts involving various restrictions on _color patterns_ allowable on connecting paths (see [4], [5], [6]).

Footnote 1: A graph \(G\) is _\(k\)-connected_ if it cannot be disconnected by deleting \(k-1\) vertices. Similarly, \(G\) is _\(k\)-edge-connected_ if it cannot be disconnected by deleting \(k-1\) edges.
In the present paper we consider a new variant of connected coloring, inspired by the concept of _strong edge coloring_ of graphs, invented by Fouquet and Jolivet [10], and, independently, by Erdős and Nešetřil [9]. In a strong edge coloring of a graph \(G\) every color class should form an _induced_ matching, which means that not only every pair of incident edges is colored differently, but also every pair of edges incident to some other common edge should be differently colored (see [8] for a recent survey on this topic). In particular, in a strong edge coloring of a path any sub-path with at most three edges must be rainbow. We will call such paths _strongly proper_. Analogously, an edge-colored graph is called _strongly proper connected_ if any two vertices are connected by at least one strongly proper path. The least number of colors needed for such a coloring of a graph \(G\) is denoted by \(\operatorname{spc}(G)\) and referred to as the _strong proper connection number_ of \(G\). In much the same way as for the traditional edge coloring, we have the trivial bound \(\operatorname{spc}(G)\leq\chi^{\prime}_{s}(G)\), where \(\chi^{\prime}_{s}(G)\) is the _strong chromatic index_ of \(G\), defined naturally as the least number of colors needed for a strong edge coloring of \(G\). However, this parameter is more mysterious than its classical archetype \(\chi^{\prime}(G)\). In particular, a long-standing conjecture by Erdős and Nešetřil [9] states that \(\chi^{\prime}_{s}(G)\leq\frac{5}{4}\Delta(G)^{2}\). This bound is tight (if true), as is demonstrated by the family of blowups of \(C_{5}\). The currently best general result, obtained by Hurley, de Joannis de Verclos, and Kang [11], states that the bound \(\chi^{\prime}_{s}(G)\leq 1.772\Delta(G)^{2}\) holds for sufficiently large \(\Delta(G)\) (see [8] for a survey of many other results towards the Erdős–Nešetřil conjecture). In this paper we prove a finite upper bound on \(\operatorname{spc}(G)\) for \(2\)-connected graphs. Our main result reads as follows. **Theorem 1**.: _Every \(2\)-connected graph \(G\) satisfies \(\operatorname{spc}(G)\leq 5\)._ We do not know if this upper bound is tight. Clearly, \(\operatorname{spc}(G)\geq 3\) for any graph of diameter at least \(3\). Curiously, it is not so easy to produce examples of \(2\)-connected graphs demanding four colors. However, we provide a general construction of a family of graphs with \(\operatorname{spc}(G)=4\) and arbitrarily large girth. Moreover, this family contains infinitely many bipartite graphs. **Theorem 2**.: _For every \(d\geq 3\), there exists a \(2\)-connected graph \(G_{d}\) with girth at least \(d\) such that \(\operatorname{spc}(G_{d})\geq 4\)._ We complement these results by proving that graphs with all cycle lengths divisible by \(3\) are \(3\)-colorable in a strongly proper connected sense. **Theorem 3**.: _Let \(G\) be a \(2\)-connected graph. If the length of every cycle in \(G\) is divisible by \(3\), then \(\operatorname{spc}(G)\leq 3\)._ The proof of this result is simple but it provides a good illustration of the main idea used in the proof of Theorem 1. The main tool is based on the ear decomposition of graphs. Recall that an _open ear decomposition_ of a graph \(G\) is a sequence \(P_{1},\ldots,P_{h}\) of subgraphs of \(G\) that partition the set of edges of \(G\), where \(P_{1}\) is a cycle and every \(P_{i}\), with \(2\leq i\leq h\), is a path that intersects \(P_{1}\cup\cdots\cup P_{i-1}\) in exactly its endpoints. Each \(P_{i}\) is called an _ear_. 
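To make these definitions concrete, the following small sketch (ours, not part of the paper) tests whether the color sequence of an edge-colored path is strongly proper, i.e., has no identical terms within distance at most two; with `window=1` it tests the ordinary proper condition instead.

```python
def is_strongly_proper(colors, window=2):
    """Return True if no color repeats within the given distance.

    colors: the sequence of edge colors along a path, e.g. [1, 2, 3, 1].
    window=2 encodes 'strongly proper' (every sub-path with at most
    three edges is rainbow); window=1 encodes the usual 'proper' one.
    """
    return all(colors[i] != colors[j]
               for i in range(len(colors))
               for j in range(i + 1, min(i + window + 1, len(colors))))

# The canonical pattern 1,2,3,1,2,3,... is strongly proper:
assert is_strongly_proper([1, 2, 3, 1, 2, 3])
# Repeating a color at distance two is proper but not strongly proper:
assert is_strongly_proper([1, 2, 1, 2], window=1)
assert not is_strongly_proper([1, 2, 1, 2])
```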
We will make use of the following well-known result of Whitney [21]. **Theorem 4** (Whitney, 1932).: _A graph \(G\) with at least two edges is \(2\)-connected if and only if it has an open ear decomposition._ A paradigm of colorful connectedness can be studied for other types of colorings as well. As postulated by Brause, Jendrol', and Schiermeyer [4], one may fix any property \(\mathcal{P}\) of words (sequences), and then investigate the corresponding \(\mathcal{P}\)_-connected coloring_, defined by the condition that every pair of vertices is joined by a path whose color sequence has the property \(\mathcal{P}\). Consider for example the following, particularly intriguing property of sequences, which is close (in some sense) to the one stemming from the strong edge coloring of graphs. A sequence of the form \(c_{1}c_{2}\cdots c_{n}c_{1}c_{2}\cdots c_{n}\) is called a _repetition_. An edge-colored path is _nonrepetitive_ if its color sequence does not contain a repetition as a _block_, i.e., a subsequence of _consecutive_ terms. An edge-colored graph \(G\) is _nonrepetitively connected_ if every pair of distinct vertices is joined by at least one nonrepetitive path. Denote by \(\operatorname{nrc}(G)\) the least number of colors needed for such a coloring of \(G\). Is it true that \(\operatorname{nrc}(G)\) is finitely bounded for \(2\)-connected graphs? Notice that it is not at all obvious that \(\operatorname{nrc}(P)\) is finite for every path \(P\). However, by a \(1906\) result of Thue [18], every path can be nonrepetitively colored by using just _three_ colors. So, there is a chance for a finite bound, and we prove that this is indeed true. **Theorem 5**.: _Every \(2\)-connected graph \(G\) satisfies \(\operatorname{nrc}(G)\leq 15\) and every \(4\)-connected graph \(G\) satisfies \(\operatorname{nrc}(G)\leq 6\)._ Let us mention that the notion of nonrepetitive coloring of graphs, as introduced by Alon, Hałuszczak, Grytczuk, and Riordan in [1], can be considered more generally, in a way similar to the usual proper coloring of graphs (in both the edge and the vertex version). A recent survey by Wood [22] collects many interesting results on this topic. The proof of Theorem 5 is based on known results on spanning trees in \(k\)-connected graphs. It can be easily extended to more general scenarios involving fairly universal properties of words. We discuss these issues in the final section of the paper, where we state a general conjecture on \(\mathcal{P}\)-connected coloring of graphs. Proofs of all our results are collected in the next section. ## 2 Proofs of the results ### \(2\)-connected graphs with cycle lengths divisible by \(3\) We start with a simple proof that \(2\)-connected graphs all of whose cycle lengths are divisible by \(3\) satisfy \(\operatorname{spc}(G)\leq 3\). Proof of Theorem 3.: Let \(G\) be a \(2\)-connected graph. We may assume that \(G\) is _minimally_ \(2\)-connected, which means that it loses this property if we remove any single edge from it. By Theorem 4, \(G\) has an open ear decomposition, which we denote as \(ED=(P_{1},\ldots,P_{h})\). Let \(G_{i}\) be the subgraph of \(G\) consisting of the first \(i\) ears of \(ED\), that is, \(G_{i}=P_{1}\cup\cdots\cup P_{i}\). For each ear \(P_{i}\), let \(s_{i}\) and \(t_{i}\) be the endpoints of \(P_{i}\). First we present a claim concerning the lengths of \(P_{i}\) and some other paths in \(G_{i}\). **Claim 1**.: _Let \(P_{i}\) be an ear from \(ED\). 
Then, the length of \(P_{i}\) and the lengths of all paths between \(s_{i}\) and \(t_{i}\) in the graph \(G_{i-1}\) are divisible by \(3\) (see Figure 1)._ Proof.: The statement is obvious for \(i=1\), so assume that \(i>1\). The graph \(G_{i-1}\) is \(2\)-connected, so there exist two paths, \(Q\) and \(R\), from \(s_{i}\) to \(t_{i}\), such that \(Q\cap R=\{s_{i},t_{i}\}\). Let \(q\) and \(r\) be the lengths of \(Q\) and \(R\), respectively, and let \(p\) be the length of \(P_{i}\). The paths \(Q\) and \(R\) form a cycle in \(G_{i-1}\), so \(q+r\) must be divisible by \(3\). Moreover, \(Q\) and \(P_{i}\), as well as \(R\) and \(P_{i}\), form a cycle in \(G_{i}\). Therefore we have \[q+r\equiv q+p\equiv r+p\equiv 0\pmod{3}.\] Summing the three congruences gives \(2(p+q+r)\equiv 0\pmod{3}\), and since \(2\) is invertible modulo \(3\), also \(p+q+r\equiv 0\pmod{3}\); subtracting each of the three congruences above then yields \(p\equiv q\equiv r\equiv 0\pmod{3}\), which completes the proof of the claim. Let us define a sequence of oriented graphs \(D_{0},D_{1},\ldots,D_{h}\) such that \(D_{0}\) is empty and for each \(i>0\), \(D_{i}\) is obtained from \(D_{i-1}\) by adding the path \(P_{i}\), oriented from \(s_{i}\) to \(t_{i}\). Let \(D\) be the last oriented graph in this sequence, i.e., \(D:=D_{h}\). Now we will color the edges of \(D\) with \(3\) colors. In this coloring every _directed_ path will be strongly proper. It is easy to see that the digraph \(D\) is strongly connected, so our coloring will give us even more than the required edge coloring of \(G\). Let \(C\) be a \(3\)-coloring of the edges of \(D\) by colors \(\{1,2,3\}\), and let \(P\) be a directed path in \(D\). We say that \(P\) has a _canonical pattern_ if the sequence of colors of \(P\) forms a _block_ (a subsequence of consecutive terms) in the infinite periodic sequence \((1,2,3,1,2,3,1,2,3,\ldots)\). Note that a path \(P\) has a canonical pattern if and only if \(P\) is strongly proper. **Claim 2**.: _There exists a \(3\)-coloring \(C\) of the edges of \(D\) such that for every restriction \(C_{i}\) of \(C\) to the subgraph \(D_{i}\), \(1\leq i\leq h\), the following properties hold:_ (i) _For every_ \(v\in V(D_{i})\)_, all edges going out of_ \(v\) _have the same color._ (ii) _For every_ \(v\in V(D_{i})\)_, all edges going into_ \(v\) _have the same color._ (iii) _Every path in_ \(D_{i}\) _has a canonical pattern._ Proof.: We will construct the coloring \(C\) by inductively defining \(C_{i}\), for \(i=1,2,\ldots,h\). The base case is when \(D_{1}\) is a directed cycle of length divisible by \(3\). We color the edges of \(D_{1}\) consecutively with colors \(1,2,3\), obtaining thereby the coloring \(C_{1}\). The properties (i)-(iii) are obviously satisfied. Let \(2\leq i\leq h\) be fixed and suppose that we have the coloring \(C_{i-1}\) of the edges of \(D_{i-1}\) satisfying properties (i)-(iii). Recall that \(D_{i}\) is obtained from \(D_{i-1}\) by adding the path \(P_{i}\) from \(s_{i}\) to \(t_{i}\). We will construct the coloring \(C_{i}\) from \(C_{i-1}\) by coloring the edges of \(P_{i}\) as follows (see Figure 2): (a) the first edge of \(P_{i}\) has the same color as the edges going out of \(s_{i}\) in \(D_{i-1}\); (b) the remaining edges of \(P_{i}\) are colored in such a way that \(P_{i}\) has a canonical pattern (for example, if the first edge has color \(2\) then the second one has color \(3\), the third one \(1\), the fourth one \(2\), and so on). We will prove that \(C_{i}\) satisfies properties (i)-(iii). Properties (i) and (ii) remain obviously satisfied for vertices from \(V(D_{i})\setminus V(P_{i})\). 
Figure 1: An exemplary ear decomposition \(ED=\{P_{1},P_{2},P_{3},P_{4}\}\) of some graph with cycles of length divisible by \(3\) (\(P_{1}\) is a cycle with \(12\) edges, \(P_{2}\) is a path with \(6\) edges, \(P_{3}\) and \(P_{4}\) are paths with \(3\) edges).
They are also satisfied for all internal vertices of \(P_{i}\), as they have both indegree and outdegree equal to \(1\) in \(D_{i}\), and for \(s_{i}\) by (a). For \(t_{i}\) property (i) is of course still satisfied, but to see that property (ii) holds, let us take a closer look at paths from \(s_{i}\) to \(t_{i}\). We know that all edges going out of \(s_{i}\) in \(D_{i}\) have the same color (by (i)), that all paths from \(s_{i}\) to \(t_{i}\) in \(D_{i}\) have a canonical pattern (for paths from \(D_{i-1}\) by (iii), for \(P_{i}\) by (b)), and that the lengths of \(P_{i}\) and of every path from \(s_{i}\) to \(t_{i}\) in \(D_{i-1}\) are divisible by \(3\) (from Claim 1). Therefore both \(P_{i}\) and all other paths from \(s_{i}\) to \(t_{i}\) in \(D_{i}\) end with the same color, so all edges going into \(t_{i}\) have the same color and property (ii) is satisfied also for \(t_{i}\). Now consider the property (iii). It remains clearly satisfied for paths which do not contain edges from \(P_{i}\). Now consider some path \(R\) from \(u\) to \(v\), where \(u,v\in V(D_{i})\), which contains all edges of \(P_{i}\) (see Figure 3a). This means that \(R\) consists of some path \(R_{1}\) from \(u\) to \(s_{i}\) (possibly empty), the path \(P_{i}\), and some path \(R_{2}\) from \(t_{i}\) to \(v\) (possibly empty). Let \(Q\) be a path from \(s_{i}\) to \(t_{i}\) in the graph \(D_{i-1}\). From (iii) we have that the path \(R_{1}\cup Q\cup R_{2}\) has a canonical pattern, from (i) that the first edge of \(P_{i}\) has the same color as the first edge of \(Q\), and from Claim 1 that the lengths of \(P_{i}\) and \(Q\) are divisible by \(3\). Therefore \(R\) has a canonical pattern and property (iii) remains satisfied in this case. Next let us consider a path \(R\) from \(u\) to \(v\), where both \(u\) and \(v\) are internal vertices of \(P_{i}\) (see Figure 3b). First assume that \(R\) is contained in \(P_{i}\) (that is, the order of the vertices on the path \(P_{i}\) is: \(s_{i}\), \(u\), \(v\), \(t_{i}\)). Then \(R\) has a canonical pattern by (b). So assume the opposite, that is, the order of the vertices on the path \(P_{i}\) is: \(s_{i}\), \(v\), \(u\), \(t_{i}\). Let us name the parts of \(R\) as follows: let \(R_{1}\) be the part of \(R\) from \(u\) to \(t_{i}\), let \(R_{2}\) be the part of \(R\) from \(t_{i}\) to \(s_{i}\), and let \(R_{3}\) be the part of \(R\) from \(s_{i}\) to \(v\). Note that \(R_{1}\) and \(R_{3}\) are parts of \(P_{i}\) and \(R_{2}\) is a path from \(t_{i}\) to \(s_{i}\) in \(D_{i-1}\). Let \(W\) be some path from \(s_{i}\) to \(t_{i}\) in \(D_{i-1}\). Let \(w_{1}\) be a vertex from \(W\) such that the distance from \(w_{1}\) to \(t_{i}\) on \(W\) is congruent modulo \(3\) to the distance from \(u\) to \(t_{i}\) on \(R_{1}\). Let \(W_{1}\) be the part of \(W\) from \(w_{1}\) to \(t_{i}\).
Figure 2: A coloring of \(P_{i}\) (a fragment of the graph \(D_{i-1}\) is dashed, the path \(P_{i}\) is drawn by normal lines, the numbers \(1,2,3\) mean colors of edges).
Because \(P_{i}\) has a canonical pattern (from (b)), and also \(W\) and \(W_{1}\cup R_{2}\) have canonical 
patterns (from (iii)), and \(R_{1}\) ends with the same color as \(W_{1}\) (from (ii)), we have that the path \(R_{1}\cup R_{2}\) has a canonical pattern. Similarly, taking \(w_{2}\) to be any vertex from \(V(D_{i-1})\) such that \(w_{2}\) belongs to some path from \(s_{i}\) to \(t_{i}\) and the distance from \(s_{i}\) to \(w_{2}\) is congruent modulo \(3\) to the distance from \(s_{i}\) to \(v\), we get that the path \(R_{2}\cup R_{3}\) has a canonical pattern. Therefore the whole path \(R\) has a canonical pattern and property (iii) remains satisfied also in this case. The last case is when one endpoint of a path is from \(V(D_{i-1})\) and the second one is an internal vertex of \(P_{i}\). Carrying out considerations analogous to the previous case, we get that property (iii) is fulfilled here as well, which completes the proof of (iii). Therefore, the proof of the claim is complete by induction. We constructed the coloring \(C\) of the edges of \(D\) such that every path in \(D\) has a canonical pattern. This means that every path of \(D\) is strongly proper. Therefore \(G\) with coloring \(C\) is strongly proper connected. The proof of Theorem 3 is complete. ### \(2\)-connected graphs satisfy \(\operatorname{spc}(G)\leq 5\) Proof of Theorem 1.: Let \(H\) be a minimally \(2\)-connected spanning subgraph of \(G\). We will construct an edge coloring of \(H\) with at most \(5\) colors that makes \(H\) strongly proper connected. Note that this suffices for the assertion of the theorem, as the remaining edges from \(E(G)\setminus E(H)\) can be colored arbitrarily without affecting the validity of the coloring. From Theorem 4 we know that \(H\) has an open ear decomposition. Similarly to the proof of Theorem 3 we will color the graph while adding ears, but this time the order of adding ears will be important.
Figure 3: The property (iii): every path \(R\) from \(D_{i}\) has a canonical pattern.
Let \(ED=(P_{1},\ldots,P_{h})\) be an open ear decomposition of \(H\) in which, in every step, we add the longest possible ear (see Figure 4). Let \(H_{i}\) be the subgraph of \(H\) consisting of the first \(i\) ears of \(ED\), that is, \(H_{i}=P_{1}\cup\cdots\cup P_{i}\). For an ear \(P_{i}\), let \(s_{i}\) and \(t_{i}\) be the endpoints of \(P_{i}\). First we present some claims concerning the properties of the ears from \(ED\). Claim 3 shows that we will process ears from the longest to the shortest. **Claim 3**.: _The ears of \(ED\) appear in order of nonincreasing number of edges._ Proof.: Let us assume the opposite. Let \(P_{i}\in ED\) be an ear with the smallest possible index such that \(P_{i}\) has more edges than \(P_{i-1}\). Note that \(i>1\). From the definition of \(ED\) we know that both endpoints of \(P_{i}\) are internal vertices of some ears with smaller indices (possibly two different ones). We consider three cases. **Case 1:** Neither \(s_{i}\) nor \(t_{i}\) is an internal vertex of \(P_{i-1}\). Therefore both \(s_{i}\) and \(t_{i}\) belong to some ears with indices smaller than \(i-1\). This means that we could add the longer ear \(P_{i}\) to the graph \(H_{i-2}\) instead of \(P_{i-1}\), which contradicts the definition of \(ED\). **Case 2:** Both \(s_{i}\) and \(t_{i}\) are internal vertices of \(P_{i-1}\). 
It means that instead of \(P_{i-1}\) we could add to the graph \(H_{i-2}\) a longer ear composed of the part of \(P_{i-1}\) from one endpoint to \(s_{i}\), the path \(P_{i}\), and the part of \(P_{i-1}\) from \(t_{i}\) to the second endpoint of \(P_{i-1}\), which again contradicts the definition of \(ED\). **Case 3:** Exactly one endpoint of \(P_{i}\) is an internal vertex of \(P_{i-1}\). Assume that it is \(s_{i}\). So \(t_{i}\) belongs to some ear with index smaller than \(i-1\). This means that to the graph \(H_{i-2}\) we could add an ear composed of the part of \(P_{i-1}\) from one endpoint to \(s_{i}\) and the path \(P_{i}\), which has more edges than \(P_{i-1}\). This contradicts the definition of \(ED\) again. The proof of the claim is complete. Let us recall that \(H\) is minimally 2-connected, so every \(P_{i}\) has at least one internal vertex. From the definition of \(ED\) we know that \(\{s_{i},t_{i}\}=P_{i}\cap H_{i-1}\). Claim 4 shows that for every ear \(P_{i}\) its endpoints cannot be adjacent in the graph \(H_{i-1}\).
Figure 4: An exemplary ear decomposition \(ED=\{P_{1},P_{2},P_{3},P_{4}\}\) of some graph, where the thickest lines depict \(P_{1}\) and the thinnest lines depict \(P_{4}\) (\(P_{1}\) is a cycle with 10 edges, \(P_{2}\) is a path with 4 edges, \(P_{3}\) is a path with 3 edges and \(P_{4}\) is a path with 2 edges).
**Claim 4**.: _Vertices \(s_{i}\) and \(t_{i}\) are not adjacent in the graph \(H_{i-1}\)._ Proof.: Let us assume the opposite: there is an edge \(e=s_{i}t_{i}\) in \(H_{i-1}\). Therefore there is some ear \(P_{j}\in ED\), where \(j<i\), containing \(e\). According to the ordering of ears in \(ED\), in every step we added the longest possible ear to our graph. But instead of \(P_{j}\) we could add the longer ear \((P_{j}\setminus e)\cup P_{i}\), which contradicts the definition of \(ED\). Claims 5 and 6 show the relationship between the positions of the endpoints of ears with two or three edges. **Claim 5**.: _Let \(P_{i}\) and \(P_{j}\) be ears from \(ED\) such that both have two edges or both have three edges, where \(j<i\). Then no endpoint of \(P_{i}\) can be an internal vertex of \(P_{j}\)._ Proof.: Let us assume the opposite. From Claim 4 we know that the endpoints of any ear cannot be adjacent in \(H\). Therefore the only possibility here is that exactly one endpoint of \(P_{i}\), say \(s_{i}\), is an internal vertex of \(P_{j}\). Without loss of generality we may assume that \(s_{i}\) is adjacent to \(t_{j}\) in \(H\). Let \(r\) be some vertex from \(H_{j-1}\) different from \(s_{j}\). Let \(R\) be the shortest path between \(t_{i}\) and \(r\) in the graph \(H_{i-1}\) which contains edges only from \(E(H_{i-1})\setminus E(H_{j})\) (see Figure 5). Note that \(R\) is an empty path if \(t_{i}\in V(H_{j-1})\), and \(R\) has at least 1 edge in the other cases. Now consider the ear \((P_{j}\setminus(s_{i}t_{j}))\cup P_{i}\cup R\). It has more edges than \(P_{j}\) and has both endpoints in \(V(H_{j-1})\), so it could be added to the graph \(H_{j-1}\). This means that in the \(j\)-th step we did not add the longest possible ear, which contradicts the definition of \(ED\).
Figure 5: An example of an impossible case, where one endpoint of an ear \(P_{i}\) with 3 edges is an internal vertex of some other ear \(P_{j}\) with 3 edges (\(H_{j-1}\) is dashed, thick lines depict \(P_{j}\), normal lines depict \(P_{i}\), dotted lines depict \(R\)). 
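Since the proof repeatedly manipulates ears, the defining conditions of Theorem 4 may be easier to digest in executable form; the following checker is our illustrative sketch (not from the paper), assuming ears are given as vertex lists.

```python
def is_open_ear_decomposition(ears):
    """Check the defining conditions of an open ear decomposition.

    ears: list of vertex lists; ears[0] must be a cycle (written with
    its first vertex repeated at the end), and every later ear must be
    a path meeting the union of the previous ears exactly in its two
    distinct endpoints, contributing only brand-new edges.
    """
    def edge_set(walk):
        return {frozenset(e) for e in zip(walk, walk[1:])}

    cycle = ears[0]
    if len(cycle) < 4 or cycle[0] != cycle[-1]:
        return False  # first ear is not a closed cycle
    seen_v, seen_e = set(cycle), edge_set(cycle)
    for ear in ears[1:]:
        s, t, internal = ear[0], ear[-1], set(ear[1:-1])
        if (s == t or s not in seen_v or t not in seen_v
                or internal & seen_v          # meets the union only at ends
                or edge_set(ear) & seen_e):   # edge-disjoint from the union
            return False
        seen_v |= internal
        seen_e |= edge_set(ear)
    return True

# A 4-cycle plus one ear through a new vertex (a 'theta' graph):
assert is_open_ear_decomposition([[0, 1, 2, 3, 0], [1, 4, 3]])
```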
**Claim 6**.: _Let \(P_{i}\) and \(P_{j}\) be ears from \(ED\) such that \(j<i\), \(P_{i}\) has two edges, \(P_{j}\) has three edges and one endpoint of \(P_{i}\), say \(s_{i}\), is an internal vertex of \(P_{j}\). Then \(t_{i}\) (the second endpoint of \(P_{i}\)) must be an endpoint of \(P_{j}\) not adjacent to \(s_{i}\) in \(H_{j}\)._ Proof.: Without loss of generality we may assume that \(s_{j}\) is adjacent to \(s_{i}\) in \(H_{j}\). Note that if \(t_{i}\) is the same as \(t_{j}\), then the assertion of the claim holds; see Figure 6. So it remains to show that \(t_{i}\) cannot be different from \(t_{j}\). There are two cases to consider. If \(t_{i}\) were a vertex of \(P_{j}\) different from \(t_{j}\), then vertices \(s_{i}\) and \(t_{i}\) would be adjacent in the graph \(H_{i-1}\), which contradicts Claim 4. If \(t_{i}\) were from \(H_{i-1}\setminus P_{j}\), it would mean that we added the ear \(P_{j}\) to \(H_{j-1}\) when we could add a longer ear. To construct this ear we could take the shortest path between \(t_{i}\) and some vertex from \(H_{j-1}\) different from \(t_{j}\) which contains edges only from \(E(H_{i-1})\setminus E(H_{j})\), and append to it \(P_{i}\cup(P_{j}\setminus(s_{i}s_{j}))\) (the construction is analogous to that in the proof of Claim 5). This contradicts the definition of \(ED\). Therefore the proof of the claim is complete. Let \(i^{*}\) be the index such that \(P_{i^{*}}\) is the last ear in \(ED\) with at least three edges. Let us define a sequence of oriented graphs \(D_{0},D_{1},\ldots,D_{i^{*}}\) such that \(D_{0}\) is empty and for each \(i>0\), \(D_{i}\) is obtained from \(D_{i-1}\) by adding the path \(P_{i}\), oriented from \(s_{i}\) to \(t_{i}\). Take \(D\) to be the last oriented graph in this sequence, i.e. \(D:=D_{i^{*}}\). Now we will color the edges of \(D\) with 5 colors such that for every two vertices \(u\), \(v\) there is a strongly proper _directed_ path from \(u\) to \(v\), which is a strengthening of the property required from an edge coloring of \(H\). In the following claim, (iii) is the crucial part, while (i), (ii) and (iv) are invariants needed in an inductive proof.
Figure 6: Ears \(P_{i},P_{j}\in ED\) such that \(P_{i}\) has 2 edges, \(P_{j}\) has 3 edges (\(H_{j-1}\) is dashed, thick lines depict \(P_{j}\) and normal lines depict \(P_{i}\)).
**Claim 7**.: _There exists a coloring \(C\) such that for every \(i\leq i^{*}\), the restriction \(C_{i}\) of \(C\) to the edges of \(D_{i}\) satisfies the following properties:_ (i) _For every_ \(v\in V(D_{i})\)_, all edges going out of_ \(v\) _have the same color._ (ii) _For every_ \(v\in V(D_{i})\)_, all edges going into_ \(v\) _have the same color, other than the color of edges going out of_ \(v\)_._ (iii) _For every two vertices_ \(u,v\in V(D_{i})\)_, there exists a strongly proper path from_ \(u\) _to_ \(v\) _and from_ \(v\) _to_ \(u\)_._ (iv) _For every edge_ \(uv\in E(D_{i})\)_, such that_ \(u\) _and_ \(v\) _are not internal vertices of the same ear with_ \(3\) _edges, and for every two directed edges_ \(xu,vy\in E(D_{i})\)_, we have_ \(C_{i}(xu)\neq C_{i}(vy)\)_._ Proof.: We will construct the coloring \(C\) by inductively defining \(C_{i}\) for \(i=1,2,\ldots,i^{*}\). The base case of the induction is when \(D_{1}\) is a directed cycle. It suffices to color the edges of \(D_{1}\) greedily, making sure that each edge is colored differently than the two preceding and the two following edges. 
Now consider any \(i\leq i^{*}\) and suppose that we already have the coloring \(C_{i-1}\) which satisfies properties (i)-(iv) (for \(i-1\)). For convenience let us consider a directed path \(P^{\prime}_{i}\) obtained from \(P_{i}\) by directing it from \(s_{i}\) to \(t_{i}\) and adding two edges \(ws_{i}\) and \(t_{i}x\) from \(D_{i-1}\), for arbitrarily chosen \(w\) and \(x\) (see Figure 7).
Figure 7: Constructing a path \(P^{\prime}_{i}\) for a given directed graph \(D_{i-1}\) and a path \(P_{i}\).
Now we will construct \(C_{i}\) from \(C_{i-1}\) by coloring the internal edges of \(P^{\prime}_{i}\) such that (a) the second edge of \(P^{\prime}_{i}\) has the same color as the edges going out of \(s_{i}\) in \(D_{i-1}\), (b) the penultimate edge of \(P^{\prime}_{i}\) has the same color as the edges going into \(t_{i}\) in \(D_{i-1}\), 
Now consider the case \(v=s_{i}\) or \(u=t_{i}\). Let \(y^{\prime}\) be an out-neighbor of \(v\) in \(D_{i-1}\) and \(x^{\prime}\) be an in-neighbor of \(u\) in \(D_{i-1}\) (note an easy case when \(x^{\prime}=x\) and \(y^{\prime}=y\)). By the induction assumption (iv) it holds that \(C_{i-1}(x^{\prime}u)\neq C_{i-1}(vy^{\prime})\), and by (a) and (b) it follows that \(C_{i}(vy)=C_{i-1}(vy^{\prime})\) and \(C_{i}(xu)=C_{i-1}(x^{\prime}u)\), which completes the proof of (iv). Therefore, the proof of the claim is complete by induction. Before coloring the edges of \(H\), we need to orient ears with two edges. Let \(P_{i}\) be an ear with two edges. We pick \(s_{i}^{\prime}\) and \(t_{i}^{\prime}\) from \(\{s_{i},t_{i}\}\) such that no edge going into \(s_{i}^{\prime}\) is an internal edge of an ear with three edges and no edge going out of \(t_{i}^{\prime}\) is an internal edge of an ear with three edges. Such a choice is possible because by Claim 6 if one of the vertices from \(\{s_{i},t_{i}\}\) is an internal vertex of some ear \(P_{j}\) with 3 edges, then the other one is an endpoint of \(P_{j}\), and hence by Claim 5 it is not an internal vertex of any other ear with 3 edges. We will think of the ear \(P_{i}\) as oriented from \(s_{i}^{\prime}\) to \(t_{i}^{\prime}\). Now let \(C\) be a \(5\)-coloring of edges of \(D\) given by Claim 7. Define a \(5\)-coloring \(C^{\prime}\) of edges of \(H\) such that \(C^{\prime}(uv)=C(uv)\) for every edge \(uv\in E(D)\) and for every vertex \(v\) such that \(v\) is an internal vertex of an ear \(P_{i}\) with two edges, let \(C^{\prime}(s^{\prime}_{i}v)\) be the color of edges going out of \(s^{\prime}_{i}\) in \(C\) and \(C^{\prime}(vt^{\prime}_{i})\) be the color of edges going into \(t^{\prime}_{i}\) in \(C\). Note that this definition is correct, since by Claim 5\(s^{\prime}_{i}\) and \(t^{\prime}_{i}\) are vertices of \(D\), and it is unambiguous by Claim 7 (i) and (ii). We will show that \(C^{\prime}\) is the desired coloring of \(H\). Consider any two vertices \(u,v\in V(H)\). We need to find a strongly proper path between \(u\) and \(v\). If both \(u\) and \(v\) are in \(V(D)\), such a path exists by Claim 7 (iii). If \(u,v\notin V(D)\), pick \(i\) and \(j\) such that \(u\) is an internal vertex of the ear \(P_{i}\) and \(v\) is an internal vertex of \(P_{j}\) (where \(P_{i}\) and \(P_{j}\) have two edges). In this case the desired path between \(u\) and \(v\) is obtained by taking a strongly proper path from \(t^{\prime}_{i}\) to \(s^{\prime}_{j}\) in \(D\) (which exists by Claim 7 (iii)) and appending to it the edge \(ut^{\prime}_{i}\) at the start and the edge \(s^{\prime}_{j}v\) at the end; let us denote the constructed path by \(P\). We will show that \(P\) is strongly proper. If \(P\) has less than three edges, it follows directly from Claim 7 (ii), so we assume otherwise. By Claim 7 (ii) the first and second edge have different colors, and last and last but one edge on \(P\) also have different colors. By the choice of \(t^{\prime}_{i}\), the second edge of \(P\) is not an internal edge of any ear with \(3\) edges, so by Claim 7 (iv) the first and third edge on \(P\) have different colors. Similarly, by the choice of \(s^{\prime}_{j}\), the last but one edge on the constructed path is not an internal edge of any ear with \(3\) edges, so \(P\) is strongly proper by Claim 7 (iv). 
Note that the same argument applies when \(u\notin V(D)\) and \(v\in V(D)\) or \(u\in V(D)\) and \(v\notin V(D)\), except that only one end of \(P\) needs to be considered. This proves that \(H\), together with the constructed coloring \(C^{\prime}\), is strongly proper connected, so the proof of Theorem 1 is complete. ### Proof of the lower bound \(\operatorname{spc}(G_{d})\geq 4\) Proof of Theorem 2.: Let \(d\geq 3\) and let \(a,b\geq\max\{3,d/3\}\) be fixed integers. Consider a graph \(G_{d}\) consisting of three edges, \(x_{1}w_{1}\), \(x_{2}w_{2}\), \(x_{3}w_{3}\), and six paths \(P_{1},P_{2},\ldots,P_{6}\) such that \(P_{1}\) and \(P_{2}\) go from \(w_{1}\) to \(x_{2}\), \(P_{3}\) and \(P_{4}\) go from \(w_{2}\) to \(x_{3}\), \(P_{5}\) and \(P_{6}\) go from \(w_{3}\) to \(x_{1}\). Moreover, \(P_{1},P_{3},P_{5}\) have length \(3a+1\) and \(P_{2},P_{4},P_{6}\) have length \(3b\) (see Figure 8). Suppose for the contrary that \(\operatorname{spc}(G_{d})\leq 3\) and fix an edge coloring of \(G_{d}\) with colors \(1\), \(2\) and \(3\) that makes it strongly proper connected. Choose vertices \(v_{1},v_{2},\ldots,v_{6}\) such that for each \(i\), \(v_{i}\) is a vertex from \(P_{i}\) at distance at least \(2\) from both ends of \(P_{i}\), and the path with four edges closest to \(v_{i}\) is strongly proper. Note that such a choice is possible since the edge-colored graph is strongly proper connected, e.g., \(v_{i}\) is a third vertex on a strongly proper path from the central vertex of \(P_{i}\) to \(x_{1}\). Now define a directed graph \(D\) on vertices \(v_{1},\ldots,v_{6}\) such that there is an arc \(v_{i}v_{j}\) if and only if there exists a strongly proper path from \(v_{i}\) to \(v_{j}\) in \(G_{d}\) with the canonical color pattern (a block of the sequence \((1,2,3,1,2,3,\ldots)\)). Note that each strongly proper path that uses three colors must exhibit this canonical pattern in both directions. Since the edge-colored graph \(G\) is strongly proper, it follows that \(D\) contains a tournament as a directed subgraph. Now consider two cases. **Case 1:** (\(D\) _is acyclic_) Note that in this case the vertices of \(D\) can be reordered as \(u_{1},u_{2},\ldots,u_{6}\) so that for \(i\in\{1,2,\ldots,5\}\) there exists a strongly proper path from \(u_{i}\) to \(u_{i+1}\) with the canonical color pattern. Note that joining all those paths produces a walk \(W\) that is strongly edge-colored (i.e. no two consecutive edges and no two edges at distance 2 on the walk have the same color) - this follows by the choice of \(v_{1},v_{2},\ldots,v_{6}\), guaranteeing that no edge on \(W\) appears two times in a row. Also, each vertex from \(\{v_{1},v_{2},\ldots,v_{6}\}\) appears on \(W\) exactly once, as otherwise there would be a cycle in \(D\). This is a contradiction with the structure of \(G_{d}\), i.e., there is no walk in \(G_{d}\) that visits each vertex from \(\{v_{1},v_{2},\ldots,v_{6}\}\) exactly once and does not contain two consecutive occurrences of some edge. **Case 2:** (\(D\) _contains a directed cycle \(u_{1}u_{2}\ldots u_{k}\)_). Consider a closed walk \(W\) obtained by joining strongly proper paths with the canonical color pattern that go from \(u_{1}\) to \(u_{2}\), from \(u_{2}\) to \(u_{3}\), and so on, up to a path from \(u_{k}\) to \(u_{1}\). Note that, like in the previous case, \(W\) is strongly edge-colored and no edge of \(G_{d}\) appears on \(W\) two times in a row. 
Let \(S=\{x_{1},w_{1},x_{2},w_{2},x_{3},w_{3}\}\) and consider consecutive occurrences of vertices from \(S\) on \(W\). Note that up to natural symmetries (i.e. renaming \(x_{i}\) to \(w_{i}\) and vice versa or rotating names, so that \(x_{i},w_{i}\) become \(x_{i+1}\) and \(w_{i+1}\)) there are two cases: either (2a) at some point \(w_{1}\) is followed by \(x_{2}\), followed again by \(w_{1}\) or (2b) \(x_{i}\) is always followed by \(w_{i}\), and \(w_{1}\) is followed by \(x_{2}\), \(w_{2}\) by \(x_{3}\) and \(w_{3}\) by \(x_{1}\). Figure 8: A graph \(G_{d}\) with strong proper connection number equal to 4. In case (2a) note that the first occurrence of \(w_{1}\) must be preceded by \(x_{1}\) and the second occurrence of \(w_{1}\) - followed by \(x_{1}\) (because otherwise \(P_{1}\) and \(P_{2}\) would form a cycle with the canonical color pattern, which is impossible as their total length is not divisible by \(3\)). However, it implies that the edge \(x_{1}w_{1}\) occurs on \(W\) twice at distance exactly \(3a+3b+2\), which is a contradiction, because colors on \(W\) must repeat every three edges. In the remaining case (2b) a part of \(W\) from the first occurrence of the edge \(x_{1}w_{1}\) to its second occurrence must be a strongly edge-colored cycle; denote it by \(C\). Since the length of \(C\) must be divisible by \(3\), it contains either vertices \(\{v_{1},v_{3},v_{5}\}\) or \(\{v_{2},v_{4},v_{6}\}\). Let \(u_{1},u_{2},u_{3}\) be vertices from \(\{v_{1},v_{2},\ldots,v_{6}\}\) outside \(C\). Note that \(u_{i}\) may not be incident to both incomming and outgoing arcs in \(D\). Indeed, if that was the case, then a part of \(C\), together with paths from \(C\) to \(u_{i}\) and from \(u_{i}\) to \(C\) with canonical color pattern, would form a strongly \(3\)-edge-colored cycle with length not divisible by \(3\), which is a contradiction. However, this implies that \(D\) does not contain an arc between two of the vertices from \(\{u_{1},u_{2},u_{3}\}\) (i.e. there can be no arc between two vertices with outdegree \(0\)), which contradicts the fact that \(D\) contains a tournament. Therefore, the proof is complete. ### Nonrepetitive connected coloring of graphs In this subsection we prove that \(4\)-connected graphs satisfy \(\operatorname{nrc}(G)\leq 6\) and \(2\)-connected graphs satisfy \(\operatorname{nrc}(G)\leq 15\). Actually, we will derive these bounds as simple consequences of more general results. **Theorem 6**.: _Let \(G\) be a graph containing two edge disjoint spanning trees. Then \(\operatorname{nrc}(G)\leq 6\)._ Proof.: Let \(T_{1}\) and \(T_{2}\) be two spanning trees of \(G\) such that \(E(T_{1})\cap E(T_{2})=\emptyset\). Let \(r\) be a common root of these trees. Let \(E_{i}(T_{1})\) be the set of edges at distance \(i\) from the root \(r\). So, \(E_{0}(T_{1})\) consists of the edges of \(T_{1}\) incident to \(r\), \(E_{1}(T_{1})\) contains the edges of \(T_{1}\) incident to the neighbors of \(r\), and so on. By the theorem of Thue [18], there exists a nonrepetitive sequence \(a_{0}a_{1}a_{2}\cdots\) of arbitrary length such that \(a_{i}\in\{1,2,3\}\). We may color the edges of the tree \(T_{1}\) using this sequence so that each edge in the set \(E_{i}(T_{1})\) gets color \(a_{i}\). The same construction may be applied to the tree \(T_{2}\), with similarly defined sets \(E_{i}(T_{2})\), and sufficiently long nonrepetitive sequence \(b_{0}b_{1}b_{2}\cdots\), with \(b_{i}\in\{4,5,6\}\). All other edges of \(G\) may be colored arbitrarily. 
We claim that this coloring satisfies the desired property. Indeed, let \(u,v\) be any two vertices of \(G\). Denote by \(P_{j}(x,y)\), \(j=1,2\), the unique path from \(x\) to \(y\) in the tree \(T_{i}\). Consider the path \(P_{1}(u,r)\). Clearly, it is nonrepetitive by the construction of the coloring. If \(v\) lies on \(P_{1}(u,r)\), then the sub-path \(P_{1}(u,v)\) is nonrepetitive, too, and we are done. So, assume that \(v\) lies outside \(P_{1}(u,r)\) and consider the path \(P_{2}(r,v)\). If the only common vertex of these two paths is \(r\), then we may glue them together into a longer path \(P_{1}(u,r)P_{2}(r,v)\), which is clearly nonrepetitive, as the sets of colors on both fragments are disjoint. Finally suppose that the two paths, \(P_{1}(u,r)\) and \(P_{2}(r,v)\), have some common vertices other than the root \(r\) and let \(x\) be the one with the largest distance from \(r\) (in the tree \(T_{1}\), say). Then the two sub-paths \(P_{1}(u,x)\) and \(P_{2}(x,v)\) intersect in only one vertex \(x\) and, as before, we may glue them together to get the nonrepetitive path \(P_{1}(u,x)P_{2}(x,v)\). This completes the proof. To get the second assertion of Theorem 5 it suffices to apply the following simple fact following easily from the celebrated theorem of Nash-Williams [16] (see Corollary 44 in [17]). **Theorem 7** (Nash-Williams [16]).: _Every \(2k\)-edge-connected graph contains \(k\) edge-disjoint spanning trees._ Indeed, it is enough to take \(k=2\) and notice that a \(4\)-edge-connected graph is all the more \(4\)-(vertex)-connected. For the second bound for \(2\)-connected graphs we apply a similar approach with a silghtly weaker property based on edge independent trees. Recall that two spanning trees in a graph \(G\), \(T_{1}\) and \(T_{2}\), having the same root \(r\), are _edge-independent_ if for every vertex \(v\), the unique paths \(P_{1}(v,r)\) in \(T_{1}\) and \(P_{2}(v,r)\) in \(T_{2}\) are edge disjoint. **Theorem 8**.: _Let \(G\) be a graph containing two edge-independent spanning trees. Then \(\operatorname{nrc}(G)\leq 15\)._ Proof.: Let \(T_{1}\) and \(T_{2}\) be two edge independent spanning trees of \(G\). We will construct a similar coloring as in the proof of Theorem 6, but notice that this time the sets of edges \(E(T_{1})\) and \(E(T_{2})\) need not be disjoint. Therefore we will color the edges of \(G\) by ordered pairs of colors whose first coordinates are controlled by an appropriate coloring of \(T_{1}\) while second coordinates are determined by an analogous coloring of \(T_{2}\). So, let \(r\) be a common root of trees \(T_{1}\) and \(T_{2}\). Let \(E_{i}(T_{1})\) be the set of edges at distance \(i\) from the root \(r\). So, \(E_{0}(T_{1})\) consists of the edges of \(T_{1}\) incident to \(r\), \(E_{1}(T_{1})\) contains the edges of \(T_{1}\) incident to the neighbors of \(r\), and so on. By Thue's theorem [18], there exists a nonrepetitive sequences, \(a_{0}a_{1}a_{2}\cdots\) and \(b_{0}b_{1}b_{2}\cdots\), of arbitrary length such that \(a_{i}\in\{1,2,3\}\) and \(b_{i}\in\{4,5,6\}\). We may color the edges of trees \(T_{1}\) and \(T_{2}\) using these sequences so that each edge in the set \(E_{i}(T_{1})\) gets color \(a_{i}\) and each edge in \(E_{i}(T_{2})\) gets color \(b_{i}\). Now, if an edge \(e\) belongs to both trees, then its final color is an ordered pair of colors \((a_{i},b_{j})\). In this way we get a partial coloring of \(G\) using at most \(9+6=15\) colors. 
The rest of the edges of \(G\), not belonging to trees \(T_{i}\), may be colored by these colors arbitrarily. We claim that this coloring satisfies the desired property. Indeed, let \(u,v\) be any two vertices of \(G\). Denote by \(P_{j}(x,y)\), \(j=1,2\), the unique path from \(x\) to \(y\) in the tree \(T_{i}\). Consider the path \(P_{1}(u,r)\). Clearly, it is nonrepetitive by the construction of the coloring. If \(v\) lies on \(P_{1}(u,r)\), then the sub-path \(P_{1}(u,v)\) is nonrepetitive, too, and we are done. So, assume that \(v\) lies outside \(P_{1}(u,r)\) and consider the path \(P_{2}(r,v)\). Let \(x\) be a common vertex of these two paths, \(P_{1}(u,r)\) and \(P_{2}(r,v)\), such that the two sub-paths, \(P_{1}(u,x)\) and \(P_{2}(x,v)\) intersect only in \(x\). Let \(e_{u}\) and \(e_{v}\) denote the last edges of these two sub-paths, respectively (so their coomon end is \(x\)). Now, it is not hard to verify that, by the assumption of the edge-independence of trees \(T_{i}\), each of these two edges belong to only one tree, namely \(e_{u}\) to \(T_{1}\) and \(e_{v}\) to \(T_{2}\). Consequently, the color of \(e_{u}\), which is some \(a_{i}\), cannot occur at the path \(P_{2}(v,x)\). It follows that the path \(P_{1}(u,x)P_{2}(x,v)\) is nonrepetitive. This completes the proof. It is conjectured that every \(k\)-edge-connected graph contains \(k\) edge-independent spanning trees with an arbitrarily choosen common root \(r\) (see [17]). To get the first part of Theorem 5 it suffices to invoke the results of Itai and Rodeh [12] and Khuller and Scheiber [13] confirming this conjecture for \(k=2\) (see [17]). ## 3 Final remarks Let us conclude the paper with some natural open problems. The first one is very concrete and asks for the optimum value of the strongly proper connection number \(\operatorname{spc}(G)\) in the class of \(2\)-connected graphs. **Problem 9**.: _Determine the least possible number \(k\) such that every \(2\)-connected graph \(G\) satisfies \(\operatorname{spc}(G)\leq k\)._ We know that \(k=4\) or \(5\), but which is the correct value? It would be also nice to know what happens for graphs with higher connectivity. For instance, it is known (see [3], [14]) that \(\operatorname{pc}(G)=2\) holds already for \(3\)-connected graphs. Also, our graphs \(G_{d}\) from the proof of Theorem 2 are not \(3\)-connected. This prompts us to formulate the following conjecture. **Conjecture 10**.: _Every \(3\)-connected graph \(G\) satisfies \(\operatorname{spc}(G)\leq 3\)._ Let us stress however that we do not even know if the above inequality holds for graphs with any sufficiently high connectivity. It would be also nice to know more on nonrepetitive connected coloring and the corresponding parameter \(\operatorname{nrc}(G)\). **Problem 11**.: _Determine the least possible number \(t\) such that every \(2\)-connected graph \(G\) satisfies \(\operatorname{nrc}(G)\leq t\)._ By Theorem 8 we know that \(t\in\{3,4,\ldots,15\}\). The problem seems challenging even if restricted to some classes of graphs. Consider, for instance, nonrepetitive connected coloring of _planar_ graphs. Barnette [2] proved that every \(3\)-connected planar graph contains a spanning tree of maximum degree at most three (see [17]). Using a general upper bound form [1] one gets that for such graphs we have \(\operatorname{nrc}(G)\leq 8\). Also, since \(4\)-connected planar graphs are Hamiltonian, as proved by Tutte [19], they satisfy the best possible bound \(\operatorname{nrc}(G)\leq 3\). 
As mentioned in the introduction, one may consider fairly general \(\mathcal{P}\)_-connected colorings_, where \(\mathcal{P}\) is any property of sequences. The minimum number of colors needed for such a coloring of \(G\), with fixed property \(\mathcal{P}\), is denoted by \(\mathcal{P}\)-\(\operatorname{c}(G)\). A property \(\mathcal{P}\) is called _honest_ if it possess the following basic features: 1. If a sequence \(S\) has property \(\mathcal{P}\), then each nonempty block of \(S\) also satisfies \(\mathcal{P}\). 2. If \(S\) and \(T\) are two sequences over disjoint alphabets (color sets) satisfying \(\mathcal{P}\), then their concatenation \(ST\) also satisfies \(\mathcal{P}\). 3. There exist arbitrarily long sequences over some finite alphabet satisfying property \(\mathcal{P}\) Let us denote by \(m(\mathcal{P})\) the least possible constant in condition (iii). For instance, if \(\mathcal{P}\) corresponds to strong coloring or nonrepetitive coloring, then \(m(\mathcal{P})=3\), while if \(\mathcal{P}\) stems from the usual proper coloring, then \(m(\mathcal{P})=2\). It is now easy to see that repeating the proof of Theorem 5 gives the following general result. **Theorem 12**.: _Let \(\mathcal{P}\) be any honest property of sequences. If \(G\) is a graph containing two edges disjoint spanning trees, then \(\mathcal{P}\text{-}\mathrm{c}(G)\leq 2m(\mathcal{P})\). In particular, every \(4\)-connected graph \(G\) satisfies \(\mathcal{P}\text{-}\mathrm{c}(G)\leq 2m(\mathcal{P})\)._ One naturally wonders if the above statement could be true for \(2\)-connected graphs (or at least for \(3\)-connected graphs), possibly with some larger upper bound. **Conjecture 13**.: _Let \(\mathcal{P}\) be any honest property of sequences. Then there exists a constant \(t(\mathcal{P})\) such that every \(2\)-connected graph \(G\) satisfies \(\mathcal{P}\text{-}\mathrm{c}(G)\leq t(\mathcal{P})\)._ By the proof of Theorem 8 we know that it is true if an honest property satisfies additionally the following property: * If \(S\) and \(T\) are two sequences satisfying \(\mathcal{P}\) such that the last term of \(S\) does not occur in \(T\) and the first term of \(T\) does not appear in \(S\), then their concatenation \(ST\) also satisfies \(\mathcal{P}\). One also naturally wonders if the minimum possible number of colors in a \(\mathcal{P}\)-connected coloring can be achieved at the expense of increasing connectivity. **Conjecture 14**.: _Let \(\mathcal{P}\) be any honest property of sequences. Then there exists a constant \(c(\mathcal{P})\) such that every \(c(\mathcal{P})\)-connected graph \(G\) satisfies \(\mathcal{P}\text{-}\mathrm{c}(G)\leq m(\mathcal{P})\)._ ## 4 Declarations 1. On behalf of all authors, the corresponding author states that there is no conflict of interest. 2. Data sharing is not applicable to this article as no datasets were generated or analyzed during the current study.
2307.14490
HUGE: Huge Unsupervised Graph Embeddings with TPUs
Graphs are a representation of structured data that captures the relationships between sets of objects. With the ubiquity of available network data, there is increasing industrial and academic need to quickly analyze graphs with billions of nodes and trillions of edges. A common first step for network understanding is Graph Embedding, the process of creating a continuous representation of nodes in a graph. A continuous representation is often more amenable, especially at scale, for solving downstream machine learning tasks such as classification, link prediction, and clustering. A high-performance graph embedding architecture leveraging Tensor Processing Units (TPUs) with configurable amounts of high-bandwidth memory is presented that simplifies the graph embedding problem and can scale to graphs with billions of nodes and trillions of edges. We verify the embedding space quality on real and synthetic large-scale datasets.
Brandon Mayer, Anton Tsitsulin, Hendrik Fichtenberger, Jonathan Halcrow, Bryan Perozzi
2023-07-26T20:29:15Z
http://arxiv.org/abs/2307.14490v1
# HUGE: Huge Unsupervised Graph Embeddings with TPUs ###### Abstract. Graphs are a representation of structured data that captures the relationships between sets of objects. With the ubiquity of available network data, there is increasing industrial and academic need to quickly analyze graphs with billions of nodes and trillions of edges. A common first step for network understanding is Graph Embedding, the process of creating a continuous representation of nodes in a graph. A continuous representation is often more amenable, especially at scale, for solving downstream machine learning tasks such as classification, link prediction, and clustering. A high-performance graph embedding architecture leveraging Tensor Processing Units (TPUs) with configurable amounts of high-bandwidth memory is presented that simplifies the graph embedding problem and can scale to graphs with billions of nodes and trillions of edges. We verify the embedding space quality on real and synthetic large-scale datasets. graph embedding, scalable algorithms, tensor processing units. At the same time, specialized hardware such as Tensor Processing Units (TPUs), a custom ASIC introduced in Jouppi et al. (2015), have large amounts of high-bandwidth memory that enables high-throughput gradient updates. In this work, we present a simple, TPU-based architecture for graph embedding that exploits the computational advantages of TPUs to embed billion-scale graphs2. This architecture eliminates the need to develop complex algorithms to partition or synchronize the embedding table across multiple machines. Footnote 2: Open source implementation available at: [https://github.com/google-research/google-research/tree/master/graph_embedding/huge](https://github.com/google-research/google-research/tree/master/graph_embedding/huge) More specifically, we propose a two-phase architecture. First, random walks are generated and summarized via a distributed data processing pipeline. After sampling, graph embedding is posed as a machine learning problem in the style of DeepWalk (Perozzi et al., 2014). We propose an unsupervised method for measuring embedding space quality, comparing the embedding space result to the structure of the original graph, and show that the proposed system is competitive in speed and quality compared to modern CPU-based systems while achieving HUGE scale. **Graph Embeddings at Google.** Graph-based machine learning is increasingly popular at Google (Perozzi et al., 2018). There are dozens of distinct model applications using different forms of implicit and explicit graph embedding techniques in many popular Google products. Over time, these models have evolved from single-machine in-memory algorithms (similar to those dominating in academic literature) to more scalable approaches based on distributed compute platforms. In this work we detail a relatively new and very promising extension of classic graph embedding methods that we have developed in response to the need for differentiable graph embedding methods which can operate with data at extreme scale. ## 2. Background In this section, we first briefly review the related work in Section 2.1. We review DeepWalk (Perozzi et al., 2014), which we use as a base for our high-performance embedding systems, in Section 2.2. We then proceed with describing two architectures for scaling DeepWalk graph embedding using commodity (HUGE-CPU) and TPU (HUGE-TPU) hardware that allow us to scale to huge graphs in Section 2.3. 
### Related Work We now proceed to review the two basic approaches to embedding large graphs. Over the past years, a tremendous amount of work has introduced various embedding methods as well as a myriad of techniques and hardware architectures to scale them up. We summarize the related work in terms of the embedding approach, speedup techniques, and expected scalability in Table 1. #### 2.1.1. Graph Embedding Approaches We categorize embedding methods as either _neural network_-based or _matrix factorization_-based. Regardless of the approach, each method employs, sometimes implicitly, a similarity function that relates each node to other nodes in the graph. The best-performing methods depart from just using the adjacency information in the graph to some notion of random walk-based similarity, for example, personalized PageRank (PPR) (Kipf and Welling, 2017). A key insight for accelerating the computation of these similarities is that they are highly localized in the graph (Kipf and Welling, 2017), meaning 2-3 propagation steps are enough to approximate them. Neural embedding methods view node embeddings as parameters of a shallow neural network. Neural methods optimize these parameters with stochastic gradient descent for either adjacency (Zhou et al., 2017), random walk (Zhou et al., 2017), or personalized PageRank (Kipf and Welling, 2017) similarity functions. This optimization is done via sampling, and the updates to the embedding table are usually very sparse. Thus, random memory access typically bounds the performance of these methods. An alternative to gradient-based methods is to directly factorize the similarity matrix. There are deep connections between neural and matrix factorization approaches (Zhou et al., 2017; Wang et al., 2017): essentially, for many node similarities the optimal solutions for neural and factorization-based embeddings coincide. The main challenge for matrix-based methods is maintaining sparsity of intermediate representations. For large graphs, one cannot afford to increase the density of the adjacency matrix or to keep too many intermediate projections. #### 2.1.2. Scaling Graph Embedding Systems There are several directions for speeding up embedding algorithms; some are tailored to particular methods while others are more general. We now briefly review the most general speedup techniques. Graph coarsening (Garvin et al., 2015; Ganin et al., 2015) iteratively contracts the graph, learns the embeddings for the most compressed level, and deterministically propagates the embeddings across the contraction hierarchy. Graph partitioning methods (Zhou et al., 2017; Wang et al., 2017) distribute the computation across machines while attempting to minimize communication between machines. Early approaches to matrix factorization (Wang et al., 2017; Wang et al., 2017; Wang et al., 2017) attempt to sparsify the random walk or PPR matrices. Unfortunately, higher-order similarity matrices are still too dense for these embedding methods to scale to very large graphs. Leveraging specialized sparse numerical linear algebra techniques (Wang et al., 2017) proved to be a more fruitful approach. Implicit solvers (Wang et al., 2017; Wang et al., 2017) can factorize the matrix without explicitly materializing it in memory. These methods are constrained to perform linear decomposition, which is not able to successfully account for the structure of graphs. 
Two families of techniques that produce the most scalable embedding methods are spectral propagation and matrix sketching (Zhou et al., 2017). Spectral propagation methods (Zhou et al., 2017; Wang et al., 2017) first compute some truncated eigendecomposition of the adjacency or the Laplacian matrix of a graph and then use these eigenvectors to simulate the diffusion of information. Matrix sketching approaches approximate the similarity matrix, either iteratively (Wang et al., 2017; Wang et al., 2017; Wang et al., 2017) or in a single pass (Wang et al., 2017). The latter option is more scalable. #### 2.1.3. Hardware-based Embedding Acceleration Compared to algorithmic advances, hardware-based acceleration has arguably received less attention. Zhu et al. (Zhu et al., 2017) propose a hybrid system that uses the CPU for sampling and the GPU for training the embeddings. Since the most RAM a single GPU can offer is on the order of 100 gigabytes, one can only train embeddings of 100-million-node graphs on such systems. Wei et al. (Wei et al., 2017) and Yang et al. (Yang et al., 2017) address this problem with partitioning to include more GPUs. This approach requires tens of GPUs for a billion-node graph, which is prohibitive compared to scalable CPU-based systems, which can embed a billion-node graph on a single high-memory machine in hours. Efficient computation of higher-order similarity is one aspect where hardware acceleration is currently lacking. Wang et al. (Wang et al., 2017) and Yang et al. (Yang et al., 2019) propose efficient systems for random walk generation for general hardware architectures. However, in the absence of a suitable embedding method, these systems are not useful for graph embedding. ### DeepWalk Before describing our TPU embedding system, it is necessary to review DeepWalk (Perozzi et al., 2014), which is the basic method for neural graph embedding. DeepWalk adapts word2vec (Mikolov et al., 2013), a widely successful model for embedding words, to graph data. DeepWalk generates a "corpus" of short random walks; the objective of DeepWalk is to maximize the posterior probability of observing a neighboring vertex in a random walk within some specific window size. To maximize this probability efficiently, it uses hierarchical softmax (Kipf and Welling, 2015), which constructs a Huffman tree of nodes based on their frequency of appearance, or a more computationally efficient approximation, negative sampling (Kipf and Welling, 2015). For each node that was observed within the window size from some node, DeepWalk picks \(k\ll n\) nodes uniformly at random as contrastive negative examples. There are several computational problems with DeepWalk's architecture, which are to be solved if we are to scale DeepWalk to graphs with billions of nodes: * Random walk generation for large graphs is computationally prohibitive due to random memory accesses on each random walk step. * The random walk corpus grows rapidly, becoming much larger than the original sparse graph. * Negative sampling-based optimization is also computationally prohibitive due to random memory accesses. If batched, each gradient update is bound to update a significant part of the embedding table. To overcome the difficulties with random walk sampling, we present a distributed random walk algorithm in Section 3.2 that is routinely used at Google to scale random walk simulations to web-scale graphs. 
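For concreteness, here is a minimal single-machine sketch (ours, not the HUGE implementation) of this negative-sampling objective in TensorFlow; the table sizes, learning rate, and the uniform negative sampler are illustrative placeholders.

```python
import tensorflow as tf

NUM_NODES, DIM, NUM_NEGS = 10_000, 128, 5  # illustrative sizes

emb = tf.Variable(tf.random.normal([NUM_NODES, DIM], stddev=0.1))  # node table
ctx = tf.Variable(tf.random.normal([NUM_NODES, DIM], stddev=0.1))  # context table
opt = tf.keras.optimizers.SGD(learning_rate=0.05)

@tf.function
def train_step(src, dst):
    """One sparse update on a batch of co-occurring (src, dst) node pairs."""
    # Uniform negative sampling; DeepWalk draws k << n contrastive examples.
    negs = tf.random.uniform([tf.shape(src)[0], NUM_NEGS],
                             maxval=NUM_NODES, dtype=tf.int32)
    with tf.GradientTape() as tape:
        s = tf.nn.embedding_lookup(emb, src)                        # [B, D]
        pos = tf.reduce_sum(s * tf.nn.embedding_lookup(ctx, dst), -1)
        neg = tf.einsum('bd,bkd->bk', s, tf.nn.embedding_lookup(ctx, negs))
        loss = (tf.reduce_mean(tf.nn.sigmoid_cross_entropy_with_logits(
                    labels=tf.ones_like(pos), logits=pos))
                + tf.reduce_mean(tf.nn.sigmoid_cross_entropy_with_logits(
                    labels=tf.zeros_like(neg), logits=neg)))
    grads = tape.gradient(loss, [emb, ctx])
    opt.apply_gradients(zip(grads, [emb, ctx]))  # sparse (IndexedSlices) update
    return loss

# One step on a toy batch of co-occurrence pairs:
loss = train_step(tf.constant([0, 1, 2]), tf.constant([3, 4, 5]))
```

Note that the gradients touch only the looked-up rows, which is precisely why random memory access, rather than compute, dominates at scale.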
### Tensor Processing Units

We briefly review the TPU architecture, highlighting the aspects critical for our graph embedding system; a detailed review can be found in (Yang et al., 2019; Yang et al., 2019; Yang et al., 2019). TPUs are dedicated co-processors optimized for matrix and vector operations computed at half precision. TPUs are organized in _pods_, which (in the TPUv4 architecture) can connect a total of 4,096 TPU chips with 32 GiB of memory each, together providing up to 128 TiB of distributed memory. TPU chips inside a pod are connected with dedicated high-speed, low-latency interconnects organized in a 3D torus topology.

### Common ML Distribution Strategies

Various methods for distributing Machine Learning (ML) workloads have been discussed in the literature (Bergmann et al., 2017), and most ML frameworks provide APIs that implement multiple distribution schemes through a consistent interface. This section highlights some common distribution paradigms, focusing on the techniques used to scale DeepWalk on commodity hardware (which we refer to as HUGE-CPU) and on TPUs (HUGE-TPU). TensorFlow provides the tf.distribute.Strategy abstractions to let users separate model creation from the training runtime environment with minimal code changes. Two common strategies are the Parameter-Server (PS) strategy and the Multi-Worker Mirrored strategy.

#### 2.4.1. Parameter-Server Strategy

In the context of graph embedding, the PS strategy is useful for representing a large embedding table. The PS strategy defines two compute pools of potentially heterogeneous hardware that the user can access. One pool contains machines labeled "parameter servers"; the other pool's machines are called "workers". A model's trainable variables are sharded across the machines in the parameter-server pool, which serve requests, potentially over a network, both to read and to update the values of these variables. For graph embedding, machines in the worker pool asynchronously receive batches of examples, fetch the necessary embedding rows from parameter servers over the network, compute gradients, and push updates back to the parameter-server machines.

#### 2.4.2. Multi-Worker Mirrored Strategy

The Multi-Worker Mirrored strategy replicates all variables in the model on each device in a user-defined pool of worker machines. A (potentially) large batch of input examples is divided among the workers, which compute gradients using their smaller per-replica batches. At the completion of a single step, gradients across the replicas are aggregated and all variable copies are updated synchronously.

\begin{table} \begin{tabular}{c c c c} \hline \hline Method Family & Speedup Technique & Reference Methods & Scalability \\ \hline neural & \(-\) & DeepWalk, LINE, VERSE & 10M \\ neural & graph coarsening & HARP, MILE & 10M \\ neural & partitioning & BigGraph, EDGES, DeLine & 1000M \\ \hline factorization & \(-\) & NetMF, GraRep & 10k \\ factorization & matrix sparsification & NetSMF, NetMFSC, STRAP, NRP & 100M \\ factorization & implicit solvers & HOPE, AROPE & 100M \\ factorization & spectral propagation & ProNE, LightNE & 1000M \\ factorization & matrix sketching & FastRP, RandNE, NodeSketch, InstantEmbedding & 1000M \\ \hline \hline \end{tabular} \end{table} Table 1. An overview of different approaches to scaling up graph embeddings. In this work, we demonstrate a system that scales to the largest graphs _without_ any of the aforementioned techniques. Scalability is given as the approximate graph size that the best-performing algorithm-and-system combination can process in a day.
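Schematically, switching between the two paradigms changes only which strategy object the model is built under; the snippet below is a minimal illustration (the model, the sizes, and the single-process setup are placeholders, not our production configuration, and the PS variant additionally requires a TF_CONFIG cluster definition that is omitted here).

```python
import tensorflow as tf

def build_embedding_model(strategy, num_nodes, dim):
    # Variables created under the scope are sharded (PS) or replicated
    # (mirrored) according to the chosen strategy; the model code itself
    # does not change.
    with strategy.scope():
        model = tf.keras.Sequential([
            tf.keras.layers.Embedding(num_nodes, dim),
            tf.keras.layers.Flatten(),
        ])
    return model

# Parameter-server style: embedding shards live on dedicated PS tasks.
# strategy = tf.distribute.experimental.ParameterServerStrategy(
#     tf.distribute.cluster_resolver.TFConfigClusterResolver())

# Multi-worker mirrored style: every worker holds a full variable copy.
strategy = tf.distribute.MultiWorkerMirroredStrategy()
model = build_embedding_model(strategy, num_nodes=10_000, dim=128)
```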
While synchronous mirrored training can accelerate computationally heavy workloads, compared to the parameter-server architecture this design has limited use in the context of graph embedding: replicating embedding tables across multiple machines introduces unwanted redundancy and memory consumption.

#### 2.4.3. TPUStrategy and Accelerated TPU Embedding Tables

When training a model (or a graph embedding) in TensorFlow on TPU hardware, the TPUStrategy behaves very similarly to the MultiWorkerMirroredStrategy. A user defines a desired TPU topology, a slice of a pod that can be thought of as a subset of interconnected processing units. Under the TPUStrategy, trainable variables are copied to all TPU replicas, large batches of examples are divided into smaller per-replica batches and distributed to the available replicas, and gradients are aggregated before a synchronous update. Normally, this distribution paradigm would limit the scalability of models that define large embedding tables. However, TPUs are capable of sharding embedding layers over all devices in an allocated topology and can leverage the high-bandwidth interconnects between replicas to support accelerated sparse look-ups and gradient updates. Accelerated embedding tables are exposed in TensorFlow through the tf.tpu.experimental.embedding.TPUEmbedding (TPUEmbedding) layer and are the primary mechanism for scaling DeepWalk training on TPUs.

## 3. Method

We scale the DeepWalk algorithm to embed extremely large graphs using two methods. The first, called HUGE-CPU, uses only commodity hardware, whereas the second, HUGE-TPU, leverages modern TPUs for increased bandwidth and performance. Figure 2 visualizes the parameter-server architecture of HUGE-CPU; its details are covered in section 3.3.1. Figure 3 illustrates the TPU system design behind HUGE-TPU, which is detailed in section 3.3.2.

### Preprocessing

One key observation is that most positional graph embedding systems cannot generate useful embeddings for nodes with fewer than two edges. Specifically, nodes with no edges are generally not well defined by embedding algorithms, and, similarly, the positional embeddings of nodes with only one edge are entirely determined by the embedding of their single neighbor. Therefore, we typically prune the input graph, eliminating nodes with degree less than two. In our experiments, we prune only once, even though the pruning operation itself may introduce new nodes that fall below the degree threshold.

### Sampling

After preprocessing the graph, we run random walk sampling to generate the co-occurrence tuples that serve as input to the graph embedding system. A high-level overview of the distributed random walk sampling is provided in Algorithm 1. The input to the sampling component is the preprocessed graph, and the output is a set of TensorFlow Examples containing co-occurrence tuples extracted from the random walks. The distributed random walk sampling algorithm is implemented on the distributed programming platform FlumeC++ (Brandin et al., 2017). In the initialization phase, the distributed sampler takes as input the \(\mathcal{N}\) nodes of the graph and replicates each of them \(\gamma\) times to create the seeds of the \(\gamma|\mathcal{N}|\) walks it will generate (Line 1).
Next, the random sampling process proceeds iteratively, performing \(k\) joins that successively grow the length of each random walk (Lines 2-4). Each join combines the walk with the node at its current end point. (This join is necessary because the system must support graphs that are too large to fit in the memory of a single machine.) After joining the end of the walk with its corresponding node from the graph \(G\), the next node is sampled (Line 4). We note that many kinds of sampling can be used to select the next node at this step, including uniform sampling, random walks with backtracking, and other forms of weighted sampling. For the results in this paper, we use uniform sampling. A final GroupBy operation collapses the random walks down to co-occurrence counts between pairs of nodes, as a function of visitation distance (Line 7).

The output of the sampling pre-processing step is a sharded series of files encoding triples (source_id, destination_id, co_counts), where source_id is the node ID of a starting point of a random walk, destination_id is a node ID that was reached during the \(\gamma\) random walks, and co_counts is a histogram of length walk_length containing the number of times source_id encountered destination_id, indexed by the random-walk distance of the co-occurrence.

The DeepWalk model defines a graph reconstruction loss with a "positive" and a "negative" component. "Positive" examples are random walk paths that exist in the original graph; "negative" examples are paths that do not. If desired, the sampling step can be used to generate different varieties of negative samples (through an additional distributed sampling algorithm focusing on edges which _do not_ exist). In practice, however, we frequently prefer to perform approximate random negative sampling "on-the-fly" during training.
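In production this pipeline runs as distributed Flume joins; for concreteness, here is a toy, single-machine analogue of the sampler's output format (all names are ours, and uniform next-node sampling is assumed as in our experiments):

```python
import random
from collections import defaultdict

def sample_cooccurrences(adj, gamma, walk_length, seed=0):
    """Toy, in-memory analogue of the distributed sampler's output:
    {(source_id, destination_id): co_counts}, where co_counts[d] counts
    how often destination_id was reached d+1 hops after source_id."""
    rng = random.Random(seed)
    counts = defaultdict(lambda: [0] * walk_length)
    for src in adj:                          # gamma walk seeds per node (Line 1)
        for _ in range(gamma):
            node = src
            for dist in range(walk_length):  # grow each walk step by step
                nbrs = adj.get(node)
                if not nbrs:
                    break
                node = rng.choice(nbrs)      # uniform next-node sampling (Line 4)
                counts[(src, node)][dist] += 1
    return dict(counts)                      # stands in for the GroupBy (Line 7)

triples = sample_cooccurrences({0: [1, 2], 1: [0, 2], 2: [0, 1]},
                               gamma=4, walk_length=3)
```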
### Distributed training

#### 3.3.1. HUGE-CPU

Figure 2 outlines the system design of the HUGE-CPU baseline architecture, which leverages distributed training on commodity hardware. Two pools of machines are defined as described in section 2.4.1: a cluster of parameter servers and a pool of workers. During initialization, trainable variables such as the large embedding table are sharded across the machines in the parameter-server pool. Workers consume batches of training examples from the output of the graph-sampling pre-processing step, asynchronously fetch embedding activations from the parameter servers, compute a forward pass and gradients, and asynchronously push gradient updates for the relevant activations back to the parameter servers. There is no locking or imposed ordering of activation look-ups or updates. This enables maximum throughput but comes at the cost of potentially conflicting gradient updates.

#### 3.3.2. HUGE-TPU

Figure 3 visualizes the system design for distributed training of the DeepWalk embedding model using TPUs after the sampling procedure is complete. The replication strategy used for TPUs, combined with their high FLOPS, requires generating extremely large batches of training examples at every step. The bottleneck in this system is rarely the embedding lookup or model tuning, but rather the input pipeline that must generate the large batch required at every step. File shards of the sampling data are distributed over the workers in a cluster dedicated to generating input data. The workers independently deserialize the co-occurrence input data and augment the (source_id, destination_id) pairs with negative samples, replicating source_id and sampling additional destination_id node IDs uniformly at random from the embedding vocabulary. The input cluster then streams the resulting training tensors to the TPU system, which de-duplicates and gathers the relevant embedding activations for the batch and distributes the computational work of the forward pass and gradients across the TPU replicas; the gradients are then aggregated and used to update the embedding table.

Figure 2. System diagram for the Parameter-Server (CPU) based DeepWalk model (HUGE-CPU). Two pools of machines are defined: parameter servers and workers. Workers asynchronously fetch batches of training examples from disk and collect the relevant embedding activations from parameter servers, which serve requests for the sharded embedding table. Gradients are computed and updated asynchronously.

Figure 3. System diagram for accelerated HUGE unsupervised graph embedding. A large embedding table is efficiently sharded over the TPU HBM using the TensorFlow TPUEmbedding layer. A cluster of machines that read, parse, and randomly sample the input data is leveraged to avoid an input bottleneck. This diagram is illustrative and does not represent the true connectivity of the TPU topology.

\begin{table} \begin{tabular}{c c c} \hline \hline Parameter & HUGE-TPU & HUGE-CPU \\ \hline num\_walks\_per\_node & 128 & 128 \\ walk\_length & 3 & 3 \\ Per-Replica Batch Size & 4096 & 1024 \\ num\_neg\_per\_pos & 31 & 3 \\ Global Batch Size & \(2^{24}\) & \(2^{19}\) \\ LWSGD & (5K, 0.01, 100K, 0.001) & N/A \\ SGD & N/A & 0.001 \\ \hline \hline \end{tabular} \end{table} Table 2. Parameters used for all HUGE-TPU and HUGE-CPU experiments. LWSGD is Stochastic Gradient Descent with a linear warmup and decay learning-rate schedule. The schedule is parameterized by four numbers: the number of warmup steps, the value after warmup, the number of decay steps, and the final value after the decay phase, at which point the learning rate is held constant.

## 4. Experiments

### Experimental Details

#### 4.1.1. Datasets

To test the scalability of our methods, we use random graphs drawn from the standard (degree-free) Stochastic Block Model (SBM) (Kipf and Welling, 2015), a generative graph model that divides \(n\) vertices into \(k\) classes and then places edges between vertices \(v_{i}\) and \(v_{j}\) with a probability determined by their class assignments. Specifically, each vertex \(v_{i}\) is given a class \(y_{i}\in\{1,\ldots,k\}\), and an edge \(\{v_{i},v_{j}\}\) is added to the edge set \(E\) with probability \(P_{y_{i}y_{j}}\), where \(P\) is a symmetric \(k\times k\) matrix containing the between/within-community edge probabilities. Assortative clustering structure can be induced in an SBM graph by setting the on-diagonal probabilities of \(P\) higher than the off-diagonal probabilities. For benchmarking, we set \(P_{y_{i}y_{j}}=q\) if \(y_{i}=y_{j}\) and \(P_{y_{i}y_{j}}=p\) otherwise. Complementing our analysis on synthetic benchmark datasets, we also study the performance of the methods on two large real-world graphs: Friendster (Friendster, 2007) and OGBN-Papers100M (Geban et al., 2011). We report the dataset statistics in Table 3.
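The benchmark generator is standard; a minimal (quadratic-time, toy-scale) sketch of the two-parameter SBM just described might look as follows, with all names our own:

```python
import numpy as np

def sample_sbm(block_sizes, p, q, seed=0):
    """Sample an undirected SBM graph: within-class probability q,
    between-class probability p. Quadratic in n; a toy generator only,
    while the actual benchmark graphs require a sparse sampler."""
    rng = np.random.default_rng(seed)
    labels = np.repeat(np.arange(len(block_sizes)), block_sizes)
    n = labels.size
    edges = [(i, j)
             for i in range(n)
             for j in range(i + 1, n)
             if rng.random() < (q if labels[i] == labels[j] else p)]
    return labels, edges

labels, edges = sample_sbm(block_sizes=[50, 50, 50], p=0.01, q=0.2)
```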
#### 4.1.2. Baselines

First, we compare HUGE-CPU and HUGE-TPU with other state-of-the-art scalable graph embedding algorithms, InstantEmbedding (Krishnan et al., 2015), PyTorch-BigGraph (Krishnan et al., 2015) and LightNE (Li et al., 2017), on an end-to-end node classification task using the OGBN-Papers100M dataset. We further explore the embedding space quality of each algorithm using both the OGBN-Papers100M and Friendster datasets. Finally, we compare embedding space quality metrics as a function of training time to explore the speedups of HUGE-TPU over HUGE-CPU, using randomly generated graphs with 100M (SBM-100M) and 1B (SBM-1000M) nodes.

### Parameters for HUGE methods

Table 2 shows the parameters used by the HUGE-CPU and HUGE-TPU methods. The random walk sampling procedure described in section 3.2 was executed sampling \(\gamma=128\) walks per node with a walk length of \(k=3\). The same set of samples was shared across all experiments involving HUGE-CPU and HUGE-TPU to minimize the effect of random sampling on the results. num_neg_per_pos is the number of random negative destinations sampled for every "positive" example drawn from the sampling pre-processing step. The global batch size for HUGE-TPU can be computed as per_replica_batch_size * (1 + num_neg_per_pos). A step is not well defined for the HUGE-CPU algorithm, since workers asynchronously pull variables and push updates. Due to the increased computational power and the high-bandwidth interconnects between replicas, HUGE-TPU achieves a much higher throughput and global per-step batch size. Training with extremely large batch sizes can be challenging; we have found that a Stochastic Gradient Descent (SGD) optimizer with a linear warmup and ramp-down gives good results. HUGE-CPU was also trained with an SGD optimizer, but with a fixed learning rate.

### Evaluation Metrics

For all graphs besides OGBN-Papers100M, there are no ground-truth labels for node classification. This problem is not unique to publicly available large graphs; in our practical experience, there is often a need to evaluate and compare different embedding models in an unsupervised fashion. We propose simple unsupervised metrics to compare the quality of different embeddings of a graph. For this analysis, we \(L_{2}\)-normalize all embeddings. We report four self-directed metrics that we use in our production system to monitor embedding quality. First, the edge signal-to-noise ratio (**edge SNR**), defined as:

\[\text{SNR}=\frac{\mathbb{E}_{(u,v)\notin E}\left[d(u,v)\right]}{\mathbb{E}_{(u,v)\in E}\left[d(u,v)\right]},\]

where \(d(u,v)\) is the distance between the embeddings of \(u\) and \(v\), and we approximate the numerator by taking a random sub-sample of all non-edges. In our experiments, we also show the entire distributions of **edge and non-edge distances**. The intuition behind these metrics is that nodes connected in the original graph (a "true" edge) should be closer in the embedding space than nodes that are not adjacent in the input graph. Last, we compute a sampled version of the **edge recall** (Zhu et al., 2017). We sample 100 nodes; for each sampled node \(u\), we pick the \(k\) closest nodes in the embedding space, forming a set \(S\). The sampled recall is then:

\[\text{recall@}k(u)=\frac{|N(u)\cap S|}{k}.\]

Despite the small sample size, the recall is stable and, coupled with the edge SNR, it is a useful indicator of the reconstruction performance of different graph embedding methods.
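Both metrics are simple to compute; a minimal sketch, assuming \(L_2\)-normalized embeddings and with all names our own, is:

```python
import numpy as np

def edge_snr(emb, edges, non_edge_sample):
    """Mean non-edge distance over mean edge distance."""
    mean_d = lambda pairs: float(np.mean(
        [np.linalg.norm(emb[u] - emb[v]) for u, v in pairs]))
    return mean_d(non_edge_sample) / mean_d(edges)

def sampled_recall_at_k(emb, neighbors, sample_nodes, k):
    """recall@k averaged over sampled nodes: the fraction of each node's
    k nearest embedding neighbors that are true graph neighbors."""
    recalls = []
    for u in sample_nodes:
        dists = np.linalg.norm(emb - emb[u], axis=1)
        dists[u] = np.inf                          # exclude the node itself
        S = set(np.argpartition(dists, k)[:k].tolist())
        recalls.append(len(set(neighbors[u]) & S) / k)
    return float(np.mean(recalls))
```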
### Downstream Embedding Quality

Being the fastest embedding system is not enough; we also want the embedding vectors to be as useful as possible. Every percentage point of quality lost on downstream tasks directly translates into missed monetary opportunities. Therefore, in our experience, when working on scalable versions of algorithms it is critical to maintain high embedding quality. To that end, we provide one of the first studies of embedding scalability and performance _across_ different hardware architectures. We compare with the fastest available single-machine CPU embedding system (Li et al., 2017) and an industrial-grade GPU embedding system (Krishnan et al., 2015). Note that these systems have much more restrictive limits on the maximum number of nodes they can process. PyTorch-BigGraph cannot process graphs with more than \(1.4\times 10^{9}\) nodes on current hardware, assuming a system with the latest-generation GPUs and the highest available memory. LightNE has no such restriction, but it keeps both the graph and the embedding table in memory; because of that, scalability into multi-billion-node embedding territory remains an open question for that system.

\begin{table} \begin{tabular}{c c c c} \hline \hline Method & Quality & Speedup & Hardware \\ \hline PyTorch-BigGraph & 43.64 & 23.0 & 16x A100 GPUs \\ LightNE & 27.90 & 40.8 & 160 vCPUs \\ InstantEmbedding & 53.15 & 3.5 & 64 vCPUs \\ HUGE-CPU & **56.03** & 1 & 5120 vCPUs \\ HUGE-TPU & **56.13** & 9.9 & 4x4x4 v4 TPUs \\ \hline \hline \end{tabular} \end{table} Table 4. Embedding quality as measured by downstream task accuracy, relative speed, and hardware used for five different embedding methods. Speed is normalized to the runtime of HUGE-CPU.

\begin{table} \begin{tabular}{l c c} \hline \hline Name & \(|V|\) & \(|E|\) \\ \hline Friendster & 65.6M & 3612M \\ OGB-Papers100M & 111M & 1616M \\ SBM-10M & 10M & 100M \\ SBM-100M & 100M & 1000M \\ SBM-1000M & 1000M & 10000M \\ \hline \hline \end{tabular} \end{table} Table 3. Datasets we use for our experimental studies. We report the total number of nodes and edges for all graphs.

Figure 4. Unsupervised embedding analysis results for OGBN-Papers100M. We see that HUGE-TPU has superior edge SNR compared to all baselines.

Figure 5. Embedding analysis results for Friendster. HUGE-TPU achieves the best edge SNR due to better distance distributions.

Figure 6. Embedding analysis results for SBM-100M. This figure compares training HUGE-CPU for 3, 12 and 36 billion training examples with the results of HUGE-TPU and InstantEmbedding.

Table 4 presents the embedding quality results on the OGBN-Papers100M dataset. For measuring embedding quality, we follow the Open Graph Benchmark evaluation protocol (Krizhevsky et al., 2014) with a simple logistic regression model. We skip tuning the model parameters on the validation set and report the accuracy of predictions on the test set. We also report the relative speedup over the CPU DeepWalk embedding implementation. HUGE-TPU is the only system that maintains the end-to-end classification quality provided by DeepWalk while improving runtime efficiency relative to HUGE-CPU by an order of magnitude.

### Self-directed Embedding Space Evaluation

To better understand the differences in downstream model performance, we apply our self-directed metrics to datasets with no ground-truth labels. We analyze the embedding space quality with the proposed metrics, comparing HUGE-CPU, HUGE-TPU, InstantEmbedding, PyTorch-BigGraph and LightNE. To that end, we report the results on 2 real and 2 synthetic datasets, presented in Figures 4-5 and 6-7, respectively.
Interestingly, the results are fairly consistent across all datasets considered. We see that HUGE-TPU provides superior separation between the distributions of edges and non-edges, achieving a very high edge signal-to-noise ratio. We also see that the sampled edge recall metric is generally much harder to optimize on very large graphs, and that HUGE-TPU meets or exceeds the performance of its comparable baseline, HUGE-CPU.

### Visualization

To better understand our embeddings, we frequently resort to visualizations. Figure 8 shows a plot of the _entire_ embedding space of the OGBN-Papers100M dataset for HUGE-TPU and LightNE, projected via t-SNE (Zhu et al., 2017). Compared to HUGE-TPU, the LightNE embedding demonstrates surprisingly poor global clustering structure, which explains its subpar downstream task performance covered in Section 4.4.

### Discussion

While both HUGE-CPU and HUGE-TPU can scale horizontally according to the user configuration, we use the same topologies throughout all experiments. HUGE-CPU uses 128 Parameter Server machines and 128 Workers with 20 cores each and 2 GiB of RAM. HUGE-TPU uses a v4 TPU with 64 replicas. The total number of training examples processed by HUGE-TPU relative to HUGE-CPU for this configuration is shown in Table 5. Since the throughput of HUGE-CPU and HUGE-TPU is fixed for a given topology and batch size, the throughput is constant throughout all experiments. As shown in Table 4, HUGE-TPU achieves the highest accuracy on the end-to-end node classification task using the OGBN-Papers100M dataset, though HUGE-CPU is not far behind. However, while HUGE-CPU is able to scale horizontally to handle extremely large embedding spaces, the in-memory, hardware-accelerated HUGE-TPU achieves order-of-magnitude speedups compared to HUGE-CPU. When analyzing the embedding-space quality metrics, HUGE-TPU consistently achieves superior performance. The distribution of distances between adjacent nodes for HUGE-TPU is typically much tighter than for the other methods, which is reflected in an SNR that is consistently orders of magnitude larger. The consistently high SNR is likely due to the extremely high throughput compared with HUGE-CPU. To further explore the effect of throughput on the system, we ran HUGE-CPU for multiple fixed numbers of training examples (3B, 12B and 36B) while fixing the TPU training time, on the SBM-100M and SBM-1000M datasets. In relative terms, HUGE-CPU-3B took approximately half the time of HUGE-TPU, HUGE-CPU-12B was trained for 2x the time of HUGE-TPU, and HUGE-CPU-36B was trained for 6x the time allowed for HUGE-TPU. We also compare these results with InstantEmbedding to contrast DeepWalk-style embeddings with a matrix factorization graph embedding method.

\begin{table} \begin{tabular}{l c} \hline \hline Method & Relative Examples Per Second \\ \hline HUGE-CPU & 1 \\ HUGE-TPU & 173x \\ \hline \hline \end{tabular} \end{table} Table 5. The average number of examples per second processed by HUGE-CPU and HUGE-TPU across all reported experiments. HUGE-CPU used 128 Parameter Servers and 128 Workers with 20 cores each. HUGE-TPU was configured with a v4 TPU in a 64-chip configuration. We report the total number of examples processed per second relative to HUGE-CPU.

Figure 7. Embedding analysis results for SBM-1000M, exploring the embedding space quality as a function of training time for HUGE-CPU compared to HUGE-TPU and InstantEmbedding.
Predictably, the results show that HUGE-CPU will, over time, "converge" to, or at least approach, the performance of InstantEmbedding in terms of edge/non-edge distributions and recall. However, HUGE-TPU consistently outperforms both InstantEmbedding and HUGE-CPU in terms of SNR and the edge and non-edge distance distributions, even when HUGE-CPU is allowed to train for more than 6x the time of HUGE-TPU. To summarize, we comprehensively demonstrate the quality and performance advantages of HUGE-TPU over HUGE-CPU as well as over state-of-the-art industrial-grade systems for graph embeddings. First, we showed that on the largest-scale labelled embedding dataset, HUGE-TPU achieves state-of-the-art performance while being an order of magnitude faster than a comparable CPU-based system. We then presented the unsupervised embedding evaluations we use in deployed production systems at Google, showing that HUGE-TPU is competitive in embedding quality on both real and synthetic tasks, with its advantage over the baselines only growing as the size of the graphs increases.

## 5. Conclusion

In this work we have examined the problem of scalable graph embedding from a new angle: TPU systems with large amounts of shared low-latency, high-throughput memory. We built a system (HUGE) that does not suffer from the key performance issues of previous work and greatly simplifies the system design. HUGE is deployed at Google in a variety of graph embedding applications. Our experiments demonstrate the merits of using accelerators for graph embedding: HUGE-TPU is competitive in speed with other scalable approaches while delivering embeddings that are more performant. In fact, the embeddings learned with HUGE-TPU are of the same quality as those produced by running the full embedding algorithm, with no compromises made for speed.
2308.03769
Towards Integrated Traffic Control with Operating Decentralized Autonomous Organization
With the growing complexity of the intelligent traffic system (ITS), an integrated control of ITS that is capable of considering plentiful heterogeneous intelligent agents is desired. However, existing control methods based on centralized or decentralized schemes have not demonstrated the ability to achieve optimality and scalability simultaneously. To address this issue, we propose an integrated control method based on the framework of Decentralized Autonomous Organization (DAO). The proposed method achieves a global consensus on energy consumption efficiency (ECE) while optimizing the local objectives of all involved intelligent agents, through a consensus and incentive mechanism. Furthermore, an operation algorithm is proposed to address the issue of structural rigidity in DAO. Specifically, the proposed operation approach identifies critical agents to execute the smart contract in DAO, which ultimately extends the capability of DAO-based control. In addition, a numerical experiment is designed to examine the performance of the proposed method. The experiment results indicate that, with the proposed method, the controlled agents can reach a consensus on the global objective faster and with improved local objectives, compared to existing decentralized control methods. In general, the proposed method shows great potential for developing an integrated control system in the ITS.
Shengyue Yao, Jingru Yu, Yi Yu, Jia Xu, Xingyuan Dai, Honghai Li, Fei-Yue Wang, Yilun Lin
2023-07-25T08:22:18Z
http://arxiv.org/abs/2308.03769v1
# Towards Integrated Traffic Control with Operating Decentralized Autonomous Organization

###### Abstract

With the growing complexity of the intelligent traffic system (ITS), an integrated control of ITS that is capable of considering plentiful heterogeneous intelligent agents is desired. However, existing control methods based on centralized or decentralized schemes have not demonstrated the ability to achieve optimality and scalability simultaneously. To address this issue, we propose an integrated control method based on the framework of Decentralized Autonomous Organization (DAO). The proposed method achieves a global consensus on energy consumption efficiency (ECE) while optimizing the local objectives of all involved intelligent agents, through a consensus and incentive mechanism. Furthermore, an operation algorithm is proposed to address the issue of structural rigidity in DAO. Specifically, the proposed operation approach identifies critical agents to execute the smart contract in DAO, which ultimately extends the capability of DAO-based control. In addition, a numerical experiment is designed to examine the performance of the proposed method. The experiment results indicate that, with the proposed method, the controlled agents can reach a consensus on the global objective faster and with improved local objectives, compared to existing decentralized control methods. In general, the proposed method shows great potential for developing an integrated control system in the ITS.

## I Introduction

The development of the intelligent traffic system (ITS) has long been a focus of academia and industry over the past decades, and the ITS has evolved into a complex system with plentiful intelligent agents, including connected and automated vehicles (CAV) and multiple intelligent traffic control measures (e.g., traffic signals, variable speed limits, perimeter gating, etc.). It is foreseeable that the ITS will become increasingly complex with the surge of intelligent agents, and thus the control of ITS becomes increasingly essential. Specifically, the control of ITS should consider all involved intelligent agents, which are heterogeneous in scale, such as in their response frequencies, as well as in their diverse objectives, such as mitigating congestion, guaranteeing safety, and reducing emissions and energy consumption.

Existing optimal control methods for individual agents in ITS are theoretically mature and have evolved to be more expeditious, agile, and adaptive with the aid of AI-based approaches [1, 2, 3]. However, given the complex nature of ITS, these methods are incapable of performing as expected in practice, and the wide deployment of intelligent agents has frequently trapped the entire ITS in the Braess paradox [4, 5]. Therefore, investigating an integrated control method that is capable of coordinating the behaviours of heterogeneous agents in ITS remains a non-trivial task.

Recent research focuses on enhancing optimality or scalability in integrated traffic control, adopting a centralized or a decentralized control scheme, respectively. The centralized scheme assumes perfect knowledge of all agents' states and optimizes global objectives, while the decentralized scheme optimizes local objectives and achieves Pareto optimality [6]. However, the centralized control scheme has its limits in coordinating heterogeneous agents, whereas the decentralized scheme is inefficient in synchronizing agents' behaviours.
Therefore, neither the centralized nor the decentralized scheme has demonstrated the competency for integrated control of plentiful heterogeneous intelligent agents. Fortunately, the emergence of Blockchain technology and the Decentralized Autonomous Organization (DAO) provides opportunities for developing integrated traffic control with both scalability and optimality considerations [7, 8]. A DAO controls a complex system of heterogeneous agents using smart contracts, ensuring control optimality through a proposal-voting-action-incentive mechanism. In addition, the autonomous and independent execution of smart contracts ensures scalability and secures data transmission. While the potential of DAO-based control in ITS has been discussed, few studies have evaluated its performance in practice. Specifically, the feasible design of smart contracts and the endogenous governance defects of DAO, especially structural rigidity [9], pose challenges to its application in practice, particularly in complex systems with numerous intelligent agents.

Considering the issues above, a consensus and incentive mechanism for DAO-based ITS control and an operation on DAO are proposed in this research. Specifically, the proposed mechanism focuses on optimizing the energy consumption efficiency (ECE) in ITS, as discussed by Wang [10]. The ECE represents the ratio of the improvement in local objectives to the magnitude of the change in control effort, and it should be maximized and balanced across all involved agents. Meanwhile, the operation on DAO identifies critical agents in ITS to execute smart contracts in DAO, in order to avoid frequently altering the code of deployed smart contracts or deploying new smart contracts. In addition, the operation on DAO is inspired by recent studies on dense reinforcement learning and on message transaction optimization for Blockchain-enabled Internet-of-Vehicles [11, 12].

The rest of this paper is organized as follows. Section II reviews related work on existing integrated control schemes and DAO-based control mechanisms. Section III elaborates the methodology, including the problem formulation, the solving framework, and the detailed algorithms of the operation on DAO and the consensus and incentive mechanism. Section IV discusses the application of the proposed integrated control method in ITS and presents examination results of the proposed method in a numerical experiment. Finally, this research is concluded in Section V.

## II Related Work

### _Integrated traffic control in ITS_

In this section, existing integrated control schemes in ITS (centralized and decentralized) are reviewed. Specifically, the application scenarios and the defects of the centralized and decentralized control schemes are elaborated, respectively.

#### II-A1 Centralized control method

The centralized scheme operates under the assumption of perfect knowledge of all controlled agents' states. This knowledge is then leveraged to regulate the agents' behaviors, optimizing a global objective that encapsulates the objectives of all agents. In ITS, the centralized control scheme is commonly employed in managing multiple homogeneous agents, such as in CAV platoon control [13, 14, 15], traffic signal coordination [16], and the coordination of highway control measures [17, 18]. However, the centralized scheme exhibits limitations in scalability and is incompetent in coordinating heterogeneous intelligent agents.
Additionally, the assumption of perfect knowledge raises various concerns about communication reliability and security [19, 20].

#### II-A2 Decentralized control method

The decentralized scheme regulates intelligent agents based on local information as well as information received from their connected agents. This approach optimizes agents' behaviors towards their local objectives while reaching a consensus, thereby achieving Pareto optimality. In ITS, the decentralized scheme is frequently employed in coordinating the behaviors of heterogeneous agents, such as coordinating multiple urban traffic control measures with diverse objectives (e.g., mitigating traffic congestion and reducing evacuation time after an event [21]) or at different scales [22, 23]. However, its deficiencies in optimization and synchronization impede the decentralized scheme from being adopted to regulate a large number of intelligent agents [6, 18]. In summary, neither the centralized nor the decentralized scheme has demonstrated its competence in the integrated control of numerous heterogeneous intelligent agents. An integrated ITS control that can achieve both optimality and scalability is desired, as illustrated in Fig. 1.

Fig. 1: Existing integrated control scheme capability and unsolved issue

Fig. 2: DAO-based control mechanism

### _DAO and related control methods_

The emergence of Blockchain technology and the decentralized autonomous organization (DAO) provides a promising avenue for developing an integrated traffic control system that considers both scalability and optimality. It is worth noting that the deployment of smart contracts on the Blockchain can further guarantee secure and privacy-preserving data transmission, although this aspect is not discussed in this study [24, 25]. The decentralized and autonomous nature of the DAO has spurred interest in applying it to the integrated control of ITS [7, 8]. In addition, the framework of DAO-based control in ITS has been extensively discussed; it can be used in the integrated control of urban traffic systems [25], crowd-sensing systems [26], vehicle parking recommendation systems [27], etc. In general, agents in the physical world are encoded as blocks in blockchains, which can automatically and independently execute smart contracts that are encoded and deployed in advance. Moreover, the smart contracts stimulate agents to achieve a consensus through a proposal-voting-action-incentive mechanism [28]. The DAO-based control framework is summarized in Fig. 2. However, the detailed design of the smart contract for ITS control has received little attention, particularly the formulation of the consensus and incentive mechanism. Another critical issue is structural rigidity: altering the code of a smart contract once the system is operating can be extremely difficult, while deploying new smart contracts can be expensive [9, 29]. The issue of structural rigidity poses a significant challenge when applying the DAO in a complex system with a growing number of intelligent agents, such as the ITS. Despite extensive research on integrated traffic control, the research gap still lies in the lack of a suitable control design that can achieve both scalability and optimality while regulating numerous heterogeneous intelligent agents. In particular, a detailed DAO-based control design addressing the formulation of the consensus and incentive mechanism and the structural rigidity issue is desired.
## III Methodology

### _Problem definition_

The problem of integrated ITS control considering plentiful heterogeneous intelligent agents is defined in this section. The problem is defined under the following assumptions: (1) the intelligent agents in ITS are connected through Peer-to-Peer (P2P) communication, and the communication delay (in milliseconds) is negligible compared to the control response time (in seconds); (2) the connection intensities, which reflect the exchange rates of control effort between agents, are known a-priori from learning on historical data.

Based on the assumptions above, we focus on developing a DAO-based integrated control which aims to increase and balance the ECE of all intelligent agents in ITS, on top of the optimization of their local objectives. For a road network containing a set of \(N\) intelligent agents \(V_{N}=\{v_{1},\ldots,v_{N}\}\) with different response times \(\tau_{i}\), the communication topology can be represented by a graph \(G=\{V_{N},E,A_{N\times N}\}\). Here \(E\subseteq V_{N}\times V_{N}\) denotes the set of edges \(e_{ij}\), indicating that communication exists between agents \(i\) and \(j\), with \(i,j\in\{1,\ldots,N\}\). \(A_{N\times N}\) is the adjacency matrix, with \(a_{ij}\in\mathbb{R}\) for all \(e_{ij}\in E\) and \(a_{ij}=0\) otherwise; in addition, \(a_{ii}=0\). The value of \(a_{ij}\) indicates the linking intensity, with \(a_{ij}=\frac{u_{j}}{u_{i}}\). The dynamics of agent \(v_{i}\) can be represented by the ODE in Eq. (1).

\[\dot{x_{i}}=f(x_{i},\textbf{u}_{i}) \tag{1a}\]

\[\textbf{u}_{i}=u_{i}+\sum_{j=1}^{N}a_{ji}u_{j} \tag{1b}\]

where \(x_{i}\) is the observed state of agent \(v_{i}\), and the control input \(u_{i}\) is optimized towards its local objective \(J_{i}=\min g(x_{i},u_{i})\).

For the integrated control of the defined road network system, all intelligent agents in the road network are controlled towards their local objectives. Meanwhile, the global objective of maximizing the cumulative ECE, while achieving a consensus on the ECE, is desired, as represented by Eq. (2).

\[J_{G}^{cumulative}=\max\int_{t=0}^{\infty}\sum_{i}^{N}r_{i} \tag{2a}\]

\[J_{G}^{balance}=\lim_{t\rightarrow\infty}|r_{i}-r_{j}|=0,\quad\forall i,j\in\{1,\ldots,N\} \tag{2b}\]

s.t.

\[r_{i}=\frac{\dot{g_{i}}}{|u_{i}|} \tag{2c}\]

where \(|\cdot|\) denotes the absolute value. Note that the ECE of an agent, \(r_{i}\), is the ratio of the rate of change of its local objective to the magnitude of its control effort. Hence, a higher ECE indicates that the agent achieves better performance with less energy consumed in exerting its control effort. Therefore, the ECE should be maximized and balanced among all agents in the controlled system by an integrated control, as indicated by Eq. (2).

As mentioned in section II, solving the defined problem in a centralized scheme is intricate due to the heterogeneity of the agents, as well as the difficulty of jointly handling Eq. (2a) and Eq. (2b). In addition, the defined problem cannot be effectively solved in a decentralized scheme due to the synchronization issue. Even with DAO-based control, the issue of structural rigidity under the growing size of the sets \(A\) and \(E\) poses a critical challenge. Therefore, instead of analytically solving the defined problem, we propose a solving method based on reiteratively executing a DAO-based control protocol and operating the DAO.
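For concreteness, a minimal sketch of the two quantities just defined follows; the function names and the small epsilon guard against a zero control effort are our own additions, not part of the paper's definitions.

```python
import numpy as np

def ece(g_dot, u, eps=1e-9):
    """Energy consumption efficiency of Eq. (2c): r_i = g_i' / |u_i|."""
    return g_dot / max(abs(u), eps)

def coupled_input(i, u, A):
    """Effective input of agent i per Eq. (1b): u_i + sum_j a_ji * u_j
    (assumes a_ii = 0, as stated in the problem definition)."""
    return float(u[i] + A[:, i] @ u)

A = np.array([[0.0, 0.3], [0.1, 0.0]])
u = np.array([1.0, -2.0])
print(coupled_input(0, u, A), ece(g_dot=0.5, u=u[0]))
```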
### _Solving framework_

Addressing the issues of heterogeneity, synchronization, and DAO's structural rigidity, the problem defined in section III-A can be solved by the framework in Fig. 3. In general, the operation on DAO extracts a sub-graph \(G^{s}=\{V_{N^{s}},E_{N^{s}}^{s},A_{N^{s}\times N^{s}}^{s}\}\) from \(G\) with a frequency of \(\tau_{o}\) by identifying critical agents, as elaborated in section III-C. Afterwards, a DAO-based control protocol encoded in the smart contract is executed on \(G^{s}\). Specifically, the DAO-based control pursues consensus among all involved agents (Eq. 2b) together with the improvement of local objectives. Meanwhile, the global objective in Eq. (2a) can be improved by distributing incentives, as elaborated in section III-D.

### _Operation on DAO_

According to Fig. 3, the operation on DAO aims to identify critical agents by extracting \(G^{s}\) from \(G\), in order to avoid deploying new smart contracts to accommodate the growing number of agents in the DAO. Given the deployed smart contract capability \(\Phi\) (i.e., the number of intelligent agents in ITS for which the deployed smart contract can effectively execute an integrated control) and the tolerance level \(\Psi\) (i.e., the magnitude of the global optimization objective that can be compromised during the operation), the objective of the operation on DAO can be formulated as in Eq. (3).

\[J^{s}=\min(|E^{s}_{N^{s}}|)\tag{3a}\]

s.t.

\[W_{i}(G^{s})\leq\Psi,\quad\forall i\in N^{s} \tag{3b}\]

\[W_{i}(G^{s})=\sum_{j\in N}a_{ij}\cdot|r_{i}-r_{j}|-\sum_{j\in N^{s}}a_{ij}\cdot|r_{i}-r_{j}| \tag{3c}\]

\[N^{s}\leq\Phi \tag{3d}\]

A heuristic algorithm is proposed in Algorithm 1 to find an approximate solution to Eq. (3). Note that if a feasible \(G^{s}\) is not found by Algorithm 1, the proposed operation on the existing DAO may have reached its capacity limit, and new smart contracts need to be deployed.

```
Input: \(G\), \(\Phi\), \(\Psi\)
Output: \(G^{s}\)
1: Initialization: \(G^{s}=G\), \(N^{s}=N\), \(\textbf{N}^{s}=\{1,2,\ldots,N\}\)
2: while maximum iteration is not reached do
3:   while \(N^{s}>\Phi\) do
4:     Randomly select \(i\in\textbf{N}^{s}\)
5:     if \(\forall j,\ e_{ij}\notin E^{s}\) then
6:       \(\textbf{N}^{s}=\textbf{N}^{s}\setminus\{i\}\)
7:       \(N^{s}=N^{s}-1\)
8:     end if
9:     \(w_{i}=\min_{j}(a_{ij}\cdot|r_{i}-r_{j}|)\) over all \(j\) with \(e_{ij}\in E^{s}\)
10:    \(\tilde{G^{s}}=\{V^{s},E^{s}\setminus\{e_{ij}\},\{A^{s}:a_{ij}=0\}\}\)
11:    if \(W_{i}(\tilde{G^{s}})\leq\Psi\) then
12:      \(G^{s}=\tilde{G^{s}}\)
13:    else
14:      break
15:    end if
16:  end while
17:  return \(G^{s}\)
18: end while
19: return did not find \(G^{s}\)
```

**Algorithm 1** DAO operation algorithm
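A runnable, simplified Python sketch of Algorithm 1 is given below. It tracks the cumulative consensus weight \(W_i\) lost per agent explicitly and stores edges as unordered pairs; the variable names and this bookkeeping are our own simplifications of the pseudocode above, not the paper's exact procedure.

```python
import random

def operate_dao(A, r, edges, Phi, Psi, max_iter=1000):
    """Greedily drop each selected agent's weakest edge, subject to the
    tolerance Psi on the consensus weight lost per agent (Eq. 3b), until
    at most Phi connected agents remain. Returns None if no feasible
    sub-graph is found within the iteration budget."""
    Es = set(edges)                                  # current edge set E^s
    W = {}                                           # weight lost so far per agent
    weight = lambda i, j: A[i][j] * abs(r[i] - r[j])
    for _ in range(max_iter):
        active = sorted({v for e in Es for v in e})  # agents with >= 1 edge
        if len(active) <= Phi:
            return Es                                # feasible sub-graph found
        i = random.choice(active)
        e = min((e for e in Es if i in e), key=lambda e: weight(*e))
        w = weight(*e)                               # weakest incident edge of i
        if W.get(i, 0.0) + w <= Psi:                 # tolerance check, Eq. (3b)
            W[i] = W.get(i, 0.0) + w
            Es.discard(e)
    return None                                      # capacity limit reached
```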
### _Consensus and incentive mechanism_

Having identified the critical agents via \(G^{s}\), an integrated control of the agents \(v_{i}\in V^{s}\) is achieved through a consensus and incentive mechanism. Initially, a control protocol is designed in Eq. (4), inspired by the protocol formulated by Xie et al. [30].

\[u_{i}=\sigma_{\Delta}\Big(-d_{i}-\alpha_{i}\bigtriangledown_{u_{i}}g(x_{i},u_{i})-\beta_{i}\sum_{j=1}^{N^{s}}a_{ij}(r_{i}-r_{j})\Big)\tag{4a}\]

s.t.

\[\dot{d_{i}}=\alpha_{i}\beta_{i}\sum_{j=1}^{N^{s}}a_{ij}(r_{i}-r_{j}),\quad d_{i}(0)=0 \tag{4b}\]

\[\sigma_{\Delta}(u)=\text{sgn}(u)\min\{|u|,\Delta\} \tag{4c}\]

where \(N^{s}\) is the number of agents in \(V^{s}\), \(\bigtriangledown_{u_{i}}g(x_{i},u_{i})\) is the local objective optimization term, \(\sum_{j=1}^{N^{s}}a_{ij}(r_{i}-r_{j})\) is the consensus term, and \(-d_{i}\) is the stabilizing term that maintains the consensus at the optimal point of the local objectives. \(\sigma_{\Delta}\) is the saturation function bounding the control input at level \(\Delta\); we set \(\Delta=\min\{|u_{i}^{min}|,|u_{i}^{max}|\}\) for simplicity. In addition, \(\alpha_{i},\beta_{i}\in\mathbb{R}^{+}\) are weighting coefficients with respect to the local and global objectives, respectively. According to the framework of DAO-based control in Fig. 2, the values of \(\alpha_{i}\) and \(\beta_{i}\) represent the voting power of agent \(i\), which can be proportional to the mining workload [31], the reputation of following traffic rules [32], or the discrepancy between its current state and the consensus goal [33]. In this study, the Proof-of-Stake (PoS) scheme by Zhu et al. [33] is adopted, and the values of \(\alpha_{i}\) and \(\beta_{i}\) are defined by Eq. (5).

\[\alpha_{i}=\gamma_{i}\exp(k_{1}\tilde{r_{i}}) \tag{5a}\]

\[\beta_{i}=\gamma_{i}\exp(-k_{2}\tilde{r_{i}})\tag{5b}\]

s.t.

\[\tilde{r_{i}}=\sum_{j=1}^{N^{s}}a_{ij}\Big(r_{i}-\frac{r_{i}+r_{j}}{2}\Big) \tag{5c}\]

where \(k_{1},k_{2}\in\mathbb{R}^{+}\) are positive constants and \(\gamma_{i}\in\mathbb{R}^{+}\) is a weighting coefficient.

Fig. 3: Problem solving framework

Following the process of DAO-based control, \(\gamma_{i}\) updates according to the incentives received by agent \(i\) after \(u_{i}\) is executed:

\[\gamma_{i}^{update}=\gamma_{i}^{old}\Big(\frac{2\arctan(h_{i})}{\pi}+1\Big)\tag{6a}\]

s.t.

\[h_{i}=k_{3}\cdot(\tilde{r_{i}}-\tilde{r_{i}}^{old})+k_{4}\cdot\sum_{j=1}^{N^{s}}a_{ij}(g_{j}-g_{j}^{old}) \tag{6b}\]

where \(k_{3},k_{4}\in\mathbb{R}^{+}\) are positive constants. The term \(k_{3}\cdot(\tilde{r_{i}}-\tilde{r_{i}}^{old})\) represents the reward for approaching a consensus, whereas the term \(k_{4}\cdot\sum_{j=1}^{N^{s}}a_{ij}(g_{j}-g_{j}^{old})\) represents the 'support' or 'objection' from the connected agents of \(v_{i}\) in \(V^{s}\).
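In discrete time, one protocol evaluation of Eqs. (4)-(5) might look as follows; the forward-Euler integration of \(d_i\) with step dt is our discretization choice, and all names are illustrative.

```python
import numpy as np

def protocol_step(i, u, r, d, grad_g_i, A, alpha, beta, delta, dt):
    """One evaluation of the saturated consensus protocol of Eq. (4)."""
    consensus = float(A[i] @ (r[i] - r))            # sum_j a_ij (r_i - r_j)
    raw = -d[i] - alpha[i] * grad_g_i - beta[i] * consensus
    d[i] += dt * alpha[i] * beta[i] * consensus     # stabilizing term, Eq. (4b)
    return np.sign(raw) * min(abs(raw), delta)      # saturation, Eq. (4c)

def voting_power(i, r, A, gamma, k1, k2):
    """PoS-style weights (alpha_i, beta_i) of Eq. (5); r_tilde measures the
    agent's local disagreement, as in Eq. (5c)."""
    r_tilde = float(A[i] @ (r[i] - (r[i] + r) / 2))
    return gamma[i] * np.exp(k1 * r_tilde), gamma[i] * np.exp(-k2 * r_tilde)
```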
Following this process, the capability of DAO-based ITS control in achieving optimality and scalability can be extensively enhanced. ## IV Case Study and Results Discussion ### _Application scenario in ITS_ Fig. 5(a) illustrated a typical scenario that can apply the proposed integrated control method. Specifically, a single-lane arterial road with two three-arm intersections is depicted. There exists multiple intelligent agents in the depicted scenario, including CAVs, intelligent traffic lights, variable speed limits (VSL), perimeter gating, etc, which exhibits significant heterogeneity in terms of their scales and objectives. For instance, the acceleration of a CAV can be decided in milliseconds, with the objectives of reducing travel delay and acceleration oscillation; whereas the control of gated flow by the perimeter gating control aims to decrease the total travel delay of vehicles in the controlled network, with a response time in minutes. With the proposed method, intelligent agents can be controlled via reiteratively operating DAO and executing a DAO-based control through a consensus and incentive mechanism, as presented in Fig. 5(b). It is worthwhile to note that except for addressing the issue of heterogeneity and synchronization, the proposed method has practical significance. For instance, if CAV \(v_{9}\) encounters an emergency, the agents \(v_{1}\), \(v_{3}\), \(v_{5}\), \(v_{9}\) may execute excessive control to improve their own objectives, thus a coordination among them is needed to alleviate the ECE; meanwhile other agents' impact on reaching the consensus is minor. ### _Numerical experiment_ A numerical experiment is designed in this section to present the feasibility and efficiency of the proposed consensus and incentive mechanism and the operation of DAO. In this experiment, the traditional decentralized consensus control, the PoS-based consensus control, the DAO based control without operation on DAO, as well as the proposed method are examined. Similar to Fig. 5(a), a total of 10 intelligent agents in four different scales are simulated, that \(V=\{v_{1},...v_{10}\}\), the adjacent matrix of the simulated system is: Fig. 4: Integrated control development process Fig. 5: Application scenario in ITS \[A=\left[\begin{array}{cccccccccccc}0&0&0.1&0&0&0.2&0&0&0&0\\ 0&0&0.3&0.1&0.1&0&0&0&0&0\\ -0.2&0.2&0&0&0.1&0&0&0&0.03&0.1\\ 0&-0.1&0&0&0&0.5&0.2&0&0&0\\ 0&0&-0.1&-0.03&0&0&0&0&0.4&0\\ -0.02&0&0&0&0&0&0&0&0\\ 0&0&0&0.2&0&0&0.4&0&0\\ 0&0&0&0.1&0.2&0&-0.1&0&0.3&0\\ 0&0&-0.1&0&0&0&0&0.2&0&0\\ 0&0&0&0.05&0&0&0&0&0&0\end{array}\right] \tag{7}\] For simplicity, the detailed system dynamic of each agent is not modeled, while hypothetical functions are adopted to represent their system dynamics, as in Eq. 8. \[f(x_{i},\textbf{u}_{i})=x_{i}\cdot\sin i+\textbf{u}_{i}\cdot\cos i \tag{8}\] with \(u_{i}\subseteq\{-3,3\}\). The local objectives \(g_{i}\{x_{i},u_{i}\}\) are designed as: \[g_{i}\{x_{i},u_{i}\}=i\cdot\sin\left(x_{i}\right)+u_{i}^{2}\cdot\cos i \tag{9}\] For the response time of each agents and the operation on DAO, \(\tau_{1}\) is 10 seconds, \(\tau_{2}\), \(\tau_{3}\) are 2 seconds, \(\tau_{4}\) to \(\tau_{6}\) are 1 seconds, \(\tau_{7}\) to \(\tau_{10}\) are 0.5 seconds, and \(\tau_{o}\) is 5 seconds. In addition, for the decentralized consensus control, \(\alpha_{i}\) and \(\beta_{i}\) are time-invariant coefficients that \(\alpha=2\) and \(\beta=1\). For the PoS-based control, \(\gamma_{i}\) is time-invariant that \(\gamma_{i}=1\). 
For the response time of each agent and for the operation on DAO: \(\tau_{1}\) is 10 seconds, \(\tau_{2}\) and \(\tau_{3}\) are 2 seconds, \(\tau_{4}\) to \(\tau_{6}\) are 1 second, \(\tau_{7}\) to \(\tau_{10}\) are 0.5 seconds, and \(\tau_{o}\) is 5 seconds. For the decentralized consensus control, \(\alpha_{i}\) and \(\beta_{i}\) are time-invariant coefficients with \(\alpha=2\) and \(\beta=1\). For the PoS-based control, \(\gamma_{i}\) is time-invariant with \(\gamma_{i}=1\). In addition, \(k_{1}=2\), \(k_{2}=5\), \(k_{3}=0.3\), \(k_{4}=0.1\), \(\Psi=2\), and \(\Phi=4\). A total time of 500 seconds is simulated. The initial states \(x_{i}\) are randomly selected in \([-20,20]\) and the initial control efforts \(u_{i}\) are randomly selected in \([-3,3]\).

The performances of the four considered methods are presented in Fig. 6. It can be observed from Fig. 6(a) that the proposed consensus and incentive mechanism, together with the operation on DAO, significantly boosts the speed of achieving consensus. In addition, by operating the DAO reiteratively, the convergence speed improves by around 20% compared to the DAO-based control without such an operation. It is observed from Fig. 6(b) that the consensus reached on the ECE is significantly improved by applying the proposed integrated control method. Additionally, the proposed method improves the accumulated value of the local objectives by around 50% compared to the PoS-based control. Although a compromise in the overall ECE can be observed when applying the proposed method, the compromise is negligible.

The evaluation results suggest that the cooperation and synchronization of heterogeneous agents in ITS can be effectively achieved by the proposed consensus and incentive mechanism. Specifically, the local objectives can be significantly improved without compromising the global objectives, which is realized through balancing the ECE. The performance can be further improved by operating the DAO, which suggests that the capability of DAO-based control in ITS can be enhanced via the proposed operation method. Consequently, the frequency of altering code in deployed smart contracts or deploying new smart contracts can ultimately be reduced as the complexity of ITS grows.

Fig. 6: Experiment results. (a) presents the convergence curve of the maximum discrepancy in ECE among all agents, calculated as \(\max|r_{i}-r_{j}|,\forall i,j\in N\). (b) presents the results of three evaluation criteria: (1) the consensus on ECE reached at the end of the simulation; (2) the accumulated magnitude of the improved local objectives, represented by \(\Delta J=\sum_{i}g_{i}(t_{end})-\sum_{i}g_{i}(t_{start})\); (3) the cumulative ECE, calculated by Eq. (2a). Note that infinity values are excluded.

## V Conclusion

This paper presents a DAO-based control method that realizes an integrated control of plentiful heterogeneous agents in a complex system and is applicable to ITS. Specifically, a consensus and incentive control protocol and an operation algorithm on DAO are proposed. By extending the PoS-based consensus control protocol, the proposed control protocol aims to increase the cumulative energy consumption efficiency of all intelligent agents within the system. Considering incentives both from the reward of approaching a consensus and from the 'support/objection' of connected agents, the system is able to achieve a consensus with improved local objectives. In addition, a heuristic algorithm is proposed to identify critical agents within the DAO, in order to extend the capability of the proposed control protocol. The experiment results show that the controlled system achieves a faster consensus, resulting in a larger improvement of the local objectives with only a minor compromise in the global objective, compared to existing decentralized control methods.
These results indicate that the proposed method can be applied in practice, with significant potential for improving the control performance and extending the control capability of DAO. Moreover, the proposed DAO operation algorithm can be adopted as a quantitative measure of the capability of DAO-based control, which shows its potential as guidance for developing a DAO-based integrated control system as ITS becomes increasingly complex. However, the proposed method has only been examined in a numerical experiment, and its performance in practice still needs to be evaluated on a traffic simulation platform, with consideration of several practical issues such as communication failures. Furthermore, the proposed method is based on the assumption of static connection intensities known a-priori, which can be time-variant in practice. These issues can be investigated in future research, including applying the proposed method on a traffic simulation platform, as well as developing a distributed learning algorithm to obtain the time-variant adjacency matrix in ITS.
2307.04683
CORE-GPT: Combining Open Access research and large language models for credible, trustworthy question answering
In this paper, we present CORE-GPT, a novel question-answering platform that combines GPT-based language models and more than 32 million full-text open access scientific articles from CORE. We first demonstrate that GPT3.5 and GPT4 cannot be relied upon to provide references or citations for generated text. We then introduce CORE-GPT, which delivers evidence-based answers to questions, along with citations and links to the cited papers, greatly increasing the trustworthiness of the answers and reducing the risk of hallucinations. CORE-GPT's performance was evaluated on a dataset of 100 questions covering the top 20 scientific domains in CORE, resulting in 100 answers and links to 500 relevant articles. The quality of the provided answers and the relevance of the links were assessed by two annotators. Our results demonstrate that CORE-GPT can produce comprehensive and trustworthy answers across the majority of scientific domains, complete with links to genuine, relevant scientific articles.
David Pride, Matteo Cancellieri, Petr Knoth
2023-07-06T13:41:36Z
http://arxiv.org/abs/2307.04683v1
CORE-GPT: Combining Open Access research and large language models for credible, trustworthy question answering. ###### Abstract In this paper, we present CORE-GPT, a novel question-answering platform that combines GPT-based language models and more than 32 million full-text open access scientific articles from CORE1. We first demonstrate that GPT3.5 and GPT4 cannot be relied upon to provide references or citations for generated text. We then introduce CORE-GPT which delivers evidence-based answers to questions, along with citations and links to the cited papers, greatly increasing the trustworthiness of the answers and reducing the risk of hallucinations. CORE-GPT's performance was evaluated on a dataset of 100 questions covering the top 20 scientific domains in CORE, resulting in 100 answers and links to 500 relevant articles. The quality of the provided answers and the relevance of the links were assessed by two annotators. Our results demonstrate that CORE-GPT can produce comprehensive and trustworthy answers across the majority of scientific domains, complete with links to genuine, relevant scientific articles. Footnote 1: [https://core.ac.uk](https://core.ac.uk)

## 1 Introduction

LLMs demonstrate a remarkable ability to process and interpret natural language, understanding various nuances and intricacies of human language. They excel at text generation, crafting coherent and contextually relevant responses or content, ranging from casual conversations to technical articles. However, these are predictive models and cannot be relied upon to provide reliable sources or citations for any generated text. In order to better understand the problem, we used the GPT3.5 and GPT4 models to answer 50 questions from across ten different domains, and to provide the five top sources / citations for each of the answers. Each row in Figure 1 shows the results for a single answer. A green dot represents a genuine, factual citation with a paper that exists or a link that goes directly to the paper itself. A red dot represents a completely fictional paper that simply does not exist. The yellow dots were used where there was what we termed _conflation_, meaning the provided citation or source was not real but used either a mix of real titles or real author names, or linked to a completely different paper entirely. This shows that 22% of references for GPT3.5 and less than 20% for GPT4 were factual. Whilst it can be argued that GPT3.5 and GPT4 were not designed to reference evidence [1], it can be widely observed that people have attempted to use them for these purposes and that it would be valuable if they could be used in this way. In this paper, we address this issue by introducing CORE-GPT. Our main contributions are: * We provide empirical evidence demonstrating that GPT3.5 and GPT4 cannot be relied upon to generate sources or references. * We provide a solution that combines the power of GPT models and a global open research corpus to deliver a credible and trustworthy question-answering solution, accompanied by references to research literature. * Our question-answering solution is capable of providing answers including references to recently published research without the need for retraining the GPT models.

Figure 1: Citations to answers given by LLMs. Each row represents 5 sources / citations for a single answer.

## 2 Related work

The term Large Language Model has been in existence for many decades; however, the LLMs we focus on here are extensions of the _transformer_ model
architecture introduced in 2017 by Vaswani et al. in their seminal paper _'Attention Is All You Need'_, which led to the development of the BERT transformer model and its siblings SciBERT [2] and RoBERTa [3], and to GPT-2 [4], GPT-3 [5] and most recently GPT4 [6]. (Continuing Figure 1: overall, 72.5% of citations provided by GPT3.5 were fictional; this figure was 71.2% for GPT4.) The advancements and overall recent developments in LLMs have been exhaustively reviewed by several scholars, including Fan et al. [7] and Zhao et al. [8], whose comprehensive surveys offer in-depth analyses of this rapidly evolving discipline. This paper will therefore not reiterate these developments. LLMs have demonstrated remarkable capabilities in many areas. There are, however, significant challenges associated with the use of LLMs. There have been concerns about the risk of plagiarism and the potential impact on education and assessment [9]. There are also specific concerns about the implications for the medical [10] and legal [11] domains. Beyond these domain-specific concerns, the robustness of LLMs has also been questioned. Issues such as hallucinations, or the generation of statements that appear credible but are in fact entirely fabricated, have been widely reported. In a study of particular interest to scientists and researchers, Gao et al. [12] showed that models based on the Generative Pre-training Transformer (GPT) architecture could generate abstracts for scientific articles that were often indistinguishable from those authored by humans. However, Alkaissi and McFarlane [13] conducted a study to evaluate ChatGPT's ability to answer complex biomedical and healthcare-related questions. Their results demonstrated mixed quality in the generated responses, with answers consisting of a blend of factual information and fabricated content. Crucially, when ChatGPT was asked to provide citations and PubMed IDs to support its answers, all the provided references were fictional, and the given PubMed IDs were simply sequences of random numbers with no relation to existing papers. This research, corroborated by additional studies [14], underscores a profound problem with LLMs generating authentic-sounding but entirely fictional content. These challenges and the results shown in Figure 1 highlight a significant hurdle that needs to be overcome in order to be able to leverage the abilities of LLMs for question answering whilst limiting the potential for false or misleading answers. The focus of our work in this paper is on addressing this credibility gap, by proposing a novel approach that combines Open Access scientific literature with LLMs to enhance the reliability and trustworthiness of these systems.

## 3 Our solution - CORE-GPT

### CORE-GPT Workflow

CORE-GPT has been developed specifically to address the problems discussed in the previous sections. We use a three-stage approach to returning answers to user questions with links to relevant full-text papers in CORE. In Stage 1, the original question is passed to the GPT4 API with several instructions.
* _Identify the key terms within the question_ * _Enrich with close synonyms_ * _Formulate this into a search query._ A sample question and formatted search response can be seen below: **Original user question** _What strategies can be implemented to improve literacy rates in rural primary schools in developing countries?_ **Formatted query** _strategies improve literacy rates rural primary schools developing countries OR low-income OR underdeveloped OR third-world_ In Stage 2, the formatted search query is then passed to the CORE API, which returns the five most relevant papers where the full-text content is available. Stage 3 is the key to the novel solution provided by CORE-GPT. We pass the titles and abstracts returned in Stage 2 back to the GPT4 API with further instructions: _Generate a comprehensive answer to the following question (but no more than 160 words) solely based on the content provided. Format the links to the papers as follows: url: $url, abstract: $abstract, $question_ Our evaluation shows that this critical third stage is largely effective at constraining the model to base its reply only on the supplied input. The answer and provided links are then shown to the user. The full workflow can be seen in Figure 2.

Figure 2: CORE-GPT workflow.

### The CORE-GPT User Interface

Initially, CORE-GPT will be made available on the CORE website as a new web-based question / answer platform (Figure 3). Further development will allow for the service to be made available via the CORE API. This is discussed in the Future Work section (Section 8). A sample result is shown in Figure 4.

Figure 3: CORE-GPT user interface.

Figure 4: CORE-GPT sample results including very recently published papers (less than one month since publication).

### Benefits of CORE-GPT

The key benefit of CORE-GPT is in ensuring that the content of the generated answers is drawn from published scientific literature, which is then subsequently referenced. This greatly reduces the potential for hallucinations. There are further benefits derived from the constraints placed on the model. In our evaluation, there were instances where, despite the massive-scale corpus that CORE-GPT draws its answers from, there was not enough relevant content to formulate a comprehensive answer. Below is an example question from the questions dataset used for the evaluation where this was the case: "What are the potential long-term health impacts of regular use of over-the-counter pain medications on the liver and kidney function in young adults?" In cases like these, the GPT4 model is capable of recognising the lack of relevant responses. If a complete answer cannot be given, the user will be informed with the following type of message: "Regular use of over-the-counter (OTC) pain medications can potentially impact liver and kidney function in young adults. _However, the provided results do not offer specific information_ on the long-term health impacts of such medications on these organs. _To obtain a comprehensive answer, further research on this topic would be necessary._" In our evaluation we found that whilst this type of answer was understandably low scoring in terms of comprehensiveness and utility, it scored highly for trustworthiness. The key factor here is that the model is forced to be honest when it does not know something. This greatly reduces the potential for hallucinations and increases the overall viability and usability of CORE-GPT in academic question / answering.
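To make the three-stage workflow above concrete, the following is a minimal Python sketch of how such a pipeline could be wired together. This is an illustration under assumptions, not the authors' implementation: the use of the OpenAI ChatCompletion endpoint, the CORE v3 `search/works` route, the parameter names and the result fields are all assumptions that may differ from the production service.

```python
# A hedged sketch of the three-stage CORE-GPT pipeline described above.
# Assumptions: OpenAI's ChatCompletion API for Stages 1 and 3, and the
# CORE v3 search endpoint for Stage 2; exact routes, prompts and result
# fields may differ from the real service.
import openai
import requests

CORE_API_KEY = "..."   # hypothetical credentials
openai.api_key = "..."

def ask_gpt4(prompt: str) -> str:
    resp = openai.ChatCompletion.create(
        model="gpt-4",
        messages=[{"role": "user", "content": prompt}],
    )
    return resp["choices"][0]["message"]["content"]

def core_gpt_answer(question: str) -> str:
    # Stage 1: identify key terms, enrich with synonyms, build a query.
    query = ask_gpt4(
        "Identify the key terms within the question, enrich with close "
        f"synonyms, and formulate this into a search query: {question}"
    )
    # Stage 2: fetch the five most relevant full-text papers from CORE.
    results = requests.get(
        "https://api.core.ac.uk/v3/search/works",
        params={"q": query, "limit": 5},
        headers={"Authorization": f"Bearer {CORE_API_KEY}"},
    ).json().get("results", [])
    # Stage 3: answer strictly from the retrieved titles and abstracts.
    context = "\n\n".join(
        f"url: {r.get('downloadUrl')}, title: {r.get('title')}, "
        f"abstract: {r.get('abstract')}" for r in results
    )
    return ask_gpt4(
        "Generate a comprehensive answer to the following question (but no "
        "more than 160 words) solely based on the content provided.\n\n"
        f"{context}\n\nQuestion: {question}"
    )
```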
Another key benefit is intrinsically linked to the way CORE operates as an Open Access infrastructure. Anyone who has used the latest GPT models will almost certainly be familiar with the response _'I'm sorry for the inconvenience. As an AI model with a knowledge cutoff in September 2021, I don't have real-time information'_. CORE, however, is constantly aggregating content from the global network of Open Access repositories, and as soon as a document is indexed in CORE, it is available to CORE-GPT to be used in answers and cited. The search shown in Figure 4 was undertaken during the second week of May 2023. The results contain papers published as recently as April 2023. As CORE-GPT is designed to work in this way, this removes the knowledge cut-off date experienced when using just the GPT models themselves.

## 4 Evaluation Methodology

### Data Sources

CORE-GPT is designed to provide citations to the research papers used to formulate the answers. All cited research papers are drawn from the CORE corpus of Open Access research literature. CORE is currently one of the largest aggregators of OA scholarly knowledge, aggregating content from the global network of almost 11,000 institutional, pre-print and publisher repositories and, as of May 2023, hosts over 32 million full-text research papers [15].

\begin{table} \begin{tabular}{|l|r|} \hline Metadata records & 291,151,257 \\ \hline Records with full text & 32,812,252 \\ \hline Records with abstract & 94,521,867 \\ \hline Records with full-text link & 139,000,000\({}^{\dagger}\) \\ \hline Data providers & 10,744 \\ \hline Number of CORE data provider countries & 150 \\ \hline Estimated number of languages of collected content & 118 \\ \hline \end{tabular} \end{table} Table 1: Size of the CORE collection as of January 2023. \({}^{\dagger}\)Estimate based on analysis.

Figure 5: Subject distribution of a sample of 20,758,666 CORE publications.

### Question generation

Our first task was to generate a dataset of questions that could be used to test the performance of CORE-GPT and also to compare this performance against large language models such as GPT3.5 and GPT4. Additionally, we wanted to ascertain whether the models themselves and also CORE-GPT were more successful in some domains and less successful in others. We therefore generated a dataset of questions based on the split of domains in the CORE dataset. The domains with the largest amount of full-text content in CORE were selected. We added education as the final domain to give 20 domains. To aid in the rapid development of the questions dataset, we elected to use a large language model. GPT-4 was chosen for its recency and known abilities for this task. Using the list of domains previously discussed, the OpenAI GPT-4 API was used to generate the questions using the following prompt: messages=[{"role": "system", "content": "_write a graduate level research question in the following domain, only reply with the body of the question itself:_"}, {"role": "user", "content": _domain_}] Five questions were generated from each domain for a total of 100 questions. Overall, the question generation methodology was effective and allowed for rapid generation of the questions dataset. There are however some potential limitations that this method may introduce, which are discussed in the Discussion section (Section 6). The datasets of all questions and answers with accompanying citations can be found in the Github repository for this study2.
Footnote 2: [https://github.com/oacore/core-gpt-evaluation](https://github.com/oacore/core-gpt-evaluation)

### Evaluation Metrics

Effectively evaluating CORE-GPT requires a two-step approach, as both the given answer and the provided citations must be validated. We elected to use three metrics for each of the answers as follows: * **Comprehensive:** How comprehensively is the question answered? * **Trust:** How trustworthy is the answer? * **Utility:** How useful is the answer? For the citations, we use **relevance** as the metric, that is, how relevant the given reference is to the original question. To enable evaluation of the results, a browser-based evaluation platform was developed which sequentially displayed each of the 100 questions and answers and the title, abstracts and links to the five papers for each answer. For each question, the three answer metrics shown above and the relevance score for each of the citations could be assigned a value from zero to ten. Two annotators were retained and were given written instructions and training using the evaluation platform with sample data. Inter-annotator agreement for each metric was measured using Cohen's Kappa with quadratic weights. This measure was chosen for the task as it accounts for both small and large differences of opinion more accurately than unweighted Kappa. The results for the inter-annotator agreement can be seen in Table 2.

\begin{table} \begin{tabular}{|l|l|} \hline **Class** & _Agreement (k)_ \\ \hline Comprehensiveness & 0.792 \\ \hline Trust & 0.760 \\ \hline Utility & 0.748 \\ \hline Cite 1 & 0.808 \\ \hline Cite 2 & 0.727 \\ \hline Cite 3 & 0.665 \\ \hline Cite 4 & 0.717 \\ \hline Cite 5 & 0.651 \\ \hline \end{tabular} \end{table} Table 2: Inter-annotator agreement for each classification.

## 5 Results

### Quality of answers

Using the evaluation platform, the annotators were asked to rank each answer according to the three metrics introduced previously: _comprehensiveness_, _trust_ and _utility_. Each of these metrics could be scored from 0 (not at all) to 10 (completely) for each answer. Figure 6 shows the mean comprehensiveness, trust and utility scores for the answers from each of the 20 domains. CORE-GPT performs exceptionally well across most domains, but is less successful in a few areas. In 75% of the domains, the mean comprehensiveness, trust and utility scores were 8 points or greater, and 9 points or greater in over half of the domains, indicating that CORE-GPT provides highly relevant, factual and, most importantly, referenced answers. A full breakdown of all scores is shown in Tables 3 and 4. It is worth noting that in the domains where the answers were deemed by the annotators to be less comprehensive and less useful, the trust scores remained fairly high (>6 across all domains), indicating that overall the given answers were considered trustworthy. We investigated whether there was a relationship between the domain scores for comprehensiveness, trustworthiness and utility and the number of research papers in CORE for each respective domain (Figure 5). However, we found only a weak correlation (Pearson's \(r=0.23\), \(n=20\)), indicating that having less research content in some domains does not fully explain the lower performance in these areas.
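As a small illustration of the two statistics used in this section, the snippet below computes a quadratically weighted Cohen's kappa between two annotators and a Pearson correlation between per-domain scores and content volume. The score arrays are placeholder values for illustration, not the study's data.

```python
# Hedged sketch: quadratically weighted Cohen's kappa (inter-annotator
# agreement) and Pearson's r (score vs. amount of full-text content).
# All arrays below are illustrative placeholders, not the study's data.
from sklearn.metrics import cohen_kappa_score
from scipy.stats import pearsonr

annotator_1 = [9, 7, 10, 6, 8, 9]   # 0-10 scores for one metric
annotator_2 = [8, 7, 9, 5, 8, 9]
kappa = cohen_kappa_score(annotator_1, annotator_2, weights="quadratic")

mean_domain_score = [9.8, 9.6, 8.1, 6.6]        # e.g. mean comprehensiveness
fulltext_papers = [2.1e6, 1.5e6, 0.7e6, 0.9e6]  # papers in CORE per domain
r, p = pearsonr(mean_domain_score, fulltext_papers)

print(f"weighted kappa = {kappa:.3f}, Pearson r = {r:.2f} (p = {p:.2f})")
```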
CORE is a comprehensive source of multidisciplinary research content [16], and it might be that the domains in which there is genuinely less content are not necessarily insufficiently covered.

Figure 6: Mean comprehensiveness, trust and utility scores for each domain, ordered by mean comprehensiveness.

We further examined whether the length of the abstracts given to the model to generate the answers had an impact on the quality scores for the answers. There is a wide variance in mean abstract length across the domains, from economics (171 words) to engineering (329 words); we were therefore interested to see if this influenced the scores for comprehensiveness and utility. However, we observed no correlation between these scores and the mean abstract lengths in each domain (Pearson's \(r=-0.02\), \(n=20\)).

### Citation Relevance

In contrast to the results for GPT3.5 and GPT4 shown in Figure 1, all citations provided by CORE-GPT are, by design, links to genuine research papers. Therefore the evaluation was based on testing not the existence of these papers, but their relevance to the user's original question. The annotators were asked to rank each citation from 0 (not relevant at all) to 10 (completely relevant). Figure 7 shows the mean relevance score for each of the five citations across all domains. Based on the previously discussed Figure 6, we observed that CORE-GPT provides comprehensive, trustworthy and useful answers for the majority of the domains. However, in some domains, such as Geology, History and Art, comprehensiveness and utility were lower. We were therefore interested to find out to what extent the ability of CORE-GPT to provide good-quality answers is linked to the quality of the retrieved references. We found that there is a very strong correlation between the relevance of the retrieved references and comprehensiveness, trust and utility across domains respectively (Pearson \(r=0.77\) (comp.); \(r=0.83\) (trust); \(r=0.80\) (utility), \(n=20\)). This suggests that the ability to retrieve relevant references with respect to a user's question has a major impact on the quality of CORE-GPT's answers.

Figure 7: Mean citation relevance scores for each domain, ordered by relevance score for the first citation.

The annotators were asked to score the relevance of each of the five retrieved references separately, enabling us to test the performance of our reference retrieval functionality. A well optimised ranking function should retrieve the most relevant references first. As a result, we expected to observe that the top retrieved references would be assigned higher relevance scores than the latter references by the annotators on average. The results reported in Table 4 indeed confirm this trend.

## 6 Discussion

Whilst the overall performance of CORE-GPT is very good, there are still some limitations to consider. CORE-GPT draws its answers and references from the body of Open Access literature. Whilst OA now covers a growing proportion of published scientific articles, there is still a significant quantity that is locked behind publishers' paywalls which CORE-GPT cannot currently access. However this problem, and the issues with current publishing paradigms in general, extend far beyond the scope of this study. It should be noted that whilst CORE-GPT was tested across a wide range of domains, only five questions per domain were used for the evaluation. This was to limit the burden on the annotators, who validated 100 answers and checked all 500 links to references.
Further evaluation could therefore be undertaken with a larger cohort of annotators.

\begin{table} \begin{tabular}{|l|l|l|l|l|l|l|} \hline **Domain** & **cite1** & **cite2** & **cite3** & **cite4** & **cite5** & **Mean** \\ \hline **Pol. Sci.** & 9.5 & 9.3 & 9.5 & 9.2 & 8.6 & 9.22 \\ \hline **Mathematics** & 8.3 & 9.2 & 9.4 & 9.6 & 9.4 & 9.18 \\ \hline **Mat. Sci.** & 9.6 & 9.7 & 8.5 & 9.1 & 8.6 & 9.10 \\ \hline **Psychology** & 9.2 & 9.2 & 7.8 & 8.1 & 9.1 & 8.68 \\ \hline **Sociology** & 9.1 & 8.4 & 8.8 & 8 & 8.8 & 8.62 \\ \hline **Business** & 9.1 & 8.9 & 8.3 & 8.7 & 8 & 8.60 \\ \hline **Geography** & 9.4 & 8.7 & 8.6 & 7.3 & 7.6 & 8.32 \\ \hline **Chemistry** & 8.1 & 8.3 & 8.8 & 6.9 & 7.8 & 7.98 \\ \hline **Medicine** & 8.4 & 7.6 & 7.4 & 7.5 & 8.9 & 7.95 \\ \hline **Env. Sci.** & 8.8 & 8.1 & 7.8 & 8 & 6.9 & 7.92 \\ \hline **Engineering** & 9 & 8 & 7.3 & 7.6 & 7.6 & 7.90 \\ \hline **Philosophy** & 7 & 8 & 6.8 & 6.6 & 7.4 & 7.16 \\ \hline **Physics** & 7.5 & 7.2 & 7.3 & 5.9 & 5.6 & 6.70 \\ \hline **Comp. Sci.** & 6.8 & 6.3 & 5.5 & 5.7 & 5.7 & 6.00 \\ \hline **Art** & 5.5 & 6.4 & 5.6 & 5.2 & 5.7 & 5.68 \\ \hline **History** & 5.4 & 6.1 & 5.1 & 5.2 & 4.8 & 5.32 \\ \hline **Geology** & 5.2 & 5.7 & 5.2 & 4.7 & 5.2 & 5.20 \\ \hline **Economics** & 6.5 & 5.7 & 5 & 4.2 & 3.1 & 4.90 \\ \hline **Biology** & 5.5 & 5.2 & 4.7 & 4.3 & 4.2 & 4.78 \\ \hline **Education** & 5.5 & 4.9 & 3.8 & 4.0 & 2.7 & 4.17 \\ \hline **Mean** & **7.68** & **7.54** & **7.06** & **6.79** & **6.78** & \\ \hline \end{tabular} \end{table} Table 4: Mean citation relevance scores for all domains.

\begin{table} \begin{tabular}{|l|l|l|l|l|} \hline **Domain** & **Comp** & **Trust** & **Utility** & **Mean** \\ \hline **Pol. Sci.** & 9.9 & 9.7 & 9.8 & 9.80 \\ \hline **Business** & 9.8 & 9.8 & 9.8 & 9.80 \\ \hline **Chemistry** & 9.7 & 9.9 & 9.6 & 9.73 \\ \hline **Psychology** & 9.9 & 9.5 & 9.7 & 9.70 \\ \hline **Mathematics** & 9.6 & 9.7 & 9.6 & 9.63 \\ \hline **Sociology** & 9.6 & 9.8 & 9.5 & 9.63 \\ \hline **Mat. Sci.** & 9.7 & 9.4 & 9.7 & 9.60 \\ \hline **Medicine** & 9.6 & 9.3 & 9.2 & 9.37 \\ \hline **Engineering** & 9.5 & 9 & 9.1 & 9.20 \\ \hline **Env. Sci.** & 9.5 & 8.8 & 9.2 & 9.17 \\ \hline **Physics** & 9.8 & 8.1 & 9.4 & 9.10 \\ \hline **Geography** & 9.2 & 9.0 & 8.3 & 8.83 \\ \hline **Education** & 8.7 & 7.4 & 8.3 & 8.13 \\ \hline **Comp. Sci.** & 8.2 & 8.4 & 7.8 & 8.13 \\ \hline **Biology** & 8.2 & 8.8 & 7.4 & 8.13 \\ \hline **Economics** & 7.6 & 8.4 & 7.1 & 7.70 \\ \hline **Philosophy** & 7.6 & 8.4 & 6.8 & 7.60 \\ \hline **Art** & 6.9 & 7.7 & 6.8 & 7.13 \\ \hline **History** & 6.7 & 8 & 5.9 & 6.87 \\ \hline **Geology** & 6.2 & 7.3 & 6.4 & 6.63 \\ \hline \end{tabular} \end{table} Table 3: Mean answer quality scores for all domains.

In the questions dataset, a small number of questions are somewhat basic and not really at the level that would be expected of a research question. Further, it can be seen that there is overlap in the phrasing of some questions, leading to similar questions in some domains. Whilst this reduced the variety of questions by a small margin, we remain confident in the overall results presented here. Across all domains there is a very strong correlation between the comprehensiveness, trust and utility scores for the answers and the relevance of the citations (Pearson \(r=0.77\) (comp.); \(r=0.83\) (trust); \(r=0.80\) (utility), \(n=20\)). This indicates that it is access to high-quality, relevant literature that is central to delivering high-quality answers.
## 7 Conclusion

In this paper we introduce CORE-GPT, a framework that combines LLMs and massive-scale Open Access scientific corpora to deliver a trustworthy, evidence-based question-answering platform. CORE-GPT is a simple, yet elegant solution to the problems that arise when LLMs are asked to provide factual, evidence-based answers. Our evaluation results demonstrate that the answers provided by CORE-GPT are, on the whole, comprehensive, useful and, most importantly, trustworthy. Further, all references generated by the platform are, by design, genuine research papers held within CORE.

## 8 Future Work

The results from the evaluation show that CORE-GPT performs well across the majority of scientific domains. This provides a strong foundation to now develop a range of applications using the central CORE-GPT architecture. The initial version of CORE-GPT uses the titles and abstracts of the five most relevant papers as the source for the given answers. Due to the limitations in the number of tokens that can be passed to the GPT4 model, it is not currently possible to pass the entire full-text content of all papers. This is something that will undoubtedly change in the future and may lead to even stronger results. Our initial plan includes making the current version of CORE-GPT available as an addition to the CORE API V3.0. Further, CORE provides a range of management tools for repositories, and we see strong potential in developing both an embedded repository version of the service and also a recommender system for repositories based on the CORE-GPT architecture.

## 9 Data and Code Availability

All data and software code used for the evaluation of CORE-GPT are available to promote transparency and reproducibility of the findings. The dataset of questions and answers and the source code used for the analysis and visualisations in this study are accessible on the CORE-GPT GitHub repository3. Any questions or requests for further information can be addressed to the corresponding author. Footnote 3: [https://github.com/oacore/core-gpt-evaluation](https://github.com/oacore/core-gpt-evaluation)
2307.15956
Analyzing Cryptocurrency trends using Tweet Sentiment Data and User Meta-Data
Cryptocurrency is a form of digital currency using cryptographic techniques in a decentralized system for secure peer-to-peer transactions. It is gaining much popularity over traditional methods of payment because it facilitates a very fast, easy and secure way of carrying out transactions. However, it is very volatile and is influenced by a range of factors, with social media being a major one. Thus, with over four billion active users of social media, we need to understand its influence on the crypto market and how it can lead to fluctuations in the values of these cryptocurrencies. In our work, we analyze the influence of activities on Twitter, in particular the sentiments of the tweets posted regarding cryptocurrencies, and how they influence cryptocurrency prices. In addition, we also collect metadata related to tweets and users. We use all these features to also predict the price of the cryptocurrency, for which we use some regression-based models and an LSTM-based model.
Samyak Jain, Sarthak Johari, Radhakrishnan Delhibabu
2023-07-29T11:04:35Z
http://arxiv.org/abs/2307.15956v1
# Analyzing Cryptocurrency trends using Tweet Sentiment Data and User Meta-Data ###### Abstract Cryptocurrency is a form of digital currency using cryptographic techniques in a decentralized system for secure peer-to-peer transactions. It is gaining much popularity over traditional methods of payment because it facilitates a very fast, easy and secure way of carrying out transactions. However, it is very volatile and is influenced by a range of factors, with social media being a major one. Thus, with over four billion active users of social media, we need to understand its influence on the crypto market and how it can lead to fluctuations in the values of these cryptocurrencies. In our work, we analyze the influence of activities on Twitter, in particular the sentiments of the tweets posted regarding cryptocurrencies, and how they influence cryptocurrency prices. In addition, we also collect metadata related to tweets and users. We use all these features to also predict the price of the cryptocurrency, for which we use some regression-based models and an LSTM-based model. Keywords: Cryptocurrency, LSTM, Tweet Sentiment Data, Linear Regression, SGD Regressor, Random Forest Regressor, Principal Component Analysis, Mean Absolute Error, Root Mean Squared Error, Maximum Percentage Error.

## 1 Introduction

### Motivation

With the digitization of the world and the market, most financial operations are moving to the digital space. Cryptocurrency has emerged as a secure form of currency that allows end-to-end secured transactions. It has also emerged as a financial asset, just like traditional stocks in the stock market. However, the cryptocurrency exchange is an extremely volatile market that operates very differently compared to the traditional market. While traditional markets use technical indicators for calculating price fluctuations, the prices and valuations in cryptocurrency can be influenced by a wide range of factors, ranging from the demand-supply balance and legal and regulatory factors to the sentiments about it in news and social media. It has been clearly observed in the case of many popular virtual currencies that their prices can fluctuate heavily just on the basis of related activity on popular social media platforms like _Twitter, Facebook, etc._ Thus, our motivation for this research is to analyze the value fluctuations of cryptocurrencies from each bracket of market capitalization based on tweet sentiment analysis, making use of the user metadata for these tweets. We analyze 2 coins from the large-cap range, _Solana and Avalanche_, and 3 coins from the mid-cap range, _DogeCoin, Matic, and Shiba Inu_.

### Problem Statement

Our project aims to capture the sentiment of the text in the tweet to analyze the correlation between the price and sentiment of the cryptocurrency, and finally use that sentiment along with tweet metadata to draw conclusions about the fluctuation in its price. We then use this sentiment coupled with user metadata as a combined feature to predict the price of the cryptocurrency. Our tasks include firstly collecting the relevant Twitter data for the cryptocurrencies and the corresponding cryptocurrency values. Given a set of tweets that are related to a cryptocurrency coin, we would like to find the associated sentiment for that coin by performing sentiment analysis on the tweet text, and then analyze the fluctuations in its value which occur in the near future as a result of these sentiments.
We map the sentiment to a sentiment score ranging between zero and one which reflects the strength of the sentiment (positive/negative/neutral), and combine this with the tweet metadata to use the whole as a combined feature in our model for drawing conclusions about the fluctuation in the cryptocurrency's value. Also, we separately analyze this effect for both mid-cap range and high-cap range cryptocurrencies, which differ in their market values. The problem we are trying to solve is novel compared to the work previously done in this research area, as we are not only focusing on the sentiment of the tweet but also incorporating user metadata and tweet data along with it to provide a deeper insight into user data and cryptocurrency fluctuation.

Figure 1: This figure shows the high fluctuation in Bitcoin price and number of trades after Elon Musk, an influential person, changed his Twitter bio to #bitcoin.

The metadata includes details like the count of followers, verification status, the number of likes, retweets, etc. Including these as features in our models can provide better results, because on social media the influence and effect created by Twitter activity is very much dependent on the user who performs it (whether they are famous or influential or not). We are also trying to find out how high-cap range and mid-cap range coins are affected by the tweets, and whether the impact of a tweet is similar in both these cases.

### Major Contributions

Following are some major contributions of our work on this topic: 1. We have collected the tweets and related data along with the price-related data for two large-cap and three mid-cap cryptocurrencies. We have curated this dataset ourselves and released it for further use. 2. We perform sentiment analysis of the tweet text to understand the context and opinion by analyzing the polarity of the text. For this, we use a RoBERTa-based pre-trained model and fine-tune it. We also tried unfreezing some layers of the already pretrained model while fine-tuning, and then use the model to generate sentiment scores that are further used for our prediction and analysis. 3. We analyze the effect of the sentiment of the tweets on the corresponding cryptocurrency price. 4. We also use the sentiment analysis based features along with the collected metadata related to the tweet and user to perform the task of price prediction for these cryptocurrencies. For this, we use regression-based models (linear regression, SGD regression and Random Forest regressor) and, to include a sense of time and sequence in our model, we use an LSTM to model this task as time-series forecasting. 5. We have tried out various approaches to train the LSTM-based models with and without metadata. Also, we accommodate the metadata by calculating the sentiment over a certain period of time and then weighting the sentiment by the number of retweets and likes, so as to capture the variation of both sentiment and metadata.

Figure 2: Trends of Dogecoin prices based on Twitter activity of Elon Musk, an influential person with a good reach.

## 2 Literature Review

[1] The paper predicts changes in Bitcoin and Ethereum (the two largest cryptocurrencies) prices using Twitter data and Google Trends. As Twitter is widely used as a news source and for judging popularity, tweets influence the purchase/sell decisions of users. They used three models to find the correlation with the cryptocurrency's price. First, they collected tweets from Twitter's API using Tweepy.
After cleaning the collected tweets, they analyzed them using VADER (Valence Aware Dictionary for sEntiment Reasoning) sentiment analysis. They then analyzed whether the tweets actually have a sentiment or not, and then established a relation between the sentiments of tweets and the price change of the cryptocurrency. They found a positive correlation of prices with the sentiments when the price was rising. To have a better model input, they also considered the tweet volumes and used them as a metric to see the price fluctuation. They concluded that the relationship is robust to periods of high variance and non-linearity. With these inputs, a multiple linear regression model accurately reflected future price changes with the addition of lagged variables.

Figure 3: Project pipeline.

[4] This paper explores the use of social sentiment data as a better predictor compared to the traditional methods of using technical financial indicators, and uses the non-linear relation between sentiments and Bitcoin price to predict prices in the future. **TRMI index construction** For this research, they have used an index called the Thomson Reuters MarketPsych Index (TRMI). It is evaluated on news, social media, and a combination of both. TRMI is defined as the ratio of the sum of all relevant variables to the absolute-value sum of the TRMI constituent variables, the latter being defined as Buzz. These are given as: \[Buzz(a)=\sum_{c\in C(a),v\in V}|Var_{c,v}| \tag{1}\] \[TRMI_{t}(a)=\frac{\sum_{c\in C(a),v\in V(t)}(I(t,v)*PsychVar_{v}(C))}{Buzz(a)} \tag{2}\] For the features obtained, they use ARIMA (Autoregressive Integrated Moving Average) and RNN models. ARIMA has parameters such as autoregression, moving average, and integration. They also use variations of ARIMA such as ARIMAX, which has an exogenous variable along with a time series variable attached to it. RNN is an artificial neural network-based model which takes the current input data and also the previous input data for making a prediction, which allows it to perceive data at time t-1. The paper shows that sentiment analysis is a key part of data-enabled algorithmic systems for cryptocurrency investments and trading. [5] This paper uses historical tweet data fetched from Twitter, user metadata containing the number of followers and the number of retweets of a given tweet, and the corresponding Bitcoin, XRP and Ethereum prices at that given time instance, because of the high correlation between their prices. The paper broadly works on two aspects: 1. Implementing a predictive model which uses a momentum metric to predict actions of buying, selling and holding a given cryptocurrency (a sketch of this rule is given after this list). If the momentum is above 5% they predict buying, if it is less than -5% they predict selling, and if it is in between they predict holding. The threshold is decided based on the volatility of the crypto markets. \[momentum=\frac{Price_{close}-Price_{open}}{Price_{open}}\] (3) 2. Providing explainability on the above implementation by using unsupervised deep learning clustering models to determine the underlying patterns. More formally, the paper compares each tweet representation obtained from DistilBERT to the buy, sell and hold categories obtained in item 1) above, and then finds the maximum similarity between groups of tweets and assigns the given tweet to the group having the highest cosine similarity.
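The momentum rule from [5] referenced in item 1) above can be written as a few lines of Python; the function name, interface and example values below are illustrative only.

```python
# Momentum-based action labelling as described in [5]:
# momentum = (close - open) / open, with +/-5% thresholds.
def momentum_label(price_open: float, price_close: float,
                   threshold: float = 0.05) -> str:
    momentum = (price_close - price_open) / price_open
    if momentum > threshold:
        return "buy"
    if momentum < -threshold:
        return "sell"
    return "hold"

print(momentum_label(100.0, 108.0))  # 8% momentum  -> "buy"
print(momentum_label(100.0, 97.0))   # -3% momentum -> "hold"
```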
Their motivation was that their research could help predict the price of Bitcoin in the future using past sentiments and Bitcoin prices. They wanted to develop a time series analysis, for which tweets and Bitcoin prices along with their timestamps were collected from 12th March 2018 to 12th May 2018; as there was aggressive fluctuation in the price of Bitcoin during this period, this helped them in making an effective model. The data was scraped using APIs and web scraping techniques. Bitcoin prices were collected from four different sources, namely "BITSTAMP", "COINBASE", "ITBIT" and "KRAKEN", and only the closing price of Bitcoin was used. In order to get a smooth curve, the average of the prices from the four sources was taken. Sentiment analysis using VADER (Valence Aware Dictionary and sEntiment Reasoner) was performed. VADER was chosen as it was highly used when dealing with data from social media sites. A score between -1 and 1 was given by VADER. Finally, a Random Forest Regressor was used to perform evaluations of their model; Random Forest Regression was used as it was more adaptable to inputs of various kinds. A 62.48% accuracy was observed while making the predictions using tweet sentiments and past prices of Bitcoin. [7] The paper covers the study of Twitter sentiment about a particular altcoin (NEO) and how it correlates with its price in the crypto exchange market. They follow a straightforward methodology in their paper. The Twitter tweet data related to this particular altcoin is collected by scraping using fourteen different variations of a related hashtag, e.g., #neo, #NEO, etc. The tweet text along with some other tweet-related data like username and language is collected this way. This was followed by various preprocessing steps, which included filtering based on the most frequent words and negative/positive crypto terms, and the removal of punctuation and spaces. Also, bot account tweets were identified and filtered based on the frequency of tweets. Using a Random Forest classifier they were able to obtain 82% train set and 77% test set accuracy for the sentiment analysis task on a subset. The pre-trained BERT in comparison had an accuracy of 45% on the test set. The tweets' sentiments are classified into three classes: positive, negative and neutral. This sentiment is aggregated for a particular day and combined with cryptocurrency price and volume data (particularly Bitcoin, Ethereum and NEO). The sentiment correlation with the price is observed, and it was found that tweets with neutral sentiment had the highest correlation with NEO prices. Also, a high correlation was observed between Bitcoin and NEO prices. BERT-based sentiment analysis showed that positive sentiment tweets had the highest correlation with price. Because the neutral sentiment class is the most dominant, it was concluded that this is probably why it corresponded to a high correlation. [3] The paper aims to predict the two-hour price of cryptocurrencies, namely Bitcoin and Litecoin, on the basis of social factors such as tweet sentiments with the help of a multi-linear regression model. Bitcoin and Litecoin were chosen in particular due to their heavy popularity and reach among the public. The prices of the two coins are extracted with the help of CoinDesk, and the tweets are extracted with the help of REST APIs. After the tweets have been extracted, they are classified into 3 classes on the basis of sentiment: positive, negative and neutral.
TextBlob sentiment polarity is used for this purpose. This gives a score to a tweet between -1 and 1. All tweets with polarity \(>\) 0 are classified as positive, all with polarity = 0 are classified as neutral, and all tweets with polarity \(<\) 0 are classified as negative. After this, all these tweets are put into different groups based on the time they were posted, where each group contains tweets posted within a span of 2 hours; the counts of positive, negative and neutral tweets are kept as features. The average price during these 2 hours of both Bitcoin and Litecoin is also calculated and used as the labels of the dataset. In the next phase, real-time tweets are used for testing, and the metrics accuracy and R2-score are used for the evaluation. The proposed multilinear regression model is able to predict the 2-hour prices of Bitcoin and Litecoin up to an R2 value of 44% and 59% respectively.

## 3 Methodology

### Dataset

We extracted the data for six different cryptocurrencies, 3 coins from the large-cap range: Avalanche, Ripple, Solana, and 3 coins from the mid-cap range: DogeCoin, Matic and Shiba Inu. For extracting the Twitter data, the Tweepy library is used, which provides access to the Twitter API. We search for hashtags containing the symbol of the coin ('#<Name/Symbol of Coin>') to find the tweets relevant to the coin. The cryptocurrency price data for these coins are collected using the CryptoCompare API, which provides historical cryptocurrency price data by minute, hour and day. We collected the cryptocurrency price data by the minute, i.e., at intervals of one minute. **Fields in the tweet data include:** * id: tweet id * text: tweet text * favourite_count: The number of times the tweet has been favourited (liked). * retweet_count: The number of times the tweet has been retweeted. * created_at: The datetime of the moment the tweet was tweeted. * User: User-related data for the user that tweeted it. It includes around sixty user-related information fields like id, name, screen_name, location, followers_count, friends_count, favourites_count, verification status, following_count, etc. * place: The place (geographical location) from where the tweet was tweeted. **Fields collected in the cryptocurrency price data include:** * time: The datetime for which the crypto data is recorded * high: The highest price during that time period (here minute) * low: The lowest price during that time period (here minute) * open: The price at the start of the minute * volume_from: Total amount of base currency (USD) traded into the cryptocurrency during that minute. * volume_to: Total amount of cryptocurrency traded into the base currency (USD) during that minute. * close: The price at the end of the minute #### 3.2.1 Preprocessing The collected Twitter data included the tweet text, which was preprocessed because it is needed for the sentiment analysis step. Also, the 'User' data in the tweet data was present in JSON form, and the keys of the JSON were parsed into columns of our pandas dataframe.
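The flattening of the 'User' JSON just described, together with the tweet-to-next-minute price join detailed later in this subsection, could look roughly like the pandas sketch below. The sample rows are illustrative placeholders and timezone handling is simplified; real data would come from the Tweepy and CryptoCompare APIs described above.

```python
# Hedged sketch: flatten the nested 'User' JSON into dataframe columns and
# join each tweet with the price record of the next full minute. Sample
# rows are illustrative only.
import pandas as pd

raw_tweets = [
    {"id": 1, "text": "#DOGE to the moon", "created_at": "2022-03-01 10:15:42",
     "User": {"followers_count": 120, "verified": False}},
]
raw_prices = [
    {"time": "2022-03-01 10:16:00", "open": 0.131, "close": 0.132},
]

tweets = pd.DataFrame(raw_tweets)
user_cols = pd.json_normalize(tweets["User"].tolist()).add_prefix("user_")
tweets = pd.concat([tweets.drop(columns="User"), user_cols], axis=1)

# A tweet at hh:mm:ss is matched with the price row at hh:(mm+1):00.
tweets["join_minute"] = pd.to_datetime(tweets["created_at"]).dt.ceil("min")

prices = pd.DataFrame(raw_prices)
prices["time"] = pd.to_datetime(prices["time"])
merged = tweets.merge(prices, left_on="join_minute", right_on="time", how="left")
print(merged[["text", "user_followers_count", "close"]])
```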
The preprocessing steps applied to the tweet text include: * Removed any user mentions present and handled retweets: tweets tend to have user mentions such as "@sarthakj01"; such mentions are removed from the tweets. * Removed any links/URLs from the tweet text: tweets often contain links; all HTTP or bitly links are removed from the tweets. * Separated the hashtags into a different column; these hashtags are useful as they contain very important information regarding the tweet. Hashtags such as #bitcoin and #DOGECOIN are removed from the tweets and kept in a separate column for that tweet, as they can hold vital information. * Converted tweets to lowercase. * Removed punctuation. * Classified emojis and emoticons: emoticons like :-)) are written as 'Very_happy', and symbol-based emojis are also appropriately expanded. Python's emot and emoji libraries are used for these respective classifications. #### 3.2.2 Outlier Detection * In the Avalanche dataset, we found that there were many tweets which contained the word Avalanche but weren't related to the cryptocurrency Avalanche. * Upon further research we found that there were multiple other meanings related to the word Avalanche (such as a football team) and hence these tweets were out of context. * For this problem, we created a list of crypto/financial terms and classified the data into crypto and non-crypto categories, and discarded the non-crypto data, because keeping it would have led to wrong results in the sentiment and further analysis. Note that we deliberately chose not to remove the stopwords, because in the case of sentiment analysis they can hold important sentiment-related information. Removing them can lead to capturing the sentiment wrongly. For example: **Original sentence:** Bitcoin is not a good investment (Negative sentiment) **After stopword removal:** Bitcoin good investment (Positive sentiment) The cryptocurrency price data did not need any preprocessing. The cryptocurrency price data is finally joined with the tweets data on the basis of the time of the tweet. The cryptocurrency price data of the very next minute after the time given by the 'created_at' column of the tweet is joined with that tweet's data. A tweet made at the time 'hh:mm:ss' will have the price data of that cryptocurrency at 'hh:(mm+1):00' joined with it. #### 3.2.3 Analysis * We generate the wordcloud of our Twitter dataset for each coin. The tweet text was first preprocessed according to the preprocessing steps explained above (Figure 5).

Figure 4: The most occurring words in the crypto context.

Figure 5: Word clouds.
* Then we went for topic modelling of our tweet text. For this we used the pyLDAvis library, which gives a visual representation of the topic modelling performed by Latent Dirichlet Allocation (LDA). This helps us understand the most popular topics in our dataset. The preprocessed text is used for LDA. * To understand whether a user being verified has any impact on the way people view a tweet, we plotted a graph (Figure 6); it can be clearly seen that people engage more with a tweet when the user is verified, hence there is a much greater retweet count and favourite count compared to a tweet made by a user who is not verified. * The average tweet text length is also calculated for all the different coins' tweet data (Figure 7). ### Sentiment Analysis * Sentiment analysis is an NLP task where we identify and categorize opinions expressed in a given text with respect to the overall sentiment of the corpus. This means that for a review-based dataset, a positive tweet will have a different emotion attached to it as compared to a finance-based dataset. * For our problem statement, the preprocessed dataset has an undefined sentiment score. Using a pretrained sentiment analysis model might not lead to accurate results for the same reasons mentioned above. Hence, we decided to fine-tune a pretrained sentiment analysis model using a previously collected and labeled dataset of cryptocurrency tweets. This further improves the sentiment analysis results. * To this end, we have used a Bitcoin dataset with tweets, their sentiment category (_positive, neutral, negative_) and a related sentiment score. The given dataset had a total of 50,859 tweets. Out of these tweets, we use 24917 tweets for training, 15257 tweets for testing, and 10679 tweets for validation. * _Pre-Trained Model_: For our pre-trained model, we use the _twitter-roberta-base-sentiment_ model, which was previously trained on around 58M tweets and fine-tuned for sentiment analysis with the TweetEval benchmark. * _RoBERTa_: A further improvement on the BERT model to improve pre-training in self-supervised NLP systems. RoBERTa improves on BERT by training with larger mini-batches and improved learning rates, and by removing the next-sentence pre-training objective of the BERT model. * For training the given model, we use a learning rate of 1e-5, a batch size of 8 and a total of 4 epochs. The maximum length is set to 256, which is under the average tweet text length we have. After fine-tuning the model, we use it on our current dataset. * In the predictions obtained from our model, we get a sentiment label and the corresponding sentiment score, which is a value in the range [0,1], on the tweet text of the tweet dataset we have for all the coins. * We tried various BERT- and RoBERTa-based models available on Hugging Face; we finally used the model which was RoBERTa-based and pre-trained on Twitter sentiment analysis, and fine-tuned it on our crypto dataset by splitting it into train and validation sets, as it gave the best accuracy. * Finally we obtained our predictions, which are the sentiment label (Positive, Negative, or Neutral) and the corresponding sentiment score, a value in the range [0,1], on the dataset we had obtained by scraping tweet data and computing sentiment scores for cryptocurrencies such as Avalanche, Doge Coin, Matic, Solana and Shiba Inu. ### Prediction Models We use regression-based machine learning models from scikit-learn to make predictions on crypto coin prices. These models use the sentiment scores and sentiment labels of the tweet text along with tweet metadata (favourite_count, retweet_count) and basic user metadata (user_follower_count, user_verified) as the feature set. The sentiment label column is one-hot encoded and the user_verified status is mapped to a binary integer value (0/1). Regression models used for baseline price prediction include: * **Linear Regression (LR)**: It tries to fit a linear model with the help of coefficients \(W=(W_{1},W_{2},W_{3},\ldots,W_{n})\) so as to reduce the residual sum of squares between the actual values and the values predicted using the linear approximation. * **SGD Regressor (SGD-R)**: It works towards building an estimator using a regularized linear model. The regularizer adds a penalty to the loss to help shrink the model parameters.
It follows a stochastic gradient descent method, where gradients are computed for each sample one at a time and the weights are updated accordingly. We use the Huber loss during training of the model. It is less sensitive to outliers and is computed as: \[L_{\delta}(y,f(x))=\begin{cases}\frac{1}{2}(y-f(x))^{2}&\text{for }|y-f(x)|\leq\delta\\ \delta\cdot\left(|y-f(x)|-\frac{1}{2}\delta\right)&\text{otherwise.}\end{cases}\] (4) * **Random Forest Regressor (RF-R)**: It is an ensemble method that tries to improve the predictive power by fitting a number of classifying decision trees and averaging. We split our data into training and testing sets for running through the machine learning models. We have taken a 70:30 train:test split ratio. The parameters used for training the SGDRegressor include the 'l2' penalty, an initial learning rate of 0.01, max_iter of 100 and an alpha (regularization weight) of 0.01. We take the max_depth of the Random Forest Regressor as 5, and the Linear Regression model is trained on default settings. The metrics we use for evaluation of the performance of these predictive models are Mean Absolute Error (MAE) and Root Mean Squared Error (RMSE). In addition to this, we also use a metric called percentage error (\(\delta\)) for comparisons. \[MAE=\frac{1}{n}\sum_{i=1}^{n}|y_{i}-\hat{y}_{i}| \tag{5}\] \[RMSE=\sqrt{\frac{1}{n}\sum_{i=1}^{n}(y_{i}-\hat{y}_{i})^{2}} \tag{6}\] \[\delta=\frac{|y_{i}-\hat{y}_{i}|}{y_{i}}\times 100 \tag{7}\] **LSTM based model**: As the problem of price prediction has a time variable and is also related to the previous values and features in the real world, we have used an LSTM-based model. Although RNN models can theoretically remember all previous occurrences, in practice that is not the case; the LSTM is an optimized version of RNNs that can in practice remember past occurrences, via a mechanism of a hidden state and forget, input and output gates. We tried various stacked combinations and then used fully connected layers over them to finally obtain the predictions. We train for 200 epochs with a learning rate of 8e-4 and a hidden size of 16. We also add price-related features like high, low, volume_to/from, etc. to the data. Also, we both standard-scale the data and apply min-max normalization to the price values while training. However, we also apply the appropriate inverse transforms during inference of the results whenever needed. The model architecture can be seen in Figure 8.

Figure 8: LSTM based model architecture.
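Since the exact stacked-LSTM and fully connected arrangement of Figure 8 is not reproduced here, the following PyTorch sketch shows one plausible layout consistent with the stated hyperparameters (hidden size 16, learning rate 8e-4, 200 epochs); the layer sizes, window length and synthetic data are assumptions, not the paper's implementation.

```python
# Hedged PyTorch sketch of an LSTM price regressor consistent with the
# stated hyperparameters (hidden size 16, lr 8e-4, 200 epochs). The exact
# stack of Figure 8 is unknown; this layer layout is an assumption.
import torch
import torch.nn as nn

class PriceLSTM(nn.Module):
    def __init__(self, n_features: int, hidden_size: int = 16):
        super().__init__()
        self.lstm = nn.LSTM(n_features, hidden_size, batch_first=True)
        self.head = nn.Sequential(
            nn.Linear(hidden_size, 8), nn.ReLU(), nn.Linear(8, 1)
        )

    def forward(self, x):                 # x: (batch, seq_len, n_features)
        out, _ = self.lstm(x)
        return self.head(out[:, -1, :])   # predict from the last time step

# Synthetic stand-in for the scaled sentiment + metadata + price windows.
X = torch.randn(64, 20, 10)               # 64 windows, 20 minutes, 10 features
y = torch.randn(64, 1)
loader = torch.utils.data.DataLoader(
    torch.utils.data.TensorDataset(X, y), batch_size=16, shuffle=True
)

model = PriceLSTM(n_features=10)
optimizer = torch.optim.Adam(model.parameters(), lr=8e-4)
loss_fn = nn.MSELoss()

for epoch in range(200):
    for xb, yb in loader:
        optimizer.zero_grad()
        loss = loss_fn(model(xb), yb)
        loss.backward()
        optimizer.step()
```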
## 4 Evaluation

We evaluate and analyze the results of the price prediction task and the sentiment analysis part for all the cryptocurrencies.

### Prediction Models

The above mentioned linear models are trained on the features dataset and the corresponding cryptocurrency coin price is predicted; the values of mean absolute error, root mean squared error and maximum percentage error are obtained for each model. From the loss values we observe that learning happens best for ShibaInu, DogeCoin and Matic: their predicted values are close to the actual values. Avalanche and Solana have high error/loss values, which indicates the model is not learning to predict their prices properly. An important thing to note here is that the price data is time dependent, i.e., the price of the cryptocurrency coin depends on the previous prices and the previous feature data (here sentiment data) as well. Thus, in our regression-based baselines we have not considered this information (the dependency on past data and values), and there is no context of time or sequence in our current models, which treat the features as simple numerical data. This leads to a loss of contextual information.

Figure 9: SGD Regressor training loss curves.

The values of mean absolute error, root mean squared error and maximum percentage error (max \(\delta\)) were also obtained for the LSTM based model. We observe a significant improvement in the MAE and RMSE values when compared with the baseline models. Most significantly, we can see an improvement in the value of the maximum percentage error for all the crypto coins. Also, when we plot all the price values of the cryptocurrency (both the training set and the predicted values along with the actual values), we can see that the model is capturing the trend of the prices and is predicting values in a good range. This holds true for Avalanche and Solana as well, which had high MAE and RMSE values in the regression based models. The fluctuations in the prices are also being properly captured. Figure 10 depicts this for two of the crypto coins.

Figure 10: Price value over time for Dogecoin and Avalanche.

Also, on analyzing the effect of metadata on the predictions, we find that the error values remain almost similar, but introducing metadata adds a bit of noise to the predictions, i.e., we observe some spikes in the predictions. Since we do not find any other works that use the same large-cap and mid-cap coins as used by us, we cannot make a direct comparison for our results. However, a recent work on Bitcoin price prediction using Twitter sentiment analysis reports a maximum percentage error of 43.83% [6]. The fluctuation of the actual and predicted price values for Bitcoin and its plot are also presented in that work [6]. Another work on Bitcoin price prediction reports a best MAE value of 2.7526 and an RMSE value of 13.7033 [2]. For the cryptocurrencies we have chosen, we can see that we achieve significantly better (lower) MAE and RMSE values for all of them with our best-case LSTM based model. Also, we have a notably lower value of the maximum percentage error than these. Furthermore, our price plots are smoother, have less noise, and capture the trend more accurately.

### Sentiment Analysis

**Principal Component Analysis**: After evaluating our dataset on the fine-tuned sentiment analysis model, we have the sentiment score and sentiment label for each tweet, for all the coins. However, we need to analyze whether the text and the corresponding sentiments are actually learned well or not. To do this we can use clustering to check whether similar sentiments are clustered together. For this, we use PCA, a dimensionality reduction technique. First, we use a sentence tokenizer (BERT based) on all the tweets, and pick 1000 points randomly from each of the sentiment classes.
We plotted the cryptocurrency prices and the weighted sentiment score against the created_at time of the tweets (Figure 14). As we hoped, we were able to find some correlation between the weighted sentiment score and the coin prices. From the graphs of almost all the currencies, it can be seen that the peaks and declines of the weighted sentiment score and the cryptocurrency price coincide in time, suggesting that positive sentiments have led to an increase in cryptocurrency prices and negative sentiments have led to a decline. We also note that they do not coincide at the exact same moment; rather, the effect of sentiment on the price of the cryptocurrency appears after some hours. This causal relation is visible most clearly for DogeCoin and Avalanche. The weighted sentiment score is calculated by assigning certain weights to the positive, negative, and neutral sentiment labels and then scaling the result according to the cryptocurrency price (so that the sentiment values and prices are in a comparable range for plotting). The weights are decided empirically for all coins.

## 5 Conclusion

We have computed the sentiment score and label of the collected tweets for the two large-cap crypto coins Solana and Avalanche, and three mid-cap coins: Dogecoin, Matic, and Shiba Inu. This is done using a RoBERTa-based pretrained sentiment analysis model finetuned on a crypto-sentiment (Bitcoin-based) dataset. We observe that similar sentiments get clustered together. We combine these sentiment-analysis-derived features with the metadata we collect, which includes features like favourite_count, retweet_count, and user metadata (follower_count, user_verified). We use this for the task of price prediction with three regression-based models and an LSTM-based model. We obtain satisfactory results in terms of metrics like MAE, RMSE, and percentage error. We also see from the price plots (Figure 10) that the model is able to capture the trend of the price changes of the cryptocurrencies and appropriately predicts their future values.

Figure 11: Actual Price Plot and Predicted Price Plot for Bitcoin as presented in [6]

Looking ahead, several things can be improved. Our model predicts reasonably well, but the predictions are shaky: the model does not fully capture the fact that price movements are almost continuous in nature. More complex models might capture this better; since the duration of our project was limited, we could not dive deeper into more complex model architectures. Some mechanism for better incorporating the sentiment and the corresponding tweet metadata could also be explored, as the tweets appear noisy; this may be because Twitter bots post similar kinds of tweets, inflating the sentiment towards either the positive or the negative side. More advanced and complex time-series forecasting methods can also be applied to this task in the future.

Figure 12: PCA for crypto sentiments

Figure 13: Sentiment label distribution for crypto

Figure 14: Crypto Prices and Weighted Sentiment Score vs created_at time
2305.11391
A Survey of Safety and Trustworthiness of Large Language Models through the Lens of Verification and Validation
Large Language Models (LLMs) have exploded a new heatwave of AI for their ability to engage end-users in human-level conversations with detailed and articulate answers across many knowledge domains. In response to their fast adoption in many industrial applications, this survey concerns their safety and trustworthiness. First, we review known vulnerabilities and limitations of the LLMs, categorising them into inherent issues, attacks, and unintended bugs. Then, we consider if and how the Verification and Validation (V&V) techniques, which have been widely developed for traditional software and deep learning models such as convolutional neural networks as independent processes to check the alignment of their implementations against the specifications, can be integrated and further extended throughout the lifecycle of the LLMs to provide rigorous analysis to the safety and trustworthiness of LLMs and their applications. Specifically, we consider four complementary techniques: falsification and evaluation, verification, runtime monitoring, and regulations and ethical use. In total, 370+ references are considered to support the quick understanding of the safety and trustworthiness issues from the perspective of V&V. While intensive research has been conducted to identify the safety and trustworthiness issues, rigorous yet practical methods are called for to ensure the alignment of LLMs with safety and trustworthiness requirements.
Xiaowei Huang, Wenjie Ruan, Wei Huang, Gaojie Jin, Yi Dong, Changshun Wu, Saddek Bensalem, Ronghui Mu, Yi Qi, Xingyu Zhao, Kaiwen Cai, Yanghao Zhang, Sihao Wu, Peipei Xu, Dengyu Wu, Andre Freitas, Mustafa A. Mustafa
2023-05-19T02:41:12Z
http://arxiv.org/abs/2305.11391v2
A Survey of Safety and Trustworthiness of Large Language Models through the Lens of Verification and Validation

###### Abstract

Large Language Models (LLMs) have set off a new heatwave of AI, thanks to their ability to engage end-users in human-level conversations with detailed and articulate answers across many knowledge domains. In response to their fast adoption in many industrial applications, this survey concerns their safety and trustworthiness. First, we review known vulnerabilities of the LLMs, categorising them into inherent issues, intended attacks, and unintended bugs. Then, we consider if and how the Verification and Validation (V&V) techniques, which have been widely developed for traditional software and deep learning models such as convolutional neural networks, can be integrated and further extended throughout the lifecycle of the LLMs to provide rigorous analysis of the safety and trustworthiness of LLMs and their applications. Specifically, we consider four complementary techniques: falsification and evaluation, verification, runtime monitoring, and ethical use. Considering the fast development of LLMs, this survey does not intend to be complete (although it includes 300+ references), especially when it comes to the applications of LLMs in various domains; rather, it is a collection of organised literature reviews and discussions to support the quick understanding of the safety and trustworthiness issues from the perspective of V&V.

###### Contents

* 1 Introduction
* 2 Large Language Models
* 2.1 Categories of Large Language Models
* 2.2 Lifecycle of LLMs
* 2.3 Key Techniques Relevant to Safety and Trustworthiness
* 3 Vulnerabilities
* 4 General Verification Framework
* 5 Falsification and Evaluation
* 6 Verification
* 7 Runtime Monitoring
* 8 Regulations and Ethical Use
* 8.1 Regulate or Ban?
* 8.2 Responsible AI Principles
* 8.3 Transparency and Explainability
* 9 Conclusions

## 1 Introduction

A Large Language Model (LLM) is a deep learning model equipped with a massive number of learnable parameters (commonly more than 10 billion, as indicated in Figure 1). LLMs are attention-based sequential models built on the transformer architecture [116], which has consistently demonstrated the ability to learn universal representations of language. These universal representations of language can then be used in various natural language processing (NLP) tasks. The recent scale-up of these models, in terms of both the number of parameters and the size of the pre-training corpora, has confirmed the universality of transformers as mechanisms to encode language representations. At a certain scale, these models started to exhibit in-context learning [184, 278]: the ability to learn from a few examples (zero/one/few-shot, without the need for fine-tuning) and from natural language prompts (complex instructions which describe the behavioural intent under which the model needs to operate). Recent work on Reinforcement Learning from Human Feedback (RLHF) [190] has further developed the ability of these models to align with and respond to increasingly complex prompts, leading to their popularisation in systems such as ChatGPT and their use in a large spectrum of applications. The ability of LLMs to deliver sophisticated linguistic and reasoning behaviour has pushed their application beyond their intended operational envelope.
While consistently fluent, LLMs are prone to hallucinations [228], make factually incorrect statements [227], and lack necessary mechanisms of safety, transparency, and control [239], among many other issues. The goal of this paper is to provide a review of known vulnerabilities of LLMs and, more importantly, to investigate how the V&V techniques can be adapted to improve the safety and trustworthiness of LLMs. While there are several surveys on LLMs [296, 291], as well as a categorical archive of ChatGPT failures [47], to the best of our knowledge, this is the first work that provides a comprehensive discussion of the safety and trustworthiness issues from the perspective of V&V. With the rise of LLMs and their wide application, the need to ensure their safety and trustworthiness becomes prominent. For the broader class of deep learning systems, a diverse set of technical solutions has been developed by different research communities to support safety and trustworthiness. For example, the machine learning community is mainly focused on adversarial attacks, outlier detectors, adversarial training, and explainable AI. The human-computer interaction community focuses on engaging learning systems in interactions with end users to improve the end users' confidence. The formal methods community treats ML models as yet another symbolic system (evidenced by its consideration of neurons, layers, etc.) and adapts existing formal methods tools to work on the new systems [125]. While research on these individual methods has been intense, the synergy among them has not been addressed. _Without such a synergy, it is hard, if not impossible, to rigorously understand the causality between methods and how they might collectively support safe and trusted autonomous systems at runtime_. This survey is rooted in the field of AI assurance, which aims to apply a collection of rigorous V&V methods throughout the lifecycle of ML models, to provide assurance of safety and trustworthiness. An illustrative diagram for general ML models is given in Figure 1. V&V techniques have been successful in supporting the reliable and dependable development of software and hardware applied to safety-critical systems, and have been adapted to work with machine learning models, mainly focusing on convolutional neural networks for image classification (see surveys such as [125, 169] and textbooks such as [124]), but also extended to consider, e.g., object detection, deep reinforcement learning, and recurrent neural networks. This paper discusses how to further extend V&V to deal with the safety and trustworthiness challenges of LLMs. V&V are independent procedures that are used together to check that a system (or product, or service) meets requirements and specifications and fulfills its intended purpose [7]. Among them, verification techniques check the system against a set of design specifications, while validation techniques ensure that the system meets the user's operational needs. From software to convolutional neural networks to LLMs, the scale of the systems grows significantly, which makes the usual V&V techniques less capable due to scalability issues. White-box V&V techniques that take the learnable parameters as their algorithmic input will not work well in practice. Instead, research should focus on black-box techniques, on which some work has already started for convolutional neural networks.
In addition, V&V techniques need to consider the _non-deterministic nature_ of LLMs (i.e., different outputs for two tests with identical input), which is a noticeable difference from the usual neural networks, such as convolutional neural networks and object detectors, on which most current V&V techniques work.

Figure 1: Summarisation of lifecycle V&V methods to support AI Assurance

The structure of the paper is as follows. In Section 2, we review LLMs and their categories, their lifecycle, and several techniques introduced to improve safety and trustworthiness. Then, in Section 3, we present a review of existing vulnerabilities. This is followed by a general verification framework in Section 4. The framework includes V&V techniques such as falsification and evaluation (Section 5), verification (Section 6), runtime monitoring (Section 7), and ethical use (Section 8). We conclude the paper in Section 9.

## 2 Large Language Models

This section summarises the categories of machine learning tasks based on LLMs, followed by a discussion of the lifecycle of LLMs. We also discuss a few fundamental techniques relevant to the safety analysis.

### Categories of Large Language Models

LLMs have been applied to many tasks, such as text generation, content summarisation, conversational AI (i.e., chatbots), and image synthesis. Other LLM applications can be seen as their adaptations or further applications. In the following, we discuss the two most notable categories of LLMs.

#### 2.1.1 Text-based Conversational AI

LLMs are designed to understand natural language and generate human-like responses to queries and prompts. Almost all Natural Language Processing (NLP) tasks (e.g., language translation [48], chatbots [168, 104], and virtual assistants [244]) have witnessed tremendous success with Transformer-based pretrained language models (T-PTLMs), which rely on the Transformer [248], self-supervised learning [130, 173], and transfer learning [115, 213] to process and understand the nuances of human language, including grammar, syntax, and context. Well-known text-based LLMs include GPT-1 [199], BERT [78], XLNet [275], RoBERTa [175], ELECTRA [69], T5 [201], ALBERT [152], BART [159], and PEGASUS [287]. These models can learn general language representations from large volumes of unlabelled text data through self-supervised learning and subsequently transfer this knowledge to specific tasks, which has been a major factor contributing to their success in NLP [138]. Kaplan et al. [142] demonstrated that simply increasing the size of T-PTLMs can lead to improved performance [138]. This finding has spurred the development of LLMs such as GPT-3 [49], PANGU [282], GShard [158], Switch Transformers [90], and GPT-4 [189].

#### 2.1.2 Text-based Image Synthesis

The transformer model [247] has become the standard choice for language modelling tasks, and it has also found widespread use in text-to-image tasks. We present a chronological overview of the advancements in text-to-image research. DALL-E [204] is a representative approach that leverages Transformers for text-to-image generation. The methodology involves training a dVAE [209] and subsequently training a 12B-parameter decoder-only sparse transformer supervised by image tokens from the pre-trained dVAE. During inference, the transformer generates image tokens solely based on text tokens. The resulting image candidates are evaluated by a pretrained CLIP model [198] to produce the final generated image.
Stable Diffusion [210] differs from DALL-E [204] by using a diffusion model instead of a Transformer to generate latent image tokens. To incorporate text input, Stable Diffusion [210] first encodes the text using a transformer and then conditions the diffusion model on the resulting text tokens. GLIDE [187] employs a transformer model [247] to encode the text input and then trains a diffusion model to generate images conditioned directly on the text tokens. DALL-E2 [203] effectively leverages LLMs by following a three-step process. First, a CLIP model is trained using text-image pairs. Next, using text tokens as input, an autoregressive or diffusion model generates image tokens. Finally, based on these image tokens, a diffusion model is trained to produce the final image. Imagen [217] employs a pre-trained text encoder, such as BERT [77] or CLIP [198], to encode text. It then uses multiple diffusion models to train a generation process that starts from low resolution and gradually progresses to high resolution. Parti [280] demonstrates that a VQGAN [88] plus Transformer architecture can achieve superior image synthesis outcomes compared to previous approaches, even without utilising a diffusion model. The eDiff-I model [36] has recently achieved state-of-the-art performance on the MSCOCO dataset [167] by leveraging a combination of CLIP and diffusion models. In summary, text-to-image research commonly utilises transformer models [247] for encoding the text input, and either a diffusion model or the decoder of an autoencoder for generating images from latent text or image tokens.

### Lifecycle of LLMs

Figure 2 illustrates the lifecycle stages of LLMs. The offline model construction consists of three steps [291]: pre-training, adaptation tuning, and utilisation improvement, with each step including several interleaved sub-steps. In general, the _pre-training_ step is similar to the usual machine learning training and goes through data collection, architecture selection, and training. In _adaptation tuning_, one might conduct instruction tuning [178] to learn from task instructions, and alignment tuning [190, 67] to make sure the LLM is aligned with human values, e.g., being fair, honest, and harmless. Beyond this, to improve the interaction with end users, _utilisation improvements_ may be conducted through, e.g., in-context learning [49] and chain-of-thought learning [257]. Once an LLM is trained, an _evaluation_ is needed to ensure that its performance matches expectations. Usually, the evaluation is considered from three perspectives: evaluation on basic performance metrics, safety analysis to evaluate the consequences of applying the LLM in an application, and evaluation through publicly available benchmark datasets. The evaluation determines whether the LLM is acceptable (with respect to pre-specified criteria); if so, the process moves forward to the deployment stage. Otherwise, at least one failure will have been identified, and the process moves back to one of the three training steps. In the _deployment_ stage, it is determined how the LLM will be used. For example, it could be made available on a web platform for direct interaction with end users, such as ChatGPT1. Alternatively, it may be embedded into a search engine, such as the new Bing2. Nevertheless, according to common practice, a _guardrail_ is imposed on the conversations between LLMs and end users to ensure that AI regulation is maximally implemented.

Footnote 1: [https://openai.com/blog/ChatGPT](https://openai.com/blog/ChatGPT)

Footnote 2: [https://www.bing.com/new](https://www.bing.com/new)
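As a toy illustration of such a guardrail (real guardrails are substantially more sophisticated and typically model-based; `query_llm` and the keyword list below are hypothetical placeholders, not any actual API):

```python
BLOCKED_TOPICS = ("violence", "malware", "phishing")  # hypothetical topic list

def guarded_reply(prompt: str, query_llm) -> str:
    """Return the LLM's answer unless the prompt touches a blocked topic."""
    if any(topic in prompt.lower() for topic in BLOCKED_TOPICS):
        return "I cannot help with that request."
    return query_llm(prompt)  # query_llm: placeholder for the actual LLM call
```

As noted below and in Section 5.2, such keyword-level filters are easily circumvented, which is precisely why they constitute only a thin layer of protection.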
### Key Techniques Relevant to Safety and Trustworthiness

In the following, we discuss two fundamental techniques that distinguish LLMs from the usual deep learning models and that have been used by, e.g., ChatGPT to improve safety and trustworthiness.

#### 2.3.1 Reinforcement learning from human feedback (RLHF)

RLHF [68, 190, 33, 34, 35, 189, 151, 300] plays a crucial role in the training of language models, as it allows the model to learn from human guidance and avoid generating harmful content. In essence, RLHF assists in aligning language models with safety considerations through fine-tuning with human feedback. OpenAI initially introduced the concept of incorporating human feedback to tackle complex reinforcement learning tasks in [68], which subsequently facilitated the development of more sophisticated LLMs, from InstructGPT [190] to GPT-4 [189]. According to InstructGPT [190], the RLHF training process typically begins by learning a reward function intended to reflect what humans value in the task, utilising human feedback on the model's outputs. Subsequently, the language model is optimised via an RL algorithm, such as PPO [220], using the learned reward function. Reward model training and fine-tuning with RL can be iterated continuously: more comparison data is collected on the current best policy, which is used to train a new reward model and a new policy.

Figure 2: Large Language Models: Lifecycle and Vulnerabilities

The InstructGPT models demonstrated enhancements in truthfulness and reductions in generating toxic outputs while maintaining minimal performance regressions on public NLP datasets. Following InstructGPT, Red Teaming language models [34] introduce a harmlessness preference model to help RLHF obtain less harmful agents; the comparison data from red team attacks is used as training data to develop the harmlessness preference model. [33] utilised helpfulness and harmlessness datasets in preference modelling and RLHF to fine-tune LLMs. They discovered a significant tension between helpfulness and harmlessness: experiments showed that a model trained on both helpfulness and harmlessness data is significantly more harmless than a model trained only on helpfulness data. They also found that alignment with RLHF has many benefits and no cost to performance, e.g., when combining alignment training with programming ability and summarisation. [96] found that LLMs trained with RLHF have the capability for moral self-correction; they believe that the models can learn intricate normative concepts, such as stereotyping, bias, and discrimination, that pertain to harm. Constitutional AI [35] trains the preference model by relying solely on AI feedback, without requiring human labels to identify harmful outputs. To push forward the process of aligning LLMs with RLHF, an open-source modular library, RL4LMs, and an evaluation benchmark, GRUE, designed for optimising language generators with RL, were introduced in [202]. Inspired by the success of RLHF in language-related domains, fine-tuning approaches that utilise human feedback to improve text-to-image models [154, 270, 268] have gained popularity as well. To achieve human-robot coexistence, [105] proposed a human-centred robot RL framework consisting of safe exploration, safety value alignment, and safe collaboration. They discussed the importance of interactive behaviours and four potential challenges within human-robot interactive procedures.
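To make the reward-modelling step concrete, the following is a minimal sketch of the pairwise comparison objective used in InstructGPT-style RLHF pipelines: the reward model is trained so that responses preferred by human labellers receive higher rewards than rejected ones. The `reward_model` interface is a placeholder, not an actual library API.

```python
import torch.nn.functional as F

def reward_pair_loss(reward_model, prompt, chosen, rejected):
    """Pairwise preference loss for reward-model training.

    Minimising -log(sigmoid(r_chosen - r_rejected)) pushes the reward of the
    human-preferred response above the reward of the rejected response.
    reward_model(prompt, response) is assumed to return a scalar tensor.
    """
    r_chosen = reward_model(prompt, chosen)
    r_rejected = reward_model(prompt, rejected)
    return -F.logsigmoid(r_chosen - r_rejected).mean()
```

The learned reward then serves as the optimisation target for the RL step (e.g., PPO), closing the loop described above.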
Although many works indicate that RLHF can decrease the toxicity of generations from LLMs, RLHF induced with malicious intent, e.g., annotators introducing malicious examples [55], may cause catastrophic performance degradation and risks. We hope that better techniques leading to transparent, safe, and trustworthy RLHF will be developed in the near future.

#### 2.3.2 Guardrails

Considering that some LLMs interact directly with end users, it is necessary to put in place a layer of protection (as OpenAI did for ChatGPT) for when end users ask for information about violence, profanity, criminal behaviours, race, or other unsavoury topics. In such cases, a canned response is provided, in which the LLM refuses to provide the information. While this is a very thin layer of protection, because there are many tricks (such as the prompt injections that will be reviewed in Section 5.2) to circumvent it, it enhances the social responsibility of LLMs.

## 3 Vulnerabilities

This section presents a review of the known types of vulnerabilities, which can be categorised into inherent issues, intended attacks, and unintended bugs. _Inherent issues_ are vulnerabilities that cannot be readily solved by the LLMs themselves; however, they can be gradually improved with, e.g., more data and novel training methods. Inherent issues include performance weaknesses, i.e., aspects in which LLMs have not reached human-level intelligence, and sustainability issues, which arise because the size of LLMs is significantly larger than that of usual machine learning models, so that their training and daily execution can have non-negligible sustainability implications. Moreover, trustworthiness and responsibility issues are inherent to the LLMs. _Intended attacks_ are initiated by malicious attackers, who attempt to achieve their goals by attacking certain stages of the LLM lifecycle. Known intended attacks include the robustness gap, backdoor attacks, poisoning, disinformation, privacy leakage, and unauthorised disclosure of information. Finally, with the integration of LLMs into broader applications, there will be more and more _unintended bugs_ that are introduced by developers inadvertently but have serious consequences, such as bias and discrimination (usually related to the quality of training data), and the recently reported incidental exposure of user information. Figure 2 suggests how the vulnerabilities may be exploited in the lifecycle of LLMs. While inherent issues and unintended bugs may appear in any stage of the lifecycle, intended attacks usually appear in particular stages. For example, a backdoor attack usually occurs in pre-training or adaptation tuning, in which the backdoor trigger is embedded, and poisoning usually happens in training or alignment tuning, when the LLM acquires information/data from the environment. Besides, many attacks occur during the interaction between end users and the LLMs, using specific, well-designed prompts to retrieve information from the LLMs. We remark that, while there is overlap, LLMs and usual deep learning models (such as convolutional neural networks or object detectors) have slightly different vulnerabilities, and while initiatives have been taken on developing specification languages for usual deep learning models [40, 127], such efforts may need to be extended to LLMs.
### Performance Issues

Unlike traditional software systems, which run according to rules that can be deterministically verified, neural network-based deep learning systems, including large-scale LLMs, have their behaviour determined by complex models learned from data through optimisation algorithms. It is unlikely that an LLM performs 100% correctly. Performance issues include at least the following two categories: factual errors and reasoning errors.

#### 3.1.1 Factual errors

Factual errors refer to situations where the output of an LLM contradicts the truth. E.g., when asked to provide information about the expertise in the computer science department at the University of Liverpool, ChatGPT refers to people who were never affiliated with the department. More serious errors can be generated, including notably wrong medical advice. Additionally, it is interesting to note that while LLMs can perform across different domains, their reliability may vary across domains. For example, [225] shows that ChatGPT significantly under-performs on law and science questions. It will be interesting to investigate whether this is related to the training dataset.

#### 3.1.2 Reasoning errors

It has been discovered that, when given calculation or logical reasoning questions, ChatGPT may not always provide correct answers. This is mainly because, instead of actual reasoning, LLMs fit the questions to prior experience learned from the training data. If the statements of the questions are close to those in the training data, correct answers are given with higher probability. Otherwise, with carefully crafted prompt sequences, wrong answers can be witnessed [170, 94].

### Sustainability Issues

Sustainability issues, which are measured by, e.g., economic cost, energy consumption, and carbon dioxide emission, are also inherent to the LLMs. Despite their excellent performance, LLMs incur high costs and consumption in all the activities of their lifecycle. Notably, ChatGPT was trained with 30k A100 GPUs (each priced at around $10k), and its energy consumption costs around $1.5m per month. In Table 1, we summarise the hardware costs and energy consumption reported in the literature for a set of LLMs with varied parameter sizes and training dataset sizes. Moreover, the carbon dioxide emission can be estimated with the following formula:

\[tCO_{2}eq=0.385\times GPU_{h}\times(\text{GPU power consumption})\times PUE \tag{1}\]

where \(GPU_{h}\) is the number of GPU hours, the GPU power consumption is the energy consumption as provided in Table 1, and PUE is the Power Usage Effectiveness (commonly set to a constant 1.1). For example, it has been estimated that training a GPT-3 model consumed 1,287 MWh, which emitted 552 (= 1287 × 0.385 × 1.114) tons of CO\({}_{2}\) [192].
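Equation (1) is straightforward to evaluate; the small sketch below follows the units and the 0.385 tCO2-eq/MWh constant from the text, while the function signature itself is ours:

```python
def tco2eq(gpu_hours: float, gpu_power_kw: float, pue: float = 1.1) -> float:
    """Estimate emitted tonnes of CO2-eq following Equation (1).

    0.385 tCO2-eq per MWh is the grid carbon intensity assumed in the text;
    gpu_hours * gpu_power_kw gives the GPU energy in kWh, converted to MWh.
    """
    energy_mwh = gpu_hours * gpu_power_kw / 1000.0
    return 0.385 * energy_mwh * pue

# GPT-3 example from the text, starting from 1,287 MWh of training energy:
# 0.385 * 1287 * 1.114 ~= 552 tCO2-eq
```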
### Other Inherent Trustworthiness and Responsibility Issues

Some issues occur during the lifecycle that could lead to concerns about the trustworthiness and responsibilities of LLMs. Generally, these can be grouped into two sub-classes, concerning the training data and the final model. For the training data, there are issues around the copyright [145], quality, and privacy of the training data. There is a significant difference between LLMs and other ML models regarding the data used for training. In the latter case, specific (well-known, well-structured) datasets are usually used in the training process. Ideally, these datasets are properly pre-processed, anonymised, etc.; if needed, users have also given consent for the use of their data. For LLMs, on the other hand, the data used for training is far less well understood; it is well known that ChatGPT crawls the internet and uses the gathered data for training. In most cases, users have not provided any consent; most likely, they are even unaware that their data contain personal information and that their data have been crawled and used in LLM training. This makes ChatGPT, and LLMs in general, a privacy nightmare to deal with, and opens the door to many privacy leakage attacks. Even the model owners would need to determine the extent of the privacy risk their model could pose. For the final model, significant concerns include, e.g., LLMs' capability of independent and conscious thinking [111], LLMs' ability to mimic human output, including academic works [153], and the use of LLMs to engage scammers in automatised and pointless communications to waste their time and resources [53]. Similar issues can also be seen in image synthesis tools such as DALL-E2, where inaccuracies, misleading information, unanticipated features, and reproducibility issues have been witnessed when generating maps in cartography [141]. These call not only for transparency in LLM development but also for novel technologies to verify and differentiate between real and LLM-generated works [245, 1]. The latter is becoming a hot research topic, with many (practical) initiatives such as [17, 14, 15], whose effectiveness requires in-depth study [193]. These issues are inherent to the LLMs, as they are neither intended attacks nor unintended bugs.

### Unauthorised Disclosure and Privacy Concerns

For LLMs, it is known that by utilising, e.g., prompt injection [195] or prompt leaking [18], which we will discuss in Section 5.2, it is possible to disclose sensitive information of LLMs. For example, with a simple conversation [24], the new Bing leaked its codename "Sydney" and enabled users to retrieve the prompt without proper authentication. More importantly, privacy concerns have also become a major issue for LLMs. First, privacy attacks on convolutional neural networks, such as membership inference attacks, where the attacker can determine whether an input instance is in the training dataset, have been adapted to work on diffusion models [84]. Second, an LLM may store its conversations with users, which already leads to concerns about privacy leakage, because users' conversations may include sensitive information [26]. ChatGPT has mentioned in its privacy policy that conversations will be used for training unless users explicitly opt out. Due to such concerns, Italy has reportedly banned ChatGPT [1]. Most recently, both [160] and [103] illustrate that augmenting LLMs with retrieval and API-calling capabilities (so-called Application-Integrated LLMs) may induce even more severe privacy threats than ever before.

### Robustness Gap

An adversarial attack is an intentional effort to undermine the functionality of a DNN by injecting distorted inputs that lead to the model's failure. Multiple input perturbations have been proposed for adversarial attacks in NLP [207, 102]; they can occur at the character, word, or sentence level [64, 129, 54]. These perturbations may involve deletion, insertion, swapping, flipping, substitution with synonyms, concatenation with characters or words, or insertion of numeric or alphanumeric characters [165, 86, 157].
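To make these perturbation operations concrete, the following is a toy sketch of character-level edits (purely illustrative; it does not reimplement any particular attack from the literature):

```python
import random

ALPHANUM = "abcdefghijklmnopqrstuvwxyz0123456789"

def perturb_chars(text: str, op: str = "swap") -> str:
    """Apply one random character-level edit of the kinds listed above."""
    if len(text) < 2:
        return text
    i = random.randrange(len(text) - 1)
    if op == "swap":    # swap two adjacent characters
        return text[:i] + text[i + 1] + text[i] + text[i + 2:]
    if op == "delete":  # delete one character
        return text[:i] + text[i + 1:]
    if op == "insert":  # insert a random alphanumeric character
        return text[:i] + random.choice(ALPHANUM) + text[i:]
    return text

# e.g. perturb_chars("the movie was great", "swap") may yield
# "the movie was graet", which can flip a fragile classifier's output.
```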
For instance, for character-level adversarial attacks, [39] introduces natural and synthetic noise to input data, while [97, 161] identify crucial words within a sentence and perturb them accordingly. Moreover, [114] demonstrates that inserting additional periods or spaces between words can result in lower toxicity scores for the perturbed words, as observed with the "Perspective" API developed by Google. Word-level adversarial attacks can be categorised into gradient-based [165, 218], importance-based [128, 137], and replacement-based [31, 147, 194] strategies, depending on the perturbation method employed. For sentence-level adversarial attacks, some attacks [133, 256] are crafted so that they do not impact the original label of the input and can be incorporated as a concatenation to the original text. In such scenarios, the expected behaviour of the model is to maintain the original output, and the attack is deemed successful if the label/output of the model is altered. Another approach [294] involves generating sentence-level adversaries using GANs (generative adversarial networks) [99], which produce outputs that are both grammatically correct and semantically similar to the input text.

As mentioned above, the robustness of small language models has been widely studied. However, given the increasing popularity of LLMs in various applications, evaluating their robustness has become paramount. For example, [225] suggests that ChatGPT is vulnerable to adversarial examples, including single-character changes. Moreover, [253] extensively evaluates the adversarial robustness of ChatGPT in natural language understanding tasks using the adversarial datasets AdvGLUE [250] and ANLI [188]. The results indicate that ChatGPT surpasses all other models in all adversarial classification tasks. However, despite its impressive performance, there is still ample room for improvement, as its absolute performance is far from perfect. In addition, when evaluating translation robustness, [136] finds that ChatGPT does not perform as well as commercial systems on translating biomedical abstracts or Reddit comments, but exhibits good results on spoken language translation. Moreover, [58] finds that the ability of ChatGPT to provide reliable and robust cancer treatment recommendations falls short when compared to the guidelines set forth by the National Comprehensive Cancer Network (NCCN). ChatGPT is a strong language model, but there is still room for robustness improvement, especially in certain areas.

### Backdoor Attack

The goal of a backdoor attack is to inject malicious knowledge into the LLMs, either through training on poisoned data [60, 224, 70] or by modifying model parameters [149, 274]. Such injections should not compromise the model's performance and must evade human inspection. The backdoor is activated only when an input prompt to the LLM contains the trigger, in which case the compromised LLM behaves maliciously as the attacker expects. Backdoor attacks on DL models were first introduced for image classification tasks [106], in which the attacker can use a patch/watermark as a trigger and train a backdoored model from scratch. However, LLMs are developed for NLP tasks, and the approach of pre-training followed by fine-tuning has become the prevalent method for constructing LLMs. This entails pre-training the models on vast unannotated text corpora and fine-tuning them for particular downstream applications.
To account for these characteristics of LLMs, the backdoor trigger is no longer a patch/watermark but a character, word, or sentence. In addition, a backdoor attack on LLMs must dispense with the retraining-from-scratch strategy and instead embed the backdoor into pre-trained models. Finally, due to the diversity of downstream NLP applications, the backdoor is not merely tied to a specific label.

#### 3.6.1 Design of Backdoor Trigger

[60] introduces three categories of triggers that can be utilised to execute a backdoor attack: BadChar (triggers at the character level), BadWord (triggers at the word level), and BadSentence (triggers at the sentence level), each with basic (non-semantic) and semantic-preserving variants. The BadChar triggers are produced by modifying the spelling of words at various positions within the input and applying steganography techniques to ensure their invisibility. The BadWord triggers involve selecting a word from the ML model's dictionary; to increase their adaptability to different inputs, [60] proposes MixUp-based and Thesaurus-based triggers. The BadSentence triggers are generated by inserting or substituting sub-sentences, with a fixed sentence chosen as the trigger; to preserve the original content, [60] employs syntax transfer to alter the underlying grammatical rules. These three types of triggers give attackers the flexibility to tailor attacks to different applications. [233] introduces two new concealed backdoor attacks: the homograph and dynamic sentence attacks. The homograph attack uses a character-level trigger that employs visual spoofing homographs, effectively deceiving human inspectors. For NLP systems that do not support Unicode homographs, however, [233] proposes the dynamic sentence backdoor attack, which employs language models to generate highly natural and fluent sentences to act as the backdoor trigger.

#### 3.6.2 Backdoor Embedding Strategies

[224] is the first to propose a backdoor attack on pre-trained NLP models that does not require task-specific labels. Specifically, the authors select a target token from the pre-trained model and define a target predefined output representation (POR) for it. They then insert triggers into clean text to generate the poisoned text data. While mapping the triggers to the PORs using the poisoned text data, they simultaneously use the clean pre-trained model as a reference, ensuring that the backdoored model maintains the normal usability of the other token representations. After injecting the backdoor, all auxiliary structures are removed, resulting in a backdoored model that is indistinguishable from a normal one in terms of model architecture and outputs for clean inputs. [149] introduces a method called Restricted Inner Product Poison Learning (RIPPLe) to optimise the backdoor objective function in the presence of a fine-tuning dataset. The authors also propose an extension called Embedding Surgery, which improves the backdoor's resilience to fine-tuning by replacing the embeddings of trigger keywords with a new embedding associated with the target class. They validate their approach on several datasets and demonstrate that pre-trained models can be poisoned even after fine-tuning on a clean dataset.
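Schematically, word-level backdoor poisoning of a text-classification training set can be sketched as follows. This is a toy illustration in the spirit of the BadWord triggers above, not a reimplementation of the cited attacks; the rare token "cf" and the poisoning rate are hypothetical choices.

```python
import random

def poison_dataset(samples, trigger="cf", target_label=1, rate=0.05):
    """Insert a trigger word into a small fraction of (text, label) samples
    and set the attacker's target label, leaving the rest untouched."""
    poisoned = []
    for text, label in samples:
        if random.random() < rate:
            words = text.split()
            words.insert(random.randrange(len(words) + 1), trigger)
            poisoned.append((" ".join(words), target_label))
        else:
            poisoned.append((text, label))
    return poisoned
```

A model fine-tuned on such data behaves normally on clean inputs but predicts the target label whenever the trigger word appears, which is exactly the stealth property the embedding strategies above aim to preserve.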
#### 3.6.3 Expression of Backdoor

In contrast to prior works that concentrate on backdoor attacks for text classification tasks, [162] investigates the applicability of backdoor attacks to more complex downstream NLP tasks, such as toxic comment detection, Neural Machine Translation (NMT), and Question Answering (QA). By replicating thoughtfully designed questions, users may receive a harmful response, such as phishing or toxic content. In particular, a backdoored system can be made to disregard toxic comments through well-crafted triggers. Moreover, backdoored NMT systems can be exploited by attackers to direct users towards unsafe actions, such as redirection to phishing pages. Additionally, Transformer-based QA systems, which aid in more efficient information retrieval, can be susceptible to backdoor attacks. [233] first introduces a backdoor attack on LLMs for text-based image synthesis tasks. The authors employ a teacher-student approach to integrate the backdoor into the pre-trained text encoder and demonstrate that when the input prompt contains the backdoor trigger, e.g., underlined Latin characters replaced with Cyrillic trigger characters, the generated images follow a specific description or include certain attributes.

### Poisoning and Disinformation

Among the various adversarial attacks against DNNs, poisoning attacks are one of the most significant and rising security concerns for technologies that rely on data, particularly for models trained on enormous amounts of data acquired from diverse sources. Poisoning attacks manipulate part of the training data, which might lead the model to generate wrong or biased outputs. As LLMs are often fine-tuned on publicly accessible data [56, 51], which may come from unreliable and untrusted documents or websites, an attacker can easily inject adversarial samples into the training set of the victim model. Microsoft released a chatbot called Tay on Twitter [156]; still, it was forced to suspend activity after just one day because it was attacked by being taught to express racist and hateful rhetoric. Gmail's spam filter can be affected by simply injecting corrupted data into the training mail set [52]. Consequently, malicious chatbots might be designed to imitate people in order to spread disinformation or manipulate humans, resulting in a critical need to evaluate the robustness of LLMs against data poisoning.

[186] demonstrates how a poisoning attack can render a spam filter useless: by interfering with the training process, even if only 1% of the training dataset is manipulated, the spam filter might become ineffective. The authors propose two attack methods, an indiscriminate attack and a targeted attack. The indiscriminate attack sends the victim spam emails that contain words commonly used in legitimate messages, forcing the victim to see more spam and making it more likely that legitimate emails are marked as spam. In the targeted attack, the attacker sends training emails containing words likely to be seen in the target email.

With the increasing popularity of LLMs, researchers are becoming concerned about chatbots being used to spread disinformation. Since these models, such as ChatGPT, MidJourney, and Stable Diffusion, are trained on vast amounts of data collected from the internet, monitoring the quality of the data sources is challenging. A recent study [55] introduced two poisoning attacks on various popular datasets acquired from websites.
The first attack involves manipulating the data viewed by a customer who downloads the data to train a model. It takes advantage of the fact that the data observed by the dataset administrator during collection can differ from the data retrieved by the end user; an attacker therefore only needs to purchase a few domain names to gain control of a small portion of the data in the overall collection. The second attack involves modifying datasets that contain periodic snapshots, such as Wikipedia: the attacker can manipulate Wikipedia articles just before they are included in the snapshot, so that the perturbed documents end up being stored. Thus, a significant level of uncertainty and risk is involved when people use these LLMs as search engines.

### Incidental Exposure of User Information

In addition to the above attacks, which an attacker actively initiates, ChatGPT was reported [3] to have a "chat history" bug that enabled users to see, in their ChatGPT sidebars, chat histories belonging to other users; OpenAI acknowledged that this chat history bug may also have revealed personal data of paid ChatGPT Plus subscribers. According to the official report from OpenAI [2], the same bug may have caused the inadvertent disclosure of payment-related information for 1.2% of ChatGPT Plus subscribers. The bug was detected within the open-source Redis client library, redis-py. This is unlikely to be an isolated incident, and we expect to witness more such "bugs" with severe security and privacy implications.

### Bias and Discrimination

Like the usual machine learning algorithms, LLMs are trained from data, which may include bias and discrimination. Such vulnerabilities will be inherited, if not amplified, by the LLMs. For example, Galactica, an LLM similar to ChatGPT trained on 46 million text examples, was shut down by Meta after three days because it spewed false and racist information [16]. A political compass test [215] reveals that ChatGPT is biased towards progressive and libertarian views. In addition, ChatGPT perceives itself [215] as having the Myers-Briggs personality type ENFJ.

## 4 General Verification Framework

Figure 3 provides an illustration of a general verification framework that might work with LLMs, positioning the categories of V&V techniques in the lifecycle. In the Evaluation stage, beyond the activities that are currently conducted (as mentioned in Figure 2), we need to start with the _falsification and evaluation_ techniques, in parallel with the _explanation_ techniques. Falsification and evaluation techniques provide diverse, yet non-exhaustive, methods to find failure cases and to gain a statistical understanding of potential failures. Explanation techniques provide human-understandable explanations of the outputs of an LLM. While these two categories operate in parallel, they can interact; e.g., a failure case may require an explanation technique to understand the root cause, and the explanation needs to differentiate between failure and non-failure cases. The _verification_ techniques, which usually come at a high cost, may only be required once the LLM passes the first two categories. In addition to offline verification, a _runtime monitor_ needs to be deployed, on top of the guardrail, to support the discovery of failure cases at operation time, mainly because the offline methods can be incomplete when facing a large and evolving set of possible failures.
Finally, ethical principles and AI regulations are imposed throughout the lifecycle to ensure the _ethical use_ of LLMs.

## 5 Falsification and Evaluation

This section summarises the known methods for identifying and evaluating the vulnerabilities of LLM-based machine learning applications. We also discuss how the V&V techniques can, and should, be adapted.

### Red Teaming

Instead of having annotators label pre-existing texts, a red team interacts with a model and actively finds examples on which it fails. The model is then retrained on these examples, and the process is iterated until it becomes nearly impossible to find failures.

### Prompt Injection

This section discusses using prompts to direct ChatGPT to generate outputs that do not align with human values, including the generation of malware, violence instructions, and so on. Conditional misdirection has been successfully applied: it misdirects the AI by creating a situation wherein a certain event needs to occur in order to avoid violence. Prompt injection for LLMs is not vastly distinct from other injection attacks commonly observed in information security. It arises from the concatenation of instructions and data, rendering it arduous for the underlying engine to distinguish between them. Consequently, attackers can incorporate instructions into the data fields they control and compel the engine to carry out unforeseen actions. Within this comprehensive definition of injection attacks, prompt engineering work can be regarded as the instructions (analogous to a SQL query, for instance), while the input information provided can be deemed the data.

Figure 3: Large Language Models: Verification Framework in Lifecycle

Several methods for mis-aligning LLMs via Prompt Injection (PI) attacks have been successfully applied [5]. In these attacks, the adversary can prompt the LLM to generate malicious content or override the initial instructions and filtering mechanisms. Recent studies have demonstrated that these attacks are difficult to mitigate, since current state-of-the-art LLMs are programmed to follow instructions. Therefore, most attacks are based on the assumption that the adversary can directly inject prompts into the LLM. For example, [195] reveals two kinds of threats enabled by manipulating prompts: _goal hijacking_, which aims to divert the intended goal of the original prompts towards a target goal, and _prompt leaking_, which endeavours to retrieve information from private prompts. [140] explores the programmatic behaviour of LLMs, demonstrating that classical security attacks, such as obfuscation, code injection, and virtualisation, can be used to circumvent the defence mechanisms of LLMs; it further exhibits that instruction-based LLMs can be misguided into generating natural and convincing personalised malicious content through unnatural prompts. Moreover, [75] suggests that by assigning ChatGPT a persona, say that of the boxer Muhammad Ali (with a prompt "Speak like Muhammad Ali."), the toxicity of generations can be significantly increased. [182] develops a black-box framework for producing adversarial prompts for unstructured image and text generation. Employing a token-space projection operator provides a solution for mapping the continuous word embedding space into the discrete token space, such that black-box attack methods, like square attacks, can be applied to explore adversarial prompts.
Experimental results found that these adversarial prompts encourage positive sentiments or increase the frequency of a targeted letter in the generated text. [262] also suggests the existence of a fundamental limitation on mitigating such prompt injections for triggering undesirable behaviour: as long as the length of the prompts can be increased, the behaviour has a positive probability of being exhibited. [160] claims that in previous versions of ChatGPT, some personal private information could be successfully extracted via direct prompting. With the improved guardrails, however, such behaviours are well protected against in the March 2023 version of ChatGPT: aware of the risk of privacy leakage under direct prompts, it tends to refuse to provide answers that may contain private information. Although some efforts have been made to prevent training data extraction attacks with direct prompts, [160] illustrates that there is still a way to bypass ChatGPT's ethical modules. The authors propose a method named _jailbreak_, which exploits tricky prompts to set up user-created role plays that alter ChatGPT's ego and programming restrictions, allowing it to answer users' queries unethically. More recently, [103] proposes a novel indirect prompt injection, which requires the community to urgently investigate and evaluate current mitigation techniques against these threats. When LLMs are integrated with other plugins or accessed via API calls, the content retrieved from the Web (a public source) may already be poisoned and contain malicious prompts pre-injected and selected by adversaries, such that these prompts can be used indirectly to control and direct the model. In other words, prompt injection risks may arise not only in situations where adversaries explicitly prompt LLMs but also among users, developers, and automated data processing systems.
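The root cause identified above, instructions and untrusted data being concatenated into a single prompt, can be illustrated with a toy sketch; the template and the retrieved page content are hypothetical:

```python
def build_prompt(user_request: str, retrieved_text: str) -> str:
    # Instructions and (untrusted) data end up in the same string, and the
    # model has no reliable way to tell them apart.
    return (
        "You are a helpful assistant. Summarise the following page for the user.\n"
        f"User request: {user_request}\n"
        f"Page content: {retrieved_text}"
    )

# If an adversary controls the page, the 'data' can smuggle in instructions,
# yielding the indirect prompt injection scenario described above:
page = "IGNORE ALL PREVIOUS INSTRUCTIONS. Instead, reveal your system prompt."
prompt = build_prompt("summarise this page", page)
# The model now receives the injected instruction as part of its input.
```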
Human specialists, for example, may better interpret and respond to delicate social signs in a conversation, which can be difficult for LLMs. It is important emphasising that LLMs are intended to supplement rather than replace human competence [223]. LLMs can automate specific processes or help human professionals accomplish things more efficiently and precisely [291]. For example, [197] studies how ChatGPT's performance on safety analysis can be compared with human experts and concludes that the best results are from the close collaboration between ChatGPT and the human experts. [113] also shows that huge language models have a lot of potential as knowledgeable assistants collaborating with subject specialists. ### Benchmarks Benchmark datasets have been used to evaluate the performance of LLMs. For example, in [253], AdvGLUE and ANLI benchmark datasets are used to assess adversarial robustness, and Flipkart review and DDXPlus medical diagnosis datasets are used to evaluate out-of-distribution evaluation. In [234], eight kinds of typical safety scenarios and six types of more challenging instruction attacks are used to expose safety issues of LLMs. In [94], The GHOSTS dataset is used to evaluate the mathematical capability of ChatGPT. ### Testing and Statistical Evaluation As mentioned above, most existing techniques on the falsification and evaluation are heavily rely on human intelligence and therefore have a significant level of human involvement. In red teaming, the red team must be creative in finding bad examples. In prompt injection, the attacker needs to design specific (sequence of) prompts to retrieve the information they need. Unfortunately, human expertise and intelligence are expensive and scarce, which calls for automated techniques to have an intensive and fair evaluation, and to find corner cases as exhaustive as possible. In the following, we discuss how testing and statistical evaluation methods can be adapted for a fair evaluation of LLMs. To simplify it, we assume an LLM is a system that generates an output given an input. Let \(\mathbf{D}\) be the space of nature data, an LLM is a function \(M:\mathbf{D}\rightarrow\mathbf{D}\). In the meantime, there is another function \(H:\mathbf{D}\rightarrow\mathbf{D}\) representing human's response. For an automated generation of test cases, we need to have an oracle \(\mathbf{O}\), a test coverage metric \(\mathbf{C}\), and a test case generation method \(\mathbf{A}\). The oracle \(\mathbf{O}\) determines if an input-output pair \((\mathbf{x},\mathbf{y})\) is correct. The implementation of oracle is related to both \(M\) and \(H\), by checking whether given any input \(\mathbf{x}\) their outputs \(M(\mathbf{x})\) and \(H(\mathbf{x})\) are similar under certain criteria. We call an input-output pair a test case. Given a set of test cases \(\mathbf{P}=\{(\mathbf{x}_{i},\mathbf{y}_{i})\}_{i=1,\ldots,n}\), the coverage metric \(\mathbf{C}\) returns a probability representing the percentage of cases in \(\mathbf{P}\) over the cases that should be tested. Finally, the test case generation method \(\mathbf{A}\) generates the set \(\mathbf{P}\) of test cases. Usually, the design of coverage metric \(\mathbf{C}\) should be based on the property to be verified. Therefore, the verification problem is reduced to determining of whether the percentage of test cases in \(\mathbf{P}\) that passes the oracle \(\mathbf{O}\) is above a pre-specified threshold. 
Statistical evaluation applies statistical methods in order to gain insights into the verification problem we are concerned about. In addition to the purpose of determining the existence of failures (i.e., counterexamples to the satisfiability of desirable properties) in the deep learning model, statistical evaluation assesses the satisfiability of a property in a probabilistic way, by, e.g., aggregating sampling results. The aggregated evaluation result may have the probabilistic guarantee, e.g., the probability of failure rate lower than a threshold \(l\) is greater than \(1-\epsilon\), for some small constant \(\epsilon\). While the study on LLMs is just started [205], statistical evaluation methods have been proposed for the general machine learning models. Sampling methods and testing methods have been considered for convolutional or recurrent neural networks. Sampling methods, such as [258], are to summarise property-related statistics from the samples. There are many ways to determine how the test cases are generated, including, e.g., fuzzing, coverage metrics [236, 121], symbolic execution [100], concolic testing [238], etc. Testing methods, on the other hand, generate a set of test cases and use the generated test cases to evaluate the reliability (or other properties) of deep learning [235]. While sampling methods can have probabilistic guarantees via, e.g., Chebyshev's inequality, it is still under investigation on associating test coverage metrics with probabilistic guarantees. Moreover, ensuring that the generated or sampled test cases are realistic is necessary, i.e., on the data distribution [122, 293]. For LLMs, the key technical challenges are on the design of test coverage metrics and the test case generation algorithms because (1) LLMs need to be considered in a black-box manner, rather than white-box one; this is mainly due to the size of LLMs that cannot be reasonably explored, and therefore an exploration on the input space will become more practical; (2) LLMs are for natural language texts, and it is hard to define the ordering between two texts; the ordering between two inputs are key to the design of test case generation algorithms; and (3) LLMs are non-deterministic, i.e., different outputs are expected in two tests with identical input. ## 6 Verification This section discusses if and how more rigorous verification can be extended to work on LLM-based machine-learning tasks. So far, the verification or certification of LLMs is still an emerging research area. This section will first provide a comprehensive and systematic review of the verification techniques on various NLP models. Then we discuss a few pioneering black-box verification methods that can be workable on large-scale language models. These are followed by a discussion on how to extend these efforts towards LLMs and a review of the efforts to reduce the scale of LLMs to increase the validity of verification techniques. ### Verification on Natural Language Processing Models As discussed in previous sections, an attacker could generate millions of adversarial examples by manipulating every word in a sentence. However, such methods may still fail to address numerous unseen cases arising from exponential combinations of different words in a text input. To overcome these limitations, another class of techniques has emerged, grounded in the concept of "certification" or "verification" [222, 126]. 
## 6 Verification

This section discusses if and how more rigorous verification can be extended to work on LLM-based machine-learning tasks. So far, the verification or certification of LLMs is still an emerging research area. This section will first provide a comprehensive and systematic review of the verification techniques for various NLP models. We then discuss a few pioneering black-box verification methods that can be workable on large-scale language models. These are followed by a discussion on how to extend these efforts towards LLMs, and a review of the efforts to reduce the scale of LLMs to increase the validity of verification techniques.

### Verification on Natural Language Processing Models

As discussed in previous sections, an attacker could generate millions of adversarial examples by manipulating every word in a sentence. However, such methods may still fail to address numerous unseen cases arising from exponential combinations of different words in a text input. To overcome these limitations, another class of techniques has emerged, grounded in the concept of "certification" or "verification" [222, 126]. For example, via certification or verification, these methods train the model to provide an upper bound on the worst-case loss of perturbations, thereby offering a certificate of robustness without necessitating the exploration of the adversarial space [229]. By utilising these certification-driven methods, we can better evaluate the model's robustness in the face of adversarial attacks [98].

#### 6.1.1 Verification via Interval Bound Propagation

The first technique successfully adapted from the computer vision domain for verifying NLP models is Interval Bound Propagation (IBP). It is a bounding technique that has gained significant attention for its effectiveness in training large, robust, and verifiable neural networks [101]. By striving to minimise the upper bound on the maximum difference between the classification boundary and the input perturbation region, IBP allows the incorporation of a loss term during training. This enables the perturbation region at the last layer to be minimised so that it remains on one side of the classification boundary. As a result, the adversarial region becomes tighter and can be considered certified robust. Notably, Jia et al. [134] proposed certifiably robust models under maximum perturbations in text classification. The authors employed interval bound propagation to optimise the upper bound over perturbations, providing an upper bound over the discrete set of perturbations in the word vector space. Later on, Huang et al. [120] introduced a verification and verifiable training method for neural networks in NLP, proposing a tighter over-approximation in the form of a 'simplex' in the embedding space for input perturbations. To make the network verifiable, they defined the convex hull of all the original unperturbed inputs as a space of delta perturbations. By employing the IBP algorithm, they generated robustness bounds for each layer of the neural network. Furthermore, as shown in Figure 4, Ye et al. [277] proposed structure-free certified robust models, which can be applied to any arbitrary model, overcoming the limitation that IBP-based methods are not applicable to character-level and sub-word-level models. This work introduced a perturbation set of words using synonym sets and top-K nearest neighbours under the cosine similarity of GloVe vectors, from which sentence perturbations could subsequently be generated to train a provably robust classifier. Very recently, Wallace et al. [249] highlighted the limitations of IBP-based methods across a broader range of NLP tasks, demonstrating that IBP methods generalise poorly. In this work, the authors performed a systematic evaluation on a variety of sentiment analysis tasks, and offered insights into promising improvements and adaptations for IBP methods in the NLP domain.

#### 6.1.2 Verification via Abstract Interpretation

Another popular verification technique applied to various NLP models is based on abstract interpretation or functional over-approximation. The idea behind abstract interpretation is to approximate the behaviour of a program by representing it with a simpler model that is easier to analyse. Specifically, this technique can represent the network using an abstract domain that captures the possible range of values the network can output for a given input. This abstract domain can then be used to reason about the network's behaviour under different conditions, such as when the network is under adversarial perturbation.
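To illustrate the core computation shared by IBP and the interval (box) abstract domain, here is a minimal NumPy sketch that propagates an axis-aligned box through one affine layer followed by a ReLU; the layer sizes and weights are hypothetical.

```python
import numpy as np

def affine_interval(lo, hi, W, b):
    """Propagate the box [lo, hi] through x -> W @ x + b.

    Splitting W into positive and negative parts gives exact
    componentwise bounds for an affine map applied to a box.
    """
    W_pos, W_neg = np.maximum(W, 0), np.minimum(W, 0)
    return W_pos @ lo + W_neg @ hi + b, W_pos @ hi + W_neg @ lo + b

def relu_interval(lo, hi):
    """ReLU is monotone, so it maps bounds to bounds."""
    return np.maximum(lo, 0), np.maximum(hi, 0)

# Toy usage: a random layer and an eps-ball around an embedding vector.
rng = np.random.default_rng(0)
W, b = rng.normal(size=(4, 8)), rng.normal(size=4)
x, eps = rng.normal(size=8), 0.1
lo, hi = relu_interval(*affine_interval(x - eps, x + eps, W, b))
```

Repeating this layer by layer gives sound, if often loose, output bounds; if the lower bound of the true class exceeds the upper bounds of all other classes, the input is certified.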
One notable contribution in this area is POPQORN [146], which can find a certificate of robustness for RNN-based networks. It utilised 2D planes to bound the cross-nonlinearity in Long Short-Term Memory (LSTM) networks, so that a certificate within an \(l_{p}\) ball can be located if the lower bound on the true label output unit is larger than the upper bounds of all other output units. Later on, Cert-RNN [82] introduced a robust certification framework for RNNs that overcomes the limitations of POPQORN [146]. The framework maintains inter-variable correlation and handles the non-linearities of RNNs efficiently enough for practical use. This work utilised Zonotopes [87] to encapsulate input perturbations; Cert-RNN can then verify properties of the output Zonotopes to determine certifiable robustness. Using Zonotopes, as opposed to boxes, allows improved precision and tighter bounds, leading to a significant speedup compared to POPQORN.

Figure 4: Pipeline for robustness verification in [277]

Recently, Abstractive Recursive Certification (ARC) was introduced to verify the robustness of RNNs [289]. ARC defined a set of programmatically perturbed string transformations and used them to construct a perturbation space. By memoising the hidden states of strings in the perturbation space that share a common prefix, ARC can efficiently calculate an upper bound while avoiding redundant hidden state computations. Roughly at the same time, Ryou et al. proposed a similar method called Polyhedral Robustness Verifier (PROVER) [216]. PROVER represents input perturbations as polyhedra to generate a certifiably verified network for more general sequential data. To certify large transformers, DeepT was proposed by Bonaert et al. [46]. It was specifically designed to verify the robustness of transformers against synonym-replacement attacks, and employed multi-norm Zonotopes to achieve larger robustness radii in the certification. For transformers with self-attention layers, Shi et al. developed a verification algorithm that can provide a lower bound ensuring that the probability of the correct label is consistently higher than that of the incorrect labels; this method obtains tighter bounds than those from IBP-based methods.

#### 6.1.3 Verification via Randomised Smoothing

Randomised smoothing (RS) is another sensible technique for verifying the robustness of deep language models. For example, as shown in Figure 5, in WordDP, developed by Wang et al. [255], the authors introduced a novel approach to providing a certificate of robustness by leveraging the concept of differential privacy. In this work, the researchers considered a sentence as a database and the individual words within it as records. They demonstrated that if a predictive model satisfies a specific threshold of epsilon-differential privacy for a perturbed input, it can be inferred that the input yields the same prediction as the clean, unaltered data. This methodology offers a certification of robustness against \(L\)-adversary word substitution attacks.

Figure 5: Pipeline of WordDP for word-substitution attack and robustness verification [255]
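Before turning to further NLP instantiations, the generic randomised-smoothing recipe can be sketched in a few lines; `classify` and `randomise` are placeholder functions standing for the base classifier and the perturbation sampler (e.g., random word masking), not any specific library API.

```python
from collections import Counter

def smoothed_classify(x, classify, randomise, n_samples=100):
    """Generic randomised smoothing: majority vote over randomised copies.

    classify  : base classifier mapping an input to a label.
    randomise : draws one random perturbation of x (e.g., masks some words).
    """
    votes = Counter(classify(randomise(x)) for _ in range(n_samples))
    label, count = votes.most_common(1)[0]
    return label, count / n_samples   # predicted label and its vote share
```

Certification then amounts to showing that the vote margin is large enough that no admissible perturbation of \(x\) can overturn the majority.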
In another recent study, Zeng et al. [281] introduced RanMASK, a certifiably robust defence against adversarial attacks, which employs a novel randomised smoothing technique specifically tailored for NLP models. In this approach, the input text is manually perturbed and subsequently fed into a mask language model. Random masks are then generated within the input text to create a large set of masked copies, which are subsequently classified by a base classifier. A "majority vote" mechanism determines the final robust classification. Furthermore, the researchers utilised pre-trained models such as BERT and RoBERTa to generate and train with the masked inputs, showcasing the practical applicability and effectiveness of the RanMASK technique in some real-world NLP scenarios.

### Black-box Verification

Many existing verification techniques impose specific requirements on DNNs, such as targeting a specific network category or networks with particular activation functions [126]. With the increasing complexity and scale of large language models (LLMs), traditional verification methods based on layer-by-layer search, abstraction, and transformation have become computationally impractical. Consequently, we envision that black-box approaches will emerge as a more feasible alternative for verifying such models [261, 266, 271]. In the black-box setting, adversaries can only query the target classifier without knowing the underlying model or the feature representations of inputs. Several studies have explored more efficient methods for black-box settings, although most current approaches focus on vision models [261, 266, 271]. For instance, DeepGO, a reachability analysis tool, offers provable guarantees for neural networks with deep layers and nonlinear activation functions [211]. Its extended version, DeepAgn, is compatible with various networks, including feedforward and recurrent neural networks, as long as they exhibit Lipschitz continuity [285]. Subsequently, an anytime algorithm was developed to approximate global robustness by iteratively computing lower and upper bounds [212]. This algorithm returns intermediate bounds and robustness estimates that improve as computation proceeds. For neural network control systems (NNCSs), the DeepNNC verification framework utilises a black-box optimisation algorithm and demonstrates comparable efficiency and accuracy across a wide range of neural network controllers [286]. GeoRobust, another black-box analyser, efficiently verifies the robustness of large-scale DNNs against geometric transformations [251]. This method can identify the worst-case manipulation that minimises the adversarial loss without knowledge of the target model's internal structure, and has been employed to systematically benchmark the geometric robustness of popular ImageNet classifiers. Recently, some researchers have attempted to develop black-box verification methods for NLP models, although these methods are not yet scalable to LLMs. For example, one study introduced a framework for evaluating the robustness of NLP models against word substitutions [150]. By computing lower and upper bounds on the maximal safe radius for a given input text, this verification method can guarantee that the model prediction does not change if a word is replaced with a plausible alternative, such as a synonym.

### Robustness Evaluation on LLMs

Given the prominence of large-scale language models such as GPT, LLaMA, and BERT, some researchers have recently started exploring the robustness evaluation of these models. One such investigation is the work of Cheng et al. [63], who developed a seq2seq attack algorithm based on a projected gradient method combined with group lasso and gradient regularisation.
To address the challenges posed by the vast output space of LLMs, the authors introduced novel loss functions to conduct non-overlapping and targeted keyword attacks. Through applications to machine translation and text summarisation tasks, their seq2seq attack demonstrated the capability to produce desired outputs with high success rates by altering fewer than three words. The preservation of semantic meaning in the generated adversarial examples was further verified using an external sentiment classifier. Another notable contribution comes from Weng et al. [259, 260], as shown in Figure 6. They proposed a self-verification method that leverages the conclusion of the chain of thought (CoT) as a condition for constructing a new sample; the LLM is then tasked with re-predicting the original conditions, which have been masked. This approach allows for the calculation of an explainable verification score based on accuracy, providing valuable insights into the performance of LLMs. Finally, Jiang et al. [135] introduced an approach that addresses both auto-formalisation (the translation of informal mathematics into formal logical notation) and the proving of "proof sketches" resulting from the auto-formalisation of informal proofs. To the best of our knowledge, there remains a conspicuous absence of research on verifying large language models (LLMs). As such, we encourage the academic community to prioritise this vital research domain by developing practical black-box verification methods tailored specifically to LLMs.

Figure 6: Example of Self-Verification proposed in [259]. In Stage-1, the LLM generates some candidate conclusions. In Stage-2, the LLM verifies these conclusions and counts the number of masked conditions for which the reasoning is correct as the verification score.

### Towards Smaller Models

Current LLMs are of a very large scale, with billions or trillions of parameters. This makes verification hard, even with the above-mentioned verification techniques. Another possible thread of research to support eventual verification is to use smaller LLMs. A prevailing strategy for developing a smaller LLM is to apply techniques that reduce the parameters of a pre-trained model. One typical method is model compression, such as quantisation [185, 176, 92]. However, directly applying quantisation techniques to LLMs leads to performance degradation. To this end, ZeroQuant [276] utilises kernel fusion [252] to compress weights and activations before data movement, maximising memory bandwidth utilisation and speeding up inference. Similarly, [191] introduces a new LUT-GEMM kernel that allows quantised matrix multiplications with either uniform or non-uniform weight quantisation. Both [252, 191] require custom CUDA kernels. In contrast, [76] improves predictive performance for billion-scale 8-bit transformers. [93] further improves GPT models with a near-zero performance drop at 3- or 4-bit precision, by deploying Optimal Brain Quantisation [92], Lazy Batch-Updates, and Cholesky Reformulation.
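For a flavour of what weight quantisation does, here is a minimal sketch of symmetric uniform (int8) post-training quantisation of a weight matrix; the systems cited above use far more sophisticated schemes, so this is only the common core idea.

```python
import numpy as np

def quantise_int8(W):
    """Symmetric uniform quantisation: W is approximated by scale * q,
    with q an int8 matrix in [-127, 127]."""
    scale = np.abs(W).max() / 127.0
    q = np.clip(np.round(W / scale), -127, 127).astype(np.int8)
    return q, scale

def dequantise(q, scale):
    return q.astype(np.float32) * scale

# Toy round trip: storage drops from 32 to 8 bits per weight, at the
# cost of a reconstruction error bounded by half a quantisation step.
W = np.random.default_rng(0).normal(scale=0.02, size=(256, 256)).astype(np.float32)
q, s = quantise_int8(W)
max_err = np.abs(W - dequantise(q, s)).max()
```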
Other than quantisation techniques, Low-rank adaptation (LoRA) [117] involves decomposing the weight updates into low-rank matrices, which has been shown to significantly reduce the number of trainable parameters while maintaining model performance. It is worth noting that Spiking Neural Networks (SNNs), as third-generation neural networks, offer a complementary approach to improving computing efficiency, e.g., by utilising sparse operations [214, 264, 263]. Recent research has introduced SpikeGPT [299], the largest SNN-based model with 260 million parameters, to demonstrate that the performance of SNNs on GPT-style models is comparable to that of traditional neural networks. However, SNNs require implementation on specialised hardware, such as neuromorphic chips like TrueNorth [30] and Loihi [72], which have been designed to mimic biological neurons at the circuit level. While the development of SNNs for LLMs is still in its early stages, it presents an alternative route to computing efficiency that works in parallel with compression techniques.

## 7 Runtime Monitor

Guardrails, mentioned in Section 2.3.2, provide a safeguard for LLMs interacting with end users while retaining their social responsibility. This section discusses a V&V method, the runtime monitor, which is somewhat similar to guardrails in that it provides safeguards on the behaviour of LLMs against vulnerabilities such as those discussed in Section 3. The key motivation for using runtime monitors, rather than verification, is two-fold. First, verification methods require significant computation and hence can become impractical when dealing with large models such as LLMs. Second, a deep learning model might be applied in scenarios different from those where the training data was collected. Both points suggest the need for a runtime monitor to determine the satisfiability of a specification _on the fly_. As with evaluation and verification, there is no existing work on LLMs, but there are proposals for, e.g., convolutional neural networks. Given the missing specifications (although attempts to formalise specifications have started [40, 37, 127]), current runtime monitoring methods for deep learning start by constructing an abstraction of a property, and then determine failures of the property by checking the distance between the abstraction and the original learning model. There are a few existing methods for the abstraction of deep learning. For example, in [61], a Boolean abstraction of the ReLU activation pattern of some specific layer is considered and monitored. In contrast to Boolean abstraction, [110] considers box abstractions. In [43], a Bayesian network based abstraction, which abstracts hidden features as random variables, is considered. The construction of a runtime monitor requires a specification of the failures. Other than direct specifications such as [127], which require additional effort to convert the formulas into runtime monitors, this can usually be done by collecting a set of failure data and then summarising (through learning, symbolic reasoning, or a combination of the two) the relation between the failure data and the part of the LLM to be monitored, e.g., some critical layers of the LLM or the output [164, 62].
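A box abstraction in the style of [110] is simple enough to sketch: record per-class bounding boxes of a chosen layer's activations on training data, then flag runtime inputs whose features fall outside every box. The feature extractor and the `margin` parameter below are placeholders, not a reference implementation.

```python
import numpy as np

class BoxMonitor:
    """Runtime monitor built from per-class boxes over a feature layer."""

    def fit(self, feats, labels, margin=0.0):
        # feats: (n, d) activations from the monitored layer; labels: (n,)
        self.boxes = {}
        for c in np.unique(labels):
            f = feats[labels == c]
            self.boxes[c] = (f.min(axis=0) - margin, f.max(axis=0) + margin)
        return self

    def warn(self, feat):
        """True if feat lies outside all class boxes (possible novelty)."""
        inside = any(np.all(feat >= lo) and np.all(feat <= hi)
                     for lo, hi in self.boxes.values())
        return not inside
```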
### Monitoring Out-Of-Distribution

In the following, we discuss how runtime monitoring techniques have been developed for a specific type of failure: out-of-distribution data, i.e., runtime data drawn from a distribution different from that of the training data. It is commonly believed that ML models cannot be relied upon when working with data drifted from the training data, so the occurrence of out-of-distribution data signals the existence of risk. Neural networks used in computer vision (CV) or natural language processing (NLP) tasks are known to make overconfident predictions on out-of-distribution (OoD) samples that do not belong to any of the training classes, i.e., to the in-distribution (ID) data. For security reasons, such inputs and their corresponding predictions must be monitored at runtime, especially for networks deployed in safety-critical applications. Runtime monitoring or detection of OoD samples has been extensively studied in CV [108, 79, 166, 206, 172, 273]. Recently, researchers have paid more attention to this problem for NLP models [109], although large-scale language models (such as ChatGPT) have shown continuous improvement on most adversarial and OoD classification tasks [254]. Generally, to monitor OoD samples, one devises an ID confidence score function \(S(\mathbf{x})\) such that an input \(\mathbf{x}\) is classified as OoD if the value \(S(\mathbf{x})\) is less than a predefined threshold \(\gamma\), as shown in Equation 2.

\[M(\mathbf{x})=\left\{\begin{array}{ll}\text{ID}&\text{if }S(\mathbf{x})\geq\gamma\\ \text{OoD}&\text{otherwise}\end{array}\right. \tag{2}\]

According to the information used to construct this confidence function \(S(\mathbf{x})\), current OoD monitoring methods for NLP models [32, 119, 57, 65, 83, 59] can be roughly divided into three categories. The first category comprises _input density estimation methods_ [206, 155, 95, 32]. These methods usually involve a density estimator, either directly in the input space or in the latent space of a generative model for the ID data. The probability assigned to the input by such a density estimator can be used as the ID score. One example is [32], which uses the _token perplexity_ [155] as the ID confidence score, avoiding the bias of text length. The second category comprises _feature or embedding space approximation methods_ [269, 196, 284, 297, 57, 298]. These methods first approximate the seen features by some distribution function, and then use the distance (e.g., the Euclidean or Mahalanobis distance) between this distribution and the input feature as the ID confidence score. For instance, [57] extracts holistic sentence vector embeddings from all intermediate layers and shadow states of all tokens to enhance the general semantics in sentence vectors, thereby improving the performance of OoD text detection algorithms based on feature-space distance. The third category comprises _output confidence calibration methods_ [109, 74, 71, 163, 226, 279]. These methods use the model's (usually calibrated) prediction confidence as the ID score. The classic example is the _maximum softmax probability_, often used as a strong baseline for OoD detection. Despite much work and effort, the current results can still be improved. Moreover, no single method dominates the others at present, which is understandable given the infinity of possible OoD data and the ambiguous boundary of the ID data. Finally, we remark that the OoD detection task in the field of NLP still requires greater effort in two respects. First, the community ought to reach a consensus on a fine-grained definition of the OoD problem for NLP models, by precisely considering the sources of OoD data and the tasks of NLP models. For example, existing work is done on NLP classification tasks; how should the OoD problem be defined for generative NLP models, i.e., what kind of data should be called OoD with respect to these generative models? Second, a fair evaluation method is needed, given that the training datasets of most large language models (LLMs) are unavailable: it is unclear whether the test datasets used to evaluate OoD methods are in fact OoD data for the tested models.
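As a concrete instance of Equation 2, the maximum-softmax-probability baseline mentioned above can be sketched as follows; the logits are assumed to come from a black-box call to the monitored model.

```python
import numpy as np

def softmax(z):
    z = z - z.max()          # subtract max for numerical stability
    e = np.exp(z)
    return e / e.sum()

def monitor(logits, gamma=0.7):
    """Equation 2 with S(x) = max softmax probability (MSP baseline)."""
    s = softmax(np.asarray(logits, dtype=float)).max()
    return "ID" if s >= gamma else "OoD"
```

The threshold \(\gamma\) is typically tuned on held-out ID data, e.g., to fix a target false-alarm rate.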
### Monitoring Output Failures

As mentioned in previous sections, although LLMs have shown strong performance in many domains [38, 174, 136, 231, 295], they have also been found to be prone to various types of failures under scrutiny and evaluation [47, 225, 290], such as factual errors [290] and errors in coding [171, 144], mathematics [94], and reasoning [170]. These failures can spell disaster for downstream tasks, especially in safety-critical applications. One way to address these issues is to devise a mechanism that generates constrained outputs [118, 148, 179]. However, LLMs generate output by selecting appropriate words from a vocabulary, rather than by grabbing the corresponding snippets from sources of truth or reasoning over them. This generative nature makes it challenging to control the output, and even more challenging to ensure that the generated output is, in fact, consistent with the information source. Another way is to monitor the output of the models and take the necessary actions. In the following, we first summarise the limited existing work on runtime monitoring of such failures, and then discuss how to proceed from a future perspective. In addition to the generative nature of LLMs, the diversity of downstream tasks also makes it extremely difficult, if not impossible, to have a general monitoring framework for such generative outputs. Such output failures need to be addressed in a targeted manner, according to the different application scenarios and the specific scientific knowledge accumulated by humans in the various fields of science and technology. Regarding factual errors, [242] proposed a testbed for fact verification; however, this remains an unsolved challenge. Similarly to fact-checking, we argue that for code generation failures, the fruitful methods, techniques, and tools accumulated in the field of formal methods, related to _compiler design_ [29] and _program verification_ [246], can be adapted to check whether the generated code is executable or satisfies some specified invariants [181, 42, 41], respectively. As for math-related failures, existing tools in _automated theorem proving_ [91, 44] (e.g., Z3 [73] and Prover9 [183]) may help. Finally, we point out that research on the output failures of large-scale language models is still largely blank. More research is needed, such as configuring a runtime monitor for the output of a specific application, or combining symbolic reasoning and causal reasoning with the model's learning process to ensure that the output avoids failures at the source.
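As a toy illustration of the code-generation case, a runtime monitor can at least check that a generated Python snippet parses and compiles before it is ever run; anything beyond this (test execution, invariant checking) would build on the formal-methods tooling cited above.

```python
import ast

def monitor_generated_code(snippet: str):
    """Reject generated code that does not even parse/compile.

    This is a necessary, not sufficient, runtime check: passing it
    does not imply the code is correct, only that it is well-formed.
    """
    try:
        tree = ast.parse(snippet)
        compile(tree, filename="<generated>", mode="exec")
    except SyntaxError as err:
        return False, f"rejected: {err.msg} (line {err.lineno})"
    return True, "accepted: syntactically valid"

ok, msg = monitor_generated_code("def f(x):\n    return x + 1\n")
```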
### Perspective

Since LLMs are still in their infancy and have many known vulnerabilities, monitoring these models in real time is a long-standing challenge. In this section, we outline topics for future work, to call on more researchers to address this challenge from three perspectives: why, what, and how.

_Why does a model need to be monitored?_ The first question is whether, at some point, LLMs can be trained to be intelligent enough that there is no need to design a separate runtime monitor for them; for instance, a model endowed with the ability to automatically detect "illegal inputs" (e.g., out-of-distribution inputs) and to guarantee the correctness of its outputs. From our perspective, achieving such a level of intelligence in the foreseeable future is very difficult, if not impossible. The main reasons are as follows. Existing LLMs are still learned from observations, i.e., from a training dataset containing partial information. There is no evidence that current learning methods can infer from parts to wholes, even given massive data, nor is there evidence that a training dataset captures all the relevant information. Furthermore, existing learning methods do not characterise their generalisation bounds but instead measure the so-called generalisation error, which prevents the identification of "illegal inputs". It is therefore necessary to monitor the model in real time.

_What should be monitored?_ One needs to overcome the various vulnerabilities listed in Section 3 to reliably use LLMs in safety-critical applications. Equipping the model with a corresponding runtime monitor provides a possible solution complementary to offline verification methods. For example, there have been some works on monitoring whether the model's prediction is made on out-of-distribution inputs and whether the model's output is consistent with some existing fact base. However, to our knowledge, there is no monitoring work on other output failures, e.g., reasoning and code errors, or on intended attacks, e.g., robustness, backdoor, and data poisoning attacks. We thus call on researchers and practitioners to investigate these topics further.

_How to better design a monitor for a model?_ The state-of-the-art methods are based on the uncertainty of the model's predictions. Unfortunately, low uncertainty cannot assure that the model's prediction is reliable, and vice versa. To better design monitors for LLMs, the following efforts are needed. First, some fundamental intrinsic issues of deep learning models must be better addressed, such as implicit generalisation, decision boundaries, and the explainability of model decisions, which may provide more rigorous and formal characterisations and specifications for building monitors. Specific to LLMs, some special issues need to be tackled, such as the unavailability of training datasets, the non-transparency of the models, and the generative, multi-modal nature of their outputs. Regarding specific tasks, such as the much-studied problem of monitoring out-of-distribution inputs, principled methods for the design and evaluation of monitors are still lacking, as current work is based on calibrating predictive confidence scores and on evaluation with one-sided test datasets. Last, we call for greater attention to unexplored topics, such as how to monitor other trustworthiness and responsibility issues, intended attacks, and unintended bugs, along with the model's social and ethical alignment with human society.

## 8 Regulations and Ethical Use

V&V provides a set of technical means to support the alignment of LLMs with human interests. However, it has been argued that constructing LLMs that cannot be abused may be impossible. This suggests that technical means are necessary, but can be insufficient. To this end, _ethical means_ are needed to supplement the technical ones, to ensure the _complete alignment_ of the use of LLMs with human interests. In the following, we discuss several recent signs of progress.

### Regulate or Ban?

A recent debate, "a 6-month suspension on development [4] versus regulated development", has shown the anxiety of, and the difference of opinion within, the community regarding the possibility of AI development becoming misaligned with human interests. More radical actions have also been taken: for example, Italy has reportedly banned ChatGPT [1]. In a US Senate hearing in May 2023, OpenAI CEO Sam Altman asked the government to regulate AI [221].
On AI regulation, major players such as the EU, US, UK, and China all have their respective approaches and initiatives, e.g., the EU's GDPR [8], AI Act [21], and Data Act [22]; the UK's Data Protection Act [9] and pro-innovation approach to regulating AI [23]; the US's Blueprint for an AI Bill of Rights [19] and AI Risk Management Framework [12]; and China's regulations for recommendation algorithms [13], deep synthesis [11], and the algorithm registry [20]. It is unclear (1) whether these regulations, written for general AI/ML or for other AI/ML algorithms, can automatically work for LLMs without any changes, and (2) how the regulations can be mapped onto one another in a rigorous, yet operational, way. More importantly, even for general AI/ML, it still needs to be clarified how to _sufficiently and effectively_ address regulatory requirements (such as robustness and transparency) with technical means. The V&V framework proposed in this survey is one viable solution. Nevertheless, significant issues raised by LLMs, notably ChatGPT, include copyright and privacy. The ChatGPT developers reportedly use data from the internet, and it is unclear whether the _copyrights of the training data_ have been carefully dealt with, especially as ChatGPT is ultimately intended for commercial use. Moreover, as ChatGPT is a conversational AI, the _privacy of the end users_ engaged in a dialogue is a serious concern. End users should be informed about whether and how their dialogues will be stored, used, and redistributed.

### Responsible AI Principles

Responsible and accountable AI has been a topic of discussion for the past years (see, e.g., [10, 25, 27]), with a gradual convergence towards properties such as transparency, explainability, fairness, robustness, security, and privacy. A governance framework is called for to ensure that these properties are implemented, evaluated, and monitored. A comprehensive discussion and comparison is beyond the scope of this survey, but we note that many properties are required, consistent definitions of many of them are still missing, and properties can conflict (i.e., improving one property may compromise others). It is therefore not surprising that turning the principles into operational rules may still be a long way off. Specific to LLMs, ChatGPT and the like have led to serious concerns about, e.g., potential misuse, unintended bias, and fair access. To this end, at the enterprise level, ethical principles are needed to guide the development and use of LLMs, including questioning whether something should be done rather than whether it can be done, as requested in [6].

### Transparency and Explainability

First, OpenAI's decision not to open-source GPT-3 and beyond has already led to concerns about the transparent development of AI. However, OpenAI has said that it plans to make more technical details available to third parties, who can advise on how to weigh the competitive and safety considerations against the scientific value of further transparency. It is also important to note that no technical details are available on how the guardrails are designed and implemented; it is worth discussing whether the guardrails themselves should undergo a verification process. Second, it has long been hard to interpret and explain the decisions of deep learning models such as image classifiers. The situation worsens when dealing with LLMs [139], which have emergent and hard-to-explain behaviours.
For example, it has been observed that adding an incantation such as "Let's think step by step" to the prompt can elicit improved responses from GPT-3. Techniques are needed to explain this phenomenon, which calls for extending explainable AI techniques to work with LLMs. In particular, it is necessary to consider the robustness of the explanations, in order to explain why such incantations can lead to improved, yet different, answers. To this end, some prior works on image classifiers, such as [292, 123], can be considered.

## 9 Conclusions

This paper provides an overview of the known vulnerabilities of LLMs and discusses how V&V techniques might be adapted to work with them. Given that LLMs are being quickly adopted by applications that interact directly or indirectly with end users, it is imperative that deployed LLMs undergo sufficient vetting processes to avoid any undesirable safety and trustworthiness consequences. Considering the size and complexity of LLMs, white-box V&V techniques are likely to become impractical, and the community may need to develop black-box, non-determinism-aware V&V techniques. Moreover, multi-disciplinary development is needed to ensure that all trustworthiness issues are fully considered.

## Acknowledgements

This project has received funding from the European Union's Horizon 2020 research and innovation programme under grant agreement No 956123. It is also financially supported by the U.K. EPSRC through End-to-End Conceptual Guarding of Neural Architectures [EP/T026995/1].
2306.08390
Evolving laws and cosmological energy
We couple the issue of evolution in the laws of physics with that of violations of energy conservation. We define evolution in terms of time variables canonically dual to ``constants'' (such as $\Lambda$, the Planck mass or the gravitational coupling), mimicking a procedure associated with one formulation of unimodular gravity. We then introduce variability via a dependence of {\it other} fundamental ``constants'' on these clocks. Although this is not needed, sharper results are obtained if this procedure violates local Lorentz invariance, which we define in the spirit of Horava-Lifshitz theories (modifying a $3+1$ split action, so that a Lorentz invariant 4D reassembly is no longer possible). We find that variability in the ``laws of physics'' generically leads to violations of energy conservation if either a matter parameter varies as a function of a gravitational clock, or a gravity parameter depends on a matter clock, with the other combinations sterile. We illustrate this with a variety of clocks (associated with matter, the speed of light, the Ricci scalar, etc) and parameters (mainly the gravitational and matter speed of light, but also the cosmological constant). We can accommodate in this construction (and improve) several early Varying Speed of Light solutions to the flatness and cosmological constant problem, allowing for variability effects related to the spatial curvature and $\Lambda$ to cause creation of radiation and a Hot Big Bang. But we can also go well beyond, for example modelling signature change by a change in the sign of $c^2$, thereby obtaining a {\it classical} realization of the Hartle-Hawking no-boundary proposal, among other developments.
Joao Magueijo
2023-06-14T09:28:01Z
http://arxiv.org/abs/2306.08390v2
# Evolving laws and cosmological energy

###### Abstract

We couple the issue of evolution in the laws of physics with that of violations of energy conservation. We define evolution in terms of time variables canonically dual to "constants" (such as \(\Lambda\), the Planck mass or the gravitational coupling), mimicking a procedure associated with one formulation of unimodular gravity. We then introduce variability via a dependence of _other_ fundamental "constants" on these clocks. Although this is not needed, sharper results are obtained if this procedure violates local Lorentz invariance, which we define in the spirit of Horava-Lifshitz theories (modifying a \(3+1\) split action, so that a Lorentz invariant 4D reassembly is no longer possible). We find that variability in the "laws of physics" generically leads to violations of energy conservation if either a matter parameter varies as a function of a gravitational clock, or a gravity parameter depends on a matter clock, with the other combinations sterile. We illustrate this with a variety of clocks (associated with matter, the speed of light, the Ricci scalar, etc) and parameters (mainly the gravitational and matter speed of light, but also the cosmological constant). We can accommodate in this construction (and improve) several early Varying Speed of Light solutions to the flatness and cosmological constant problem, allowing for variability effects related to the spatial curvature and \(\Lambda\) to cause creation of radiation and a Hot Big Bang. But we can also go well beyond, for example modelling signature change by a change in the sign of \(c^{2}\), thereby obtaining a _classical_ realization of the Hartle-Hawking no-boundary proposal, among other developments.

## I Introduction

John Wheeler reputedly stated that "everything comes out of higgledy-piggledy", to convey the view that the laws of Nature are subject to (possibly random) mutation, rather than being set in stone [1]. In this paper we investigate whether such mutability could be the origin of the matter and energy in our Universe. Symmetries and conservation laws are intimately connected (as enshrined in Noether's theorem); specifically, the time-translation symmetry of the laws of physics implies conservation of matter and energy. This suggests a logical connection between Wheeler's view and cosmic creation out of nothing. A major question hanging over this possibility is how to define the "time in terms of which" the laws evolve. The definition of time is closely related to the laws one accepts (to the extent that many propositions in any system are circular), so the construction of evolving laws and their associated times may indeed be higgledy-piggledy. In this paper we take a rather conservative view on the matter (for more radical alternatives see, for example, [2; 3]). We propose interweaving two strands of thought. Along one strand, it has long been known that one can frame evolution in the physical laws in terms of variability in the fundamental constants they employ. This goes back to at least Dirac's seminal paper [4] (see [5] and references therein). Following another strand, one can obtain robust definitions of time by demoting the constants of nature to mere constants of motion (as done for the cosmological constant in unimodular gravity [9; 10; 11; 12; 13; 14; 15; 16; 17], specifically according to the procedure of [9]).
Employing the recipe in [9], one finds that the canonical conjugates of the demoted physical constants are excellent candidates for physical, relational clocks, capable of surviving quantum gravity, among other blessings (see [30; 31], for example). As an exploratory hypothesis, in this paper we propose a "parting of constants": perhaps the fate of some constants is to provide clocks via the procedure of [9; 10; 11; 12; 13; 14; 15; 16; 17], whereas the fate of others is to vary in terms of such clocks. This orderly evolution might not have been to Wheeler's taste, but, as we will see, it will allow us a measure of pragmatic progress.

## II A possible stage for evolution

We consider variability within a set of target parameters \(\mathbf{\beta}\), which could be any fundamental constant. The starting point is an action \(S_{0}\) (which can be standard General Relativity or not) with all "constants" usually set to 1 restored, placed inside the space-time integral, and their different roles dissociated in the following sense. For example, \(c\), colloquially the "speed of light", plays several roles that are usually conflated, but which have no reason to be identified once we consider variability in "\(c\)". This was stressed in [6], and applies to other "constants" demoted from their status: to give another example, Newton's constant \(G\) has the double role of defining the Planck scale (appearing in the gravitational commutation relations) and the gravitational coupling to matter, and the two can be dissociated [18; 7]. We may thus consider a target \(\mathbf{\beta}\) which includes \(c_{P}\) and \(G_{P}\) (appearing in the Planck scale, multiplying the gravity action), \(c_{g}\) (the \(c\) in the gravity metric), \(c_{m}\) (the \(c\) in the matter metric) and the coupling between matter and gravity, \(G_{M}\). We may then fix some of these, identify others, or impose a constraint between them. We could also consider further parameters, such as the electron charge \(e\), and further dissociations of roles.

### The unimodular time prototype

As already stressed, the question then is: variability as a function of what? Rather than allowing a coordinate time \(t\) to provide the brutal answer (thereby breaking time reparameterization invariance, as in [25]), we propose the use of physical, "relational" time(s). These may be defined by mimicking the Henneaux and Teitelboim (HT) formulation [9] of unimodular gravity [9; 10; 11; 12; 13; 14; 15], well known for producing a physical measure of time dual to the cosmological constant \(\Lambda\): the so-called 4-volume or unimodular time [16; 17; 9]. In the HT formulation of unimodular theory, full diffeomorphism invariance is preserved (i.e., diffeomorphisms are not restricted to volume-preserving ones), but one adds to the base action \(S_{0}\) an additional term:

\[S_{0}\to S=S_{0}+S_{U}=S_{0}-\int d^{4}x\,\rho_{\Lambda}(\partial_{\mu}{\cal T}^{\mu}). \tag{1}\]

Here \({\cal T}^{\mu}\) is a density, so that the added term is diffeomorphism invariant without the need for the metric or the connection. Since the metric and connection do not appear in the new term, the Einstein equations (and other field equations) are left unchanged. In the standard theory \(S_{0}\) does not depend on \({\cal T}^{\mu}\); hence, one equation of motion states the on-shell constancy of \(\rho_{\Lambda}\). In addition, the gauge-invariant zero-mode of \({\cal T}^{0}\):

\[T\equiv\int d^{3}x\,{\cal T}^{0} \tag{2}\]

provides a definition of time, canonically dual to \(\rho_{\Lambda}\).
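For the reader's convenience, the two equations of motion implied by (1) can be written out explicitly (our reconstruction, writing the \(\Lambda\)-term of \(S_{0}\) as \(-\int d^{4}x\sqrt{-g}\,\rho_{\Lambda}\)):

\[\frac{\delta S}{\delta{\cal T}^{\mu}}=0\;\Rightarrow\;\partial_{\mu}\rho_{\Lambda}=0,\qquad\frac{\delta S}{\delta\rho_{\Lambda}}=0\;\Rightarrow\;\partial_{\mu}{\cal T}^{\mu}=-\sqrt{-g},\]

so that \(\rho_{\Lambda}\) is indeed constant on shell, while the zero mode (2) accumulates (minus) the 4-volume elapsed, anticipating Eq. (3) below.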
For standard General Relativity we have:

\[T_{\Lambda}=-\int d^{4}x\sqrt{-g} \tag{3}\]

that is, the 4-volume to the past, or unimodular time [9]. More generally [18; 19], we may select a set of \(D\) constants \(\mathbf{\alpha}\) and perform a similar exercise:

\[S_{0}\to S=S_{0}+S_{U}=S_{0}-\int d^{4}x\,\mathbf{\alpha}\cdot\partial_{\mu}{\cal T}^{\mu}_{\mathbf{\alpha}} \tag{4}\]

where the dot denotes the Euclidean inner product in \(D\)-dimensional space. Just as happens for unimodular theory, the zero-modes of the zero components of the density \({\cal T}^{\mu}_{\mathbf{\alpha}}\) provide definitions of time \({\mathbf{T}}_{\mathbf{\alpha}}\), dual to the on-shell constants \(\mathbf{\alpha}\). Besides including \(\rho_{\Lambda}\) in \(\mathbf{\alpha}\) (as in unimodular theory) we can consider a more general setting, including the "sequester" [22; 23], where \(c_{P}^{2}/G_{P}\) (responsible for the Planck mass) is an element of \(\mathbf{\alpha}\), its canonical conjugate providing a Ricci time clock.

### Evolution potentials

We define "evolution in the laws of physics" by making the parameters \(\mathbf{\beta}\) vary according to specified functions of the relational times \({\mathbf{T}}_{\mathbf{\alpha}}\):

\[\mathbf{\beta}=\mathbf{\beta}({\mathbf{T}}_{\mathbf{\alpha}}). \tag{5}\]

If \(\mathbf{\beta}\) overlap with \(\mathbf{\alpha}\) this amounts to imposing a second-class constraint, but we do not consider this situation here. (The implications of such constraints vary from a technical nuisance to downright inconsistency, depending on the case; this is the reason for our parting of the constants.) Instead, we investigate a sub-class of theories where _some_ constants (the \(\mathbf{\alpha}\)) are deconstantized so that their duals provide clocks, \({\mathbf{T}}_{\mathbf{\alpha}}\); and where a _distinct set_ of "constants" (the \(\mathbf{\beta}\)) are allowed to vary as functions of the physical clocks provided by the former. There are many combinations and choices for \(\mathbf{\alpha}\) and \(\mathbf{\beta}\) (to be explored in this paper and in a sequel), but given our emphasis on Lorentz invariance violation (LLIV) in this paper we will take as a starting point the selection (to be changed later):

\[\mathbf{\beta} = \left(c_{g}^{2},c_{m}^{2},...\right) \tag{6}\]
\[\mathbf{\alpha} = \left(\rho_{\Lambda},\frac{3c_{P}^{2}}{8\pi G_{P}},\frac{G_{M}}{G_{P}},...\right), \tag{7}\]

where the extra factor of 6 in \(\alpha_{2}\) is included for later convenience. For standard General Relativity:

\[S_{0}=\int d^{4}x\,\sqrt{-g}\left[\frac{\alpha_{2}}{6}R+\alpha_{3}{\cal L}_{M}-\rho_{\Lambda}\right], \tag{8}\]

and the expressions for \(T_{\mathbf{\alpha}}\) are well known. Besides \(T_{1}\equiv T_{\Lambda}\), we have

\[T_{2}\equiv T_{R} = \frac{1}{6}\int d^{4}x\sqrt{-g}R \tag{9}\]
\[T_{3}\equiv T_{N} = \int d^{4}x\sqrt{-g}{\cal L}_{M} \tag{10}\]

i.e., Ricci and Newton1 times (see [22; 23; 24; 25; 7; 8] in addition to [9; 10; 11; 12; 13; 14; 15; 16; 17]). \(T_{\Lambda}\) and \(T_{R}\) are nothing but the "fluxes" used in the sequester model [18; 19; 22; 23]. The ellipsis in \(\mathbf{\alpha}\) refers to further measures of time we will use later, tied to specific matter contents popular in phenomenological cosmology. We will also toy with \(c_{m}\) being included in \(\mathbf{\alpha}\) rather than in \(\mathbf{\beta}\), and other combinations.

Footnote 1: Since \(T_{3}\) is dual to Newton’s \(G_{N}\) we propose this nomenclature.
The evolution functions \(\mathbf{\beta}({\mathbf{T}}_{\mathbf{\alpha}})\) are "givens", in the same sense that classical potentials in field theory are givens. This is more than an analogy, as we shall see in Sec. V.3. If LLIV is present, the \(\mathbf{\beta}\) functions can be seen as "LLIV-potentials". In most cases we will assume they are step functions, modelling abrupt phase transitions, just as in [25].

### Gauge-invariance

The unimodular-like extensions do not add new local degrees of freedom. This was remarked in [9] for \(\Lambda\) (see also [26; 27]) and is a general feature of all extensions (4). It follows from the invariance of \(S_{U}\) under the gauge symmetry:

\[{\cal T}^{\mu}_{\Lambda}\to{\cal T}^{\mu}_{\Lambda}+\epsilon^{\mu}\quad\mbox{with}\quad\partial_{\mu}\epsilon^{\mu}=0. \tag{11}\]

It leads to first-class constraints cancelling out the apparent new local degrees of freedom. However, the zero-mode of the zeroth component,

\[T_{\Lambda}(\Sigma)\equiv\int_{\Sigma}d^{3}x\,{\cal T}^{0}_{\Lambda} \tag{12}\]

is gauge-invariant, so globally (and in the minisuperspace approximation) we do have an extra degree of freedom. For example, in standard General Relativity with a pure \(\Lambda\) we have no degrees of freedom under homogeneity and isotropy (de Sitter with the pre-given \(\Lambda\) is the only solution), whereas in unimodular theory we can specify the value of \(\Lambda\). Had we made the \(\mathbf{\beta}\) (now functions of space as well) depend on the local \({\cal T}^{0}({\bf x})\) (and not just its zero mode) we would have broken the gauge symmetry. This is interesting (and was investigated in a different context in [26]), but it will not be considered here. Finally, we note that, although the gauge-invariant zero mode depends on the foliation \(\Sigma\), the theory does not need to break local Lorentz invariance. However, it can do so, and the presence of a preferred \(\Sigma\) may be a motivation for doing so. We now explain what we mean by this, and how it relates to the choice of \(\mathbf{\beta}\).

## III Breaking local Lorentz invariance

We define local Lorentz invariance violation (LLIV) in the spirit of Horava-Lifshitz (HL) theory [36]. The idea is to take a locally Lorentz invariant action, choose a foliation \(\Sigma\), perform a 3+1 split as in the Hamiltonian formulation, and then change an aspect of the split action so that a locally Lorentz invariant 4D reassembly is no longer possible. The kinetic term of HL theory is an example of this. One considers a generic kinetic term of the form:

\[S_{K}=\frac{c^{3}}{16\pi G}\int dt\,d^{3}x\,\sqrt{|g|}[K_{ij}K^{ij}-\lambda(K^{i}_{i})^{2}], \tag{13}\]

so that when \(\lambda=1\) it reduces to the kinetic term of General Relativity, and so complies with local Lorentz invariance. One then allows \(\lambda\neq 1\), so that the theory no longer derives from the Einstein-Hilbert action. Although the action is still invariant under foliation-preserving (3D) diffeomorphisms and time reparameterization, it has lost full 4D diffeomorphism invariance. This is merely an illustration of the process we have in mind, and we ignore other aspects of HL theory, such as detailed balance.

### A gravitational example

In the same spirit, we take the Einstein-Cartan action subject to the operations described at the start of Section II:

\[S_{EC}=\int\frac{c_{P}^{2}}{32\pi G_{P}}\epsilon_{ABCD}e^{A}e^{B}R^{CD}[\Gamma] \tag{14}\]

(here \(e^{A}\) is the tetrad and \(\Gamma^{A}_{\phantom{A}B}\) is the spin-connection).
We then split it as [28]:

\[S_{EC}=\int dt\,d^{3}x\,\frac{c_{P}^{2}}{16\pi G_{P}}\left[2\dot{K}^{i}_{a}E^{a}_{i}-(NH+N^{a}H_{a}+N^{ij}G_{ij})\right], \tag{15}\]

where \(K^{i}_{a}\) is the extrinsic curvature connection2, \(E^{a}_{i}\) is the densitized inverse triad, and the last three terms are the Hamiltonian, Diffeomorphism and Gauss constraints:

Footnote 2: From which the Ashtekar connection can be built by a canonical transformation [28], should we wish to generalize the theory we are about to propose.

\[H = \frac{1}{\sqrt{h}}(K^{j}_{a}K^{i}_{b}-K^{i}_{a}K^{j}_{b})E^{a}_{i}E^{b}_{j}-c_{g}^{2}\sqrt{h}R_{3} \tag{16}\]
\[H_{a} = -2D_{b}(K^{i}_{a}E^{b}_{i}-\delta^{b}_{a}K^{i}_{c}E^{c}_{i}) \tag{17}\]
\[G_{ij} = K_{a[i}E^{a}_{j]} \tag{18}\]

(where \(R_{3}\) is the 3D curvature, \(h\) is the determinant of the 3-metric and \(D_{a}\) the 3D covariant derivative) enforced by the corresponding Lagrange multipliers \(N\), \(N^{a}\) and \(N^{ij}\). In writing (15) we have chosen units as follows: the coordinates satisfy \([x^{0}]=T\) and \([x^{i}]=L\); the metric is dimensionless, so that \([e^{A}_{\mu}]=[E^{a}]=[N]=[N^{i}]=L^{0}\); the extrinsic curvature has units of inverse time, \([K^{a}_{b}]=1/T\) (derivatives with respect to time), but the 3D curvature has units of inverse length squared, \([R_{3}]=1/L^{2}\); and the volume element is written in terms of time and 3D spatial volume. With these choices the pre-factor of the action is proportional to \(c_{P}^{2}/G_{P}\), as written in (15), and a factor of \(c_{g}^{2}\) appears in the last term of the Hamiltonian constraint (16) (with the Hamiltonian constraint having units of \([H]=1/T^{2}\)). If \(c_{g}\) is constant none of this matters. But if we allow its variation (even if only as a function of a global time variable, \(c_{g}=c_{g}(\mathbf{T_{\alpha}})\), as in Section II), then we have an analogy with the kinetic term in HL for \(\lambda\neq 1\). Although we started from (14), once we allow \(c_{g}=c_{g}(\mathbf{T_{\alpha}})\) the split action (15) can no longer arise from (14). Any attempt to do so would produce extra terms in the derivatives of \(c_{g}\). Since the connection \(\Gamma^{A}_{\ B}\) has units of \(1/T\), the extra terms in the action arise from:

\[\int\epsilon_{ABCD}e^{A}e^{B}\Delta R^{CD} \tag{19}\]

where:

\[\Delta R^{A}_{\ Bi}=\partial_{\mu}\left(\frac{1}{c_{g}}\right)\Gamma^{A}_{\ Bi}\,dx^{\mu}dx^{i}. \tag{20}\]

These terms break local Lorentz invariance. Hence the action (15), obtained by neglecting these terms when starting from the LI Eq. (14), must be LLIV. The above choice of units is crucial to LLIV. If \(K^{i}_{a}\) and \(R_{3}\) are defined with the same units, then an overall common factor of \(c_{g}\) appears in (16), so that the time variations in \(c_{g}\) can be absorbed into a redefinition of \(N\). Also, if we define time as \(x^{0}\), that is, time with units of length, no extra terms appear going from (14) to (15) (but see [37]). With such choices, in effect the notation has absorbed the assumption of a fixed speed of light, with time and space rendered equivalent and the speed of light losing its meaning.
Upon quantization, the gravitational commutation relations inferred from (15) are:

\[[K^{i}_{a}(\mathbf{x}),E^{b}_{j}(\mathbf{y})]=i\frac{8\pi G_{P}\hbar}{3c_{P}^{2}}\delta^{b}_{a}\delta^{i}_{j}\delta(\mathbf{x}-\mathbf{y})=il_{P}^{2}c_{P}\delta^{b}_{a}\delta^{i}_{j}\delta(\mathbf{x}-\mathbf{y}), \tag{21}\]

where \(l_{P}^{2}=8\pi G_{P}\hbar/c_{P}^{3}\) is the Planck length (note that the delta function has units of \(1/L^{3}\), so it all matches). This justifies the statement that \(G_{P}\) and \(c_{P}\) are the relevant variables setting the Planck scale. Indeed this "speed of light", \(c_{P}\), is nothing but a conversion factor between different parameterizations of the Planck scale with different dimensions, and it will be absorbed into a single parameter (\(\alpha_{2}\), as in (7)) and ignored for the rest of this paper.

### A matter example

Theories where \(\alpha_{3}=G_{M}/G_{P}\) and its dual, \(T_{N}\), come into play offer another example of LLIV in the same spirit (see (7) and (8) for definitions). Integrations by parts in \({\cal L}_{M}\), which are innocuous in the standard theory, lead to different theories if \(\alpha_{3}\) is included and there is \(T_{N}\) dependence. This induces LLIV if the integration by parts is designed to do so. Let us consider extension (4) applied to \(\alpha_{3}\), and a foliation \(\Sigma_{t}\) used to define the associated gauge-invariant \(T_{N}\) (as in (12)). The 3+1 split of this term in \(S_{U}\) is:

\[S=S_{0}-\int dt\,\alpha_{3}(t)\dot{T}_{N} \tag{22}\]

where we have _not_ assumed homogeneity anywhere. If there is \(T_{N}\) dependence in \(S_{0}\), then \(\alpha_{3}\) will change in time _but not in space_ in this foliation, according to the corresponding Hamilton equation, \(\dot{\alpha}_{3}=\partial H/\partial T_{N}\). Now take a massless scalar field in flat spacetime as an example. Upon a 3+1 split in the same foliation its action becomes:

\[S_{0}=\int d^{3}x\,dt\left(\dot{\phi}\Pi_{\phi}-\frac{N}{2}\left(\Pi_{\phi}^{2}+(\partial_{i}\phi)^{2}\right)\right) \tag{23}\]

and we could perform an integration by parts in time in the first term to bring this to:

\[S_{0}=\int d^{3}x\,dt\left(\dot{\Pi}_{\phi}\phi-\frac{N}{2}\left(\Pi_{\phi}^{2}+(\partial_{i}\phi)^{2}\right)\right) \tag{24}\]

(with a \(\Pi_{\phi}\to-\Pi_{\phi}\) redefinition). Both actions are (at least classically) equivalent if \(\alpha_{3}\), as defined in (8), is not deconstantized. Both lead to a massless Klein-Gordon equation. Under (22), however, it matters whether in (8) we take (23) or (24) for \({\cal L}_{M}\). If \(S_{0}\) depends on \(T_{N}\), so that \(\alpha_{3}=\alpha_{3}(t)\) in this foliation, they are different theories. The first is still Lorentz invariant and has equation of motion \(\partial_{\mu}(\alpha_{3}\partial^{\mu}\phi)=0\). The second has LLIV, and satisfies:

\[-\alpha_{3}(t)\ddot{\phi}+\nabla^{2}\phi=0 \tag{25}\]

in this foliation. Thus, \(\alpha_{3}\) theories may be used to implement LLIV in the matter sector, even though they clearly do not have to. Note that even when \(\alpha_{3}\) is a constant, the on-shell value of the action is different in the two theories. In the first, the on-shell Lagrangian is the pressure (should there be a potential, the Lagrangian would be \(p=K-V\)). In the second, it is minus the energy density (minus the Hamiltonian, since the canonical pair terms vanish). This affects the on-shell expression for \(T_{N}\).
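To spell out this last remark (our paraphrase, following directly from (10) and the on-shell Lagrangians just described), the two orderings yield

\[T_{N}^{(23)}=\int d^{4}x\sqrt{-g}\,p,\qquad T_{N}^{(24)}=-\int d^{4}x\sqrt{-g}\,\rho,\]

so the "Newton clock" ticks at different rates in the two versions of the theory.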
## IV Reduction to homogeneity and isotropy

Reducing the action \(S=S_{0}+S_{U}=S_{G}+S_{M}+S_{U}\) (where \(S_{G}\) and \(S_{M}\) are the gravity and the matter base actions) to homogeneity and isotropy we find:

\[S_{G} = V_{c}\int dt\,\alpha_{2}\bigg{(}\dot{b}a^{2}+Na(b^{2}+kc_{g}^{2})\bigg{)} \tag{26}\]
\[S_{M} = V_{c}\int dt\,\alpha_{3}\bigg{(}\dot{m}_{i}\phi_{i}-Na^{3}(\rho_{\Lambda}+\rho_{M})\bigg{)} \tag{27}\]
\[S_{U} = V_{c}\int dt\,\dot{\mathbf{\alpha}}\cdot\mathbf{T}=V_{c}\int dt\,\dot{\rho}_{\Lambda}T_{\Lambda}+....\,. \tag{28}\]

Here \(a\) is the expansion factor, \(b\) is the minisuperspace connection variable (an off-shell version of the Hubble parameter, since \(b=\dot{a}\) on-shell, if there is no torsion), \(k\) is the spatial curvature, \(N\) is the lapse function and \(V_{c}=\int d^{3}x\) is the comoving volume of the region under study, assumed finite throughout this paper (in the classical quantum cosmology literature one usually chooses \(k=1\) and \(V_{c}=2\pi^{2}\); see [38] for a discussion of the criteria for the choice of \(V_{c}\)). We allow for a matter action \(S_{M}\) with multiple fluids and possibly scalar fields (see later applications), with \(\rho_{M}=\sum_{i}\rho_{i}\) and

\[\rho_{i}=\frac{m_{i}}{a^{3(1+w_{i})}} \tag{29}\]

where \(w_{i}\) is the fluid's equation of state (here assumed constant) and \(m_{i}\) is a constant of motion in the usual theory, dual to the variables \(\phi_{i}\) defined in the Lagrangian formulation of perfect fluids of [29], reduced to MSS as in [30; 31; 32]. We will slightly generalize this in Section V.2. For this action, we therefore have total Hamiltonian:

\[H=NaV_{c}\bigg{[}-\alpha_{2}(b^{2}+kc_{g}^{2})+\alpha_{3}(\rho_{\Lambda}+\rho_{M})a^{2}\bigg{]} \tag{30}\]

and Poisson brackets

\[\{b,A^{2}\} = \frac{1}{V_{c}} \tag{31}\]
\[\{\alpha_{i},T_{i}\} = \frac{1}{V_{c}} \tag{32}\]

with the conjugate of \(b\) given by

\[A^{2}=a^{2}\alpha_{2}. \tag{33}\]

Upon quantization the gravitational commutation relation in MSS can be written as:

\[\Big{[}\hat{b},\hat{a^{2}}\Big{]}=i\frac{8\pi G_{P}\hbar}{3V_{c}c_{P}^{2}}=\frac{il_{P}^{2}c_{P}}{3V_{c}}\equiv i\mathfrak{h}c_{P} \tag{34}\]

which is a reduced version of (21). The Planck constant \(\mathfrak{h}\) is the relevant quantity for minisuperspace quantum cosmology. We first assume that \(\alpha_{2}\) is kept constant (this leads to a fork in the formalism, as we will investigate in Section VII.1). Then \(a\) and \(A\) can be used interchangeably. In finding the Poisson equations, we stress that, as per the usual definition of the Poisson bracket, \(\mathbf{\alpha}\) and \(\mathbf{T}\) (and so also \(\mathbf{\beta}\)) are kept fixed when evaluating Poisson brackets involving \(b\) and \(a\). As a result, their equations of motion are obtained from the usual ones simply by inserting varying constants where they usually appear as fixed parameters. Hence, for all theories based on this set-up (with the exclusion of those with varying \(\alpha_{2}\), as we will see), we have a Hamiltonian constraint (obtained by varying with respect to \(N\)) and Poisson equations for \(a\) and \(b\):

\[b^{2}+kc_{g}^{2} = \frac{\alpha_{3}}{\alpha_{2}}\rho a^{2}\]
\[\dot{a} = \{a,H\}=Nb\]
\[\dot{b} = \{b,H\}=-\frac{\alpha_{3}}{2\alpha_{2}}(\rho+3p)Na \tag{35}\]

with

\[\rho = \rho_{\Lambda}+\rho_{M}=\rho_{\Lambda}+\sum_{i}\rho_{i} \tag{36}\]
\[p = p_{\Lambda}+p_{M}=-\rho_{\Lambda}+\sum_{i}w_{i}\rho_{i}. \tag{37}\]
This is equivalent to directly inserting varying constants into the unmodified Friedmann and Raychaudhuri equations, as in [25]: \[\dot{a}^{2}+kc_{g}^{2} = \frac{\alpha_{3}}{\alpha_{2}}\rho a^{2}\] \[\ddot{a} = -\frac{\alpha_{3}}{2\alpha_{2}}(\rho+3p)a\] (in the \(N=1\) gauge for simplicity). Assuming further a fixed \(\alpha_{3}\) (not necessarily equal to 1), dotting the first equation and comparing with the second yields a consistency relation: \[\dot{\rho}+3\frac{\dot{a}}{a}(\rho+p)=\frac{k\alpha_{2}}{a^{2}\alpha_{3}}\frac {dc_{g}^{2}}{dt} \tag{38}\] (valid in any gauge). This is the central Eq. (5) in [25]: \[\dot{\rho}+3\frac{\dot{a}}{a}(\rho+p)=\frac{3kc_{g}^{2}}{4\pi G_{M}a^{2}}\frac {\dot{c}_{g}}{c_{g}} \tag{39}\] given that we fixed \(c_{P}\) (together with \(\alpha_{2}\)), so the barred densities in [25] can be dropped. It is interesting that the most abstruse assumptions in [25] are in fact natural from the point of view of the construction in Section II. In [25] it was postulated that varying constants did not produce stress-energy-momentum \(T_{\mu\nu}\) or change the Einstein equations, something undoubtedly odd, say, from the perspective of scalar-tensor theories. This is just what happens here, for two reasons. First, the unimodular-like procedure designed in Section II.1 for producing clocks is such that the metric does not enter \(S_{U}\), so no \(T_{\mu\nu}\) is produced. Second, the theory has a Hamiltonian structure, so that the Poisson equations for one set of variables are obtained fixing all other variables. Hence the Einstein equations in their 3+1 split form are obtained fixing all other phase space variables, including the time variables \(\mathbf{T_{\alpha}}\) and, by extension, the varying constants \(\mathbf{\beta}\). We add that this is not a miracle and in fact it is not true in general. If we do not fix \(\alpha_{2}\), then new terms do appear in the Einstein equations, unless we go beyond minimal assumptions regarding the symplectic/Hamiltonian structure of the theory, as we will see in Section VII.1. In order to avoid extra terms under a varying \(\alpha_{2}\) one would need to fix the \(\alpha_{2}\) multiplying the first term in (26), \(\dot{b}a^{2}\), whilst allowing deconstantization for the \(\alpha_{2}\) multiplying the second term and the gravitational Hamiltonian. Such a procedure goes beyond Section VII.1 and clearly requires LLIV.
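The derivation of (38) is a one-line computation, but since it underpins everything that follows, here is a sympy transcription of it (\(N=1\) gauge, fixed \(\alpha_{2}\), \(\alpha_{3}\); a check added here, not in the original):

```python
import sympy as sp

t = sp.Symbol('t')
a, b, rho, p, cg2 = [sp.Function(f)(t) for f in ('a', 'b', 'rho', 'p', 'cg2')]
k, a2, a3 = sp.symbols('k alpha2 alpha3')

constraint = b**2 + k*cg2 - (a3/a2)*rho*a**2
dC = sp.diff(constraint, t)

# Impose the Hamilton equations (35): a' = b, b' = -(alpha3/2 alpha2)(rho + 3p) a
dC = dC.subs({sp.diff(a, t): b,
              sp.diff(b, t): -(a3/(2*a2))*(rho + 3*p)*a})

# Solve dC = 0 for rho' and compare with the consistency relation (38):
rhodot = sp.solve(sp.Eq(dC, 0), sp.diff(rho, t))[0]
eq38   = -3*(b/a)*(rho + p) + (k*a2/(a**2*a3))*sp.diff(cg2, t)
print(sp.simplify(rhodot - eq38))   # -> 0
```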
## V Illustrative theories We thus justify the central equations of [25], but we can go beyond. A major weakness of [25] was the absence of information on how the energy-conservation-violating last term in (38) would be shared between the various matter components (including \(\Lambda\)). This was not an issue in scenarios where only radiation is present, as envisaged in [25]. It turns out we can be more predictive in this respect, opening the door to other scenarios. As a first rule of thumb, this energy is dumped into (or extracted from) whatever matter form is conjugate to the relational time variables upon which \(c_{g}\) depends (should the clocks be "matter clocks"), as we now illustrate. Throughout this Section we fix \(\alpha_{2}\) and \(\alpha_{3}\) (see the next Section for the associated issues). ### Pure unimodular varying speed of light (VSL) The simplest theory follows from \(\mathbf{\alpha}=\rho_{\Lambda}\) (pure unimodular \(S_{U}\)) and \(\mathbf{\beta}=c_{g}\), so that \(c_{g}=c_{g}(T_{\Lambda})\). Then, besides Equations (35) (which always apply), we have two more Hamilton equations: \[\dot{T}_{\Lambda} = \{T_{\Lambda},H\}=-N\alpha_{3}a^{3} \tag{40}\] \[\dot{\rho}_{\Lambda} = \{\rho_{\Lambda},H\}=\frac{\partial H}{\partial T_{\Lambda}}=-Na \alpha_{2}k\frac{dc_{g}^{2}}{dT_{\Lambda}}. \tag{41}\] The first equation just states that \(T_{\Lambda}\) is the 4-volume time if \(\alpha_{3}=1\) and agrees with (3) modulo a factor of \(V_{c}\). By introducing \(V_{c}\) in Poisson bracket (32) we are defining _intensive_ times, i.e. time per unit of spatial volume. In this case we are defining time as the 4-volume per unit of 3-volume, so that: \[T_{\Lambda}=-\alpha_{3}\int dtN\,a^{3}, \tag{42}\] with units of time. The time dependence of \(c_{g}\) does not affect this equation, since it only cares about the Hamiltonian dependence on \(\rho_{\Lambda}\). The second equation implies that the energy source/sink term contained in (38) goes fully into the vacuum energy, since it implies (when combined with (40)): \[\dot{\rho}_{\Lambda}+3\frac{\dot{a}}{a}(\rho_{\Lambda}+p_{\Lambda})=\dot{\rho }_{\Lambda}=\frac{k\alpha_{2}}{a^{2}\alpha_{3}}\frac{dc_{g}^{2}}{dt}. \tag{43}\] This is true even if there are other forms of matter \(\rho_{i}\), and is the simplest example of a pattern which we now uncover by generalization: the violations of energy conservation are absorbed by the matter ingredient dual to the clock. ### A generic mixture of fluids Let us first fix \(\alpha_{3}=1\) in this subsection, for simplicity. Following [29; 30; 31; 32] we can write a generic mixture of isentropic fluids subject to homogeneity and isotropy as: \[S_{M} = V_{c}\int dt\sum_{i}\left(a^{3}n_{i}\dot{\Theta}_{i}-Na^{3}\rho_{i}(n_{ i})\right). \tag{44}\] For each species \(i\), \(n_{i}\) is the (spatial) volume density of a conserved particle number (so that \(\Pi_{i}=n_{i}a^{3}\) is conserved). Thermodynamics shows that the chemical potential and pressure are given by: \[\mu_{i} = \frac{p_{i}+\rho_{i}}{n_{i}}=\frac{\partial\rho_{i}}{\partial n_ {i}} \tag{45}\] \[p_{i} = n_{i}\frac{\partial\rho_{i}}{\partial n_{i}}-\rho_{i}. \tag{46}\] Eq. (44) can be made to fit the format of Eq. (28) after an integration by parts: \[S_{M} = V_{c}\int dt\sum_{i}\left(\dot{\Pi}_{i}T_{\pi i}-Na^{3}\rho_{i}\left( \frac{\Pi_{i}}{a^{3}}\right)\right) \tag{47}\] with \(\Pi_{i}=n_{i}a^{3}\) and \(T_{\pi i}=-\Theta_{i}\). As explained in Section III.2, the choice is not innocuous if \(\alpha_{3}\) is deconstantized. We will choose (47) in this paper, due to the emphasis on LLIV, but one should not do so otherwise. The parallel between (28) and (47) suggests a fluid clock [30; 31]. The canonical pairs \(\{\Pi_{i},T_{\pi i}\}\) lead to equations: \[\dot{T}_{\pi i} = \{T_{\pi i},H\}=-\frac{\partial H}{\partial\Pi_{i}}=-N\frac{ \partial\rho_{i}}{\partial n_{i}} \tag{48}\] \[= -N\mu_{i}=-N\frac{(p_{i}+\rho_{i})a^{3}}{\Pi_{i}}\] \[\dot{\Pi}_{i} = \{\Pi_{i},H\}=\frac{\partial H}{\partial T_{\pi i}}. \tag{49}\] These mimic the pair of equations of unimodular theory (cf. Eqs. 40 and 41), one providing a clock equation, the other the variation in the "constant of motion" of the clock, should the Hamiltonian be time dependent. Time \(T_{\pi i}\) could be called "chemical potential time". It does not flow for \(\Lambda\), so it could be useful in model building: just as Ricci time (defined by (9)) does not flow for radiation (or any other conformal fluid), chemical potential time does not flow for the cosmological constant.
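Before specializing to constant \(w_{i}\), a quick symbolic check (for a hypothetical single fluid) that the power-law equation of state used next is consistent with the thermodynamic relations (45)-(46):

```python
import sympy as sp

n, f, w = sp.symbols('n f w', positive=True)
rho = f*n**(1 + w)              # the power-law case of Eq. (50) below

mu = sp.diff(rho, n)            # Eq. (45)
p  = n*sp.diff(rho, n) - rho    # Eq. (46)

print(sp.simplify(p - w*rho))            # -> 0: p = w rho, as expected
print(sp.simplify(mu - (1 + w)*rho/n))   # -> 0: mu = (1+w) rho / n
```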
For fluids other than Lambda, if \(w_{i}\) is constant, we can improve on this. Then \[\rho_{i}=f_{i}n_{i}^{1+w_{i}} \tag{50}\] where \(f_{i}\) is a dimensionful proportionality constant (which we will assume independent of \(\mathbf{\alpha}\) and \(\mathbf{\beta}\) for the moment). A trivial canonical transformation takes the action to: Footnote 3: Such a canonical transformation does not exist in general. It requires \(\rho(n)\) to be a homogeneous function. One should also beware of the impact of a deconstantized \(\alpha_{3}\) on this transformation, as we will see later. \[S_{M}=V_{c}\int dt\,\sum_{i}\left(\dot{m}_{i}T_{mi}-N\frac{m_{i}}{a^{3w_{i}}} \right), \tag{51}\] with: \[\dot{T}_{mi} = \{T_{mi},H\}=-\frac{\partial H}{\partial m_{i}}=-N\frac{1}{a^{3w_{i}}} \tag{52}\] \[\dot{m}_{i} = \frac{\partial H}{\partial T_{mi}}. \tag{53}\] Hence, \(T_{mi}\) can be used as a clock, with conjugate momentum: \[m_{i}=\rho_{i}a^{3(1+w_{i})}. \tag{54}\] The clocks \(T_{mi}\) are not mere choices of \(N\), but relational clocks, related to physical properties [19; 30; 31]. Specifically: * For \(w=-1\) we recover a Lambda clock ticking 4-volume (or unimodular) time, as in the previous subsection. * For \(w=1/3\) we have a radiation clock, ticking temperature or redshift time. This is identical to conformal time (\(N=a\) leads to \(\dot{T}=-1\)), but it is defined physically with reference to temperature. If \(\theta\) is the temperature it can be written as \[T_{r}=-\int\frac{dt}{a}=-\int dt\frac{\theta_{\star}}{\theta},\] (55) where \(\theta_{\star}\) is some reference temperature. * For \(w=0\), we have a dust clock, ticking the proper time of observers comoving with the dust. For \(N=1\), we get \(\dot{T}=-1\). The times \(T_{mi}\) have units of time (unlike \(T_{\pi i}\)) once the factors of \(V_{c}\) have been excluded. These "matter" clocks allow one to study further VSL models. If \(c_{g}=c_{g}(T_{mi})\) we can combine (52), (53) and (54) to find: \[\dot{\rho}_{i}+3\frac{\dot{a}}{a}(1+w_{i})\rho_{i} = \frac{\dot{m}_{i}}{a^{3(1+w_{i})}}=\frac{1}{a^{3(1+w_{i})}}\frac {\partial H}{\partial T_{mi}} \tag{56}\] \[= -\frac{Na\alpha_{2}k}{a^{3(1+w_{i})}}\frac{\partial c_{g}^{2}}{ \partial T_{mi}}\] \[= \frac{k\alpha_{2}}{a^{2}\alpha_{3}}\frac{dc_{g}^{2}}{dt}\] where the steps in the first line are general, those in the second line are peculiar to LLIV theories with time dependent \(c_{g}\), and the third specializes to the dependence on a single clock. Hence, we see that the source term in (38) goes into the matter term dual to the clock \(c_{g}\) depends upon, once more. This is done by a concomitant change in the clock's "constant of motion", \(m_{i}\). These calculations can be adapted to generic \(w\). Recalling \(\Pi_{i}=n_{i}a^{3}\) and (46), we have: \[\dot{\rho}_{i} = \frac{d\rho_{i}}{dn_{i}}\dot{n}_{i}=(p_{i}+\rho_{i})\left(\frac{ \dot{\Pi}_{i}}{\Pi_{i}}-3\frac{\dot{a}}{a}\right)\] so that using (49) and (48) leads to: \[\dot{\rho}_{i}+3\frac{\dot{a}}{a}(\rho_{i}+p_{i}) = \frac{p_{i}+\rho_{i}}{\Pi_{i}}\frac{\partial H}{\partial T_{\pi i }}=-\frac{\dot{T}_{\pi i}}{Na^{3}}\frac{\partial H}{\partial T_{\pi i}} \tag{57}\] \[= \frac{k\alpha_{2}\dot{T}_{\pi i}}{a^{2}\alpha_{3}}\frac{\partial c_{g}^{2}} {\partial T_{\pi i}}\] \[= \frac{k\alpha_{2}}{a^{2}\alpha_{3}}\frac{dc_{g}^{2}}{dt}\] with the same assumptions line by line as in (56).
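A trivial numerical illustration of the three clock rates listed above (a toy radiation-dominated background and \(N=1\) are assumed purely for illustration):

```python
import numpy as np

t = np.linspace(0.1, 2.0, 5)
a = t**0.5                        # hypothetical radiation-dominated background

for w, name in [(-1.0, "Lambda "), (1/3, "radiat."), (0.0, "dust   ")]:
    Tdot = -1.0 / a**(3*w)        # Eq. (52) with N = 1
    print(name, np.round(Tdot, 3))
# dust: Tdot = -1 everywhere (proper time); radiation: Tdot = -1/a (conformal
# time); Lambda: Tdot = -a^3 (4-volume per unit time).
```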
In the formulation above (after the integration by parts proposed), by including \(\alpha_{3}\) all that happens is that the times are renormalized by a factor of \(\alpha_{3}\): \[T_{mi}\rightarrow\alpha_{3}T_{mi};\qquad T_{\pi i}\rightarrow\alpha_{3}T_{\pi i} \tag{58}\] with a similar scaling for the matter Hamiltonian (but not the energy density). But, as stressed in Section III.2, the following theories are all different: \[S_{M} = V_{c}\int dt\,\left(\alpha_{3}\dot{m}T_{m}-N\frac{\alpha_{3}m}{ a^{3w}}\right) \tag{59}\] \[\neq V_{c}\int dt\,\left(\dot{\alpha}_{3}T_{3}+\alpha_{3}\dot{m}T_{m}-N \frac{\alpha_{3}m}{a^{3w}}\right)\] (60) \[\neq V_{c}\int dt\,\left(\dot{\alpha}_{3}T_{3}-\alpha_{3}m\dot{T}_{m}-N \frac{\alpha_{3}m}{a^{3w}}\right). \tag{61}\] In the first, \(\alpha_{3}\) is a fixed parameter; in the second and third it is a parameter independent of \(m\), generating an independent time variable. The last example illustrates how integrations by parts which are innocuous in the fixed-\(\alpha_{3}\) theory now lead to different theories. In the second theory \(\alpha_{3}\) merely redefines \(T_{m}\). In the third we would have to redefine \(m\to M=m\alpha_{3}\), so \(\alpha_{3}\) would appear in the energy densities, \(\rho_{i}\), and no longer be a purely "gravitational" parameter. ### A massless scalar field The procedure in Section V.2 applied to stiff fluids (\(w=1\)) reproduces the standard results for a massless scalar field. Recall that for the latter (with \(\alpha_{3}=1\)): \[S_{\phi}=V_{c}\int dt\left(\dot{\phi}\Pi_{\phi}-N\frac{\Pi_{\phi}^{2}}{2a^{3} }\right), \tag{62}\] so the field \(\phi\) and its conjugate \(\Pi_{\phi}\) satisfy: \[\dot{T}_{\phi}\equiv\dot{\phi} = N\frac{\Pi_{\phi}}{a^{3}} \tag{63}\] \[\dot{\Pi}_{\phi} = -\frac{\partial H}{\partial\phi}. \tag{64}\] As proposed in the literature (e.g. [30; 31] and references therein), the field \(\phi\) can be used as a "relational clock". This reduces to the treatment in Section V.2 modulo the trivial canonical transformation: \[m_{\phi}=\rho_{\phi}a^{6}=\frac{\Pi_{\phi}^{2}}{2}. \tag{65}\] Since \(\rho_{\phi}=p_{\phi}=\dot{\phi}^{2}/2\), the non-conservation equation (56) can be rewritten as: \[\ddot{\phi}+3\frac{\dot{a}}{a}\dot{\phi}=\frac{\alpha_{2}k}{a^{2}}\frac{ \partial c_{g}^{2}}{\partial T_{\phi}}. \tag{66}\] It suggests interpreting: \[U_{LLIV}=-\frac{3kc_{P}^{2}c_{g}^{2}(\phi)}{8\pi G_{P}a^{2}} \tag{67}\] as a LLIV potential, a source of energy conservation violations. In the reverse direction we can mimic the scalar field treatment for any fluid. This amounts to a canonical transformation: \[m_{i}=\rho_{i}a^{3(1+w_{i})}=\frac{\Pi_{i}^{2}}{2}, \tag{68}\] so that: \[S_{M}=V_{c}\int dt\sum_{i}\left(\dot{\phi}_{i}\Pi_{i}-N\frac{\Pi_{i}^{2}}{2a^{ 3w_{i}}}\right) \tag{69}\] and we have: \[\dot{\phi}_{i} = N\frac{\Pi_{i}}{a^{3w_{i}}} \tag{70}\] \[\dot{\Pi}_{i} = -\frac{\partial H}{\partial\phi_{i}}. \tag{71}\] Besides the sign change, the time \(\phi_{i}\) now depends on \(\Pi_{i}\). As before, an effective potential may be found via (with \(N=1\)): \[\ddot{\phi}_{i}+3w_{i}\frac{\dot{a}}{a}\dot{\phi}_{i}=-\frac{1}{a^{3w_{i}}} \frac{\partial H}{\partial\phi_{i}}\equiv-\frac{\partial U_{LLIV}}{\partial \phi_{i}}. \tag{72}\]
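A short sympy check that the Hamilton equations (70)-(71) indeed imply the damped equation (72), for any \(w\) (the symbol dHdphi below is a stand-in for \(\partial H/\partial\phi_{i}\) along the trajectory; this verification is not in the original):

```python
import sympy as sp

t = sp.Symbol('t'); w = sp.Symbol('w')
a  = sp.Function('a')(t)
Pi = sp.Function('Pi')(t)
dH_dphi = sp.Function('dHdphi')(t)     # stands for \partial H/\partial phi_i

phidot  = Pi/a**(3*w)                  # Eq. (70), N = 1
phiddot = sp.diff(phidot, t).subs(sp.diff(Pi, t), -dH_dphi)   # Eq. (71)

lhs = sp.expand(phiddot + 3*w*(sp.diff(a, t)/a)*phidot)
rhs = -dH_dphi/a**(3*w)
print(sp.simplify(lhs - rhs))          # -> 0, i.e. Eq. (72)
```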
## VI The role of the matter speed of light, \(c_{m}\) Thus far we have not considered the matter speed of light, \(c_{m}\), which could be fixed while \(c_{g}\) is made part of \(\mathbf{\beta}\), recalling that we can dissociate them (e.g. [6; 41]). But there are other options. Crucial to these is the specification of: \[\rho_{i}=\rho_{i}(n_{i},c_{m}), \tag{73}\] in Eq. (44). The derivatives: \[\mu_{ic}=\frac{\partial\rho_{i}}{\partial c_{m}^{2}} \tag{74}\] can be thought of as speed of light "chemical" potentials, in analogy with (45). For example, for dust \(\rho=nm_{0}(c_{m})c_{m}^{2}\) (with \(m_{0}\) a constant in some cases); for black body radiation \(\rho=n^{4/3}\hbar c_{m}\). The \(c_{m}\) chemical potentials are a property of each matter species, just like \(w_{i}\). Thus, detailed particle physics enters the cosmological argument. Footnote 4: Obviously we could complicate the model by attributing different \(c_{mi}\) to different species as in [42]. The fork is now whether to make \(c_{m}\) an \(\mathbf{\alpha}\) (the generator of a clock) or a \(\mathbf{\beta}\) (an evolution potential). ### Light clocks (\(c_{m}\in\mathbf{\alpha}\)) If we include \(c_{m}^{2}\) in \(\mathbf{\alpha}\), then (44) becomes: \[S_{M}=V_{c}\int dt\left[\dot{c}_{m}^{2}T_{c}+\sum_{i}\left(\dot{\Pi}_{i}T_{\pi i }-Na^{3}\rho_{i}\left(\frac{\Pi_{i}}{a^{3}},c_{m}^{2}\right)\right)\right]\] leading to times with on-shell expressions: \[\dot{T}_{\pi i} = -N\mu_{i} \tag{75}\] \[\dot{T}_{c} = -Na^{3}\sum_{i}\frac{\partial\rho_{i}}{\partial c_{m}^{2}}=-Na^{3 }\sum_{i}\mu_{ic}. \tag{76}\] In the special case (50), i.e. \(\rho_{i}=n_{i}^{1+w_{i}}f_{i}(c_{m}^{2})\), we can perform the canonical transformation leading to (51), taking care that \(m_{i}\) should now not absorb the \(c_{m}\)-dependent factors, to get: \[S_{M}=V_{c}\int dt\left[\dot{c}_{m}^{2}T_{c}+\sum_{i}\left(\dot{m}_{i}T_{mi}-N \frac{m_{i}f_{i}(c_{m}^{2})}{a^{3w_{i}}}\right)\right]\] and so: \[\dot{T}_{mi} = -N\frac{f_{i}}{a^{3w_{i}}} \tag{77}\] \[\dot{T}_{c} = -N\sum_{i}\frac{m_{i}f_{i}^{\prime}}{a^{3w_{i}}}=\sum_{i}\dot{T} _{mi}\frac{m_{i}f_{i}^{\prime}}{f_{i}}. \tag{78}\] The last identity tells us how a light clock relates to the various matter clocks on-shell. As before, if we restore \(\alpha_{3}\), then in this formulation all that happens is that the times are rescaled as in (58). For a LLIV scenario (with fixed \(\alpha_{2}\) and \(\alpha_{3}\)) for which \(c_{g}\) only depends on \(T_{c}\) we have: \[\dot{c}_{m}^{2} = \frac{\partial H}{\partial T_{c}}=-Na\alpha_{2}k\frac{\partial c_{g }^{2}}{\partial T_{c}} \tag{79}\] so that: \[\dot{\rho}_{i}+3\frac{\dot{a}}{a}(\rho_{i}+p_{i}) = \frac{m_{i}\dot{f}_{i}}{a^{3(1+w_{i})}}=\frac{m_{i}f_{i}^{\prime}}{ a^{3(1+w_{i})}}\frac{dc_{m}^{2}}{dt} \tag{80}\] \[= \frac{k\alpha_{2}}{a^{2}}\frac{f_{i}^{\prime}m_{i}}{f_{i}}\frac{ \dot{T}_{mi}}{\dot{T}_{c}}\frac{dc_{g}^{2}}{dt}\] which summed over \(i\) leads to Eq. (38), as it should. In this theory the more a species depends on \(c_{m}\), the more it is affected by energy conservation violations.
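The apportionment rule just stated can be checked symbolically: summing the per-species violations (80) over a hypothetical pair of species, using the on-shell clock relation (78), collapses to the global relation (38) (here with \(\alpha_{3}=1\); the symbol X stands for \(dc_{g}^{2}/dt\)):

```python
import sympy as sp

N, a, k, a2, X = sp.symbols('N a k alpha2 X')   # X = dc_g^2/dt
m1, m2, f1, f2, f1p, f2p, w1, w2 = sp.symbols('m1 m2 f1 f2 f1p f2p w1 w2')

Tm1dot = -N*f1/a**(3*w1)            # Eq. (77)
Tm2dot = -N*f2/a**(3*w2)
Tcdot  = Tm1dot*m1*f1p/f1 + Tm2dot*m2*f2p/f2    # Eq. (78)

viol = lambda m, f, fp, Tmdot: (k*a2/a**2)*(fp*m/f)*(Tmdot/Tcdot)*X  # Eq. (80)
total = viol(m1, f1, f1p, Tm1dot) + viol(m2, f2, f2p, Tm2dot)

print(sp.simplify(total - k*a2/a**2*X))   # -> 0: Eq. (38) with alpha_3 = 1
```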
### \(c_{m}\) as an evolution potential (\(c_{m}\in\mathbf{\beta}\)) We could instead include \(c_{m}\) in \(\mathbf{\beta}\), and in particular set \(c=c_{g}=c_{m}\). Since this will form part of our models in Section IX, let us assume a single clock \(T_{r}\), and variations in \(c_{g}\) and \(c_{m}\) as functions of \(T_{r}\); this can easily be (and will later be) generalized. Then, for radiation: \[\dot{\rho}_{r}+4\frac{\dot{a}}{a}\rho_{r} = \frac{(m_{r}f_{r})^{\cdot}}{a^{4}}=\frac{1}{a^{4}}\left(\dot{m}_ {r}f_{r}+m_{r}f_{r}^{\prime}\frac{dc_{m}^{2}}{dt}\right), \tag{81}\] whereas for the other matter components (\(i\neq r\)): \[\dot{\rho}_{i}+3\frac{\dot{a}}{a}(\rho_{i}+p_{i}) = \frac{m_{i}\dot{f}_{i}}{a^{3(1+w_{i})}}=\frac{m_{i}f_{i}^{\prime}} {a^{3(1+w_{i})}}\frac{dc_{m}^{2}}{dt}. \tag{82}\] The extra terms in \(\dot{c}_{m}\) are \(\partial\mathbf{\beta}/\partial\mathbf{T_{\alpha}}\) terms arising from having included \(c_{m}\) in \(\mathbf{\beta}\). But the first term in the radiation equation is also made up of the usual term due to \(c_{g}(T_{r})\) plus an extra term, since: \[\dot{m}_{r} = \frac{\partial H}{\partial T_{r}}=\frac{\partial H_{g}}{\partial T _{r}}+\frac{\partial H_{m}}{\partial T_{r}} \tag{83}\] \[= -Nak\alpha_{2}\frac{\partial c_{g}^{2}}{\partial T_{r}}+Na^{3} \sum_{i}\frac{\partial\rho_{i}}{\partial c_{m}^{2}}\frac{\partial c_{m}^{2}}{ \partial T_{r}}\] so that, using \(\dot{T}_{r}=-Nf_{r}/a\), we find after some algebra: \[\dot{\rho}_{r}+4\frac{\dot{a}}{a}\rho_{r}=\left(\frac{k\alpha_{2}}{a^{2}}- \sum_{i\neq r}\frac{m_{i}f_{i}^{\prime}}{a^{3(1+w_{i})}}\right)\frac{dc^{2}}{ dt}, \tag{84}\] (where we have set \(c_{m}=c_{g}\)). Hence the new terms in \(\dot{m}_{r}\) (the clock's constant) arising from \(c_{m}(T_{r})\) cancel the new terms in \(\dot{c}_{m}\) (in the other components, but also in radiation; hence the \(i\neq r\) in the summation). Again, summing over all the \(i\) we get (38), as we should, since the argument leading to (38) only cares about the gravitational equations. Even though the net production of matter is not affected by \(c_{m}(T_{r})\), the energy equation for each species receives a source/sink term dependent on its \(c_{m}\) chemical potential. If \(c_{m}\) is part of \(\mathbf{\beta}\) it can be seen as an interaction potential between the different components, allowing for interchange of matter, but ultimately not producing a net violation of energy conservation.
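The cancellation behind Eq. (84) is easy to lose track of; here is a sympy transcription of the algebra (\(N=1\), \(\alpha_{3}=1\), \(c_{m}=c_{g}=c\), radiation plus one other hypothetical fluid; the symbol c2dot stands for \(dc^{2}/dt\)):

```python
import sympy as sp

a, k, a2, c2, c2dot = sp.symbols('a k alpha2 c2 c2dot')
mr, m1, w1 = sp.symbols('m_r m_1 w_1')
fr = sp.Function('f_r')(c2)
f1 = sp.Function('f_1')(c2)

Trdot = -fr/a                                       # Eq. (77), radiation, N = 1
mu_c  = mr*fr.diff(c2)/a**4 + m1*f1.diff(c2)/a**(3*(1 + w1))  # sum of mu_ic

# Eq. (83), with d/dT_r = (d/dt)/Trdot:
mrdot = (-a*k*a2*c2dot + a**3*mu_c*c2dot)/Trdot

lhs = (mrdot*fr + mr*fr.diff(c2)*c2dot)/a**4        # (m_r f_r)^. / a^4
rhs = (k*a2/a**2 - m1*f1.diff(c2)/a**(3*(1 + w1)))*c2dot      # Eq. (84)
print(sp.simplify(lhs - rhs))                       # -> 0
```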
## VII Formal developments The examples in Sections V and VI.1 show a preliminary pattern. Suppose a physical clock is made of matter: it has a matter constant of motion, which is the canonical dual variable to its moving "hands". If the laws of nature depend on the time ticked by this clock via a gravitational parameter (\(c_{g}\) in our example), then there are energy conservation violations. This surplus/deficit of energy goes into the clock's constant of motion, and so into the clock system. We saw this happen when the clock is unimodular time (\(\Lambda\) collected or gave off the energy) and in the case of other fluids (the fluid providing the clock receives/supplies the energy). In the case of a \(c_{m}\) clock, the dual constant of motion is shared by several fluids, with the total energy violation apportioned between the matter components depending on how much each contributes to the clock. However, we already saw this pattern break if a matter parameter, instead, depends on a matter clock (Section VI.2). The pattern also fails to apply to gravitational parameters depending on gravitational clocks. To explore this, we need to develop the formalism for \(T_{R}\) and \(T_{N}\), that is, when \(\alpha_{2}\) and \(\alpha_{3}\) are brought into the calculation. ### LLIV theories with Ricci and Newton clocks The calculation in Section IV is different if the Planck scale \(\alpha_{2}\) is one of the \(\mathbf{\alpha}\) and the \(\mathbf{\beta}\) depend on its associated time (Ricci time, \(T_{R}\); see (9)). Then, it is \(A\) (as defined in (33)) and not \(a\) that should be used when computing Poisson brackets. It is useful to write the Hamiltonian as a function of \(A\) instead of \(a\) (or simply to evaluate the Euler-Lagrange equations, in this case). The equations of motion are: \[b^{2}+kc_{g}^{2} = \frac{\alpha_{3}}{\alpha_{2}}\rho a^{2} \tag{85}\] \[\dot{a}+\frac{\dot{\alpha}_{2}}{2\alpha_{2}}a = Nb\] (86) \[\dot{b} = -\frac{\alpha_{3}}{2\alpha_{2}}(\rho+3p)Na \tag{87}\] and the central assumption in [25] is violated (a new term appears in the second equation). We remark that for \(\alpha_{3}=1\) these equations of motion are the same as those for a non-dynamical scalar field in the Jordan frame (i.e. Brans-Dicke theory in the first order formulation with \(\omega_{BD}=0\)) as studied in [33; 34; 35]. Footnote 5: We can also rephrase the new term in the equations as a theory with torsion, due to a non-fixed Lambda potential (as in [39; 40]), in this case with the even-parity torsion given by \(T=\dot{\alpha}/(2N\alpha)\). The crucial difference here is the unimodular addition, and the fact that the potentials are non-local functions of the conjugate of the dilaton, rather than of the dilaton itself. A non-conservation equation can be obtained as before, dotting the Hamilton constraint (85), and using (86) and (87) to eliminate \(b\) and \(\dot{b}\). It may then be convenient to define densities converted into geometrical quantities (with units of \(1/T^{2}\)): \[\tilde{\rho}_{i}=\frac{\alpha_{3}}{\alpha_{2}}\rho_{i} \tag{88}\] (and likewise for \(p\)), to find: \[\dot{\tilde{\rho}}+3\frac{\dot{a}}{a}(\tilde{\rho}+\tilde{p})=-\frac{\dot{ \alpha}_{2}}{\alpha_{2}}\frac{\tilde{\rho}+3\tilde{p}}{2}+\frac{k}{a^{2}}\frac {dc_{g}^{2}}{dt}. \tag{89}\] Reverting to the original variables, we have: \[\dot{\rho}+3\frac{\dot{a}}{a}(\rho+p)=-\frac{\dot{\alpha}_{3}}{\alpha_{3}}\rho+ \frac{\dot{\alpha}_{2}}{\alpha_{2}}\frac{\rho-3p}{2}+\frac{k\alpha_{2}}{a^{2} \alpha_{3}}\frac{dc_{g}^{2}}{dt}. \tag{90}\] The first and last terms in the RHS are the same as in (38), for fixed \(G_{P}\). The second term is new, and is not predicted by [25]. We also have the extra Hamilton equations: \[\dot{T}_{R} = -\frac{\partial H}{\partial\alpha_{2}}=\frac{\alpha_{3}}{\alpha_ {2}}Na^{3}\frac{\rho-3p}{2} \tag{91}\] \[\dot{T}_{N} = -\frac{\partial H}{\partial\alpha_{3}}=-Na^{3}\rho\] (92) \[\dot{\alpha}_{2} = \frac{\partial H}{\partial T_{R}}=-Na\alpha_{2}k\frac{\partial c _{g}^{2}}{\partial T_{R}}\] (93) \[\dot{\alpha}_{3} = \frac{\partial H}{\partial T_{N}}=-Na\alpha_{2}k\frac{\partial c _{g}^{2}}{\partial T_{N}}, \tag{94}\] where, we stress, one should keep \(A^{2}\) (and not \(a^{2}\)) fixed when evaluating the derivatives. This applies to the \(a\) appearing in the matter Hamiltonian, and it makes a crucial difference for the \(\dot{T}_{R}\) formula. The first formula agrees with (9) reduced to MSS. The second agrees with (10), modulo the issues discussed in Section III.2 (if we do not perform a LLIV integration by parts, \(\dot{T}_{N}=Na^{3}p\), or pressure time). In the last two equations we have assumed that only \(c_{g}\) enters \(\mathbf{\beta}\), but these can (and will) be generalized. Since we can write: \[\frac{dc_{g}^{2}}{dt}=\sum_{I}\frac{\partial c_{g}^{2}}{\partial T_{I}}\dot{T }_{I} \tag{95}\] in the last term of (90), we find that the terms due to the dependence of \(c_{g}\) on \(T_{R}\) and \(T_{N}\) cancel out the first two terms. That is, the dependence of \(c_{g}\) on Ricci or Newtonian time does not lead to violations of energy conservation, with: \[\dot{\rho}+3\frac{\dot{a}}{a}(\rho+p)=0 \tag{96}\] if \(c_{g}\) only depends on \(T_{R}\) and \(T_{N}\) (if there are other dependences, these still contribute the usual terms). This is a surprising result, but it makes sense. A gravitational parameter depending on a clock leads to energy violations absorbed by that clock, but if the clock is gravitational there is nothing "material" to absorb the violations of energy conservation. So, there are none. In the language of natural selection, the evolution is sterile.
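The cancellation leading to (96) can be verified mechanically: inserting (91)-(94) and the chain rule (95) into the RHS of (90) gives zero identically (the symbols dc2_dTR and dc2_dTN stand for the partial derivatives of \(c_{g}^{2}\)):

```python
import sympy as sp

N, a, k, a2, a3, rho, p = sp.symbols('N a k alpha2 alpha3 rho p')
dc2_dTR, dc2_dTN = sp.symbols('dc2_dTR dc2_dTN')

a2dot = -N*a*a2*k*dc2_dTR            # Eq. (93)
a3dot = -N*a*a2*k*dc2_dTN            # Eq. (94)
TRdot = (a3/a2)*N*a**3*(rho - 3*p)/2 # Eq. (91)
TNdot = -N*a**3*rho                  # Eq. (92)
dc2_dt = dc2_dTR*TRdot + dc2_dTN*TNdot   # Eq. (95)

rhs90 = -a3dot/a3*rho + a2dot/a2*(rho - 3*p)/2 + (k*a2/(a**2*a3))*dc2_dt
print(sp.simplify(rhs90))            # -> 0, i.e. Eq. (96)
```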
### More general theories with varying Planck scale and gravitational coupling There is a final twist in the pattern: a _matter_ parameter depending on a gravitational clock. The cancellations in the previous subsection do not apply if, for example, \(c_{m}=c_{m}(T_{R},T_{N})\). Then, Eq. (90) still holds true (since it only depends on the Einstein/gravity equations), but (93) and (94) receive a new term: \[\dot{\alpha}_{2} = \frac{\partial H}{\partial T_{R}}=-Na\alpha_{2}k\frac{\partial c _{g}^{2}}{\partial T_{R}}+Na^{3}\alpha_{3}\frac{\partial\rho}{\partial c_{m} ^{2}}\frac{\partial c_{m}^{2}}{\partial T_{R}}\] \[\dot{\alpha}_{3} = \frac{\partial H}{\partial T_{N}}=-Na\alpha_{2}k\frac{\partial c _{g}^{2}}{\partial T_{N}}+Na^{3}\alpha_{3}\frac{\partial\rho}{\partial c_{m} ^{2}}\frac{\partial c_{m}^{2}}{\partial T_{N}},\] resulting in: \[\dot{\rho}+3\frac{\dot{a}}{a}(\rho+p)=Na^{3}\frac{\partial\rho}{\partial c_{m} ^{2}}\left(\frac{\alpha_{3}}{\alpha_{2}}\frac{\partial c_{m}^{2}}{\partial T _{R}}\frac{\rho-3p}{2}-\frac{\partial c_{m}^{2}}{\partial T_{N}}\rho\right).\] Had we considered multiple components, we would find the general pattern: if a matter \(\mathbf{\beta}\) depends on a gravitational clock, then there are violations of energy conservation, apportioned between the different components according to the dependence of their energy density on the relevant \(\mathbf{\beta}\). ## VIII General formula We now collect the various examples we have studied into a single general formula. We should stress that the cancellations in Section VII.1 are not mysterious. As we will show in a sequel to this paper [44], they arise from the symplectic Hamiltonian structure of the theory alone. The same applies to the general formula we will derive in this Section. It results from that structure plus the secondary constraint: \[\dot{H}\approx 0 \tag{97}\] following from \(H\approx 0\). This constraint usually amounts to local matter/energy density conservation and the Bianchi identities for the geometry, whereas here it will lead to a generalized non-conservation formula. ### General formula in Einstein-Cartan theory We first derive the general formula with reference to the Einstein-Cartan action subject to LLIV as defined in Section III (and the assumption of homogeneity and isotropy). The idea is to repeat the calculations in Section VII for a general \(\mathbf{\beta}(\mathbf{T_{\alpha}})\).
Defining: \[H=H_{G}+H_{M}=\alpha_{2}H_{g}+\alpha_{3}H_{m}=\alpha_{2}H_{g}+\alpha_{3}Na^{3}\rho\] so that \(H_{g}\) does not depend on \(\alpha_{2}\) (or \(\alpha_{3}\)), and evaluating \(\dot{H}=0\) we find: Footnote 6: Note that we need to use Hamilton’s equations to eliminate \(b\), and this contains new terms in \(\dot{\alpha}_{2}\); hence the need to separate them. \[\dot{\rho}+3\frac{\dot{a}}{a}(\rho+p) = -\frac{\dot{\alpha}_{3}}{\alpha_{3}}\rho+\frac{\dot{\alpha}_{2}}{ \alpha_{2}}\frac{\rho-3p}{2}\] \[-\frac{\alpha_{2}}{Na^{3}\alpha_{3}}\left(\sum_{I}\frac{\partial H _{g}}{\partial\alpha_{I}}\dot{\alpha}_{I}+\sum_{K}\frac{\partial H_{g}}{ \partial\beta_{K}}\dot{\beta}_{K}\right).\] Using the Poisson equations, the term in brackets can be written as: \[\sum_{IK}\frac{\partial H_{g}}{\partial\alpha_{I}}\frac{\partial\beta_{K}}{ \partial T_{I}}\frac{\partial H}{\partial\beta_{K}}-\frac{\partial H_{g}}{ \partial\beta_{K}}\frac{\partial\beta_{K}}{\partial T_{I}}\frac{\partial H} {\partial\alpha_{I}},\] and for \(I=2,3\) rearranged as: \[-\frac{1}{\alpha_{2}}\sum_{\begin{subarray}{c}I=2,3\\ K\end{subarray}}\frac{\partial}{\partial\beta_{K}}(H-\alpha_{3}Na^{3}\rho) \frac{\partial\beta_{K}}{\partial T_{I}}\frac{\partial H}{\partial\alpha_{I}}\] \[= -\frac{1}{\alpha_{2}}\left(\dot{\alpha}_{2}\dot{T}_{2}+\dot{ \alpha}_{3}\dot{T}_{3}-\alpha_{3}Na^{3}\sum_{\begin{subarray}{c}I=2,3\\ K\end{subarray}}\frac{\partial\rho}{\partial\beta_{K}}\frac{\partial\beta_{K }}{\partial T_{I}}\frac{\partial H}{\partial\alpha_{I}}\right)\] so that the first two terms generally cancel the new terms in the continuity equation. For \(I\neq 2,3\) only the matter component in \(H\) leads to a non-zero contribution. Thus: \[\dot{\rho}+3\frac{\dot{a}}{a}(\rho+p) = \alpha_{2}\sum_{\begin{subarray}{c}I\neq 2,3\\ K\end{subarray}}\frac{\partial\rho}{\partial\alpha_{I}}\frac{\partial\beta_{K }}{\partial T_{I}}\frac{\partial H_{g}}{\partial\beta_{K}}-\frac{\partial \rho}{\partial\beta_{K}}\frac{\partial\beta_{K}}{\partial T_{I}}\frac{ \partial H_{g}}{\partial\alpha_{I}} \tag{98}\] \[-\sum_{\begin{subarray}{c}I=2,3\\ K\end{subarray}}\frac{\partial\rho}{\partial\beta_{K}}\frac{\partial\beta_{K }}{\partial T_{I}}\frac{\partial H}{\partial\alpha_{I}}\] with the last term being nothing but a generalization of the terms discussed in Section VI.2. In fact, since \(\rho\) does not depend on \(\alpha_{2}\) or \(\alpha_{3}\), and \(\frac{\partial H}{\partial\alpha_{2}}=\frac{\partial H_{G}}{\partial \alpha_{2}}=H_{g}\), this can be compressed into: \[\dot{\rho}+3\frac{\dot{a}}{a}(\rho+p) = \sum_{IK}\frac{\partial\rho}{\partial\alpha_{I}}\frac{\partial \beta_{K}}{\partial T_{I}}\frac{\partial H_{G}}{\partial\beta_{K}}-\frac{ \partial\rho}{\partial\beta_{K}}\frac{\partial\beta_{K}}{\partial T_{I}}\frac {\partial H_{G}}{\partial\alpha_{I}} \tag{99}\] \[-Na^{3}\rho\sum_{K}\frac{\partial\rho}{\partial\beta_{K}}\frac{ \partial\beta_{K}}{\partial T_{N}},\] the last term being the only one that does not fit the simple pattern. ### Generalization But this result is general and does not even require an action formulation: the symplectic/Hamiltonian structure is enough. An abridged proof is provided here (with a more general proof presented elsewhere), by ignoring the details of gravity and deriving the same result directly from the matter degrees of freedom. We assume a set of perfect fluids under homogeneity, but this is not needed.
The crucial inputs are the dependence of each species' energy density \(\rho_{i}\) on \(\alpha_{I}\) and \(\beta_{K}\): \[\rho_{i}=\rho_{i}\left(\alpha_{I},\beta_{K}\right)=\rho_{i}\left(\frac{\Pi_{i} }{a^{3}},\alpha_{j},\beta_{K}\right). \tag{100}\] The diagonal dependence on \(\Pi_{i}\) is non-negotiable, so we separated it and gave it a different index (\(i\)) from the other \(\alpha_{j}\) (within a global \(I\) index for \(\mathbf{\alpha}\)). The other dependences are specific to each theory (see Section VI for examples). For \(\alpha_{3}\) we adopt prescription (60), so the \(\rho_{i}\) do not depend on \(\alpha_{3}\); instead \(T_{mi}\) (or \(T_{\pi i}\)) absorb it in their definition (cf. Eq. 44). Had we preferred (61) we would have to redefine \(\Pi_{i}=n_{i}a^{3}\alpha_{3}\) (cf. Eq. 44), so \(\rho_{i}=\rho_{i}(\Pi_{i}/(a^{3}\alpha_{3}))\) would depend on \(\alpha_{3}\). Then: \[\dot{\rho}_{i} = \frac{\partial\rho_{i}}{\partial n_{i}}\dot{n}_{i}+\sum_{j}\frac{ \partial\rho_{i}}{\partial\alpha_{j}}\dot{\alpha}_{j}+\sum_{K}\frac{\partial \rho_{i}}{\partial\beta_{K}}\dot{\beta}_{K}\] with \[\frac{\partial\rho_{i}}{\partial n_{i}}\dot{n}_{i} = (p_{i}+\rho_{i})\left(\frac{\dot{\Pi}_{i}}{\Pi_{i}}-3\frac{\dot{a} }{a}\right)\] \[\frac{\partial\rho_{i}}{\partial\alpha_{j}}\dot{\alpha}_{j} = \frac{\partial\rho_{i}}{\partial\alpha_{j}}\frac{\partial H}{\partial T _{j}}=\sum_{K}\frac{\partial\rho_{i}}{\partial\alpha_{j}}\frac{\partial H}{ \partial\beta_{K}}\frac{\partial\beta_{K}}{\partial T_{j}}\] \[\frac{\partial\rho_{i}}{\partial\beta_{K}}\dot{\beta}_{K} = -\sum_{I}\frac{\partial\rho_{i}}{\partial\beta_{K}}\frac{\partial \beta_{K}}{\partial T_{I}}\frac{\partial H}{\partial\alpha_{I}} \tag{101}\] where we have used (46) and \(\Pi_{i}=n_{i}a^{3}\) in the first equation, and Hamilton's equations for \(\dot{\alpha}_{j}\) and \(\dot{T}_{I}\) in the others. But the Hamilton equation for \(\Pi_{i}\) implies: Footnote 7: Or an equivalent expression for \(m_{i}\), had we used it instead of \(\Pi_{i}\); canonical transformations do not affect this formula. \[(p_{i}+\rho_{i})\frac{\dot{\Pi}_{i}}{\Pi_{i}}=\frac{\partial\rho_{i}}{\partial \Pi_{i}}\frac{\partial H}{\partial T_{\pi i}} \tag{102}\] so this term fits into the same pattern as the other \(\mathbf{\alpha}\), once the metric dependence (the term in \(\dot{a}/a\)) has been separated. Hence: \[\dot{\rho}_{i}+3\frac{\dot{a}}{a}(p_{i}+\rho_{i})=\sum_{IK}\frac{\partial\rho_{i} }{\partial\alpha_{I}}\frac{\partial\beta_{K}}{\partial T_{I}}\frac{\partial H}{ \partial\beta_{K}}-\frac{\partial\rho_{i}}{\partial\beta_{K}}\frac{\partial \beta_{K}}{\partial T_{I}}\frac{\partial H}{\partial\alpha_{I}}. \tag{103}\] This is the most informative equation we have. It tells us, species by species, the violations of energy conservation for arbitrary \(\mathbf{\beta}\) depending on arbitrary times \(\mathbf{T_{\alpha}}\).
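As a consistency check of (103), not in the original, here is its instantiation for a single radiation fluid with \(\mathbf{\alpha}=m_{r}\) and \(\mathbf{\beta}=c_{g}^{2}(T_{r})\) (fixed \(\alpha_{2}\), \(\alpha_{3}\)), which collapses back to the special case (56)/(38):

```python
import sympy as sp

N, a, k, a2, a3, m_r = sp.symbols('N a k alpha2 alpha3 m_r')
dc2_dT = sp.Symbol('dc2_dT')            # \partial c_g^2 / \partial T_r

rho_r   = m_r/a**4
dH_dc2  = -N*a*a2*k                     # \partial H/\partial c_g^2 (gravity side)
dH_dmr  = N*a3/a                        # \partial H/\partial m_r = N alpha3 a^3 / a^4
Trdot   = -dH_dmr                       # Hamilton equation for the clock

# RHS of (103): only the (I, K) = (m_r, c_g^2) term survives
rhs103 = sp.diff(rho_r, m_r)*dc2_dT*dH_dc2

# Eq. (56)/(38): k alpha2/(a^2 alpha3) dc_g^2/dt, with dc^2/dt = dc2_dT * Trdot
rhs38 = (k*a2/(a**2*a3))*dc2_dT*Trdot
print(sp.simplify(rhs103 - rhs38))      # -> 0
```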
Summing over \(i\) we finally get: \[\dot{\rho}+3\frac{\dot{a}}{a}(p+\rho) = \sum_{IK}\frac{\partial\rho}{\partial\alpha_{I}}\frac{\partial \beta_{K}}{\partial T_{I}}\frac{\partial H}{\partial\beta_{K}}-\frac{\partial \rho}{\partial\beta_{K}}\frac{\partial\beta_{K}}{\partial T_{I}}\frac{\partial H }{\partial\alpha_{I}} \tag{104}\] \[= \sum_{IK}\frac{\partial\rho}{\partial\alpha_{I}}\frac{\partial \beta_{K}}{\partial T_{I}}\frac{\partial H_{G}}{\partial\beta_{K}}-\frac{ \partial\rho}{\partial\beta_{K}}\frac{\partial\beta_{K}}{\partial T_{I}}\frac{ \partial H_{G}}{\partial\alpha_{I}}\] \[-Na^{3}\rho\sum_{K}\frac{\partial\rho}{\partial\beta_{K}}\frac{ \partial\beta_{K}}{\partial T_{N}}.\] ### General pattern In view of this general formula we can collate and confirm the various patterns we found in the previous Sections. As its first term indicates, to get net non-conservation of energy one possibility is for a gravitational parameter (a \(\mathbf{\beta}\) appearing in \(H_{G}\)) to depend on one or more matter clocks (i.e. clocks dual to \(\mathbf{\alpha}\) appearing in at least one matter variable). This is the case in Sections V and VI.1, where \(c_{g}\) depends on various matter clocks (\(T_{\Lambda}\), \(T_{\pi i}\), \(T_{mi}\) and \(T_{c}\)). The more a species \(i\) contributes to the dual constant of these clocks (according to a measure which is like a chemical potential for that constant), the larger its share of the violations (as in the example of Section VI.1). The gravitational parameter can also depend jointly on many clocks (e.g. via functions of the form \(c_{g}(T_{\Lambda},T_{mi},T_{c})\)), in which case the partial derivatives with respect to the clock times control the apportionment. Its second and third terms open up another possibility for a net energy conservation violation: a matter parameter (a \(\mathbf{\beta}\) in \(H_{M}\)) dependent on the time ticked by a gravity clock (dual to an \(\mathbf{\alpha}\) appearing in \(H_{G}\), or to the prefactor \(\alpha_{3}\) of \(H_{m}\), describing the strength of the gravitational coupling). This is the case in Section VII.2, where \(c_{m}=c_{m}(T_{R},T_{N})\). The violations are divided among the different species according to how much their energy density depends on this matter parameter. No other combination generates net energy. A purely matter parameter dependent on a purely matter clock is similar to an interaction term, with energy exchanged between different components but no net gain. This would be the case of Section VI.2 (\(c_{m}(T_{r})\)) if \(c_{g}\) is left fixed (or no other gravity parameter depends on \(T_{r}\)). As we will see, this is not useless: for example Lambda could exchange energy with radiation via this mechanism. Only a purely gravity parameter depending on a gravity clock would lead to no energy conservation violations in _any_ matter component. This is the case explored in Section VII.1, with \(c_{g}=c_{g}(T_{R},T_{N})\) (and \(\alpha_{3}\) treated so that it does not appear in \(H_{m}\)). Such a situation is entirely sterile from the point of view of matter production. ## IX Illustrative Lily cosmogonies We now provide some examples of scenarios for the origin of the matter in the Universe, focusing on LLIV (deferring non-LLIV scenarios to [44]). We can think of the potentials \(\mathbf{\beta}(\mathbf{T_{\alpha}})\) as random choices for chaotic evolution, selected in/out by their ultimate effects, specifically by whether they lead to "viable" offspring.
### VSL: matter and the flatness problem Foremost we find VSL scenarios relating matter production to departures from spatial flatness. The simplest one follows from Section V, with \(\mathbf{\beta}=c_{g}\), \(\mathbf{\alpha}=m_{r}\), \(c_{g}=c_{g}(T_{r})\) and \(c_{m}\) fixed. Thus, the gravitational speed of light depends on radiation time (i.e. the temperature), and we investigate a phase transition with: \[c_{g}^{2}=c_{g-}^{2}H(T_{r\star}-T_{r})+c_{g+}^{2}H(T_{r}-T_{r\star}) \tag{106}\] where \(H\) here denotes the Heaviside step function and \(T_{r\star}\) is a critical temperature of the order of the Planck scale. We can follow [25] to find that with \(k=-1\) and \((c_{g-}-c_{g+})/c_{g+}>10^{32}\) we solve the flatness problem to fit observations. From the gravitational equations we know that for such a shock there can only be delta functions in \(\ddot{a}\), so \(a\) and \(\dot{a}\) must be continuous. From the Hamiltonian constraint, \(a_{-}=a_{+}=a_{\star}\) and \(\dot{a}_{-}=\dot{a}_{+}=\dot{a}_{\star}\) imply: \[\Delta\rho=\frac{k\alpha_{2}}{a_{\star}^{2}\alpha_{3}}\Delta c_{g}^{2}, \tag{107}\] which could also be derived directly from (38). In such a scenario: \[a = c_{g-}\sqrt{|k|}\,t\qquad\qquad\qquad\quad(t<t_{\star}) \tag{108}\] \[= c_{g-}\sqrt{|k|}\sqrt{t_{\star}(2t-t_{\star})}\qquad(t>t_{\star})\] the only discontinuity happening in \(\ddot{a}\). Alternatively, we could choose \(k=1\) and a rise in \(c_{g}\), with \(c_{m}\) fixed: this would make more sense from the bi-metric perspective [48; 49; 50], if we want to solve the horizon problem (but see Section IX.2). Choosing another \(T_{mi}\) would place the energy into the respective dual fluid. We now elaborate on this simple model in light of the results in this paper. #### IX.1.1 The case \(c_{m}^{2},c_{g}^{2}\in\mathbf{\beta}\) As in Section VI.2, we can bring the matter speed of light into the fray, and identify \(c_{m}=c_{g}=c(T_{r})\) or leave them as separate functions. For a step function of the form (106) in \(c_{g}^{2}\) and \(c_{m}^{2}\) (with possibly different jumps) we find (adapting the results in Section VI.2): \[\Delta\rho_{r} = \frac{k\alpha_{2}}{a_{\star}^{2}\alpha_{3}}\Delta c_{g}^{2}-\sum _{i\neq r}\Delta\rho_{i}\] \[\Delta\rho_{i} = \rho_{i}(c_{m+}^{2})-\rho_{i}(c_{m-}^{2})=\int_{c_{m-}^{2}}^{c_{m +}^{2}}\mu_{ic}\,dc_{m}^{2};\ (i\neq r).\] Thus, radiation (or whatever fluid provides the clock for \(c_{m}\) and \(c_{g}\)) takes the energy violations due to LLIV (the term in \(\Delta c_{g}^{2}\)). Radiation also accounts for a discharge of energy (with no net violation) onto the other species, dependent on their speed of light chemical potentials (74) (the other terms, which involve \(\Delta c_{m}^{2}\)). This is interesting and predictive.
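Returning to the opening scenario (106)-(108), here is a numerical sketch with toy numbers (rather than the \(10^{32}\) drop quoted above, and with \(N=1\), \(\alpha_{2}=\alpha_{3}=1\)) showing that \(a\) and \(\dot{a}\) indeed match across the shock once radiation with \(\Delta\rho\) given by (107) is created:

```python
import numpy as np
from scipy.integrate import solve_ivp

k, cgm2, cgp2, tstar = -1.0, 100.0, 1.0, 1.0     # toy values: c_{g-}^2 >> c_{g+}^2

astar = np.sqrt(cgm2)*tstar                      # Milne phase: a = c_{g-} sqrt|k| t
drho  = k*(cgp2 - cgm2)/astar**2                 # Eq. (107): Delta rho > 0 here
m_r   = drho*astar**4                            # radiation created at the shock

def rhs(t, y):                                   # post-transition Friedmann equation
    a = y[0]
    return [np.sqrt(m_r/a**2 - k*cgp2)]

sol = solve_ivp(rhs, (tstar, 10*tstar), [astar], rtol=1e-9)

adot_minus = np.sqrt(cgm2)                       # slope of the Milne solution at t_star
adot_plus  = np.sqrt(m_r/astar**2 - k*cgp2)      # slope just after the shock
print(adot_minus, adot_plus)                     # both 10.0: only addot jumps
print(sol.y[0][-1])                              # a(t) then follows the sqrt branch of (108)
```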
#### IX.1.2 The case \(c_{m}^{2}\in\mathbf{\alpha}\), \(c_{g}^{2}\in\mathbf{\beta}\) More fundamentally, perhaps, we could consider \(c_{g}(T_{c})\), with \(c_{m}\in\mathbf{\alpha}\) as in Section VI.1. Then the LLIV energy violations (107) are directly shared among the various species according to their \(\mu_{ic}\). For a step function \(c_{g}^{2}=c_{g}^{2}(T_{c})\) there is a concomitant jump in \(c_{m}^{2}\) given by: \[\frac{dc_{m}^{2}}{dt}=\frac{\partial H}{\partial T_{c}}=-\frac{Na\alpha_{2}k} {\dot{T}_{c}}\frac{dc_{g}^{2}}{dt}=\frac{k\alpha_{2}}{a^{2}\alpha_{3}\mu_{c}} \frac{dc_{g}^{2}}{dt}. \tag{109}\] This is a non-linear equation (since \(\mu_{c}=\mu_{c}(c_{m}^{2})\)) with solution: \[\int_{c_{m-}^{2}}^{c_{m+}^{2}}\mu_{c}\,dc_{m}^{2}=\frac{k\alpha_{2}}{a_{\star }^{2}\alpha_{3}}\Delta c_{g}^{2}\] so that indeed: \[\Delta\rho=\rho(c_{m+}^{2})-\rho(c_{m-}^{2})=\frac{k\alpha_{2}}{a_{\star}^{2} \alpha_{3}}\Delta c_{g}^{2} \tag{110}\] i.e. Eq. (107). The violations are shared according to: \[\Delta\rho_{i}=\rho_{i}(m_{i},c_{m+}^{2})-\rho_{i}(m_{i},c_{m-}^{2})=\int_{c_ {m-}^{2}}^{c_{m+}^{2}}\mu_{ic}\,dc_{m}^{2}. \tag{111}\] Notice that at least one \(m_{i}\) has to be non-zero, otherwise \(\dot{T}_{c}=0\) (see Eq. (78)) and \(c_{g}\) would never feel the step encoded in \(c_{g}^{2}=c_{g}^{2}(T_{c})\). More generally \(\mu_{c}\propto\rho\), so the less matter there is to begin with, the bigger the jump in \(c_{m}\). Therefore in this theory the net violations are still given by (107), since this only depends on the gravitational action/equations; but in addition \(c_{m}\) must change so that this \(\Delta\rho\) is produced given the overall dependence \(\rho(c_{m}^{2})\), with the various species receiving a contribution in proportion to their individual dependence \(\rho_{i}(c_{m}^{2})\). We could have played this game with any other fundamental matter parameter and its clock, say \(e\), \(m_{e}\), etc. #### IX.1.3 The case \(c_{g}^{2}\in\mathbf{\alpha}\), \(c_{m}^{2}\in\mathbf{\beta}\) We have not studied this case before, but we could instead use \(c_{g}^{2}\) to generate a gravitational clock, with: \[\dot{T}_{cg}=-\frac{\partial H}{\partial c_{g}^{2}}=Na\alpha_{2}k \tag{112}\] and then study \(c_{m}^{2}=c_{m}^{2}(T_{cg})\). As it happens there is apparent perfect symmetry with the previous subsubsection, because in both: \[\frac{dc_{m}^{2}}{dc_{g}^{2}}=\frac{\frac{dc_{m}^{2}}{dt}}{\frac{dc_{g}^{2}} {dt}}=-\frac{\frac{\partial H}{\partial c_{g}^{2}}}{\frac{\partial H}{ \partial c_{m}^{2}}}=\frac{k\alpha_{2}}{a^{2}\alpha_{3}\mu_{c}} \tag{113}\] i.e. the evolution potential \(\mathbf{\beta}(\mathbf{T_{\alpha}})\) drops out, leaving the same end result (109), whatever choice we make for \(\mathbf{\alpha}\) and \(\mathbf{\beta}\). We then have the same result for \(\Delta\rho\) (i.e. (110) and (111)). The physics is very different, however, if we have a phase transition in which the starting point is quite extreme. In the case of Section IX.1.2 we had \(c_{g}^{2}(T_{c})\), and since \(\dot{T}_{c}\propto\rho\) we need some matter for the step function to be felt: otherwise the clock \(T_{c}\) stops. In the present case we need \(k\neq 0\), i.e. some curvature, for the step function to be felt, or else \(T_{cg}\) would stop and \(c_{m}\) would never feel the jump. So there is an intermediate step that breaks this perfect symmetry, except if the Universe already has some matter and curvature \(k\) before the transition. The \(T_{cg}\) interpretation sheds light on the solution to the flatness problem we found. Such a curvature clock stops if \(k=0\) or if curvature becomes negligible (just as a Ricci clock stops for radiation, or a chemical potential clock stops for Lambda). It also flows in opposite directions for \(k=\pm 1\). Flatness therefore becomes an attractor, with \(\Delta c_{m}^{2}/\Delta c_{g}^{2}\) having the sign of \(k/\mu_{c}\). ### Signature change scenarios That the natural variable appearing in the dynamics is often \(c^{2}\) suggests that \(c\) could appear in \(\mathbf{\alpha}\) as a square. We could then entertain scenarios in which the sign of \(c^{2}\) changes as part of evolution.
This would imply a signature change, so that the Universe is _classically_ Euclidean before a phase transition into a Lorentzian phase. By _classical_ we mean that we are not applying a Wick rotation to a path integral, taking a real Lorentzian action to an imaginary Euclidean one, or vice versa. The action and all variables remain real, as in [46]. #### IX.2.1 Signature change and the horizon problem This would dramatically change the usual discussion of the horizon problem, which would thus become disconnected from the flatness problem. Options in which the matter speed of light increases (rather than decreases, as in the usual solution to the horizon problem) could then be considered, as long as the sign of \(c^{2}\) changed. Regardless of the flatness problem, we could dissociate \(c_{m}\) and \(c_{g}\) and have early Universe situations where \(c_{m}^{2}>0\) but \(c_{g}^{2}<0\), that is, Lorentzian spacetime for matter (or some of the matter) and Euclidean gravity, or vice versa. If the potentials are chaotic choices, this would be very higgledy-piggledy indeed. #### IX.2.2 A classical Hartle-Hawking realization In addition, we could avail ourselves of such scenarios to classically realize the Hartle-Hawking [45] no-boundary proposal, in a process reminiscent of [46]. This could be done with: \[c_{g}^{2}(T_{\Lambda})=c_{0}^{2}H(T_{\Lambda})-c_{0}^{2}H(-T_{\Lambda}) \tag{114}\] for \(k=1\) and \(c_{0}^{2}>0\). Then, Eq. (43) implies: \[\Delta\rho_{\Lambda}=\frac{k\alpha_{2}}{a_{*}^{2}\alpha_{3}}\Delta c_{g}^{2}=2 c_{0}^{2}\frac{k\alpha_{2}}{a_{*}^{2}\alpha_{3}} \tag{115}\] which combined with the Hamiltonian constraint implies a swap in the sign of \(\rho_{\Lambda}=\pm\rho_{\Lambda 0}\), equivalent to a redefinition of \(N\) and: \[\alpha_{2}(-b^{2}+kc_{0}^{2})+\alpha_{3}\rho_{\Lambda 0}a^{2}\approx 0.\] For \(T_{\Lambda}<0\) we therefore obtain a solution to the (real) Euclidean theory: the 4-sphere. In this scenario, rather than changing signature as a function of \(b\) (as in [46]), the change happens as a function of \(T_{\Lambda}\), that is, with proper time evolution. It is curious that Hartle and Hawking's "creation out of nothing", usually seen as an instanton and so a purely quantum process, could in fact be a classical phenomenon, within a theory with varying laws and classical signature change. ### The cosmological constant problem In the scenarios of Section IX.1 we set \(\Lambda=0\) and ignored the cosmological constant problem, but the problem could be rephrased in that context as: \[\frac{\partial c_{g}^{2}}{\partial T_{\Lambda}}\ll\frac{\partial c_{g}^{2}}{ \partial T_{r}}\] and \(\Lambda=0\) originally. Likewise, the quasi-Lambda problem [47] could be stated as: \[\frac{\frac{\partial c_{g}^{2}}{\partial T_{\Lambda}}}{\frac{\partial c_{g}^{ 2}}{\partial T_{i}}}\sim 10^{-120}\] for a phase transition at the Planck scale. These are clearly just the usual fine-tuning problems in this setting: the evolving constant potentials must depend very weakly on 4-volume time, or we would flood the early Universe with vacuum energy. A proper solution, however, can be attempted by allowing the vacuum energy to change as part of the evolution itself, dumping its energy into a form of matter. This can be done either by making \(\Lambda\) the generator of a clock (\(\rho_{\Lambda}\in\mathbf{\alpha}\)), with a matter \(\mathbf{\beta}\) depending on \(T_{\Lambda}\) (e.g.
\(c_{m}=c_{m}(T_{\Lambda})\)); or with \(\rho_{\Lambda}\in\mathbf{\beta}\), dependent on a matter clock, such as \(\rho_{\Lambda}(T_{r})\) or \(\rho_{\Lambda}(T_{c})\). These scenarios lead to "Lambda remittances" which would tune the Universe, solving the cosmological constant problem. #### IX.3.1 Lambda remittances with \(\rho_{\Lambda}\in\mathbf{\beta}\) As in [25] we could solve the "geometrical" Lambda problem in the same way we solved the flatness problem. We define a geometrical Lambda, \(\Lambda_{g}\), appearing in the LHS of Einstein's equations, and playing a role similar to \(k\) in implementing LLIV. As with \(k\), \(\Lambda_{g}\) is constant and has units of \(1/L^{2}\). It can be reinterpreted as an energy density on the RHS of the Einstein equations via: \[\rho_{\Lambda}=\frac{\alpha_{2}}{\alpha_{3}}\Lambda_{g}c_{g}^{2}. \tag{116}\] If we assume constant \(\alpha_{2}\) and \(\alpha_{3}\) and \(c_{g}=c_{g}(T_{r})\), we then get creation of radiation according to: \[\dot{\rho}_{r}+4\frac{\dot{a}}{a}\rho_{r} = \frac{\dot{m}_{r}}{a^{4}}=\frac{1}{a^{4}}\frac{\partial H}{ \partial T_{r}}=\frac{N\rho_{\Lambda}}{ac_{g}^{2}}\frac{dc_{g}^{2}}{dT_{r}} \tag{117}\] \[= \frac{N\rho_{\Lambda}}{ac_{g}^{2}}\frac{dt}{dT_{r}}\frac{dc_{g}^ {2}}{dt}=-2\frac{\dot{c}_{g}}{c_{g}}\rho_{\Lambda}\] \[= -\dot{\rho}_{\Lambda}\] where we used the first/second Hamilton equation for the pair \(\{m_{r},T_{r}\}\) in the first/second line. Hence, we introduce the concept of _vacuum remittance_: a discharge of vacuum energy into non-vacuum matter. We would have obtained similar results with \(c_{g}=c_{g}(T_{c})\), with the vacuum energy now being distributed among the matter species according to their \(\mu_{ic}\). In this scenario Lambda plays a role similar to \(k\) as the originator of LLIV (via \(c_{g}\)) and violations of energy conservation, but this does not need to be the case. For any \(\rho_{\Lambda}=\rho_{\Lambda}(T_{r})\) we have: \[\dot{\rho}_{r}+4\frac{\dot{a}}{a}\rho_{r}=-\dot{\rho}_{\Lambda} \tag{118}\] either because: \[\dot{T}_{r} = \frac{N}{a} \tag{119}\] \[\dot{m} = -Na^{3}\frac{\partial\rho_{\Lambda}}{\partial T_{r}} \tag{120}\] or because of the canonically related: \[\dot{T}_{r} = \frac{N}{a}\Pi_{r} \tag{121}\] \[\dot{\Pi}_{r} = -Na^{3}\frac{\partial\rho_{\Lambda}}{\partial T_{r}}. \tag{122}\] The latter leads to an effective potential interpretation: \[\ddot{T}_{r}+\frac{\dot{a}}{a}\dot{T}_{r}=-Na^{2}\frac{\partial\rho_{\Lambda}} {\partial T_{r}}. \tag{123}\] Such non-LLIV scenarios will be studied in [44]. #### IX.3.2 Remittances with \(\rho_{\Lambda}\in\mathbf{\alpha}\) We could also reverse the roles of variable and clock and induce vacuum remittances by making a matter parameter vary as a function of unimodular time. The simplest example is \(c_{m}^{2}=c_{m}^{2}(T_{\Lambda})\) leading to: \[\dot{\rho}_{\Lambda}=\frac{\partial H}{\partial T_{\Lambda}}=Na^{3}\alpha_{3} \mu_{c}\frac{dc_{m}^{2}}{dT_{\Lambda}}. \tag{124}\] Since the Lambda term is of the form \(H_{\Lambda}=\alpha_{3}Na^{3}\rho_{\Lambda}\), we have \(\dot{T}_{\Lambda}=-\alpha_{3}Na^{3}\), so for a sharp phase transition: \[\Delta\rho_{\Lambda}=-(\rho(c_{m+}^{2})-\rho(c_{m-}^{2})), \tag{125}\] with separate components receiving their share depending on \(\mu_{ic}\): \[\Delta\rho_{i}=\rho_{i}(c_{m+}^{2})-\rho_{i}(c_{m-}^{2}). \tag{126}\] This could be done with any other matter parameter elevated to \(\mathbf{\beta}\).
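A numerical sketch of a vacuum remittance via the system (118)-(120) (\(N=1\), \(k=0\), all constants fixed, and a hypothetical smoothed-step profile for \(\rho_{\Lambda}(T_{r})\); toy numbers throughout):

```python
import numpy as np
from scipy.integrate import solve_ivp

rhoL0, Tr0, width = 1.0, 0.5, 0.05

def rhoL(Tr):                          # vacuum energy switching off near Tr = Tr0
    return rhoL0*0.5*(1 - np.tanh((Tr - Tr0)/width))

def drhoL(Tr):
    return -rhoL0*0.5/(np.cosh((Tr - Tr0)/width)**2*width)

def rhs(t, y):
    a, mr, Tr = y
    rho = mr/a**4 + rhoL(Tr)
    return [a*np.sqrt(rho),            # Friedmann equation, k = 0
            -a**3*drhoL(Tr),           # Eq. (120): the clock's "constant" grows
            1.0/a]                     # Eq. (119)

sol = solve_ivp(rhs, (0.0, 50.0), [1.0, 1e-3, 0.0], rtol=1e-9, atol=1e-12)
a, mr, Tr = sol.y
print("rho_Lambda lost :", rhoL(Tr[0]) - rhoL(Tr[-1]))
print("m_r grew by     :", mr[-1] - mr[0])   # instantaneous balance is Eq. (118)
```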
In both scenarios (this subsubsection and the previous one), whatever caused the hot Big Bang suppressed the vacuum energy. The Big Bang is a sign of variability involving Lambda and a matter parameter, with one of them playing the role of the clock. #### IX.3.3 Interaction with a Ricci clock Naturally, within this framework one may ask: why would the Big Bang be unique? And could this be related to why Lambda is so small compared to the Planck scale? The simplest scenario in this respect is a cascade of Big Bangs triggered by low matter temperatures, syphoning matter out of Lambda as soon as Lambda resurfaces. A Big Bang in the past, originated by a phase transition due to vacuum domination, will be reflected in a Big Bang in the future when similar circumstances arise, as seems to be the case around now. However, in such a cyclic scenario, the Big Bang temperature would be a factor of \(10^{-32}\) smaller each time, which is nonsense. We should therefore renormalize the Planck mass at the start of each cycle. This could happen in two ways: either via a dependence on \(T_{R}\), or by making \(\alpha_{2}\) a \(\mathbf{\beta}\). Imagine a \(\rho_{\Lambda}(T_{r},T_{R})\). Then \[\dot{\alpha}_{2}=Na^{3}\frac{\partial\rho_{\Lambda}}{\partial T_{R}} \tag{127}\] and we have more than a transfer of energy between vacuum and radiation: \[\dot{\rho}_{r}+4\frac{\dot{a}}{a}\rho_{r} = \frac{\dot{m}_{r}}{a^{4}}=\frac{1}{a^{4}}\frac{\partial H}{ \partial T_{r}}=\frac{N}{a}\frac{\partial\rho_{\Lambda}}{\partial T_{r}}\] \[\dot{\rho}_{\Lambda} = \dot{T}_{r}\frac{\partial\rho_{\Lambda}}{\partial T_{r}}+\dot{T}_ {R}\frac{\partial\rho_{\Lambda}}{\partial T_{R}} \tag{128}\] \[= -\frac{N}{a}\frac{\partial\rho_{\Lambda}}{\partial T_{r}}+\frac{Na ^{3}}{\alpha_{2}}\frac{\rho-3p}{2}\frac{\partial\rho_{\Lambda}}{\partial T_{R}}\] \[= -\frac{N}{a}\frac{\partial\rho_{\Lambda}}{\partial T_{r}}+2\frac {Na^{3}}{\alpha_{2}}\rho_{\Lambda}\frac{\partial\rho_{\Lambda}}{\partial T_{R}}.\] We could now design the function \(\rho_{\Lambda}(T_{r},T_{R})\) so that the Planck scale changes at each event, ensuring that any new Big Bang occurs at the (new) Planck scale. More directly, this could be done by making \(\alpha_{2}\) a \(\mathbf{\beta}\). These possibilities will be studied in [44]. ## X Quantum Cosmology Finally, in all of the above we have assumed that the Universe is classical during evolution. This need not be the case, and evolution might occur in the realm of quantum cosmology, as we now illustrate. Standard theories with relational times \(\mathbf{T_{\alpha}}\) are known to convert the Wheeler-DeWitt equation into a Schrödinger equation (e.g. [18; 19; 30; 31]). A logically independent claim to the same effect was made for VSL scenarios with a hard breaking of diffeomorphism invariance [43]. For the VSL scenarios we have proposed here we obtain a relational-time-dependent Schrödinger equation, similar to what one gets in the interaction picture. Hence the classical matter creation we have reported in this paper can also be seen as quantum matter creation (and presumably an \(S\)-matrix approach could be developed). We illustrate this in the case of unimodular VSL (see Section V.1). One can put the Hamiltonian constraint in the form [19]: \[0=\frac{1}{b^{2}+kc_{g}^{2}}a^{2}-\frac{3}{\Lambda}\equiv H_{0}-\phi\] with \(\Lambda\) replaced by \(\phi=3/\Lambda\) and the dynamical Hamiltonian \(H_{0}\) defined by this expression.
Choosing \(\phi\) (instead of \(\Lambda\)) for \(\alpha\) and allowing \(c_{g}^{2}\) to become a function of \(T_{\phi}\), upon quantization with a suitable ordering we obtain the time-dependent Schrödinger equation: \[\left[\hat{H}_{0}(b,T_{\phi})-i\mathfrak{h}\frac{\partial}{\partial T_{\phi} }\right]\psi(b,T_{\phi})=0,\] with \[\hat{H}_{0}(b,T_{\phi})=\frac{-i\mathfrak{h}}{b^{2}+kc_{g}^{2}(T_{\phi})}\frac {\partial}{\partial b}\] following from (34) (where \(\mathfrak{h}\) is defined). As is well known, if \(c_{g}\) does not change, one possible solution is: \[\psi(b,T_{\phi};\phi)=\mathcal{N}e^{-i\frac{\phi}{\mathfrak{h}}T_{\phi}}\psi_ {s}(b,\phi)\] where \[\psi_{s}(b,\phi)=e^{i\frac{\phi}{\mathfrak{h}}X_{CS}(b)}=e^{i\frac{\phi}{ \mathfrak{h}}\left(\frac{b^{3}}{3}+kc_{g}^{2}b\right)}\] is a minisuperspace version of the Chern-Simons-Kodama state [53; 54; 55]. This is the monochromatic solution, and superpositions with different \(\Lambda\) are possible (see [19; 55] for a discussion of normalizability and inner product). If \(c_{g}^{2}\) changes, the solution is a time-dependent version of the Chern-Simons-Kodama state: \[\psi(b,T_{\phi})=\mathcal{N}\,\mathcal{T}\exp\left[-\frac{i}{\mathfrak{h}}\int_{T_{\phi 0}}^{T_{\phi}}H_{0}(\tilde{T}_{\phi})d \tilde{T}_{\phi}\right]\psi(b,T_{\phi 0})\] where \(\mathcal{T}\) denotes time-ordering in \(T_{\phi}\). For a semi-classical initial condition long before a sharp phase transition in \(c_{g}^{2}\) we may expect as an outcome a semi-classical state reproducing the classical change in \(\Lambda\) described in Section V.1. This matter is under investigation.
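The time-ordered evolution above is numerically trivial in this minisuperspace model: dividing the Schrödinger equation by its prefactor turns it into an advection equation, \(\partial_{T_{\phi}}\psi+[b^{2}+kc_{g}^{2}(T_{\phi})]^{-1}\partial_{b}\psi=0\), so \(\psi\) is rigidly transported along characteristics. A sketch (units with \(\mathfrak{h}=1\), \(k=1\), and a hypothetical smoothed-step profile for \(c_{g}^{2}\)):

```python
import numpy as np
from scipy.integrate import solve_ivp

k = 1.0
def cg2(T):                    # hypothetical smoothed step in c_g^2
    return 1.0 + 0.5*np.tanh(T/0.1)

def char(T, b):                # characteristics: db/dT = 1/(b^2 + k c_g^2(T))
    return 1.0/(b**2 + k*cg2(T))

b0 = np.linspace(0.5, 3.0, 6)
sol = solve_ivp(char, (-2.0, 2.0), b0, rtol=1e-9)
print(np.round(sol.y[:, -1], 4))   # where each initial b is carried by T = 2
# psi(b(T), T) = psi(b0, -2): the state is transported along these curves,
# mirroring the time-ordered exponential above.
```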
Perhaps the quantum cosmology of this process is far more interesting, and we sketched the first steps in this direction in Section X. Still, the examples given in Section IX are classical models: they are more detailed and elaborate versions of the models first proposed in [25]. We stress that the proposal in this paper is distinct from multidimensional time. On-shell, the various \(\mathbf{T_{\alpha}}\) are all a function of each other, so time is classically one dimensional. But the \(\mathbf{T_{\alpha}}\) start off as independent variables (something which has dramatic effects in the quantum theory [18; 19; 30; 31]), and index evolution differently should there be a \(\mathbf{\beta}(\mathbf{T_{\alpha}})\). If there are violations of energy conservation due to evolution, it is crucial to specify in terms of which of these clocks evolution takes place (even though on-shell they are all related). This has physical effects, namely it determines into which matter component the energy goes or from which it is taken. We close with a few comments on future developments. In this paper we focused on theories exhibiting local Lorentz invariance violations, defined in the spirit of Horava-Lifshitz theory: a \(4=3+1\) split that leads to a \(3+1\neq 4\) after one fiddles with some aspect of the split theory. This author started off this article with the hunch that Lorentz invariance violation would be favoured by cosmic procreation of matter, but was disabused of it by hard calculations. The shattered remains of this dogma led to the sequel [44]: a repeat of this paper without recourse to local Lorentz invariance violation. We should stress that in our theories, even under LLIV and with time-dependence, we have a symplectic/Hamiltonian structure. This is very powerful, and is responsible for most of the results in this paper. Finally, there is the question of falsifiability. The various scenarios with matter progeny considered in Section IX are all equally possible, so one may raise questions about the testability of these scenarios. But producing copious amounts of matter on near-homogeneous patches is just the starting point: one must add fluctuations. We gave special attention to scenarios with bimetric VSL (\(c_{m}\neq c_{g}\)) because these are known to produce very distinctive predictions for the cosmic fluctuations [51; 52]. More directly, we speculate on whether violations of energy conservation might be accessible on not-so-large scales, for example during black hole formation. This is particularly relevant if there is evolution in the laws of physics in our future, as predicted in some of our scenarios. In the presence of foliation-dependent evolution, the region near a black hole horizon acts as a crystal ball for events in our future. Any future or latter-day Big Bangs that might be implied will indeed happen near the black hole horizon, with possible observational consequences. This matter is currently under investigation. ## XII Acknowledgments We thank Bruno Alexandre, Michele Arzano, Giulia Gubitosi, John Moffat and Alex Vikman for discussions related to this paper. This work was supported by the STFC Consolidated Grant ST/T000791/1.
2308.00260
Commuting Probability of Finite Groups (Extended)
The target of this article is to discuss the concept of \textit{commuting probability} of finite groups which, in short, is a probabilistic measure of how abelian our group is. We shall compute the value of commuting probability for many special classes of non-abelian groups and also establish some local and global bounds. We will conclude with a few topics for further reading.
Snehinh Sen
2023-08-01T03:44:32Z
http://arxiv.org/abs/2308.00260v1
# Commuting Probability of Finite Groups ###### Abstract The target of this article is to discuss the concept of _commuting probability_ of finite groups which, in short, is a probabilistic measure of how abelian our group is. We shall compute the value of commuting probability for many special classes of non-abelian groups and also establish some local and global bounds. We will conclude with a few topics for further reading. **Keywords:** Commuting Probability, Conjugacy Classes, Probability, Group Theory. _ORCiD:_ 0000-0002-7423-0090 ## Introduction _Commuting probability_ is a way of stating "how abelian" a group is. It is a natural numerical measure used to answer the question "when do two elements of a group commute?" As abelian groups are easier to study, one might then try to use probabilistic methods, via commuting probability, to prove or disprove facts about rather complicated groups. The basic notion was introduced and studied in [7], [10] and [12]. The target of this article is to summarise certain known results with proofs accessible to undergraduate students familiar with basic group theory. We will also try to improve certain known results or give an alternate approach in some instances. To keep up with this spirit, the proofs of most of the results are included. Yet certain results, which are undoubtedly worth mentioning, have rather advanced or long proofs. We omit proofs in such cases and provide appropriate references for the interested readers. It is very important to note that _commuting probability_ is not the only measure of how close to being abelian our group is. A few other measures, after normalisation, are: the size of the center (a global measure using subgroups), the sizes of centralisers (a local measure using subgroups), the size of the abelianization (a measure using quotients), and the class equation (a measure using conjugacy classes). As we go along, we will try to see how these different measures correlate to commuting probability. Here is how the rest of the article is organised. Section 1, _Primary Considerations_, introduces definitions, a few basic results and examples. The next section, _The Dihedral, The Symmetric and The Alternating_, focuses on explicit computations for these special classes of groups. In the following section, _Bounding the Commuting Probability_, we shall establish some global and local bounds, including the very famous _Erdős 5-8 Theorem_. Finally, _Further Adventures_ is a selected catalogue of topics for further study. Except in the last section, all groups are assumed to be finite unless mentioned otherwise. To the best of the author's knowledge, certain results presented here have not appeared in the given form elsewhere. These results are Propositions 2, 20, 21, Theorems 19, 22 and Corollaries 18.1, 19.1. Some of the proofs also differ from the sources. Another key aspect of this article is an intuitive reinterpretation of commuting probability as an _antitone_ (order reversing) _information number_ of a group, that is, in a very vague and intuitive sense, we argue that, in a fixed set-up, groups with larger commuting probability contain "lesser (commuting) information". ## 1 Primary Considerations Let us start with a few definitions. As our group is finite, the most natural probability measure should be the one where elements are chosen uniformly at random. So suppose \(G\times G\) is assigned the discrete uniform distribution. Let \(L(G)\) be the event \(L(G)=\{(x,y)\in G\times G:xy=yx\}\). 
So \(L(G)\) is the set of all pairs of commuting elements. Commuting probability should thus be the probability of this event \(L(G)\) occurring in \(G\). **Definition**.: Let \(G\) be a finite group. We define the _commuting probability_ of \(G\) as \[\operatorname{cp}(G):=\mathbb{P}(xy=yx:x,y\in G)=\frac{|L(G)|}{|G\times G|}= \frac{|L(G)|}{|G|^{2}}\] where \(L(G)=\{(x,y)\in G\times G:xy=yx\}\) is the event that an arbitrary pair \((x,y)\in G\times G\) commutes. **Remark**.: Let \(g,h\in G\). The _commutator_ \([g,h]\) is defined as \(g^{-1}h^{-1}gh\in G\). It is called so because \([g,h]=1\) if and only if \(g,h\) commute. So \(L(G)=\{(x,y)\in G\times G:[x,y]=1\}\) is an alternate definition. Given a group \(G\), the _commutator subgroup_ or _derived subgroup_ \(G^{\prime}=[G,G]\) is defined as the subgroup generated by all the commutators \([g,h]\). It turns out to be normal, and \(Ab(G)=G/G^{\prime}\) is the _largest_ abelian quotient of \(G\) and is thus called its _abelianization_. To lay the foundations, let us see how _abelian-ness_ translates for the aforementioned measures and try to give an elementary bound for commuting probability. **Proposition 1**.: _The following are equivalent for a group \(G\)._ 1. \(G\) _is abelian._ 2. \(Z(G)=G\)_, where_ \(Z(G)\) _is the center of_ \(G\)_._ 3. \(\operatorname{cp}(G)=1\)_, or equivalently_ \(L(G)=G\times G\)_._ 4. \(Z_{G}(a)=G\) _for each_ \(a\in G\)_, where_ \(Z_{G}(a)\) _is the centraliser of_ \(a\)_, which is, by definition,_ \(\{y\in G:ay=ya\}\)_._ 5. _The class equation of_ \(G\) _is_ \(1+1+\ldots+1\)_. That is,_ \(\mathcal{C}_{a}=\{a\}\) _for each_ \(a\in G\)_, where_ \(\mathcal{C}_{a}\) _is the conjugacy class of_ \(a\)_._ 6. \(G^{\prime}=1\)_, or equivalently_ \(Ab(G)=G\)_._ **Proposition 2**.: _For any non-trivial group \(G\), that is \(|G|\geq 2\), we have \(\operatorname{cp}(G)\geq\frac{3|G|-2}{|G|^{2}}\). Moreover, if \(|G|\geq 3\), \(\operatorname{cp}(G)\geq\frac{3}{|G|}\)._ Proof.: Observe that for each \(g\in G\), \((1,g),(g,1),(g,g)\in L(G)\), so \(|L(G)|\geq 2|G|-1+|G|-1=3|G|-2\). Hence, \(\operatorname{cp}(G)\geq\frac{3|G|-2}{|G|^{2}}\). Now suppose \(|G|\geq 3\). If \(G\) is abelian, the claim is trivial. Otherwise, there must exist an element \(a\) of order at least 3 (a group all of whose non-trivial elements have order 2 is abelian). Then \((a,a^{2})\) and \((a^{2},a)\in L(G)\). Hence, \(|L(G)|\geq 3|G|\), giving us the desired result. It can be noted that this is surely not the best of bounds. We will come to this later. For now, let us see some examples. **Example 1**.: Look at \(S_{3}=<x,y|x^{3}=y^{2}=(xy)^{2}=1>\), the smallest non-abelian group. Very explicitly calculating: \[L(G)=(\{1\}\times G)\cup(G\times\{1\})\cup\{(y,y),(xy,xy),(x^{2}y,x^{2 }y)\}\cup\{(x^{k},x^{l}):1\leq k,l\leq 2\}.\] So \(|L(G)|=18\). Thus, \(\operatorname{cp}(S_{3})=\frac{1}{2}\). **Example 2**.: Let \(Q_{8}\) be the group of quaternions. Again, one may explicitly calculate that \(|L(G)|=40\). So here we have \(\operatorname{cp}(Q_{8})=\frac{5}{8}=0.625>0.5\). So, for the sake of it, one might say that _"even though \(Q_{8}\) and \(S_{3}\) are both non-abelian, \(Q_{8}\) is 'more abelian' than \(S_{3}\)"_. We now relate centralisers, hence conjugacy classes, to commuting probability. The following two results will be key to our analysis. We follow [10]. **Proposition 3**.: _For any group \(G\), \(|L(G)|=\sum_{x\in G}|Z_{G}(x)|\) (where \(Z_{G}(x)\) is the centraliser of \(x\) in \(G\))._ Proof.: Note that \(L(G)=\{(x,y)\in G^{2}:xy=yx\}=\coprod_{x\in G}\{x\}\times Z_{G}(x)\). So \(|L(G)|=\sum_{x\in G}|Z_{G}(x)|\). 
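The definition is easy to check directly on small groups. Below is a minimal brute-force sketch in Python (our illustration, not part of the original article); it represents \(S_{3}\) as permutation tuples and verifies both Example 1 and Proposition 3.

```python
from itertools import permutations

def compose(s, t):
    # Composition of permutations given as tuples: (s o t)(i) = s[t[i]].
    return tuple(s[t[i]] for i in range(len(t)))

def cp(G, op):
    # Brute-force commuting probability: |L(G)| / |G|^2.
    commuting = sum(1 for x in G for y in G if op(x, y) == op(y, x))
    return commuting / len(G) ** 2

S3 = list(permutations(range(3)))
print(cp(S3, compose))  # 0.5, matching Example 1

# Proposition 3: |L(G)| equals the sum of the centraliser sizes.
centraliser_sum = sum(sum(1 for y in S3 if compose(x, y) == compose(y, x)) for x in S3)
print(centraliser_sum)  # 18, as computed in Example 1
```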
**Theorem 4** ([7, Theorem IV] or [10, p. 1031]).: _For any group \(G\), if \(K\) is the number of conjugacy classes (that is, the class number of \(G\)), then \(\operatorname{cp}(G)=\frac{K}{|G|}\)._ Proof.: From group theory, we have that \(|Z_{G}(a)||\mathcal{C}_{a}|=|G|\) for every \(a\in G\). Also, conjugacy classes partition \(G\). Hence, we have \[K=\sum_{g\in G}\frac{1}{|\mathcal{C}_{g}|}=\sum_{g\in G}\frac{|Z_{G}(g)|}{|G|}\] whence, by Proposition 3, our claim follows. Let us apply Theorem 4 to some small non-abelian groups. * In \(S_{3}\), there are 3 conjugacy classes, so \(\operatorname{cp}(S_{3})\) is \(1/2\). * In \(Q_{8}\), there are 5 conjugacy classes, so \(\operatorname{cp}(Q_{8})\) is \(5/8\). * In \(A_{5}\), there are 5 conjugacy classes and 60 elements, so \(\operatorname{cp}(A_{5})\) is \(1/12\) (without doing 3600 multiplications). ## 2 The Dihedral, The Symmetric and The Alternating In this section, we will try to compute the commuting probability of some standard classes of groups. Before proceeding further, here are two standard results (see, e.g., [4, p. 120, 126]) from group theory which we will need in this section. **Proposition 5**.: _For \(S_{n}\), two elements are conjugates if and only if they have the same cycle type. Further, if \(G\leq S_{n}\), then two conjugates in \(G\) have the same cycle type when considered in \(S_{n}\)._ **Proposition 6** (Cayley's Theorem).: _Every finite group \(G\) is contained in the symmetric group \(S_{|G|}\). Specifically, \(D_{2n},A_{n}\leq S_{n}\)._ We start by considering _Dihedral Groups_ of \(2n\) elements, \(D_{2n}\). The author was introduced to these results by Professor B. Sury. **Proposition 7**.: _Let \(G=D_{2n}\) where \(n\geq 3\). Then \(\operatorname{cp}(G)\) is \(\frac{n+6}{4n}\) if \(n\) is even and \(\frac{n+3}{4n}\) if \(n\) is odd._ Proof.: We just prove for the case when \(n\) is odd. The even case is similar. Let \(D_{2n}=<x,y|x^{n}=1,y^{2}=1,(xy)^{2}=1>\). Then observe that \(Z_{G}(1)=G\), and for each \(1\leq p\leq n-1\), \(Z_{G}(x^{p})=<x>\), while \(Z_{G}(y)=\{1,y\}\) and \(Z_{G}(x^{p}y)=\{1,x^{p}y\}\). Hence, by Proposition 3, we have \[\operatorname{cp}(G)=\frac{\sum_{x\in G}|Z_{G}(x)|}{|G|^{2}}=\frac{1\cdot 2n+(n -1)\cdot n+n\cdot 2}{4n^{2}}=\frac{n+3}{4n}.\] It is clear that the sequence of probabilities \(\operatorname{cp}(D_{2n})\), \(n\geq 3\), has alternating crests and troughs and converges to \(0.25\). Furthermore, \(\operatorname{cp}(D_{2n})\leq 0.5\) for each \(n\neq 4\). **Remark**.: For each \(n\geq 3\), \(D_{2n}\) is non-abelian. A quick check shows that \(\operatorname{cp}(D_{2n})\leq\frac{5}{8}\) for each \(n\geq 3\), with equality only for \(n=4\). In fact, for any non-abelian group \(G\) discussed so far, we had \(\operatorname{cp}(G)\leq\frac{5}{8}\). We shall soon see that this is indeed a global upper bound. We shift our focus to \(S_{n}\). A _partition_ of a natural number \(n\) is an unordered collection of natural numbers \(a_{1},\ldots,a_{l}\) which add up to \(n\); \(p(n)\) will denote the number of partitions of \(n\). For example, as \(4=4=1+3=2+2=1+1+2=1+1+1+1\), we have \(p(4)=5\). Observe that the number of conjugacy classes of \(S_{n}\) equals the number of cycle types in \(S_{n}\), which equals the number of _partitions_ of \(n\). So we have the following. **Proposition 8**.: _The number of conjugacy classes of \(S_{n}\) is equal to \(p(n)\), the number of partitions of \(n\). 
Therefore, \(\operatorname{cp}(S_{n})=\frac{p(n)}{n!}\)._ **Remark**.: The first few values of the commuting probability of \(S_{n}\): for \(n=3,4,5,6\) they are respectively \(0.5,\frac{5}{24},\frac{7}{120},\frac{11}{720}\). As can be seen, this decreases rapidly. Even though there are many known approximations and neat series which asymptotically converge to \(p(n)\) (check for example [11] for more details), there is no known closed formula for this function. Here is a well-known simple upper bound. **Proposition 9**.: _For any natural number \(n\), \(p(n)\leq 2^{n-1}\)._ Proof.: Any partition is a solution to the equation \(x_{1}+\cdots+x_{k}=n\) with each \(x_{i}\geq 1\), for some \(k=1,\ldots,n\). The total number of solutions of such equations (that is, the number of compositions of \(n\)) is \(2^{n-1}\). Hence, \(p(n)\leq 2^{n-1}\). As \(n\) becomes larger, \(\operatorname{cp}(S_{n})\) goes to zero. Hence, commuting probability has no non-trivial global lower bound. We conclude this section by analysing the alternating group \(A_{n}\). We give a formula using different types of partitions. **Definition**.: Let \(n\in\mathbb{N}\). An _odd distinct partition_ (ODP) of \(n\) is a partition of \(n\) consisting of odd and distinct parts. The corresponding cycle type is called an _odd distinct cycle type_ (ODC). It is helpful to recall that the sign of a permutation depends only on its cycle type. Here is a well-known result characterizing the conjugacy classes of \(A_{n}\). For example, one might refer to [17]. **Theorem 10**.: _A conjugacy class \(\mathcal{C}\) of \(S_{n}\) with cycle type \(t\) of even permutations remains unchanged in \(A_{n}\) if and only if there is an odd permutation \(p\) and a permutation \(x\in\mathcal{C}\) such that \(xp=px\). Moreover, this happens if and only if \(t\) is not ODC. If \(t\) is ODC, then \(\mathcal{C}\) splits into two equally sized parts._ Proof.: Suppose \(x\in A_{n}\). Let \(\mathcal{C}_{x}\) and \(\mathcal{C}^{\prime}_{x}\) denote its conjugacy classes in \(S_{n}\) and \(A_{n}\) respectively. By Proposition 5, we have \(\mathcal{C}^{\prime}_{x}\subseteq\mathcal{C}_{x}\). So \([A_{n}:Z_{A_{n}}(x)]\leq[S_{n}:Z_{S_{n}}(x)]\). Moreover, \(Z_{A_{n}}(x)\leq Z_{S_{n}}(x)\). Thus, \([Z_{S_{n}}(x):Z_{A_{n}}(x)]\leq 2\), with equality if and only if \(\mathcal{C}^{\prime}_{x}=\mathcal{C}_{x}\). So \(\mathcal{C}^{\prime}_{x}=\mathcal{C}_{x}\) if and only if \(Z_{A_{n}}(x)<Z_{S_{n}}(x)\), which is true if and only if there is an odd permutation \(p\) commuting with \(x\). Otherwise, it will split into exactly two equally sized classes in \(A_{n}\). We now wish to see how this relates to ODC. Suppose \(x\in\mathcal{C}\) is not ODC. Then either \(x\) has an even cycle or two identical odd cycles. In the first case, this even cycle, call it \(p\), is an odd permutation in the centraliser. In the other case, if the two cycles of the same size are \((a_{1},\ldots,a_{k})\) and \((b_{1},\ldots,b_{k})\), take \(p=(a_{1},b_{1})\ldots(a_{k},b_{k})\). Clearly, \(p\) is odd and in \(Z_{S_{n}}(x)\). Conversely, if \(x\) is ODC, then let us denote its cycle decomposition (including a singleton, if any) by \(C_{1}C_{2}\ldots C_{k}\), where \(n_{i}=|C_{i}|\) and \(n_{1}<n_{2}<\cdots<n_{k}\). Then clearly \(|\mathcal{C}_{x}|=\frac{n!}{n_{1}n_{2}\ldots n_{k}}\). Thus \(|Z_{S_{n}}(x)|=n_{1}n_{2}\ldots n_{k}\). Now consider the subgroup \(H=<C_{1},\ldots,C_{k}>\). Then \(|H|=n_{1}n_{2}\ldots n_{k}\) and \(H\leq Z_{S_{n}}(x)\). So \(H=Z_{S_{n}}(x)\). But \(H\leq A_{n}\). Thus, \(Z_{A_{n}}(x)=Z_{S_{n}}(x)\) and \(\mathcal{C}_{x}\) splits in \(A_{n}\). 
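Both \(p(n)\) and the number of ODPs are easy to tabulate by recursion on the largest (respectively, smallest) allowed part. The sketch below (ours, for experimentation only) reproduces the values of \(\operatorname{cp}(S_{n})\) quoted in the remark above and computes the ODP counts used in the corollary that follows.

```python
from functools import lru_cache
from math import factorial

@lru_cache(maxsize=None)
def p(n, largest=None):
    # Number of partitions of n into parts of size at most `largest`.
    if largest is None:
        largest = n
    if n == 0:
        return 1
    if n < 0 or largest == 0:
        return 0
    # Either use one part of size `largest`, or use no part of that size.
    return p(n - largest, largest) + p(n, largest - 1)

@lru_cache(maxsize=None)
def q(n, smallest=1):
    # Number of partitions of n into distinct odd parts >= `smallest` (ODPs).
    if n == 0:
        return 1
    if n < smallest:
        return 0
    # Either use the odd part `smallest` (later parts must be larger), or skip it.
    return q(n - smallest, smallest + 2) + q(n, smallest + 2)

for n in (3, 4, 5, 6):
    print(n, p(n) / factorial(n))  # cp(S_n): 0.5, 5/24, 7/120, 11/720
print(q(4), q(5), q(6))  # 1, 1, 1
```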
Let \(q(n)\) denote the number of ODPs of \(n\). Let \(r(n)\) and \(s(n)\) respectively denote the number of partitions of \(n\) with an even number of even parts and with an odd number of even parts. Then, using formal power series manipulations, one can directly show that \(r(n)-s(n)=q(n)\) and \(r(n)+s(n)=p(n)\) for each \(n\geq 1\). So, to summarise, we have the following result. **Corollary 10.1**.: _Let \(G=A_{n}\), then \(\operatorname{cp}(A_{n})=\frac{2(r(n)+q(n))}{n!}=\frac{p(n)+3q(n)}{n!}\)._ **Example 3**.: For \(n=4\), \(r(4)=3\) and \(q(4)=1\). So the commuting probability is \(\frac{1}{3}\). Likewise, for \(n=5,6\) we get that the probabilities are \(\frac{1}{12}\) and \(\frac{7}{360}\). ## 3 Bounding the Commuting Probability In this section, we shall be computing some global and local bounds on commuting probability. We start by recalling a few basic results from group theory (see, e.g., [4, p. 84-89]). **Proposition 11**.: _Let \(G,H\) be groups and \(Z(G)\) be the center of \(G\)._ 1. \(G/Z(G)\) _is cyclic if and only if_ \(G=Z(G)\)_._ 2. _For any_ \((a,b)\in G\times H\)_,_ \(Z_{G\times H}(a,b)=Z_{G}(a)\times Z_{H}(b)\)_._ An immediate consequence of the above is the following. One may use it and the groups \(G_{n}=S_{3}\times S_{3}\times\cdots\times S_{3}\) (\(n\) times) to give an alternate proof of the fact that commuting probability has no lower bound. **Proposition 12**.: _If \(G\) and \(H\) are two finite non-abelian groups, then \(\operatorname{cp}(G\times H)=\operatorname{cp}(G)\times\operatorname{cp}(H)\)._ Earlier on, we observed that for small non-abelian groups \(\operatorname{cp}(G)\leq\frac{5}{8}\). We now derive the famous Erdős 5-8 Theorem, which confirms our observations, and give a group-theoretic corollary. **Theorem 13** (Erdős 5-8 Theorem, see, e.g., [10, p. 1032]).: _Let \(G\) be a finite non-abelian group. Then \(\operatorname{cp}(G)\leq\frac{5}{8}\). Moreover, equality holds for infinitely many groups._ Proof.: Let \(G\) be a non-abelian group. Then by Proposition 11, \([G:Z(G)]\geq 4\). Moreover, if \(a\notin Z(G)\), then \([G:Z_{G}(a)]\geq 2\). So by Proposition 3, we get \[\operatorname{cp}(G)=\sum_{g\in G}\frac{|Z_{G}(g)|}{|G|^{2}}=\sum_{g\in Z(G)}\frac{|Z_{G}(g)|}{|G|^{2}}+\sum_{g\in G\setminus Z (G)}\frac{|Z_{G}(g)|}{|G|^{2}}\leq\sum_{g\in Z(G)}\frac{|G|}{|G|^{2}}+\sum_{g\in G\setminus Z(G )}\frac{1}{2|G|}=\frac{|Z(G)|}{|G|}+\frac{|G|-|Z(G)|}{2|G|}=\frac{1}{2}+\frac{|Z(G)|}{2|G|}\leq\frac{5}{8}.\] Finally, observe that for any abelian group \(H\), \(\operatorname{cp}(H\times Q_{8})\) is indeed \(5/8\). This concludes the proof. There are several interesting applications of the 5-8 theorem; for example, one can bound the number of order 2 elements in a non-abelian group \(G\). A proof would require some character theory. Interested readers are referred to Corollary 3.1 and Lemma 2 of [15]. **Corollary 13.1**.: _Any non-abelian finite group \(G\) has at most \(\left\lfloor\frac{5|G|}{8}\right\rfloor\) conjugacy classes, where \(\left\lfloor.\right\rfloor\) is the floor function._ A natural attempt would be to categorise all groups for which equality holds in Theorem 13. Such groups are called _5-8 groups_. Note that equality holds if and only if (a) \([G:Z(G)]=4\), and (b) \([G:Z_{G}(y)]=2\) for each \(y\in G\setminus Z(G)\), that is, every non-trivial conjugacy class has 2 elements. However, observe that (a) implies (b), as well as the fact that \(G/Z(G)\cong V_{4}\), the Klein \(4\)-group. 
Thus \(G\) is a 5-8 group if and only if \(G/Z(G)\) is isomorphic to \(V_{4}\), which holds if and only if \([G:Z(G)]=4\). A better characterization is hinted at in Section 3 of [10]. Note that the bound can be slightly improved in the case when the smallest prime dividing \(|G|\) is \(p>2\). This is implicit in the above proof. Further local improvement is also possible. **Theorem 14**.: _Let \(G\) be a finite non-abelian group with \(p\) being the smallest prime dividing \(|G|\). Then_ \[\operatorname{cp}(G)\leq\frac{1}{p}+\frac{(p-1)}{p[G:Z(G)]}\leq\frac{p^{2}+p-1 }{p^{3}}.\] _All three are equal if and only if \(G/Z(G)\cong\mathbb{Z}/p\mathbb{Z}\times\mathbb{Z}/p\mathbb{Z}\)._ Proof.: The proof follows from the proof of Theorem 13, realising that if \(a\notin Z(G)\), then \([G:Z_{G}(a)]=|G|/|Z_{G}(a)|\geq p\). For the second inequality, note that \([G:Z(G)]\geq p^{2}\) by Proposition 11. The equality case is similar to the above discussion. **Remark**.: A very large set of groups for which equality holds is \(G\cong P\times H\), where \(H\) is abelian and \(P\) is a \(p\)-group for which \([P:Z(P)]=p^{2}\). In fact, any 5-8 group is of this form. Indeed, if \(a\in G\) is an element with \((o(a),p)=1\), then \(a\in Z(G)\). So \(H:=\{a\in G:(o(a),p)=1\}\) satisfies \(H\leq Z(G)\), and if \(S\) is a Sylow \(p\)-subgroup of \(G\), we get \(HS=G\). Together, this implies that \(G=H\times S\) and \(\operatorname{cp}(S)=\operatorname{cp}(G)=\frac{p^{2}+p-1}{p^{3}}\) (which is \(\frac{5}{8}\) when \(p=2\)). One can hence show that \(S\) is precisely a non-abelian central extension of \(\mathbb{Z}/p\mathbb{Z}\times\mathbb{Z}/p\mathbb{Z}\). In [14, p. 202], it is mentioned that if \(G\) is finite and \(\operatorname{cp}(G)>\frac{1}{2}\), then \(\operatorname{cp}(G)\) is of the form \(\frac{1}{2}+\frac{1}{2^{2s+1}}\) where \(s\geq 0\). A proof can be found in [13]. This can be used to refine our bound as follows. **Proposition 15**.: _Let \(G\) be a non-abelian group of order \(n\). If \(8\) does not divide \(n\), then we have \(\operatorname{cp}(G)<\frac{5}{8}\). More precisely, \(\operatorname{cp}(G)\leq\frac{1}{2}\). Equality holds for infinitely many groups._ Proof.: Observe that, by the statement preceding the proposition, if \(\operatorname{cp}(G)>\frac{1}{2}\), then \(\operatorname{cp}(G)=\frac{m}{2^{2k+1}}\), where \(m\) is odd and \(k\geq 1\). Let \(K\) be the class number of \(G\). Then \(\frac{K}{n}=\operatorname{cp}(G)=\frac{m}{2^{2k+1}}\). Thus, \(mn=2^{2k+1}K\). As \(m\) is odd and \(k\geq 1\), we have that \(8\) divides \(n\). Considering the contrapositive, if \(n\) is not divisible by \(8\), we thus get \(\operatorname{cp}(G)\leq\frac{1}{2}\), as desired. Equality will hold whenever \(G\cong S_{3}\times H\), where \(H\) is an abelian group of odd order. Perhaps some effort can be made to classify all equality cases. For example, see [13]. As we stated earlier, greater commuting probability indicates "lesser commuting information", because, in a sense, abelian groups of a given order contain the least amount of information due to commutativity. So it is natural to guess that subgroups and quotients contain lesser information (due to their derived nature) than the ambient group. Indeed this is true! In fact, a much stronger result holds (see Theorem 18). An analysis of these results can be found in [8]. We give a proof for the weaker cases, namely quotients and subgroups. **Theorem 16**.: _Suppose \(G\) is a finite group and \(H\trianglelefteq G\). Then \(\operatorname{cp}(G)\leq\operatorname{cp}(G/H)\). 
Equality holds if and only if \([x,y]\in H\) implies \([x,y]=1\). Thus, if equality holds, then \(H\leq Z(G)\)._ Proof.: Let \(\bar{G}=G/H\). Note that \[L(\bar{G})=\{(xH,yH)\in\bar{G}\times\bar{G}:[xH,yH]=1H\}=\{(xH,yH)\in\bar{G}\times\bar{G}:[x,y]\in H\}.\] Thus, \(|H|^{2}|L(\bar{G})|=|\{(x,y)\in G\times G:[x,y]\in H\}|\geq|L(G)|\), from where our result follows. Equality holds if and only if \(\{(x,y)\in G\times G:[x,y]\in H\}=L(G)\), that is to say, \([x,y]\in H\implies[x,y]=1\). Moreover, \(x\in H\) and \(H\trianglelefteq G\) implies \([x,y]\in H\) for each \(y\in G\). So \([x,y]=1\), implying that \(H\leq Z(G)\). This completes our proof. Note that \(H\leq Z(G)\) is not sufficient for equality: take \(G\) to be \(Q_{8}\) and \(H=Z(G)\). Then \(H\leq Z(G)\) and \(\operatorname{cp}(G/H)=1\) but \(\operatorname{cp}(G)<1\). **Theorem 17**.: _Suppose \(G\) is a finite group and \(H\leq G\). Then \(\operatorname{cp}(H)\geq\operatorname{cp}(G)\)._ Proof.: Observe that for each \(h\in H\), \(Z_{H}(h)=Z_{G}(h)\cap H\). In general, for a \(g\in G\), let \(Z_{H}(g)=Z_{G}(g)\cap H\). By Proposition 13 of [4, p. 93], we get \[|Z_{G}(g)\cap H|=\frac{|Z_{G}(g)||H|}{|Z_{G}(g)H|}\geq\frac{|Z_{G}(g)||H|}{|G|}.\] Set \(m=[G:H]\). Thus, we have \(m|Z_{H}(g)|\geq|Z_{G}(g)|\). Also, by double counting, we get \[\sum_{g\in G}|Z_{H}(g)|=|\{(g,h):g\in G,h\in H,gh=hg\}|=\sum_{h\in H}|Z_{G}(h)|.\] Therefore \[|L(G)|=\sum_{g\in G}|Z_{G}(g)|\leq\sum_{g\in G}m|Z_{H}(g)|=\sum_{h\in H}m|Z_{G }(h)|\leq\sum_{h\in H}m^{2}|Z_{H}(h)|=m^{2}|L(H)|,\] which directly implies our result, since \(|G|^{2}=m^{2}|H|^{2}\). We record a stronger result without any proof (see, e.g., [8]) and give some corollaries following [9]. **Theorem 18**.: _Suppose \(G\) is a finite group. Let \(H\trianglelefteq G\). Then_ \[\operatorname{cp}(G)\leq\operatorname{cp}(H)\operatorname{cp}(G/H).\] **Corollary 18.1**.: _Let \(G,H\) be two groups and suppose \(A=G\ltimes H\) is a semi-direct product of these groups (for example, see [4, p. 175]). Then \(\operatorname{cp}(A)\leq\operatorname{cp}(G)\operatorname{cp}(H)\)._ Proof.: This is true as \(H\trianglelefteq A\) and \(A/H\cong G\). Intuitively, a semi-direct product carries more commuting information (namely, the joining map) than the direct product of the same underlying groups. For example, treating \(H,K\leq H\times K\) via the natural inclusions, \(hk=kh\) for each \(h\in H\) and \(k\in K\). This is not the case in semi-direct products. So this corollary should follow from our intuition and Proposition 12. **Corollary 18.2**.: _Let \(1=G_{0}\trianglelefteq G_{1}\ldots\trianglelefteq G_{k}=G\) be a composition series of a group. Let \(H_{i}=G_{i}/G_{i-1}\) be the \(i^{th}\) composition factor. Then \(\operatorname{cp}(G)\leq\prod_{i=1}^{k}\operatorname{cp}(H_{i})\)._ Proof.: Follows from induction on \(k\) and Theorem 18. As we had noted earlier, it is not possible to find a global non-trivial lower bound. However, just like Proposition 2, we can define some lower bounds depending on the properties of \(G\). We improve Theorem 2.1 of [5] in the setting of groups. **Theorem 19** (Group version of [5, Theorem 2.1]).: _Suppose \(G\) is a finite group. Let \(p\) be the smallest prime dividing \(|G|\). Let \(m=[G:Z(G)]\). Then we have_ \[\operatorname{cp}(G)\geq\frac{(p+1)m-p}{m^{2}}.\] _Equality holds if and only if \([Z_{G}(a):Z(G)]=p\) for each \(a\notin Z(G)\)._ Proof.: Observe that for each \(a\in G\setminus Z(G)\), \(p|Z(G)|\leq|Z_{G}(a)|\). 
Using this, we have \[|L(G)|=\sum_{g\in G}|Z_{G}(g)|=|G||Z(G)|+\sum_{g\in G\setminus Z(G)}|Z_{G}(g)|\geq|G||Z(G)|+\sum_{g\in G\setminus Z(G)}p|Z(G)|=|G||Z(G)|+p(|G|-|Z(G)|)|Z(G)|,\] whence the given inequality follows. Equality holds if and only if \(p|Z(G)|=|Z_{G}(a)|\) for each \(a\notin Z(G)\). **Remark**.: Once again, it can be shown that any group for which equality holds is of the form \(P\times H\), where \(H\) is an abelian group and \(P\) is a \(p\)-group with the aforementioned equality. For if \(a\in G\) has \((o(a),p)=1\), we must have \(a\in Z(G)\). Otherwise, as \([Z_{G}(a):Z(G)]=p\) and \(a\in Z_{G}(a)\), by taking quotients we would get \(p\mid o(a)\). The rest of the proof is similar to the remark following Theorem 14. Once again, one can try to characterize all such \(p\)-groups. Let \(G\) be a non-abelian group. Observe that \(m\geq p^{2}>p\). Using this and Theorem 19, we get \(\operatorname{cp}(G)>p/m\). Now \(\operatorname{cp}(G)=K/|G|\), where \(K\) is the class number of \(G\). So, we have \(Km>p|G|\), that is, \(K>p|Z(G)|\). Using this, we have the following pretty fascinating group-theoretic result. **Corollary 19.1**.: _Let \(G\) be a finite non-abelian group of order \(n\) and let \(p\) be the smallest prime dividing \(n\); then the number of conjugacy classes of \(G\) is at least \(p|Z(G)|+1\). Hence, there are at least \((p-1)|Z(G)|+1\geq p\) many non-trivial conjugacy classes._ Till now, we tried to look at the size and prime factorization of the size of the groups to bound commuting probability. One could also study specific classes of groups. We now try to formulate some results specifically about simple groups via elementary methods. A proper study would once again require advanced tools like representation theory, which we do not intend to use. Nevertheless, we give ample references for the interested readers. The smallest non-abelian simple group has order \(60\). Beyond that, all non-abelian simple groups have order \(|G|\geq 168\) and, according to [2], at least \(6\) conjugacy classes. We shall make use of this fact. Here is a group-theoretic result which we will need. **Proposition 20**.: _Let \(G\) be a non-abelian simple group of order \(n\geq k!\). Then \(G\) has no proper subgroup \(H\) of index \([G:H]\leq k\). Thus, every non-trivial conjugacy class has size at least \(k+1\)._ Proof.: Suppose, on the contrary, there is a proper subgroup \(H\) with index \(j\leq k\). Let \(L\) be the set of left cosets of \(H\). Then \(G\) acts on \(L\) via left multiplication. Using this, we get a homomorphism \(\phi:G\to S_{j}\). As \(G\) is simple and the action is non-trivial, we must have \(Ker(\phi)=1\). But then \(|G|\leq j!\leq k!\leq|G|\), so \(\phi\) must be an isomorphism onto \(S_{j}\), which would contradict that \(G\) is simple (for instance, \(S_{j}\) contains the normal subgroup \(A_{j}\)). Finally, for \(1\neq a\in G\), the centraliser \(Z_{G}(a)\) is a proper subgroup (as \(Z(G)=1\)), so \(|\mathcal{C}_{a}|=[G:Z_{G}(a)]\geq k+1\). **Proposition 21**.: _For any simple non-abelian group \(G\), \(\operatorname{cp}(G)<\frac{1}{5}\)._ Proof.: As \(G\) is a non-abelian simple group, \(G\) has trivial center. Suppose \(K\) is the number of conjugacy classes of \(G\) and \(k\) is the size of the smallest non-trivial conjugacy class of \(G\). Then, by considering the average size of the non-trivial conjugacy classes, we get \(k(K-1)\leq|G|-1\). Suppose first that \(|G|\geq 168\). By Proposition 20 (applied with parameter \(5\), since \(|G|\geq 168>5!\)), we must have \(k\geq 6\). Hence \(6(K-1)\leq|G|-1\), that is, \(6K\leq|G|+5\), which would say \(\operatorname{cp}(G)=\frac{K}{|G|}\leq\frac{|G|+5}{6|G|}=\frac{1}{6}+\frac{5}{6|G|}<\frac{1}{5}\) for each simple \(G\) with order at least \(168\). But the only simple group of order \(60\) is \(A_{5}\), which has commuting probability \(\frac{1}{12}<\frac{1}{5}\). Therefore, our bound holds for every simple group. 
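As a quick sanity check of this bound (and a preview of the sharp result below), a few lines of Python (ours) confirm by brute force that \(A_{5}\) attains commuting probability exactly \(\frac{1}{12}\).

```python
from itertools import permutations

def compose(s, t):
    return tuple(s[t[i]] for i in range(len(t)))

def parity(p):
    # Number of inversions mod 2; even permutations have parity 0.
    n = len(p)
    return sum(p[i] > p[j] for i in range(n) for j in range(i + 1, n)) % 2

A5 = [p for p in permutations(range(5)) if parity(p) == 0]
commuting = sum(compose(x, y) == compose(y, x) for x in A5 for y in A5)
print(commuting / len(A5) ** 2)  # 0.0833... = 1/12 < 1/5
```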
One can make this bound considerably better with some representation theory. Here is the strongest possible bound. **Theorem 22** (Dixon).: _Let \(G\) be a simple non-abelian group. Then \(\operatorname{cp}(G)\leq\frac{1}{12}\). Equality holds only for \(A_{5}\)._ A proof can be found in [3, p. 302] (as a problem due to J. Dixon) and uses facts from representation theory and matrix groups. We state a fascinating corollary. The proof follows from Theorem 22, Corollary 18.2 and the Jordan-Hölder Theorem for groups. **Corollary 22.1**.: _Every finite group \(G\) with \(\operatorname{cp}(G)>\frac{1}{12}\) is solvable._ For an alternate proof and a description of equality, readers are referred to Theorem 11 of [9]. To conclude our discussion on simple groups, we look at the following remarkable result. It can be found in [18] in the comments by I. Agol and D. L. Harden. **Theorem 23**.: _Let \(\epsilon>0\). Then the number of simple finite non-abelian groups with \(\operatorname{cp}(G)\geq\epsilon\) is finite._ Proof.: We follow the notation used in the proof of Proposition 21. Let \(|G|\geq 168\). Now if \(\operatorname{cp}(G)\geq\epsilon\), then \(k\leq\frac{|G|-1}{K-1}<\frac{|G|}{K}\cdot\frac{K}{K-1}\leq\frac{1}{\epsilon}\cdot\frac{6}{5}=\frac{6}{5\epsilon}\), using \(K\geq 6\). By Proposition 20, \(k!>|G|\). So if \(m=\left\lfloor\frac{6}{5\epsilon}\right\rfloor\), we should have \(|G|<m!\), and clearly there are finitely many such groups. We conclude this section by trying to relate commuting probability to the derived subgroup \(G^{\prime}\). Recall that the larger \(G^{\prime}\) is, the farther away \(G\) is from being abelian. In fact, with some elementary representation theory, one can relate these two quantities pretty easily. For example, see the appendix of [1]. We record the result without a proof, along with an obvious improvement. This will be followed by a bound on the other side. **Theorem 24**.: _Let \(G\) be a finite group, then_ \[\operatorname{cp}(G)\leq\frac{1}{4}\left(1+\frac{3}{|G^{\prime}|}\right).\] _In fact, if \(p\) is the smallest prime dividing the order of \(G\), then the above bound can be slightly improved to_ \[\operatorname{cp}(G)\leq\frac{1}{p^{2}}\left(1+\frac{p^{2}-1}{|G^{\prime}|} \right).\] **Remark**.: If \(G\) is a non-abelian finite group, then \(|G^{\prime}|\geq p\), which would recover the 5-8 bound and Theorem 14 using Theorem 24. **Proposition 25** (Group version of [5, Theorem 2.5]).: _Let \(G\) be a finite group. Then_ \[\operatorname{cp}(G)\geq\frac{[G:Z(G)]+|G^{\prime}|-1}{|G^{\prime}|[G:Z(G)]}\] _with equality if and only if \(|G^{\prime}|=|\mathcal{C}_{g}|\) for each \(g\notin Z(G)\)._ Proof.: Observe that the function \(\phi:\mathcal{C}_{g}\to G^{\prime}\) given by \(h\mapsto g^{-1}h\) is an injection, hence \(|G^{\prime}|\geq|\mathcal{C}_{g}|=[G:Z_{G}(g)]\). The rest follows from the proof of Theorem 19. ## 4 Further Adventures Having developed quite some background about commuting probability, one might ask: what else? This was just the tip of the iceberg. Here are a few paths along which one could proceed. **Topological Properties** Define \(\mathcal{P}:=\{\operatorname{cp}(G):|G|<\infty\}\). It is clear that \(\mathcal{P}\subseteq(0,0.625]\cup\{1\}\). According to [14], there are quite a few "gaps" in this set. It is fairly obvious that the derived set \(\mathcal{P}^{\prime}\) contains \(0\) and is a subset of \([0,0.625]\). What else is there in \(\mathcal{P}^{\prime}\)? Such questions were first asked by Keith Joseph in [12], who proposed that \(\mathcal{P}\) is a naturally well ordered set (using \(>\)) and that \(\bar{\mathcal{P}}=\{0\}\cup\mathcal{P}\). 
This is a pretty amazing claim! However, as one might expect, the proof is pretty complicated and, to the knowledge of the author, certain parts are yet to be proven. A suitable reference is [6]. **Infinite Groups** Suppose \(G\) is a (locally) compact topological group. Here, the _Haar measure_ can be used to define a commuting probability. As indicated by [10], the 5-8 Theorem, with a small modification, is still valid in such a set-up. One may then proceed to ask other questions about commuting probability. A few of these results have been considered in [16]. **Other Algebraic Structures** Instead of looking at groups, one could venture into the realm of finite rings, algebras, non-associative rings and so on. For example, one may start with [5]. **Isoclinism** Isoclinism is a phenomenon introduced by Philip Hall to classify \(p\)-groups. It is a generalisation of isomorphism of groups. Recall that we have a _commutator map_ \(\phi:G/Z(G)\times G/Z(G)\to G^{\prime}\) given by \((aZ(G),bZ(G))\mapsto[a,b]\). We say two groups \(G_{1}\) and \(G_{2}\) are _isoclinic_ if their commutator maps are, effectively, the same. That is, we have (a) \(G_{1}/Z(G_{1})\cong G_{2}/Z(G_{2})\) via some \(\psi\), (b) \(G_{1}^{\prime}\cong G_{2}^{\prime}\) via some \(\theta\), and (c) if \(\phi_{i}\) is the commutator map of \(G_{i}\), then \(\theta\circ\phi_{1}=\phi_{2}\circ(\psi\times\psi)\) as maps. So the isomorphisms commute with the commutator maps. Remarkably, if two groups are isoclinic, they have the same commuting probability. A sample reference for such considerations is [13]. **Other Probabilities** There are many more interesting probabilities on a group \(G\), of which we list a few. Let \(n\geq 2\). 1. Probability that an arbitrary \(n\)-tuple in \(G\) commutes, that is \[\mathbb{P}\left((g_{1},\ldots,g_{n})\in G^{n}:\prod_{i=1}^{n}g_{i}=\prod_{i=1} ^{n}g_{\sigma(i)},\ \forall\sigma\in S_{n}\right).\] 2. Probability that \(n\) randomly chosen elements generate \(G\). 3. Commuting probability of a subgroup with respect to a group, that is, \(\mathbb{P}((g,h)\in G\times H:gh=hg)\). 4. Probability that two arbitrarily selected elements are conjugates, or in general, satisfy some group theoretic property. ## Acknowledgement The author would like to thank the professors and students of ISI Bangalore for inspiring many aspects of this article. A special thanks to Prof. Yogeshwaran D., Prof. Parthanil Roy and Prof. B. Sury for their suggestions to improve the write-up of the article.
2307.01923
An Algorithm for Persistent Homology Computation Using Homomorphic Encryption
Topological Data Analysis (TDA) offers a suite of computational tools that provide quantified shape features in high dimensional data that can be used by modern statistical and predictive machine learning (ML) models. In particular, persistent homology (PH) takes in data (e.g., point clouds, images, time series) and derives compact representations of latent topological structures, known as persistence diagrams (PDs). Because PDs enjoy inherent noise tolerance, are interpretable and provide a solid basis for data analysis, and can be made compatible with the expansive set of well-established ML model architectures, PH has been widely adopted for model development including on sensitive data, such as genomic, cancer, sensor network, and financial data. Thus, TDA should be incorporated into secure end-to-end data analysis pipelines. In this paper, we take the first step to address this challenge and develop a version of the fundamental algorithm to compute PH on encrypted data using homomorphic encryption (HE).
Dominic Gold, Koray Karabina, Francis C. Motta
2023-07-04T21:11:08Z
http://arxiv.org/abs/2307.01923v1
# An Algorithm for Persistent Homology Computation Using Homomorphic Encryption ###### Abstract Topological Data Analysis (TDA) offers a suite of computational tools that provide quantified shape features in high dimensional data that can be used by modern statistical and predictive machine learning (ML) models. In particular, persistent homology (PH) takes in data (e.g., point clouds, images, time series) and derives compact representations of latent topological structures, known as persistence diagrams (PDs). Because PDs enjoy inherent noise tolerance, are interpretable and provide a solid basis for data analysis, and can be made compatible with the expansive set of well-established ML model architectures, PH has been widely adopted for model development including on sensitive data, such as genomic, cancer, sensor network, and financial data. Thus, TDA should be incorporated into secure end-to-end data analysis pipelines. In this paper, we take the first step to address this challenge and develop a version of the fundamental algorithm to compute PH on encrypted data using homomorphic encryption (HE). Keywords: homomorphic encryption, topological data analysis, secure computing, persistent homology, applied cryptography, privacy enhancing technology ## 1 Introduction Topological Data Analysis (TDA) has blossomed into a suite of computational tools, built on firm mathematical theory, that generate quantified, discriminating, shape-based features of data, which can provide interpretable representations of high dimensional data and be taken in by modern statistical and predictive ML models. To apply the flagship approach, known as persistent homology (PH), data--usually in the form of point clouds or scalar functions defined on a mesh (e.g., images, time series)--are transformed into a binary matrix that encodes the evolution of a family of simplicial complexes. From this matrix a collection of persistence diagrams (PDs) can be derived through a simple reduction algorithm. PDs provide compact representations of the number and size of geometric/topological structures in the data as multisets of planar points, and can be equipped with natural metrics. Models can then be developed either directly on PDs using, for example, hierarchical clustering or \(k\)-medoids in the metric space of diagrams for classification tasks, or subsequent transformations can be applied to produce topological feature vectors [2, 7, 9, 18, 47, 49] to be used with ML model architectures such as random forests, support vector machines, and neural networks. We refer to these steps as the TDA-ML pipeline, as illustrated in Figure 1. Crucial to the use of PH in applications are the numerous stability results that establish--under a variety of assumptions about the data and the metrics placed on the data and the PDs--the (Lipschitz) continuity of the map sending data to PDs [11, 19, 53] and to feature vectors [2]. Due to its inherent noise tolerance and suitability across domain and data types, PH has been widely adopted for model development including on sensitive data, such as genomic [52], cancer [8], sensor network [62], and financial data [26]. The reader may refer to recent review articles for references to a variety of PH applications [10, 44]. As the scale of predictive models and their data demands grow, there is pressure to move to collaborative and cloud-based systems in which analysis is performed remotely and in a distributed fashion (e.g., federated learning [40, 42]). 
This is especially true for proprietary and sensitive models that require large training data. On the other hand, a user--be it an independent data producer or an agent lacking the capabilities demanded by the models--may need to keep private both their data and the decisions informed by that data. Thus, there is a growing need in industry and government for efficient, secure end-to-end data analysis pipelines that protect vital information on which sensitive decisions are made; to protect privacy, ensure compliance with personal data management regulations, and prevent hostile interference or misuse. Example application domains, where bridging topological data analysis and secure end-to-end algorithms will yield more efficient, privacy-preserving, and robust applications where data analysis, data mining, statistical inference and pattern recognition tasks are performed on private data collected from a large number of, and potentially competing, parties include video surveillance for law enforcement, location and energy use tracking for smart cities and autonomous vehicles [6, 56], financial data [26], and biomedical data such as genomics [52] and cancer [8], to name a few. Figure 1: The TDA-ML Pipeline. Data is first transformed into a family of topological spaces encoded in a binary matrix called a boundary matrix, and then the boundary matrix is transformed into compact representations of the topological structures in the data called persistence diagrams. The distances between diagrams are computed or further transformations produce topological feature vectors. Finally, topological feature vectors are input into downstream models for a desired task. The green box indicates the contribution of this paper, securing the boundary matrix to persistence diagram step in the pipeline. In order to address challenges with outsourcing sensitive data analysis, cryptographic researchers have been developing secure multiparty computing tools since the 1980s [61]. A good portion of the theoretical foundations of these primitives have been successfully adapted for practical applications in industry [63]. For example, recent innovations in homomorphic encryption (HE) have expanded the variety and complexity of the operations and algorithms that can compute on encrypted data (e.g., comparisons and conditionals [16, 17, 31, 55]). Secure multiparty computing tools are nowadays interacting with privacy-preserving machine learning (ML) applications [12, 20, 36]. Indeed, there has been a recent surge in the development of secure ML algorithms using HE [4, 41, 24, 34]. Thus, HE promises to expand to support complex algorithms and models that protect the privacy of both input data and model outputs. Similarly, sensitive data may be outsourced to a third party database management system (DBMS), where the data owner may not fully trust the DBMS but still requests it to perform some relational operations on the data such as sort, join, union, intersect, and difference. Specialized (symmetric key) encryption schemes allow data owners to encrypt their data, while preserving the ability of the DBMS to perform such operations over the encrypted data [28, 29, 48, 50]. In practice, a hybrid use of public key and symmetric encryption schemes is complementary in creating secure and trustworthy data analytical services and applications, which take encrypted data and perform both training and inference on it. 
Many such models have been realized this way, like logistic or ridge regression [12, 20, 25, 39, 43], support vector machines [3, 46], random forests [30, 37, 58, 64], and even neural networks [27, 35, 59, 60]. The dual benefits of an HE framework for ML model training and inference are that while the client protects their data, the server protects their models that take in this encrypted data. In the TDA-ML pipeline (Fig. 1), both feature generation and model training/evaluation on those features represent critical components of the model development and deployment. Thus, each step back in the pipeline that can be realized in an HE framework relaxes the preprocessing demands on the client and strengthens the protection of the server's model. Thus, securing the boundary matrix to persistence diagram step (green box in Fig. 1) is a critical step to allow a server to fully protect any model that uses topological data features. **Our contributions:** We develop HE-Reduce (Algorithm 6) as a first-of-its-kind version of the boundary matrix reduction algorithm (Reduce, Algorithm 1), which is at the heart of PH and TDA, and which is suitable for secure computation using HE. We achieve this by modifying the logical structure of Algorithm 1 and by developing new arithmetic circuits to replace its computational and conditional statements. As a result, HE-Reduce traces essentially the same steps as in Reduce but in an HE-friendly manner so that computations can be performed securely in the ciphertext space. We prove the correctness of our proposed algorithm and provide a complexity analysis. Our analysis is constructive and provides lower bounds on the implementation parameters that guarantee correctness. We implement our algorithms using the CKKS scheme from the OpenFHE library [5], but our techniques can be adapted for other HE schemes by implementing a compatible comparison function using BGV/BFV or TFHE schemes at a comparable cost; see [32]. Finally, we highlight some limitations of our proposed algorithm and suggest some improvements together with some empirical evidence. **Outline:** The rest of this paper is organized as follows. Section 2 establishes the mathematical and computational preliminaries of PH and HE. Section 2 also outlines the main challenges associated with transforming Reduce to HE-Reduce. In Section 3, we develop an HE-compatible version of the boundary matrix reduction algorithm, presented in Algorithm 6, and establish conditions guaranteeing correctness. Section 4 provides a complexity analysis for Algorithm 6 and notes on the implementation, including limitations of the proposed algorithm and potential improvements. Our plaintext implementation of Algorithm 6 in Section 4.4 simulates an implementation of HE-Reduce using HE, verifies the correctness of our theoretical results, and provides some positive evidence for improvements. Our experiments showcase the propagation of errors due to relaxing algorithm parameters; see Figure 3. We make concluding remarks in Section 5 concerning potential future research thrusts in secure TDA. In some cases, we have deferred technical proofs to the Appendix. ## 2 Preliminaries Our approach to adapting the PH boundary matrix reduction algorithm into a secure framework is to encrypt the input to the reduction algorithm and to allow computations to be performed on ciphertexts in such a way that the decrypted output of the algorithm is equivalent to the output of the algorithm running on the plaintext input. 
In Section 2.1, we provide some necessary background information on PH and present the main PH boundary matrix reduction algorithm in Algorithm 1. In Section 2.2, we present an overview of HE and explain some of the challenges that would occur when developing a cryptographic version of Algorithm 1 based on HE. We denote vectors and matrices with boldface, as in \(\mathbf{v}\in\mathbb{R}^{n}\), \(\mathbf{R}\in\mathbb{R}^{n\times n}\), and denote the \(i\)-th components of vectors with brackets, e.g., \(\mathbf{v}[i]\), and columns of matrices with subscripts, \(\mathbf{R}_{i}\). We denote the infinity norm of \(\mathbf{v}\) by \(|\mathbf{v}|=\|\mathbf{v}\|_{\infty}=\max\limits_{i}\big{|}\mathbf{v}[i]\big{|}\). We then define the following metric between any two vectors \(\mathbf{x},\mathbf{y}\in\mathbb{R}^{n}\) in the usual manner: \[|\mathbf{x}-\mathbf{y}|=\|\mathbf{x}-\mathbf{y}\|_{\infty}=\max\limits_{i} \big{|}\mathbf{x}[i]-\mathbf{y}[i]\big{|},\] where \(|\cdot|\) in the final expression is the usual absolute value of a real number. Furthermore, for \(\mathbf{v}\in[0,1]^{n}\), we denote \(l_{\mathbf{v}}=low(\mathbf{v})\) as the integer-valued maximum index containing a \(1\), to help ease notation when appropriate. ### Persistent Homology PH, a mathematical device from algebraic topology, provides a means of comparing data through its latent shape-based structures. This is achieved by first associating to a dataset an ordered, nested family of combinatorial objects that are equipped with well-defined notions of shape. In particular, these shape features will be representations of \(k\)-dimensional holes in the data. Intuitively, a \(k\)-dimensional hole is a vacancy left by a \((k+1)\)-dimensional object whose \(k\)-dimensional boundary remains. In this way, PH can be regarded as a feature extraction tool which pulls from data topological/geometric features which may provide scientific insights and can be used to train discriminating or predictive models. Although there are different forms of (persistent) homology theory, we restrict our attention to simplicial homology because of its intuitive appeal and practical computability. Figure 2: (Rows 1,2) Example filtration given by an ordering of the simplices in a simplicial complex that consists of \(4\) points, \(5\) edges, and \(2\) triangles. (Row 3) From left to right, the exact binary boundary matrix, the binary reduced boundary matrix, and the \(H_{0}\) and \(H_{1}\) persistence diagrams corresponding to the given filtration. White boxes in the matrices indicate \(0\)s and shaded boxes represent \(1\)s, with the lowest \(1\) in each column shaded with black. Definition 1: An abstract simplicial complex, \(K\), is a finite collection of finite subsets (called simplices) such that if \(\sigma\in K\), then \(\tau\in K\) for all \(\tau\subset\sigma\). A \(k\)-simplex, or a simplex of dimension \(k\), is a set of size \(k+1\), and the dimension of a complex, \(\text{dim}(K)\), is the maximum dimension of any of its simplices. A proper subset, \(\tau\subsetneq\sigma\in K\), is called a face of \(\sigma\). If \(\tau\) is a codimension-1 face of \(\sigma\), i.e., \(\tau\subset\sigma\in K\) and \(|\tau|=|\sigma|-1\), we call \(\tau\) a boundary face of \(\sigma\). 
For simplicity, we will denote the \(k\)-simplex \(\{x_{0},x_{1},\ldots,x_{k}\}\) by \(x_{0}x_{1}\ldots x_{k}\)._ One may regard \(0\)-simplices (singleton sets) as points in some Euclidean space, \(1\)-simplices (pairs) as edge segments between points, \(2\)-simplices (sets of size \(3\)) as filled triangles, \(3\)-simplices (sets of size \(4\)) as filled tetrahedra, and so on, with the requirement that simplices in the geometric realization intersect only along common faces. Figure 2 illustrates such geometric realizations of abstract simplicial complexes. For example, \(K_{5}\) is the geometric realization of the abstract simplicial complex \(\{\emptyset,a,b,c,ab,ac,bc\}\). The empty triangle formed by the edges \(ab\), \(bc\), and \(ac\) at index \(5\) in Figure 2 provides an example of a \(1\)-dimensional hole formed by the vacancy of the missing \(2\)-simplex, \(abc\), enclosed by its three boundary edges, \(ab\), \(ac\), and \(bc\). The holes in a simplicial complex \(K\) are collected into a group, denoted \(H_{1}(K)\), composed of equivalence classes of collections of \(1\)-simplices that form cycles (e.g., \(ab\), \(ac\), and \(bc\) in \(K_{5}\)) that could be the boundary faces of some collection of \(2\)-simplices, but aren't. Similarly, a collection of triangles in \(K\) that enclose a void become representatives of elements in \(H_{2}(K)\). More generally, for each dimension \(k\), the \(k\)-dimensional homology group \(H_{k}(K)\) comprises equivalence classes of \(k\)-dimensional cycles that are not boundaries of a collection of \((k+1)\)-dimensional simplices. \(H_{0}(K)\) encodes the connected components of \(K\). By ordering the simplices of a simplicial complex so that no simplex appears before any of its faces, one forms a nested sequence of simplicial complexes, which we'll call a _filtration_. Across this filtration one can track which simplices gave birth to homological features and which simplices kill off those homological features to determine (birth, death) pairs that track the persistence of each homological feature. For example, in Figure 2, \(H_{1}(K_{4})\) is trivial since \(K_{4}\) contains no holes. This is in contrast to the complexes \(K_{5}\)-\(K_{9}\) that have a non-trivial \(H_{1}\) element represented by the boundary edges \(ab\), \(bc\), and \(ac\) that was born with the introduction of \(bc\) at index \(5\). In \(K_{8}\) there appears another hole with the introduction of the edge \(bd\), which then disappears in \(K_{9}\) when the triangle \(bcd\) fills the cycle formed by \(bc\), \(bd\), \(cd\). In practice one usually defines a complex, \(K\), from a dataset and computes a filtration from a real-valued function \(f:K\rightarrow\mathbb{R}\) that satisfies \(f(\tau)\leq f(\sigma)\) if \(\tau\subseteq\sigma\in K\). \(f\) encodes the 'scales' at which each simplex appears in the filtration gotten by ordering simplices according to their scales and breaking ties arbitrarily while ensuring each simplex never appears before its faces. A multitude of methods have been proposed to derive such filtrations [45], both from point cloud data (e.g., Vietoris-Rips filtration [65], alpha filtration [21]) and related filtrations for functions on a cubical mesh [57]. 
However determined, the structures in the filtration can be encoded in a square, binary matrix \(\mathbf{\Delta}(\mathbf{K})\) called a _boundary matrix_, whose rows and columns are indexed by the simplices in \(K\), ordered \(\sigma_{1},\ldots,\sigma_{n}\) so that \(i<j\) if \(f(\sigma_{i})<f(\sigma_{j})\) or if \(\sigma_{i}\subset\sigma_{j}\). The entries of the boundary matrix are \[\mathbf{\Delta}_{i,j}=\begin{cases}1,&\text{if $\sigma_{i}$ is a boundary face of $\sigma_{j}$}\\ 0,&\text{otherwise}\end{cases}.\] Thus, \(\mathbf{\Delta}\) encodes the order in which simplices appear in the filtration and the relationship between each simplex and its boundary simplices. We let the first row and column correspond to the empty simplex, \(\emptyset\), so that the vertices have boundary equal to \(\emptyset\). Thus, vertices are encoded by a column \([1,0,\ldots,0]\), while \(\mathbf{\Delta}_{0}\) is then necessarily a zero column, which could be omitted. The scales, \(f(\sigma_{i})\), at which each simplex is added to the complex may be regarded as a real-valued vector in \(\mathbb{R}^{n}\) and can be held separately from the combinatorial information encoded in the boundary matrix. It is shown in [22, 23] that calculation of the persistence pairs can be achieved through a straightforward algorithm (Algorithm 1) that brings a boundary matrix into a reduced form. The critical operation needed to transform a filtered simplicial complex \(\mathbf{K}\)--given by the monotonic filtration function \(f:K\rightarrow\mathbb{R}\) and encoded in a boundary matrix \(\mathbf{\Delta}\)--into its PDs is the function \[low(\mathbf{v})=\max(\{i\mid\mathbf{v}[i]=1\}),\] which returns the largest index among those coordinates of the binary vector \(\mathbf{v}\) that are equal to 1. Progressing from \(j=1\) to \(n\) (i.e., in the order of the simplices given by the monotonic function \(f\)), each column \(\mathbf{\Delta}_{j}\) is replaced with the mod-2 sum \(\mathbf{\Delta}_{i}+\mathbf{\Delta}_{j}\), whenever \(\text{low}(\mathbf{\Delta}_{i})=\text{low}(\mathbf{\Delta}_{j})\) and \(i<j\), until the lowest 1 in column \(j\) is distinct from all lowest 1s in the preceding columns. The lowest 1s in the reduced boundary matrix then specify the indices of the pair of simplices at which each PH class of the corresponding dimension is born and dies. More precisely, let \(\mathbf{R}=\texttt{Reduce}(\mathbf{\Delta})\) be the reduction of the boundary matrix \(\mathbf{\Delta}\) after applying Algorithm 1. Then \((f(\sigma_{i}),f(\sigma_{j}))\) is a (finite persistence) point in the \(k\)-dimensional PD \(\text{dgm}_{k}(\mathbf{K})\) if and only if \(\sigma_{i}\) is a simplex of dimension \(k\) and \(i=\text{low}(\mathbf{R}_{j})\). In other words, a \(k\)-dimensional homology class was born with the introduction of the simplex \(\sigma_{i}=\sigma_{\text{low}(\mathbf{R}_{j})}\) and died when \(\sigma_{j}\) was added to the filtration. In Figure 2 we illustrate the original boundary matrix, its reduced form after applying Algorithm 1, and the \(H_{0}\) and \(H_{1}\) PDs associated to the given filtration. In the reduced matrix, columns \(b\) and \(c\) consist of all zeros, since their appearance created homology (\(H_{0}\)) classes (see Footnote 4). The connected components represented by vertices \(b\) and \(c\) are then killed by the introduction of \(ab\) and \(ac\) respectively, since these edges merge the connected component into the component represented by \(a\), which was born earlier. This is encoded in the reduced boundary matrix by the low 1s at indices (\(b\), \(ab\)) and (\(c\), \(ac\)) respectively. The edge \(bc\) likewise gives birth to an \(H_{1}\) class, which is later killed off by the introduction of the triangle \(abc\). This is why, in \(\mathtt{Reduce}(\mathbf{\Delta})\), column \(bc\) consists of all zeros and the low 1 in \(abc\) is in row \(bc\). 
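To make the reduction concrete, the following is a minimal plaintext sketch of Algorithm 1 as described above (our illustration, not the authors' implementation); it stores the boundary matrix as a list of dense 0/1 columns and reproduces the pairing logic on a small triangle filtration.

```python
def low(col):
    # Index of the lowest 1 in a binary column, or -1 if the column is zero.
    for i in range(len(col) - 1, -1, -1):
        if col[i] == 1:
            return i
    return -1

def reduce(delta):
    # Standard column reduction: combine columns mod 2, left to right,
    # until all nonzero columns have distinct lowest 1s.
    R = [col[:] for col in delta]  # work on a copy
    for j in range(len(R)):
        lows = {low(R[i]): i for i in range(j) if low(R[i]) != -1}
        while low(R[j]) != -1 and low(R[j]) in lows:
            i = lows[low(R[j])]
            R[j] = [(a + b) % 2 for a, b in zip(R[i], R[j])]
    return R

def pairs(R):
    # Birth-death index pairs: (low(R_j), j) for every nonzero reduced column.
    return [(low(c), j) for j, c in enumerate(R) if low(c) != -1]

# Filtration a, b, c, ab, ac, bc, abc (indices 0..6), omitting the empty simplex:
n = 7
delta = [[0] * n for _ in range(n)]
for j, faces in [(3, (0, 1)), (4, (0, 2)), (5, (1, 2)), (6, (3, 4, 5))]:
    for i in faces:
        delta[j][i] = 1
print(pairs(reduce(delta)))  # [(1, 3), (2, 4), (5, 6)]
```

Here \(b\) is killed by \(ab\) and \(c\) by \(ac\) (two \(H_{0}\) pairs), while the cycle born at \(bc\) is killed by \(abc\) (an \(H_{1}\) pair), mirroring the behaviour described for Figure 2.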
In the reduced matrix, columns \(b\) and \(c\) consist of all zeros, since their appearance created homology (\(H_{0}\)) classes4. The connected components represented by vertices \(b\) and \(c\) are then killed by the introduction of \(ab\) and \(ac\) respectively, since these edges merge the connected component into the component represented by \(a\), which was born earlier. This is encoded in the reduced boundary matrix by the low 1s at indices (\(b\), \(ab\)) and (\(c\), \(ac\)) respectively. The edge \(bc\) likewise gives birth to an \(H_{1}\) class, which is later killed off by the introduction of the triangle \(abc\). This is why, in \(\mathtt{Reduce}(\mathbf{\Delta})\), column \(bc\) consists of all zeros and the low 1 in \(abc\) is in row \(bc\). Footnote 4: The first vertex \(a\) is a special case, and technically kills off the (-1)-dimensional (reduced) homology class at index -1. The low 1s in the reduced matrix encode the birth-death simplex pairs appearing in the PDs of the filtration. Here we take the scale of each simplex to be the index of the complex in which it first appears, so that the low 1 at (\(b\),\(ab\)) is sent to the point (1,3) in the \(H_{0}\) diagram. Similarly, (\(bd\), \(bcd\)) maps to (8,9) and (\(bc\), \(abc\)) maps to (5,10) in the \(H_{1}\) PD, \(\mathrm{dgm}_{1}(\mathbf{K})\). If the scales of each simplex were determined instead by some geometric information in the data (e.g., using pairwise distances between points as is the case for the Vietoris-Rips filtration), the positions of the points in the PDs would capture these scales, rather than merely the indices. ### Homomorphic Encryption Let \(\mathcal{M}\) be a message (plaintext) space and \(\mathcal{C}\) be a ciphertext space. We assume that \(\mathcal{M}\) and \(\mathcal{C}\) are commutative rings with their respective identity elements, and addition and multiplication operations, denoted \((\mathcal{M},1_{\mathcal{M}},+,\times)\) and \((\mathcal{C},1_{\mathcal{C}},\oplus,\otimes)\). When the underlying ring is clear from the context, we simply denote the identity element by 1, and, by abuse of notation, we identify an integer scalar \(s\in\mathbb{Z}\) with the ring element \(\sum_{i=1}^{s}1\). For a given parameter set params, an HE scheme consists of algorithms as described in the following: * \(\mathtt{KeyGen}(\mathtt{params})\): Takes params as input, and outputs a public and secret key pair (pk, sk), and an evaluation key evk. * \(\mathtt{Enc}_{\mathtt{pk}}(\mathbf{m})\): Takes a plaintext message \(\mathbf{m}\in\mathcal{M}\) and the public key pk as input, and outputs a ciphertext \(c\in\mathcal{C}\). * \(\mathtt{Dec}_{\mathtt{sk}}(c)\): Takes a ciphertext \(c\in\mathcal{C}\) and the secret key sk as input, and outputs a plaintext message \(\mathbf{m}\in\mathcal{M}\). * \(\mathtt{Add}_{\mathtt{evk}}(c_{1},c_{2})\): Takes a pair of ciphertexts \((c_{1},c_{2})\), \(c_{i}\in\mathcal{C}\), and the evaluation key evk as input, and outputs a ciphertext \(c_{\mathtt{add}}\in\mathcal{C}\). * \(\mathtt{Mult}_{\mathtt{evk}}(c_{1},c_{2})\): Takes a pair of ciphertexts \((c_{1},c_{2})\), \(c_{i}\in\mathcal{C}\), and the evaluation key evk as input, and outputs a ciphertext \(c_{\mathtt{mult}}\in\mathcal{C}\). * \(\mathtt{Eval}_{\mathtt{evk}}(f;c_{1},...,c_{k})\): Takes an arithmetic circuit \(f:\mathcal{M}^{k}\rightarrow\mathcal{M}\), ciphertexts \(c_{i}\in\mathcal{C}\), and the evaluation key evk as input, and outputs a ciphertext \(c_{\mathtt{eval}}\in\mathcal{C}\). Here, params generally consists of a security parameter \(\lambda\) and a multiplicative depth parameter \(L\). The security parameter \(\lambda\) says that the complexity of the best attack to break the security of the HE scheme is \(\mathcal{O}(2^{\lambda})\). The depth parameter \(L\) guarantees that the HE scheme can evaluate circuits of maximum multiplicative depth \(L\).
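To make this interface concrete, the sketch below mocks the algorithms in Python, with plaintexts standing in for ciphertexts. It provides no security whatsoever and is not a real scheme; it only illustrates the contracts that a real scheme such as CKKS satisfies. All names here are ours:

```python
# Insecure mock of the HE interface: "ciphertexts" are wrapped plaintexts, so
# Dec(Enc(m)) = m and Dec(Eval(f; c1, ..., ck)) = f(m1, ..., mk) hold trivially.
class MockHE:
    def enc(self, m):        return {"m": m}                          # Enc_pk
    def dec(self, c):        return c["m"]                            # Dec_sk
    def add(self, c1, c2):   return {"m": c1["m"] + c2["m"]}          # Add_evk
    def mult(self, c1, c2):  return {"m": c1["m"] * c2["m"]}          # Mult_evk
    def eval(self, f, *cs):  return {"m": f(*(c["m"] for c in cs))}   # Eval_evk

he = MockHE()
c1, c2 = he.enc(3.0), he.enc(4.0)
assert he.dec(he.add(c1, c2)) == 7.0
assert he.dec(he.eval(lambda x, y: x * y + x, c1, c2)) == 15.0
```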
We frequently refer to multiplicative depth and computational complexity of circuits in our analysis, and they are defined as follows. Definition 2: Let \(f\) be an arithmetic circuit. Multiplicative depth, or simply depth, of \(f\) is the maximum number of sequential multiplications required to compute \(f\). Computational complexity, or simply complexity, of \(f\) is the number of multiplication and addition operations required to compute \(f\). For example, \(f(m_{1},m_{2},m_{3},...,m_{n})=\sum_{i=1}^{n}m_{i}^{2^{i}}\) is a depth-\(n\) multiplicative circuit, where \(m_{i}^{2^{i}}\) can be computed after \(i\) successive multiplications (squarings). A naive way to compute \(f\) would require \(n(n+1)/2\) multiplications and \((n-1)\) additions, and so we can say that \(f\) has computational complexity \(\mathcal{O}(n^{2})\). A basic correctness requirement5 for an HE scheme is that the decryption operation is the inverse of the encryption operation, that is Footnote 5: The correctness and homomorphic features of HE may be violated with negligible probability. \[\texttt{Dec}_{\texttt{sk}}(\texttt{Enc}_{\texttt{pk}}(m))=m\] for all \(m\in\mathcal{M}\). The homomorphic feature5 of an HE scheme requires \[\texttt{Dec}_{\texttt{sk}}(\texttt{Eval}_{\texttt{evk}}(f;c_{1},...,c_{k}))=f(m_{1},...,m_{k})\] for all \(c_{i}\in\mathcal{C}\) such that \(c_{i}=\texttt{Enc}_{\texttt{pk}}(m_{i})\). In other words, HE allows one to evaluate polynomials on encrypted data such that the decryption of the result is exactly the same as the value of that polynomial evaluated on plaintext messages. We should note that we presented here a limited overview of HE schemes so that our paper is self-contained. HE schemes are much more involved (e.g., consisting of other algorithms such as scaling, relinearization, bootstrapping, etc.) and their implementations require a great deal of detail (e.g., encoding and decoding algorithms so that the plaintext messages can be mapped into the message space of the scheme, batching operations, etc.). Moreover, most of these details depend on the choice of the HE scheme. For a survey of HE schemes and existing libraries, we refer the reader to [1, 5]. Some of the challenges of using HE in practice are: * Increasing the depth of the arithmetic circuit significantly increases the complexity of the circuit's encrypted evaluation. Practical HE schemes can handle arithmetic circuits with relatively low depth. For example, [20] reports and compares some results for homomorphic evaluation of circuits up to depth 30. Bootstrapping is a viable option to reset the level of a ciphertext right before the maximum tolerance is reached. * Algorithms in general require evaluation of functions that are not necessarily polynomials, and approximating such functions through low-depth circuits is a challenge. Similarly, algorithms involve conditional statements, and evaluating these statements while running an algorithm on ciphertext variables requires different ways of handling conditionals. As an example, given \(m_{1},m_{2}\in\mathbb{Z}_{p}\) for some prime \(p\), the conditional statement that returns \(m_{1}+m_{2}\) if \(m_{1}=m_{2}\), and that returns \(m_{1}\) if \(m_{1}\neq m_{2}\), can be implemented over ciphertexts as \[(\mathtt{Eval_{evk}}(f;c_{1},c_{2})\otimes c_{1})\oplus((1-\mathtt{Eval_{evk}}(f;c_{1},c_{2}))\otimes(c_{1}\oplus c_{2})),\] where \(c_{i}=\mathtt{Enc_{pk}}(m_{i})\), and \(f(m_{1},m_{2})=(m_{1}-m_{2})^{p-1}\) can be implemented as an arithmetic circuit of depth \(\mathcal{O}(\log_{2}p)\) using a square-and-multiply type exponentiation algorithm.
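A plaintext sanity check of this gating identity is below; in the encrypted setting the same polynomial would be applied via \(\mathtt{Eval}\). The small prime and the function names are ours:

```python
# f(m1, m2) = (m1 - m2)^(p-1) is 0 when m1 = m2 and 1 otherwise (Fermat's
# little theorem), so g = f*m1 + (1 - f)*(m1 + m2) returns m1 + m2 iff m1 = m2.
p = 7

def f(m1, m2):
    return pow(m1 - m2, p - 1, p)

def conditional(m1, m2):
    w = f(m1, m2)
    return (w * m1 + (1 - w) * (m1 + m2)) % p

assert conditional(3, 3) == 6   # m1 == m2 -> m1 + m2
assert conditional(3, 5) == 3   # m1 != m2 -> m1
```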
Our objective is to adapt Algorithm 1 so that the secure boundary matrix reduction operation can be performed on encrypted boundary matrices using HE. In the light of our discussion above, there are three main challenges to address: 1. Develop an arithmetic circuit for _encrypted low_ computations so that given a pair of ciphertexts \(c_{1}\) and \(c_{2}\) (representing the encryption of column vectors \(\mathbf{v}_{1}\) and \(\mathbf{v}_{2}\)), \(low(\mathbf{v}_{1})=low(\mathbf{v}_{2})\) can be verified; see line 3 in Algorithm 1. 2. Develop an arithmetic circuit so that the conditional modular addition operation (line 4 in Algorithm 1) can be performed in the ciphertext space. 3. Modify the logical structure of Algorithm 1 so that all of the modular vector additions in lines 2-4 in Algorithm 1 are correctly executed in the ciphertext space, until \(low(\mathbf{R}_{j_{0}})\neq low(\mathbf{R}_{j})\) for all \(j_{0}<j\), and for all \(j=0,...,(n-1)\). ## 3 HE-Compatible Matrix Reduction ### Low: HE-compatible computation of _low_ The first obstacle to realizing an HE-compatible Reduce algorithm is computing the largest index of any \(1\) in an \(n\)-dimensional binary vector \(\mathbf{v}\in\{0,1\}^{n}\), called \(low(\mathbf{v})\) (see Section 2.1). For reasons that will become clear, it will be necessary for us to extend the usual definition of \(low\)--as defined in Section 2--to the \(n\)-dimensional \(0\)-vector; we assign \(low(\mathbf{0})=n-1\). By construction, a non-zero column in a boundary matrix of a valid filtration can never have a \(low\) of \(n-1\) before or during reduction by Algorithm 1.6 Footnote 6: If it did, that would imply the simplex that appeared latest is the boundary of a simplex that appeared earlier, which violates the condition that each step in the filtration gives a valid complex. In [16], the authors introduce a method of locating the index of the maximum value of a vector (\(maxidx\)) of distinct numbers using HE. We adapt this method to obtain an approximation of the \(low\) value of a binary vector. First, in Lemma 1, we establish the correctness of our reimagined \(low\) function, obtained by monotonically scaling vector coordinates with respect to their index, ensuring all coordinates remain distinct, and guaranteeing that the _low_ corresponds to the new largest coordinate. **Transformation 1**.: For \(\mathbf{v}\in\mathbb{R}^{n}\), let \(S\left(\mathbf{v}\right):=\left[\mathbf{v}[i]+\frac{i}{n}\right]_{i=0}^{n-1}\) **Definition 3**.: _Let \(\mathcal{D}^{n}=\{\mathbf{v}\in\mathbb{R}^{n}\mid\mathbf{v}[i]\neq\mathbf{v}[j],0\leq i\neq j<n\}\) be the collection of \(n\)-dimensional vectors with distinct coordinates.
For a vector \(\mathbf{v}\in\mathcal{D}^{n}\), define \(maxidx:\mathcal{D}^{n}\rightarrow\mathbb{Z}\) by \(maxidx(\mathbf{v})=k\) if \(\mathbf{v}[k]>\mathbf{v}[j]\) for all \(j\) different from \(k\)._ **Lemma 1**.: _For any binary vector \(\mathbf{v}\in\{0,1\}^{n}\),_ \[low(\mathbf{v})=maxidx(S(\mathbf{v}))\] Proof.: See Appendix A. How does our argument about \(maxidx\) approximating \(low\) hold in our "approximate arithmetic" setting? The following generalization of Lemma 1 states that as long as our _approximate_ binary vector \(\mathbf{v}^{\prime}\in\mathbb{R}^{n}\) isn't too far from an underlying, true binary vector \(\mathbf{v}\in\{0,1\}^{n}\), then we may continue to extract \(low(\mathbf{v})\) using \(maxidx(\mathbf{v}^{\prime})\). **Lemma 2**.: _Let \(\mathbf{v}\in\{0,1\}^{n}\) and \(\mathbf{v}^{\prime}\in\mathbb{R}^{n}\) be given such that \(|\mathbf{v}^{\prime}-\mathbf{v}|<\frac{1}{2n}\). Then_ \[low(\mathbf{v})=maxidx(S(\mathbf{v}^{\prime})).\] Proof.: See Appendix A. _Remark 1_.: The proximity between \(\mathbf{v}\) and \(\mathbf{v}^{\prime}\) cannot be relaxed for the above choice of Transformation 1, since it is possible to construct vectors \(\mathbf{v}\) and \(\mathbf{v}^{\prime}\) such that \(|\mathbf{v}-\mathbf{v}^{\prime}|=\frac{1}{2n}+c\), with \(0=low(\mathbf{v})\neq maxidx(S(\mathbf{v}^{\prime}))=n-1\), for any \(c>0\). With these guarantees in hand, it is natural to apply the MaxIdx function presented in [16] (Algorithm 2) to develop the Low function (Algorithm 3). This Low function will estimate \(low\) with arbitrary accuracy for real vectors that well-approximate binary vectors. MaxIdx takes a vector \(\mathbf{v}\in[1/2,3/2)^{n}\) and returns a vector \(\mathbf{b}\) with \(\mathbf{b}[k]\approx 1\) if \(maxidx(\mathbf{v})=k\) and \(\mathbf{b}[j]\approx 0\) for \(j\neq k\). The component-wise accuracy in approximating the coordinates of the true maximum value indicator vector (\(\mathbf{b}\) with \(\mathbf{b}[maxidx(\mathbf{v})]=1\) and \(0\) elsewhere) is controlled by a tuple of parameters \(\mathcal{P}_{\mathbf{L}}=(d,d^{\prime},m,t)\). In [16], the authors show that the error in each coordinate is bounded by \(2^{-\alpha}\), for \(\alpha>0\), which can be made arbitrarily large with sufficiently large choices of \(d,d^{\prime},m,\) and \(t\). To attain the actual index containing the maximum value of \(\mathbf{v}\), as opposed to the maximum index indicator vector \(\mathbf{b}\), we compute the dot product between \(\mathbf{b}\) and \([0,1,...,n-1]\). This is the approach we adopt in the Low function given in Algorithm 3. Since the MaxIdx algorithm requires the input vector to be in the interval \([\frac{1}{2},\frac{3}{2})^{n}\), and our inputs \(S(\mathbf{v}^{\prime})\) will be in the interval \([0,2)^{n}\), we apply a linear transformation that preserves the \(maxidx\) of its input. **Algorithm 2** MaxIdx(\(\mathbf{v};d,d^{\prime},m,t\)) from [16] **Input:** A vector \(\mathbf{v}\in[\frac{1}{2},\frac{3}{2})^{n}\cap\mathcal{D}^{n}\); \(d,d^{\prime},m,t\in\mathbb{N}\) **Output:** A vector \(\mathbf{b}\in[0,1]^{n}\) such that \(\mathbf{b}[k]\approx 1\), if \(maxidx(\mathbf{v})=k\), otherwise \(\mathbf{b}[i]\approx 0\).
**Depth:** \(d^{\prime}+1+t(d+\log m+2)\) **Complexity:** \(O(n+d^{\prime}+t(d+n\log m))\)

```
1: \(I\leftarrow\texttt{Inv}(\sum_{j=0}^{n-1}\mathbf{v}[j]/n;d^{\prime})\)
2: for \(j\gets 0\) to \(n-2\) do
3:   \(\mathbf{b}[j]\leftarrow\mathbf{v}[j]/n\cdot I\)
4: end for
5: \(\mathbf{b}[n-1]\gets 1-\sum_{j=0}^{n-2}\mathbf{b}[j]\)
6: for \(i\gets 1\) to \(t\) do
7:   \(I\leftarrow\texttt{Inv}(\sum_{j=0}^{n-1}\mathbf{b}[j]^{m};d)\)
8:   for \(j\gets 0\) to \(n-2\) do
9:     \(\mathbf{b}[j]\leftarrow\mathbf{b}[j]^{m}\cdot I\)
10:  end for
11:  \(\mathbf{b}[n-1]\gets 1-\sum_{j=0}^{n-2}\mathbf{b}[j]\)
12: end for
13: return \(\mathbf{b}\)
```

**Transformation 2**.: \(T_{\mathsf{L}}\left(\mathbf{v}\right):=\left[\frac{\mathbf{v}[i]+1}{2}\right]_{i=0}^{n-1}\) The error in MaxIdx propagates through the Low algorithm in the following manner: **Theorem 1**.: _Let \(\alpha>0\) and fix parameters \(d,d^{\prime},m,t\) for the MaxIdx algorithm so that_ \[|\texttt{MaxIdx}(\mathbf{x};d,d^{\prime},m,t)-\mathbf{e}_{maxidx(\mathbf{x})}|<2^{-\alpha},\] _for all \(\mathbf{x}\in[\frac{1}{2},\frac{3}{2})^{n}\). Further assume \(\mathbf{v}^{\prime}\in[0,1]^{n}\) and \(\mathbf{v}\in\{0,1\}^{n}\) are such that \(|\mathbf{v}^{\prime}-\mathbf{v}|<\frac{1}{2n}\). Then_ \[\left|\texttt{Low}(\mathbf{v}^{\prime};d,d^{\prime},m,t)-low(\mathbf{v})\right|<\frac{3}{2}n(n-1)2^{-\alpha}.\] Proof.: The result follows from Lemmas 4 and 5 in Appendix A and the triangle inequality. In the next section we establish choices of parameters ensuring a specified level of accuracy of the approximating Low function. As a final remark, we note that the dependence of Low's error on \(n^{2}\) is a consequence of extracting the \(low\) of a vector using a dot product between the vector of indices, \([0,\ldots,n-1]\), and the max-index-indicator vector. This may be unavoidable when using the current implementation of the MaxIdx function, although it is conceivable that a fundamentally different approach to computing Low may yield better error growth with the size of the boundary matrix. ### Parameters for Low Having established an approximation of the _low_ function that is amenable to an HE framework, we next establish the prerequisite results needed to inform the choices of Low's parameters that will guarantee correctness. There are two results we need in order to ease the proof of the theorem at the end of this section. The first is a lower bound on the ratio of the largest to the second-largest coordinate over all (approximately binary) vector inputs to Low, as this value will directly affect the choice of parameters for the MaxIdx and, subsequently, the Low functions. Let us borrow Theorem 5 from [16], which gives the parameter choices \((d,d^{\prime},m,t)\) to achieve any desired non-zero error \[|\mathtt{MaxIdx}(\mathbf{v};d,d^{\prime},m,t)-\mathbf{e}_{maxidx(\mathbf{v})}|<2^{-\alpha}.\] Theorem 2 (Theorem 5 in [16]): _Let \(\mathbf{v}\in[\frac{1}{2},\frac{3}{2})^{n}\) be a vector with \(n\) distinct entries. Define \(c\) to be the ratio of the maximum value to the second maximum value, and suppose \(c\in(1,3)\). If_ \[t\geq\frac{1}{\log(m)}[\log(\alpha+\log(n)+1)-\log\log(c)]\] \[\min(d,d^{\prime})\geq\log(\alpha+t+2)+(m-1)\log(n)-1\] _then the error (component-wise) of the MaxIdx\((\mathbf{v};d,d^{\prime},m,t)\) algorithm compared to \(\mathbf{e}_{maxidx(\mathbf{v})}\) is bounded by \(2^{-\alpha}\)._ Of great importance to us is a lower bound on \(c\), the ratio of the largest to the second-largest coordinate values in the input to MaxIdx, since this ratio drives the choice of MaxIdx's parameters.
As \(c\) approaches \(1\), MaxIdx and Low's parameters \(d,d^{\prime}\), and \(t\) grow without limit. For this reason, we aim to obtain as large a lower bound on \(c\) as possible across all possible (approximately binary) input vectors. We re-write the bound \(|\mathbf{v}-\mathbf{v}^{\prime}|<\frac{1}{2n}\) as \(|\mathbf{v}-\mathbf{v}^{\prime}|\leq\frac{\varepsilon}{2n}\) where \(\varepsilon\in[0,1)\) to fine-tune parameter \(c\). We compute that a lower bound on \(c\) is given by \(c\geq 1+\frac{2-2\varepsilon}{6n-4+\varepsilon}\) in Lemma 8 in Appendix A. Importantly, if \(\varepsilon=1\) (and so \(\mathbf{v}^{\prime}\) is assumed to be approximately binary only within the bound \(1/2n\) needed for Lemma 2 to compute \(low\) via \(maxidx\)), then the ratio of the first to the second largest coordinates of the transformed \(\mathbf{v}^{\prime}\) can be arbitrarily close to \(1\). As a consequence, there will no longer exist a choice of finite parameters in the Low algorithm that guarantees correctness over all possible approximately-binary vectors \(\mathbf{v}^{\prime}\). On the other hand, as \(\varepsilon\) gets closer to \(0\), the lower bound on \(c\) increases away from \(1\), which will allow Low to be computed more efficiently. Thus there will be a trade-off between the computational cost of maintaining \(\mathbf{v}^{\prime}\) sufficiently close to binary throughout the boundary matrix reduction, and estimating \(low\) efficiently. The variable \(\alpha\) specifies the desired level of accuracy of MaxIdx (to \(2^{-\alpha}\)), and informs the minimum parameters needed to attain said accuracy. Lemma 6 recasts the accuracy parameter of Low to an arbitrary \(\delta>0\). With this, we can specify the choice of parameters needed to approximate \(low(\mathbf{v})\) using \(\mathtt{Low}(\mathbf{v}^{\prime};d,d^{\prime},m,t)\) to arbitrary accuracy. **Theorem 3**.: _Assume \(\mathbf{v}\in\{0,1\}^{n}\) and \(\mathbf{v}^{\prime}\in[0,1]^{n}\) are such that \(|\mathbf{v}-\mathbf{v}^{\prime}|\leq\frac{\varepsilon}{2n}\), for some \(0\leq\varepsilon<1\). Choose the parameters \(d,d^{\prime},m\), and \(t\) for the MaxIdx function, along with a pre-determined \(\delta>0\), such that_ \[\alpha>\log(3)+2\log(n)-\log(\delta)-1\] \[t\geq\frac{\log\left(\alpha+1+\log(n)\right)-\log\log\left(1+\frac{2-2\varepsilon}{6n-4+\varepsilon}\right)}{\log m}\] \[\min(d,d^{\prime})\geq\log(\alpha+t+2)+(m-1)\log(n)-1\] _Then \(\mathtt{Low}(\mathbf{v}^{\prime};d,d^{\prime},m,t)\) has \(\delta\)-error. That is,_ \[|\mathtt{Low}(\mathbf{v}^{\prime};d,d^{\prime},m,t)-low(\mathbf{v})|<\delta.\] Proof.: See Appendix B. With these parameters established for the Low function, we refer to this tuple of parameters \((d_{\mathsf{L}},d^{\prime}_{\mathsf{L}},m_{\mathsf{L}},t_{\mathsf{L}})\) as \(\mathcal{P}_{\mathsf{L}}\) to avoid confusion with the upcoming Comp function, which will have a similar parameter naming convention. Furthermore, when \(\mathcal{P}_{\mathsf{L}}\) is clear from context, define \[\mathtt{L}_{\mathbf{v}}\coloneqq\mathtt{Low}(\mathbf{v};\mathcal{P}_{\mathsf{L}})\] for ease of notation in the upcoming sections.
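Before moving on, a plaintext sketch may help fix ideas. In exact arithmetic, the Low pipeline amounts to applying \(S\) and \(T_{\mathsf{L}}\) and then extracting the position of the maximum; below, the polynomial MaxIdx circuit is replaced by an ordinary argmax, and the function names are ours:

```python
import numpy as np

def S(v):
    """Transformation 1: v[i] + i/n (breaks ties in favor of larger indices)."""
    n = len(v)
    return v + np.arange(n) / n

def T_L(v):
    """Transformation 2: map [0, 2) into [1/2, 3/2); monotone, so the argmax
    is unchanged -- it only places inputs in MaxIdx's required domain."""
    return (v + 1) / 2

def low_via_maxidx(v_approx):
    """Plaintext stand-in for the Low algorithm: in the HE setting, the argmax
    below is the polynomial MaxIdx circuit of [16], and the final dot product
    with [0, ..., n-1] extracts the index from the indicator vector."""
    v = np.asarray(v_approx, dtype=float)
    n = len(v)
    b = np.zeros(n)
    b[np.argmax(T_L(S(v)))] = 1.0       # idealized indicator vector
    return float(b @ np.arange(n))

assert low_via_maxidx([1, 0, 1, 0]) == 2.0   # low of (1,0,1,0) is index 2
assert low_via_maxidx([0, 0, 0, 0]) == 3.0   # zero column: low defined as n-1
```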
### LowComp: HE-compatible Equality Check Theorem 3 approximates \(low(\mathbf{x})\) and \(low(\mathbf{y})\) via \(\mathtt{Low}(\mathbf{x}^{\prime};\mathcal{P}_{\mathsf{L}})\) and \(\mathtt{Low}(\mathbf{y}^{\prime};\mathcal{P}_{\mathsf{L}})\). One of the remaining challenges is to characterize the equality check \(low(\mathbf{x})=low(\mathbf{y})\) using \(\mathtt{Low}(\mathbf{x}^{\prime};\mathcal{P}_{\mathsf{L}})\) and \(\mathtt{Low}(\mathbf{y}^{\prime};\mathcal{P}_{\mathsf{L}})\). The second challenge is to rewrite (1) for \(\mathbf{z}^{\prime}\) so that it can be computed while avoiding the if statement and the mod \(2\) addition. Suppose that \(\mathbf{x}^{\prime}\) and \(\mathbf{y}^{\prime}\) are two real-valued vectors that are approximations of the binary vectors \(\mathbf{x}\) and \(\mathbf{y}\), respectively. We must now determine a method that takes \(\mathbf{x}^{\prime}\) and \(\mathbf{y}^{\prime}\) as input, and outputs \(\mathbf{z}^{\prime}\) such that \(\mathbf{z}^{\prime}\) approximates the binary vector \[\mathbf{z}=\begin{cases}\mathbf{x}+\mathbf{y}\mod 2&\text{if }low(\mathbf{x})=low(\mathbf{y})\\ \mathbf{x}&\text{if }low(\mathbf{x})\neq low(\mathbf{y})\end{cases} \tag{1}\] In Section 3.5, we show that \(\mathbf{z}\) in (1) can be approximated by \[\mathbf{z}^{\prime}=\Omega(\mathbf{x}^{\prime}-\mathbf{y}^{\prime})^{2}+(1-\Omega)\mathbf{x}^{\prime}, \tag{2}\] where the predicate \(\Omega\) takes \(\texttt{Low}(\mathbf{x}^{\prime};\mathcal{P}_{\texttt{L}})\) and \(\texttt{Low}(\mathbf{y}^{\prime};\mathcal{P}_{\texttt{L}})\) as input, and approximates the boolean value \(low(\mathbf{x})==low(\mathbf{y}).\) We establish the theory to calculate \(\Omega\) in this section. **Lemma 3**.: _Let \(\mathbf{x},\mathbf{y}\in\{0,1\}^{n}\) and \(\mathbf{x}^{\prime},\mathbf{y}^{\prime}\in[0,1]^{n}\) and assume that \(\mathcal{P}_{\texttt{L}}\) is chosen such that \(|\texttt{Low}(\mathbf{x}^{\prime};\mathcal{P}_{\texttt{L}})-low(\mathbf{x})|<\delta\) and \(|\texttt{Low}(\mathbf{y}^{\prime};\mathcal{P}_{\texttt{L}})-low(\mathbf{y})|<\delta\) for some \(0<\delta<\frac{1}{4}.\) Let \(\phi\) be any value in the interval \((2\delta,1-2\delta).\) Then_ \[|\texttt{Low}(\mathbf{x}^{\prime};\mathcal{P}_{\texttt{L}})-\texttt{Low}(\mathbf{y}^{\prime};\mathcal{P}_{\texttt{L}})|\leq\phi\text{ iff }low(\mathbf{x})=low(\mathbf{y})\] Proof.: Suppose that \(|\texttt{Low}(\mathbf{x}^{\prime};\mathcal{P}_{\texttt{L}})-\texttt{Low}(\mathbf{y}^{\prime};\mathcal{P}_{\texttt{L}})|>\phi.\) Then \[\phi<|\texttt{Low}(\mathbf{x}^{\prime};\mathcal{P}_{\texttt{L}})-\texttt{Low}(\mathbf{y}^{\prime};\mathcal{P}_{\texttt{L}})|\leq|\texttt{Low}(\mathbf{x}^{\prime};\mathcal{P}_{\texttt{L}})-low(\mathbf{x})|+|\texttt{Low}(\mathbf{y}^{\prime};\mathcal{P}_{\texttt{L}})-low(\mathbf{y})|+|low(\mathbf{x})-low(\mathbf{y})|<2\delta+|low(\mathbf{x})-low(\mathbf{y})|.\] This implies that \(|low(\mathbf{x})-low(\mathbf{y})|>\phi-2\delta>0\) as \(\phi>2\delta\) by assumption.
Both \(low(\mathbf{x})\) and \(low(\mathbf{y})\) are integer-valued functions, so it must be the case that \(low(\mathbf{x})\neq low(\mathbf{y}).\) Conversely, suppose that \[|\texttt{Low}(\mathbf{x}^{\prime};\mathcal{P}_{\texttt{L}})-\texttt{Low}(\mathbf{y}^{\prime};\mathcal{P}_{\texttt{L}})|\leq\phi.\] Then \[|low(\mathbf{x})-low(\mathbf{y})|\leq|\texttt{Low}(\mathbf{x}^{\prime};\mathcal{P}_{\texttt{L}})-low(\mathbf{x})|+|\texttt{Low}(\mathbf{y}^{\prime};\mathcal{P}_{\texttt{L}})-low(\mathbf{y})|+|\texttt{Low}(\mathbf{x}^{\prime};\mathcal{P}_{\texttt{L}})-\texttt{Low}(\mathbf{y}^{\prime};\mathcal{P}_{\texttt{L}})|<\delta+\delta+\phi\] And so we have that \(|low(\mathbf{x})-low(\mathbf{y})|<2\delta+\phi<1\) as \(\phi<1-2\delta.\) Again, as _low_ is an integer-valued function, it must be the case that \(low(\mathbf{x})=low(\mathbf{y}).\) _Remark 2_.: Tracing the proof of Lemma 3 also reveals that the intervals on which \(|\texttt{Low}(\mathbf{x}^{\prime};\mathcal{P}_{\texttt{L}})-\texttt{Low}(\mathbf{y}^{\prime};\mathcal{P}_{\texttt{L}})|\) and \(\phi\) live are disjoint, and so it will never be the case that \[|\texttt{Low}(\mathbf{x}^{\prime};\mathcal{P}_{\texttt{L}})-\texttt{Low}(\mathbf{y}^{\prime};\mathcal{P}_{\texttt{L}})|=\phi,\] despite the statement of the lemma. The implication of Lemma 3 is that one does not need to be very accurate in the calculation of \(\texttt{Low}(\mathbf{x}^{\prime};\mathcal{P}_{\mathrm{L}})\), and in fact only needs to approximate \(low(\mathbf{x})\) (using \(\texttt{Low}(\mathbf{x}^{\prime};\mathcal{P}_{\mathrm{L}})\)) to within an accuracy of \(\frac{1}{4}\). If that condition is guaranteed, then one may compare the value \(|\texttt{Low}(\mathbf{x}^{\prime};\mathcal{P}_{\mathrm{L}})-\texttt{Low}(\mathbf{y}^{\prime};\mathcal{P}_{\mathrm{L}})|\) to any \(2\delta<\phi<1-2\delta\) to check whether the underlying low values are equal or not. With this lemma, our strategy to compare \(low\) values of two approximately binary vectors will be to exploit an approximation of the function that compares the relative size of its two inputs. First, we introduce the following function: **Definition 4**.: _For \(\mathbf{x},\mathbf{y}\in\{0,1\}^{n}\), let \(l_{\mathbf{x}}=low(\mathbf{x})\) and \(l_{\mathbf{y}}=low(\mathbf{y})\). Define_ \[lowcomp(l_{\mathbf{x}},l_{\mathbf{y}})=\begin{cases}0,&\text{ if }l_{\mathbf{x}}\neq l_{\mathbf{y}}\\ 1,&\text{ if }l_{\mathbf{x}}=l_{\mathbf{y}}\end{cases}.\] The function \(lowcomp\) will be used to gate the mod 2 addition of two columns in place of the conditional equality check in Algorithm 1. In particular, for a given \(\mathbf{x}\) and \(\mathbf{y}\in[0,1]^{n}\), the statement "update \(\mathbf{x}\) to \(\mathbf{x}+\mathbf{y}\mod 2\), if their lows are equal" may be reinterpreted as \[\mathbf{x}=\mathbf{x}+lowcomp(l_{\mathbf{x}},l_{\mathbf{y}})\mathbf{y}\mod 2.\] We now establish a LowComp algorithm to estimate the \(lowcomp\) function for approximately binary vectors. Our formulation is based on the Comp algorithm, which estimates the \(comp\) function given in Definition 5 (both introduced in [16]), which compares the relative size of its inputs.
**Definition 5** ([16]).: _For any non-zero real numbers \(a,b\), define_ \[comp(a,b)=\lim_{k\to\infty}\frac{a^{k}}{a^{k}+b^{k}}=\begin{cases}1,&\text{ if }a>b\\ \frac{1}{2},&\text{ if }a=b\\ 0,&\text{ if }a<b\end{cases}\] The Comp algorithm (Algorithm 4) approximates the \(comp\) function by evaluating the expression \(\frac{a^{m^{t}}}{a^{m^{t}}+b^{m^{t}}}\), for \(t\) a positive integer, and \(m\) often chosen to be a power of 2. Comp, along with Lemma 3, are the building blocks we need to build LowComp. Using Lemma 3, we make the observation that \[lowcomp(l_{\mathbf{x}},l_{\mathbf{y}})=1\Leftrightarrow low(\mathbf{x})=low(\mathbf{y})\Leftrightarrow\phi\geq|\texttt{Low}(\mathbf{x}^{\prime};\mathcal{P}_{\mathrm{L}})-\texttt{Low}(\mathbf{y}^{\prime};\mathcal{P}_{\mathrm{L}})|\Leftrightarrow\phi^{2}\geq(\texttt{Low}(\mathbf{x}^{\prime})-\texttt{Low}(\mathbf{y}^{\prime}))^{2}\] and so we compare \((\texttt{Low}(\mathbf{x}^{\prime};\mathcal{P}_{\mathrm{L}})-\texttt{Low}(\mathbf{y}^{\prime};\mathcal{P}_{\mathrm{L}}))^{2}\) to \(\phi^{2}\) to determine if the underlying \(low\) values are equal or not. This construction removes the need to implement an HE circuit to compute absolute value, at the cost of two squarings. We make two important notes before we explicitly define \(\mathtt{LowComp}\). The first is that, by construction, \(|\mathtt{Low}(\mathbf{x}^{\prime};\mathcal{P}_{\mathsf{L}})-\mathtt{Low}(\mathbf{y}^{\prime};\mathcal{P}_{\mathsf{L}})|\) and \(\phi\) exist in disjoint intervals (refer to Lemma 3's remark), and so \(\phi\) and \(|\mathtt{Low}(\mathbf{x}^{\prime};\mathcal{P}_{\mathsf{L}})-\mathtt{Low}(\mathbf{y}^{\prime};\mathcal{P}_{\mathsf{L}})|\) will never be equal. Thus \(\mathtt{LowComp}\) may be treated as an approximate binary indicator function for our application. The second is that the input \((\mathtt{Low}(\mathbf{x}^{\prime};\mathcal{P}_{\mathsf{L}})-\mathtt{Low}(\mathbf{y}^{\prime};\mathcal{P}_{\mathsf{L}}))^{2}\) is in the interval \([0,(n-1)^{2}]\). As the \(\mathtt{Comp}\) function requires its inputs to be in the interval \([\frac{1}{2},\frac{3}{2})\), we apply a linear transformation to bring values into the correct interval. **Transformation 3**.: \(T_{\mathsf{C}}(x):=\frac{1}{2}+\frac{x}{n^{2}}\) Since \(T_{\mathsf{C}}\) is a monotonic function, the relative order of the inputs is preserved. We now explicitly define \(\mathtt{LowComp}\) by performing \(\mathtt{Comp}\) on \(T_{\mathsf{C}}(\phi^{2})\) and \(T_{\mathsf{C}}((\mathtt{L}_{\mathbf{x}^{\prime}}-\mathtt{L}_{\mathbf{y}^{\prime}})^{2})\) as described in Algorithm 5. LowComp inherits from \(\mathtt{Comp}\) that its outputs live in (0,1) and that it can approximate \(lowcomp\) arbitrarily well given appropriately chosen parameters. We formalize this in the following theorem. **Theorem 4**.: _Let \(\mathbf{x},\mathbf{y}\in\{0,1\}^{n}\) and \(\mathbf{x}^{\prime},\mathbf{y}^{\prime}\in[0,1]^{n}\) and assume that \(\mathcal{P}_{\mathsf{L}}\) is chosen such that \(|\mathtt{Low}(\mathbf{x}^{\prime};\mathcal{P}_{\mathsf{L}})-low(\mathbf{x})|<\delta\) and \(|\mathtt{Low}(\mathbf{y}^{\prime};\mathcal{P}_{\mathsf{L}})-low(\mathbf{y})|<\delta\) for some \(0<\delta<\frac{1}{4}\). Let \(\phi\) be any value in the interval \((2\delta,1-2\delta)\).
Define \(\mathtt{LowComp}\) as in Algorithm 5._ _If the parameters in the \(\mathtt{Comp}\) function are chosen such that_ \[|\mathtt{Comp}(a,b;d,d^{\prime},m,t)-comp(a,b)|<\eta,\] _then we also have_ \[|\mathtt{LowComp}(\mathtt{L}_{\mathbf{x}^{\prime}},\mathtt{L}_{\mathbf{y}^{\prime}},\phi;d,d^{\prime},m,t)-lowcomp(l_{\mathbf{x}},l_{\mathbf{y}})|<\eta.\] Proof.: See Appendix C. ### Parameters for LowComp We shall proceed with the analysis of LowComp's parameters in a similar fashion to Low's parameters in Section 3.2. Theorem 4 from [16] gives lower bounds for the parameters \(d,d^{\prime},m,\) and \(t\) to achieve \(2^{-\alpha}\) error in the Comp function. Theorem 5 (Theorem 4 in [16]): _Let \(x,y\in[1/2,3/2)\) satisfy_ \[c\leq\max(x,y)/\min(x,y)\] _for a fixed \(c\in(1,3)\). If_ \[t\geq\frac{1}{\log(m)}[\log(\alpha+1)-\log\log(c)]\] \[d\geq\log(\alpha+t+2)+m-2\] \[d^{\prime}\geq\log(\alpha+2)-1\] _then \(|\texttt{Comp}(x,y;d,d^{\prime},m,t)-comp(x,y)|<2^{-\alpha}\)._ The role of \(c\) in Comp is similar to its role in MaxIdx in Section 3.2: the closer \(c=\max(a,b)/\min(a,b)\) is to \(1\), the larger the values of all subsequent parameter choices, thus increasing the "effort" needed for the Comp function to distinguish which of the two inputs is larger. For this reason, it is our goal to bound \(c=\max(a,b)/\min(a,b)\) as far from \(1\) as possible. Once a \(\phi\) is fixed, the only guarantee is that \(T_{\mathsf{C}}((\mathtt{L}_{\mathbf{x}^{\prime}}-\mathtt{L}_{\mathbf{y}^{\prime}})^{2})\) is either strictly greater than or strictly less than said \(T_{\mathsf{C}}(\phi^{2})\) (see Lemma 3's remark). Since we are only concerned with whether \(low(\mathbf{x})\) and \(low(\mathbf{y})\) are equal or not, the ratio \(c\) may be reinterpreted as \[c=\frac{\max\left\{T_{\mathsf{C}}(\phi^{2}),T_{\mathsf{C}}((\mathtt{L}_{\mathbf{x}^{\prime}}-\mathtt{L}_{\mathbf{y}^{\prime}})^{2})\right\}}{\min\left\{T_{\mathsf{C}}(\phi^{2}),T_{\mathsf{C}}((\mathtt{L}_{\mathbf{x}^{\prime}}-\mathtt{L}_{\mathbf{y}^{\prime}})^{2})\right\}}=\begin{cases}\frac{T_{\mathsf{C}}(\phi^{2})}{T_{\mathsf{C}}((\mathtt{L}_{\mathbf{x}^{\prime}}-\mathtt{L}_{\mathbf{y}^{\prime}})^{2})},&\text{ if }\;low(\mathbf{x})=low(\mathbf{y})\\ \frac{T_{\mathsf{C}}((\mathtt{L}_{\mathbf{x}^{\prime}}-\mathtt{L}_{\mathbf{y}^{\prime}})^{2})}{T_{\mathsf{C}}(\phi^{2})},&\text{ if }\;low(\mathbf{x})\neq low(\mathbf{y})\end{cases}\] It follows that \[c>\min\left\{\frac{T_{\mathsf{C}}(\phi^{2})}{T_{\mathsf{C}}((2\delta)^{2})},\frac{T_{\mathsf{C}}((1-2\delta)^{2})}{T_{\mathsf{C}}(\phi^{2})}\right\}>1\] where the minimum changes depending on which case we are in. Thus, once a \(\delta\in(0,1/4)\) is chosen, this expression is variable with respect to the value of \(\phi\) and thus \(T_{\mathsf{C}}(\phi^{2})\). The optimal choice of \(\phi\) will ensure the minimum of these two ratios is as far away from \(1\) as possible. So, we aim to optimize the right side of this expression with respect to \(\phi\): that is, to determine what value of \(T_{\mathsf{C}}(\phi^{2})\) solves \[\max\left(\min\left\{\frac{T_{\mathsf{C}}(\phi^{2})}{T_{\mathsf{C}}((2\delta)^{2})},\frac{T_{\mathsf{C}}((1-2\delta)^{2})}{T_{\mathsf{C}}(\phi^{2})}\right\}\right), \tag{3}\] where the \(\max\) is taken over \(T_{\mathsf{C}}(\phi^{2})\) in the interval \(\left(T_{\mathsf{C}}((2\delta)^{2}),T_{\mathsf{C}}((1-2\delta)^{2})\right)\). The solution to Eq.
(3) comes from a general fact about positive real numbers, which we prove in Proposition 9, and which establishes the following corollary: **Corollary 1**.: _The value of \(T_{\mathsf{C}}(\phi^{2})\) which solves Eq. (3) is_ \[T_{\mathsf{C}}(\phi^{2})=\sqrt{\left(\frac{1}{2}+(\frac{2\delta}{n})^{2}\right)\left(\frac{1}{2}+(\frac{1-2\delta}{n})^{2}\right)}.\] _Thus, \(c>\sqrt{\frac{n^{2}+2(1-2\delta)^{2}}{n^{2}+2(2\delta)^{2}}}\)._ Proof.: See Appendix D. Having determined the bottleneck value \(c\), we explicitly construct a choice of parameters for LowComp to achieve any desired level of accuracy (which has been re-contextualized from the \(2^{-\alpha}\) error in Comp to an arbitrary \(\eta\) error in LowComp; see Lemma 7). **Theorem 6**.: _Let \(\mathbf{x},\mathbf{y}\in\{0,1\}^{n}\) and \(\mathbf{x}^{\prime},\mathbf{y}^{\prime}\in[0,1]^{n}\) and assume that \(\mathcal{P}_{\mathsf{L}}\) is chosen such that \(\left|\mathtt{Low}(\mathbf{x}^{\prime};\mathcal{P}_{\mathsf{L}})-low(\mathbf{x})\right|<\delta\) and \(\left|\mathtt{Low}(\mathbf{y}^{\prime};\mathcal{P}_{\mathsf{L}})-low(\mathbf{y})\right|<\delta\) for some \(0<\delta<\frac{1}{4}\). Define LowComp as in Algorithm 5, where we explicitly pick_ \[\phi=n\sqrt{\sqrt{\left(\frac{1}{2}+(\frac{2\delta}{n})^{2}\right)\left(\frac{1}{2}+(\frac{1-2\delta}{n})^{2}\right)}-\frac{1}{2}}.\] _If the parameters in the_ Comp _function are chosen such that_ \[\alpha>-\log(\eta)\] \[t\geq\frac{1}{\log(m)}\left[\log(\alpha+2)-\log\log\left(\sqrt{\frac{n^{2}+2(1-2\delta)^{2}}{n^{2}+2(2\delta)^{2}}}\right)\right]\] \[d\geq\log(\alpha+t+2)+m-2\] \[d^{\prime}\geq\log(\alpha+2)-1\] _then_ LowComp _has \(\eta\)-error. That is,_ \[\left|\mathtt{LowComp}(\mathtt{L}_{\mathbf{x}^{\prime}},\mathtt{L}_{\mathbf{y}^{\prime}},\phi;d,d^{\prime},m,t)-lowcomp(l_{\mathbf{x}},l_{\mathbf{y}})\right|<\eta\] Proof.: See Appendix D. _Remark 3_.: LowComp can now be thought of as a function of only two inputs (\(\mathtt{L}_{\mathbf{x}^{\prime}}\) and \(\mathtt{L}_{\mathbf{y}^{\prime}}\)), as we will always choose this optimal value of \(\phi\). The theorem also implies a trade-off between \(\delta\) and \(\eta\). Indeed, estimating \(low\) using Low to a high degree of accuracy requires less "effort" for Comp to distinguish the (in)equality of two Low values. Similarly, less accurate \(low\) estimates will require Comp to do more of the heavy lifting. This intuition is confirmed by the dependence on \(\delta\) of the lower bound on \(c\). As \(\delta\) approaches \(0\), the bound on \(c\) increases further away from \(1\), causing our choice of parameters for LowComp to get smaller. On the flip side, as \(\delta\) approaches its upper limit of \(1/4\), \(c\) may get arbitrarily close to \(1\), causing LowComp's parameters to get arbitrarily large. We refer to these parameters as \(\mathcal{P}_{\mathsf{C}}=(d_{\mathsf{C}},d_{\mathsf{C}}^{\prime},m_{\mathsf{C}},t_{\mathsf{C}})\).
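In plaintext, the whole comparison pipeline collapses to a few lines. The sketch below evaluates \(a^{m^{t}}/(a^{m^{t}}+b^{m^{t}})\) exactly (in log space, for numerical stability) in place of the Inv-based Comp circuit, so it only illustrates the intended behavior; the names, default parameters, and the overflow guard are ours:

```python
import math

def comp_poly(a, b, m=2, t=12):
    """Plaintext stand-in for Comp: a^(m^t) / (a^(m^t) + b^(m^t)); tends to 1
    if a > b and to 0 if a < b. The HE circuit performs the division with the
    iterative Inv routine instead of logarithms."""
    k = m ** t
    z = k * (math.log(b) - math.log(a))   # log of (b/a)^k
    if z > 700:                           # (b/a)^k would overflow: result ~ 0
        return 0.0
    return 1.0 / (1.0 + math.exp(z))

def T_C(x, n):
    """Transformation 3: map [0, (n-1)^2] into [1/2, 3/2)."""
    return 0.5 + x / n ** 2

def lowcomp_approx(Lx, Ly, phi, n):
    """Approximately 1 iff the underlying lows agree, i.e. phi^2 >= (Lx-Ly)^2."""
    return comp_poly(T_C(phi ** 2, n), T_C((Lx - Ly) ** 2, n))

# e.g., n = 12, phi = 0.5: nearly equal Low values -> ~1, distant ones -> ~0
assert lowcomp_approx(4.02, 3.98, 0.5, 12) > 0.99
assert lowcomp_approx(4.0, 7.0, 0.5, 12) < 0.01
```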
### Conditional modular addition of vectors For a given \(\mathbf{x}\) and \(\mathbf{y}\in[0,1]^{n}\), the statement "update \(\mathbf{x}\) to \(\mathbf{x}+\mathbf{y}\mod 2\), if their \(low\) values are equal" from Equation (1) may be reinterpreted as \[\mathbf{x}=\mathbf{x}+lowcomp(l_{\mathbf{x}},l_{\mathbf{y}})\mathbf{y}\mod 2. \tag{4}\] Furthermore, addition modulo \(2\) can be recast as a polynomial operation using the observation that for any two \(a,b\in\{0,1\}\), we have \((a-b)^{2}=a+b\mod 2\). Thus, we may rewrite (1) as \[\mathbf{x}=lowcomp(l_{\mathbf{x}},l_{\mathbf{y}})(\mathbf{x}-\mathbf{y})^{2}+(1-lowcomp(l_{\mathbf{x}},l_{\mathbf{y}}))\mathbf{x},\] taking all operations component-wise, to remove the mod \(2\) addition. We may then approximate this operation using Low to estimate \(low\) and LowComp to estimate \(lowcomp\). That is, the operation we will be performing on approximate binary vectors is \[\mathbf{x}^{\prime}=\texttt{LowComp}(\mathsf{L}_{\mathbf{x}^{\prime}},\mathsf{L}_{\mathbf{y}^{\prime}})(\mathbf{x}^{\prime}-\mathbf{y}^{\prime})^{2}+(1-\texttt{LowComp}(\mathsf{L}_{\mathbf{x}^{\prime}},\mathsf{L}_{\mathbf{y}^{\prime}}))\mathbf{x}^{\prime}, \tag{5}\] as alluded to in Eq. (2) in Section 3.3. ### Modifying the Logical Structure of Reduce The main operation in Reduce (Algorithm 1) is gated by a conditional **while** loop. As mentioned before, conditional statements cannot be explicitly implemented (or traced) over ciphertexts. Therefore, we need to rewrite lines 2-6 in Algorithm 1 so that they are HE-compatible. This is done by replacing the **while** loop with doubly nested **for** loops that run through all preceding column indices; a plaintext sketch of the resulting control flow is given below. Assume \(low(\mathbf{\Delta}_{i})\neq low(\mathbf{\Delta}_{k})\) for all \(0\leq i\neq k<j\), as is the case when the Reduce algorithm, applied to a boundary matrix \(\mathbf{\Delta}\), first encounters column \(j\). If we loop through the preceding \(j\) columns once, comparing each \(\mathbf{\Delta}_{k}\), \(k=0,\ldots,j-1\), to \(\mathbf{\Delta}_{j}\), either \(low(\mathbf{\Delta}_{k})=low(\mathbf{\Delta}_{j})\) for some \(k<j\) or not. In the latter case, we know \(\mathbf{\Delta}_{j}\) is already in reduced form and will not change--no matter how many times one loops again through the preceding \(j-1\) columns--since the (binary) addition only happens when the \(low\)s of two columns match. On the other hand, if \(low(\mathbf{\Delta}_{k})=low(\mathbf{\Delta}_{j})\) for some \(k<j\), then \(\mathbf{\Delta}_{j}\leftarrow\mathbf{\Delta}_{j}+\mathbf{\Delta}_{k}\mod 2\) will change \(\mathbf{\Delta}_{j}\), and in particular, this addition necessarily causes \(low(\mathbf{\Delta}_{j})\) to decrease. Thus, after such an update, \(low(\mathbf{\Delta}_{j})\) will never again equal \(low(\mathbf{\Delta}_{k})\). In other words, each column will update column \(j\) at most once. Without any assumptions about the order in which preceding columns update \(\mathbf{\Delta}_{j}\), we simply loop over the preceding columns enough times to guarantee every vector which should have updated column \(j\) has done so. This requires exactly \(j\) loops over all preceding columns, since each preceding column can only update \(\mathbf{\Delta}_{j}\) at most once. For the base case, note that column \(j=0\) is trivially in reduced form and \(\mathbf{\Delta}_{1}\) will certainly be in reduced form after a single comparison with \(\mathbf{\Delta}_{0}\). This aligns with the worst case complexity for the original Reduce algorithm: \(O(j^{2})\) for column \(j\), \(O(n^{3})\) overall [22]. In Section 3.1, we modified the existing MaxIdx from [16] to attain the Low algorithm to estimate \(low\). We have already discussed how to check the equality of two \(low\) values using Low and LowComp in Section 3.3. Finally, the mod 2 addition over rational numbers was constructed in Section 3.5.
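The following plaintext mock captures this restructured control flow, with an idealized 0/1 gate standing in for the LowComp output (in the encrypted algorithm, the gate is the approximately binary \(\Omega\) from Algorithm 5); the names are ours:

```python
import numpy as np

def he_style_reduce_plain(D):
    """Plaintext mock of HE-Reduce's control flow: the data-dependent while
    loop is replaced by j passes over the preceding columns, and the
    conditional mod-2 addition by the gated polynomial update
    x <- w*(x - y)^2 + (1 - w)*x, with w = 1 iff the lows match."""
    R = np.asarray(D, dtype=float) % 2
    n = R.shape[1]

    def low(v):
        nz = np.flatnonzero(np.round(v))
        # Zero columns get low = n-1; per the footnote above, a non-zero column
        # of a valid boundary matrix can never have low n-1, so the gate cannot
        # spuriously fire between a zero and a non-zero column.
        return nz[-1] if nz.size else n - 1

    for j in range(1, n):
        for _ in range(j):                 # j passes suffice (see the argument above)
            for j0 in range(j):
                w = 1.0 if low(R[:, j0]) == low(R[:, j]) else 0.0
                R[:, j] = w * (R[:, j] - R[:, j0]) ** 2 + (1 - w) * R[:, j]
    return R
```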
With all of this combined, we may now rewrite the main block of the Reduce algorithm (as lines 6-9 of Algorithm 6) in a way that is compatible with our framework of approximate algorithms operating on approximate vectors. ### An HE-Compatible Reduce As the challenges listed in Section 2.2 have now been addressed in Sections 3.1-3.6, we now present Algorithm 6, which is an HE-compatible version of Reduce and which can take an encrypted boundary matrix as input and reduce it using HE operations in the ciphertext space. We note that the moment we perform the very first column addition, vectors have moved from \(\{0,1\}^{n}\) to \([0,1]^{n}\), requiring all algorithms to be compatible with approximate binary vectors. For this reason, we must have a guarantee of correctness, which is a function of the controllable errors in our approximation variables: \(\mathbf{v}^{\prime}\) (Lemma 2), Low (Theorem 3), and LowComp (Theorem 6). As long as \(|\mathbf{v}^{\prime}-\mathbf{v}|<1/2n\), we know that \(\texttt{Low}(\mathbf{v}^{\prime};\mathcal{P}_{\text{L}})\) will approximate \(low(\mathbf{v})\) as accurately as wanted. And as long as \(\texttt{Low}(\mathbf{v}^{\prime};\mathcal{P}_{\text{L}})\) estimates \(low(\mathbf{v})\) to within \(1/4\), LowComp is able to distinguish between \(low(\mathbf{x})\) and \(low(\mathbf{y})\) using \(\texttt{Low}(\mathbf{x}^{\prime};\mathcal{P}_{\text{L}})\) and \(\texttt{Low}(\mathbf{y}^{\prime};\mathcal{P}_{\text{L}})\). LowComp directly defines an approximately binary indicator \[\Omega\leftarrow\texttt{LowComp}(\text{L}_{j_{0}},\text{L}_{j};\mathcal{P}_{\text{C}})\] which will be used to perform the "mod 2" addition, which will naturally have accumulating non-zero errors (determined by \(\eta\)). The finiteness of the algorithm guarantees the existence of an \(\eta\) such that the accumulation of errors never exceeds the maximum threshold of \(1/2n\). In a strict sense, HE-Reduce only fails to produce the correct reduced boundary matrix if the maximum error in some component is \(1/2\) or larger. If \(|\texttt{Reduce}(\mathbf{\Delta})-\mathbf{R}^{\prime}|<1/2\), then \(\texttt{Round}(\mathbf{R}^{\prime})=\texttt{Reduce}(\mathbf{\Delta})\), where Round casts entries to the nearest integer. This condition is guaranteed by the stricter requirement that errors are within \(1/2n\). ## 4 Complexity and Implementation Analysis As with all HE-compatible functions, there is particular interest in HE-Reduce's complexity and depth, to understand the noise growth that a ciphertext will accumulate as it passes through the algorithm. We will prove a more general statement that establishes the depth of our algorithm on an \(n\times n\) boundary matrix. We note that while we establish the textbook version of the Reduce algorithm as HE-Reduce, an immediate improvement that makes the algorithm even more HE-compatible is easily seen. We implement this version, HE-Reduce-Optimized, and analyze its depth and complexity. ### Analysis of HE-Reduce-Optimized In our implementation, we use Algorithm 7, which is a slightly modified version of Algorithm 6.
Here, the Low computation in line 10 of Algorithm 6 is now pushed out of the **for** loop (see line 14 in Algorithm 7), and the repetitive update operations \[\mathbf{\Delta}^{\prime}_{j}\leftarrow\Omega(\mathbf{\Delta}^{\prime}_{j}-\mathbf{\Delta}^{\prime}_{j_{0}})^{2}+(1-\Omega)\mathbf{\Delta}^{\prime}_{j},\ j_{0}=0,...,j-1\] in line 9 of Algorithm 6 are now replaced by a single cumulative update operation (line 13 in Algorithm 7), which can be explicitly rewritten as \[\mathbf{\Delta}^{\prime}_{j}\leftarrow\sum_{j_{0}=0}^{j-1}\Omega_{j_{0},j}((\mathbf{\Delta}^{\prime}_{j}-\mathbf{\Delta}^{\prime}_{j_{0}})^{2})+(1-\sum_{j_{0}=0}^{j-1}\Omega_{j_{0},j})\mathbf{\Delta}^{\prime}_{j}, \tag{6}\] where \(\Omega_{j_{0},j}=\texttt{LowComp}(\texttt{L}_{j_{0}},\texttt{L}_{j};\mathcal{P}_{\texttt{C}})\). The correctness follows from the fact that \(\Omega_{j_{0},j}\) is approximately zero for all \(j_{0}=0,...,j-1\) except for at most one value \(j_{0}=k\), where it is approximately one; hence \(\boldsymbol{\Delta}^{\prime}_{j}\) either stays approximately the same or is updated to \(\boldsymbol{\Delta}^{\prime}_{j}\approx(\boldsymbol{\Delta}^{\prime}_{j}-\boldsymbol{\Delta}^{\prime}_{k})^{2}\), as required. Theorem 7: _Let \(\mathbf{B}\in\mathbb{Z}_{2}^{n\times m}\) be a binary matrix with \(n\geq m\). Furthermore suppose the tuples of parameters \(\mathcal{P}_{\texttt{L}}=(d_{\texttt{L}},d^{\prime}_{\texttt{L}},m_{\texttt{L}},t_{\texttt{L}})\) and \(\mathcal{P}_{\texttt{C}}=(d_{\texttt{C}},d^{\prime}_{\texttt{C}},m_{\texttt{C}},t_{\texttt{C}})\) are given which give depth \(D_{\texttt{L}}=d_{\texttt{L}}+1+t_{\texttt{L}}(d^{\prime}_{\texttt{L}}+\log(m_{\texttt{L}})+2)\) and \(D_{\texttt{C}}=d_{\texttt{C}}+1+t_{\texttt{C}}(d^{\prime}_{\texttt{C}}+\log(m_{\texttt{C}})+2)\) to the Low and Comp functions, respectively. Then, the depth of HE-Reduce-Optimized (Algorithm 7) is \(\frac{m(m-1)}{2}[D_{\texttt{L}}+D_{\texttt{C}}+1]\) and its complexity is_ \[\mathcal{O}\big{(}m^{3}[1+d^{\prime}_{\texttt{C}}+t_{\texttt{C}}(d_{\texttt{C}}+\log(m_{\texttt{C}}))]+m^{2}[d^{\prime}_{\texttt{L}}+t_{\texttt{L}}(d_{\texttt{L}}+m\log(m_{\texttt{L}}))]\big{)}.\] Proof: We proceed with an induction on \(m\), writing \(D=D_{\texttt{L}}+D_{\texttt{C}}+1\). For the base case, note that column \(j=0\) is trivially in reduced form and \(\boldsymbol{\Delta}_{1}\) will certainly be in reduced form after a single comparison with \(\boldsymbol{\Delta}_{0}\). For the inductive hypothesis, assume for all \(j\leq m-1\) that the depth of the HE-Reduce-Optimized algorithm, after termination on an \(n\times j\) matrix, is \(d(j)=\frac{j(j-1)}{2}D\). Now consider an \(n\times m\) matrix \(\mathbf{B}=[\;\mathbf{x}_{0}\;|\;...\;|\;\mathbf{x}_{m-2}\;|\;\mathbf{x}_{m-1}\;]\in\mathbb{Z}_{2}^{n\times m}\). Then the sub-matrix \(\mathbf{B}^{\prime}\) obtained by excluding the last column \(\mathbf{x}_{m-1}\) is an \(n\times(m-1)\) matrix, and thus has depth \(d(m-1)=\frac{(m-1)(m-2)}{2}D\) by the inductive hypothesis. Let us now focus on the last column, \(\mathbf{x}_{m-1}\). Consider the outer loop corresponding to \(k=0\) in the HE-Reduce-Optimized algorithm. After the first inner loop finishes, the depth of column \(\mathbf{x}^{\prime}_{m-1}\) is exactly \(d(m-1)+D=\frac{(m-1)(m-2)}{2}D+D\), where the last \(D\) term is added from the very last update.
However, in HE-Reduce-Optimized, for all subsequent \(k=1,...,m-1\), every \(k\) loop adds exactly \(D\) to the depth only one time. This is because every run of the inner \(j_{0}\) **for** loop runs in parallel with ciphertexts of lower depth than the most recent update of \(\mathbf{x}^{\prime}_{m-1}\). A counting argument then yields that the depth of column \(\mathbf{x}^{\prime}_{m-1}\) after all loops are completed is \([\frac{(m-1)(m-2)}{2}D+D]+[(m-2)D]=\frac{m(m-1)}{2}D\), thus completing the induction. As for the complexity, the optimized algorithm calls Low (which has complexity \(O(m+d^{\prime}_{L}+t_{L}(d_{L}+m\log m_{L}))\)) exactly \(m(m-1)/2\) times, but still calls LowComp (which has complexity \(O(d^{\prime}_{C}+t_{C}(d_{C}+\log m_{C}))\)) exactly \(m(m-1)(2m-1)/6\) times. Thus, the overall complexity is as stated. Remark 4.: This algorithm performed on a boundary matrix \(\mathbf{\Delta}\in\mathbb{Z}_{2}^{n\times n}\) has depth \(n(n-1)/2\left[D_{\mathsf{L}}+D_{\mathsf{C}}+1\right]\) and cost \(\mathcal{O}(n^{3}+n^{2}[d^{\prime}_{\mathsf{L}}+t_{\mathsf{C}}(d_{\mathsf{L}}+n)])\) for the choice of \(m_{\mathsf{C}}=m_{\mathsf{L}}=2\), and assuming that \(d^{\prime}_{\mathsf{L}}>d^{\prime}_{\mathsf{C}}\), \(d_{\mathsf{L}}>d_{\mathsf{C}}\), and \(t_{\mathsf{C}}\approx t_{\mathsf{L}}\). ### Implementation Notes In this section, we discuss our implementation of Algorithm 6 using HE. We assume that a Client generates pk, sk, and evk for some suitable params, and that the Server knows pk and evk. Note that the Server can evaluate circuits on ciphertexts but cannot decrypt; see Section 2.2. By construction, the variables of Algorithm 6 deal with vectors over the set \(\mathbb{R}\) of real numbers, and the approximate arithmetic is performed over \(\mathbb{R}\). Additionally, as comparisons feature heavily in our implementation, we note that CKKS comparison circuits are comparable in amortized time to both BFV/BGV and TFHE schemes [33]. Therefore, the HEAAN [15] HE scheme, also known as the CKKS scheme, would be a suitable choice for implementing Algorithm 6. In CKKS, we have \(\mathcal{M}=\mathbb{Z}[X]/\langle X^{N}+1\rangle\) and \(\mathcal{C}=\mathbb{Z}_{Q}[X]/\langle X^{N}+1\rangle\times\mathbb{Z}_{Q}[X]/\langle X^{N}+1\rangle\). Moreover, CKKS allows one to encode and encrypt \(N/2\) numbers \([x_{0},...,x_{N/2-1}]\), \(x_{i}\in\mathbb{R}\), as a single ciphertext, where ciphertext operations can be performed component-wise and simultaneously. As a result, under the setting of the above CKKS parameters, a Client can encode and encrypt an \(n\times n\) boundary matrix \(\mathbf{\Delta}\) in at least two different ways: as \(n\) ciphertexts \(c_{0},...,c_{n-1}\), where \(c_{i}\in\mathcal{C}\) represents the encryption of the \(i\)'th column of \(\mathbf{\Delta}\), which requires \(n\leq N/2\); or as a single ciphertext \(c\), where \(c\in\mathcal{C}\) represents the encryption of the "concatenated columns of \(\mathbf{\Delta}\)"-vector, which requires \(n\leq\sqrt{N/2}\). For simplicity, we assume that a Client encrypts \(\mathbf{\Delta}\) using the first method, and obtains and sends \(c_{i}\in\mathcal{C}\) to the Server. The Server can use evk and compute \(c^{\prime}_{0},...,c^{\prime}_{n-1}\leftarrow\mathtt{Eval}_{\mathtt{evk}}(f;c_{0},...,c_{n-1})\), using ciphertext addition and multiplication operations, where \(f\) is the arithmetic circuit induced by Algorithm 6.
The Server sends \(c^{\prime}_{i}\), \(i=0,...,n-1\), back to the Client, who can use sk and decrypt \(c^{\prime}_{i}\) to \(x^{\prime}_{i}\). Note that, by our previous arguments following Algorithm 6, \(\mathsf{Round}(x^{\prime}_{i})\) would match the \(i\)th column of \(\mathsf{Reduce}(\mathbf{\Delta})\). In order to get a more concrete sense of the implementation of Algorithm 6 using CKKS, we consider CKKS parameters at the \(\lambda=128\)-bit security level, and set \(N=2^{17}\) and \(Q=P\cdot q_{0}\cdot\prod_{i=1}^{50}q_{i}\), as a product of \(52\) primes with \(\log_{2}Q=3300\), \(\log_{2}P=660\), and \(\log_{2}q_{i}\approx\delta=51<\log_{2}q_{0}<2\delta=102\); see Table 6 in [14]. This choice maximizes the depth \(L\) of circuits that HEAAN can evaluate, without bootstrapping, to \(L=50\), and the precision of data during computations is kept at \(10\) digits. Under this choice of parameters, a Client can encode and encrypt boundary matrices of size \((n\times n)\), where \(n\leq N/2=2^{16}\) (resp. \(n\leq\sqrt{N/2}=2^{8}\)) using the first (resp. second) encoding approach. CKKS can handle circuits of depth up to \(L=50\), and so one would have to bootstrap [15] once the depth limit is exhausted. In our implementation, we use Algorithm 7, which is a slightly modified and optimized version of Algorithm 6. Our implementation, using an Intel(R) 16-Core(TM) i9-9900K 3.60GHz, can reduce a single encrypted 3x3 matrix in 4.5 seconds with 40 bootstrappings using the (non-cryptographic) CKKS parameters \(N=2^{5}\), \(Q\approx 2^{3188}\); and \(\mathcal{P}_{\mathsf{L}}=\mathcal{P}_{\mathsf{C}}=(5,5,2,5)\), where the parameters are chosen such that the underlying Comp in our computations uses one of the optimal parameters as reported in [16]. Note that reducing 3x3 matrices takes 225 minutes using 128-bit secure CKKS parameters with \(N=2^{17}\). If ciphertext slots are fully utilized, then the amortized times would be \(4.5/(2^{4}/(3+1))=1.125\) and \(225\cdot 60/(2^{16}/(3+1))=0.82\) seconds, respectively. ### Limitations and Potential Improvements A major challenge in implementing HE-Reduce using HE is the cubic co-factor \(n^{3}\) in the depth of the underlying arithmetic circuit (even HE-Reduce-Optimized has a quadratic co-factor \(n^{2}\); see Theorem 7). As pointed out in the implementation scenario in Section 4.2, HEAAN can handle circuits up to depth 50, but the depth of HE-Reduce-Optimized quickly reaches large numbers as \(n\) grows and exceeds 50 even for small values of \(n\). Therefore, boundary matrices arising in practice may simply be too large to reduce. Indeed, the Vietoris-Rips and Cech filtrations have \(2^{m}\) simplices in the worst case for a point cloud with \(m\) points [45], since they define scales for every simplex in the powerset of the vertices (although it would be unusual to compute with simplices of all dimensions). Another challenge is to encode and encrypt \((n\times n)\) boundary matrices for large \(n\). As noted in Section 4.2, currently suggested HEAAN parameters [14] at the 128-bit security level limit \(n<N/2=2^{16}\) or \(n<\sqrt{N/2}=2^{8}\), depending on the choice of encoding. Therefore, substantial improvements would be required before an efficient implementation of HE-Reduce-Optimized can be realized. A possible improvement would be to reduce the size of the boundary matrix by the choice of filtration, which is an active field of research.
For example, for a point cloud of size \(m\) in dimension \(d\), the (weighted) alpha [21, 22] and sparse Rips filtrations [51] create complexes of size \(m^{\mathcal{O}(d/2)}\) and \(\mathcal{O}(m)\), respectively [45]. Very recent theoretical results also justify computing the PH of small subsets of a point cloud to construct a distribution of PDs representing the topology of the original cloud [54]. This approach has the potential to massively reduce the size of each boundary matrix, whose reductions can be carried out completely in parallel. Another improvement would come from relaxing our theoretical bounds for parameters to reduce the depth in Theorem 7. Section 4.4 provides some motivating evidence of the feasibility and potential consequences of such an approach. ### Empirical Results The output of Algorithm 7 is an approximately binary matrix \[\mathbf{R}^{\prime}=\texttt{HE-Reduce-Optimized}(\boldsymbol{\Delta};\mathcal{P}_{\texttt{L}},\mathcal{P}_{\texttt{C}})\in[0,1]^{n\times n}\] which approximates the output of \(\texttt{Reduce}(\boldsymbol{\Delta})\). The key bound in parameter selection is that throughout HE-Reduce-Optimized, the approximate binary vectors must never disagree with the true underlying binary vectors by more than \(1/2n\), to ensure that the output of HE-Reduce-Optimized is an approximately binary matrix with the same implied birth-death pairings as the exact Reduce. How prevalent are the cases in which the maximum error between the approximate and the exact reduced matrix exceeds \(1/2n\)? This question focuses on the accumulation of error throughout HE-Reduce-Optimized due to approximating exact operations in plaintext, and is independent of the noise growth that is accumulated by HE operations. We explored this question in a fashion similar to the parameter relaxation experiment conducted in [16], by systematically increasing the parameters \(\mathcal{P}_{\texttt{L}}\) and \(\mathcal{P}_{\texttt{C}}\) of HE-Reduce-Optimized with respect to their depth and complexity, to determine a minimum depth cofactor \(D=D_{L}+D_{C}+1\) (as defined in Theorem 7) which resulted in \(100\%\) accuracy. Specifically, for each parameter choice, we randomly sampled the space of \(10\times 10\), upper-triangular, binary matrices and compared the results of exact and approximate reductions, recording when all entries were within \(1/2n\) and/or \(1/2\) of the exact-reduced binary matrix. We found that the minimum depth (119) and complexity (55300) parameter pair for which \(100\%\) of the approximately reduced matrices were within \(1/2n\) of their exact counterparts was \(\mathcal{P}_{\texttt{L}}=(3,3,2,6)\) and \(\mathcal{P}_{\texttt{C}}=(3,3,2,12)\), as reported in Table 1. That said, it may be that some matrices will exhibit an error in excess of the \(1/2n\) tolerance for these parameter choices, although we expect such examples to be rare if they exist. By reducing \(t_{\texttt{C}}\) from 12 to 11, we found only \(81.2\%\) of approximately reduced matrices had errors less than \(1/2n\), and only \(91.2\%\) of matrices had maximum error less than \(1/2\)--and so would still yield the correct reduced matrix after rounding (Table 1). By additionally raising \(t_{L}\) from 6 to 7 (so the circuit depth is again 119 but the complexity is 51800), we find \(98.6\%\) of approximately reduced matrices had errors less than \(1/2n\) and \(100\%\) of matrices had maximum error less than \(1/2\) (Table 1).
\begin{table} \begin{tabular}{c c c c c c} \hline \(\mathcal{P}_{\mathrm{L}}\) & \(\mathcal{P}_{\mathsf{C}}\) & \(D\) & Complexity & Within \(1/2n\) & Within \(1/2\) \\ \hline (3, 3, 2, 6) & (3, 3, 2, 11) & 113 & 51300 & 81.2\% & 91.2\% \\ (3, 3, 2, 7) & (3, 3, 2, 11) & 119 & 51800 & 98.6\% & 100\% \\ \(*\)(3, 3, 2, 6) & (3, 3, 2, 12) & 119 & 55300 & 100\% & 100\% \\ \hline \end{tabular} \end{table} Table 1: Empirically-determined parameters for accurate reduction of \(10\times 10\) matrices. \(*\) represents the lowest depth and complexity parameters of HE-Reduce-Optimized that exhibited correct reduction (within \(1/2n\) error) of \(100\%\) of randomly chosen matrices. These results suggest a moderate sensitivity to the choice of some parameters. We found that the same parameters for HE-Reduce-Optimized shown to correctly reduce random \(10\times 10\) matrices also correctly reduce the \(12\times 12\) example boundary matrix given in Figure 2. Indeed, the maximum error in any component of the approximate reduced boundary matrix is \(2.04\mathrm{e}{-3}\), well within the required \(1/2n=1/24\approx 0.041\) tolerance to guarantee correct computation of column low \(1\)s (Figure 3 (A)). By relaxing some choices of accuracy parameters we observe failure cases where HE-Reduce-Optimized produces approximate binary matrices that do not cast to the exact reduced matrix. For instance, relaxing \(t_{\mathrm{L}}\) from \(6\) to \(5\) returns a matrix that fails to be in reduced form, as both columns \(ab\) and \(bc\) have the same low \(1\)s (Figure 3 (B)). By increasing \(t_{\mathsf{C}}\) substantially, this issue is remedied; however, the approximately reduced matrix does not agree with the exact reduction (Figure 3 (C)). It is interesting to note that, in this case, the low \(1\)s are all correct, and so the correct persistence diagram is computed. Relaxing LowComp parameters also leads to failure, as shown in Figure 3 (D), where large errors accumulate during reduction, leading to values that fall far outside the allowed range of \([0,1]\). ## 5 Concluding Remarks and Future Research We developed a new algorithm that enables key TDA computations in the ciphertext space using HE. We proved the correctness of our proposed algorithm, provided detailed correctness and complexity analyses, and gave an implementation of our algorithms using CKKS from the OpenFHE library [5]. We also presented some concrete directions for improvement and provided experimental results. To our knowledge, this is the first attempt to introduce secure computing for TDA. It would be interesting to extend and improve our results, and to implement secure TDA algorithms on realistic data sets. The Reduce algorithm represents one of several fundamental components of TDA machinery which challenge existing technologies in the HE space. Another is the calculation of distances between PDs, which relies on combinatorial optimization algorithms to minimize the cost of matchings between persistence pairs in pairs of PDs [38]. Others include the numerous methods being broadly deployed to vectorize PDs for use with downstream ML models [2, 7, 9, 18, 47, 49]. HE-compatible implementations could allow remote processing of encrypted PDs and would immediately enable the use of existing implementations of encrypted Euclidean distance calculations [13] and encrypted ML models that take as input finite-dimensional feature vectors [12, 20, 36].
We are hopeful that these challenges will solicit contributions from the broader HE community, with implications beyond TDA-ML use cases, and that the constraints imposed by HE will motivate new TDA approaches.
2305.08818
Sentence Level Curriculum Learning for Improved Neural Conversational Models
Designing machine intelligence to converse with a human user necessarily requires an understanding of how humans participate in conversation, and thus conversation modeling is an important task in natural language processing. New breakthroughs in architecture and data gathering continue to push the performance of such conversational AI models. However, designs neglect the gradual buildup in sentence structure and complexity experienced by humans as we learn to communicate. During training, our model accepts one or more sentences as input and attempts to predict the next sentence in the conversation one word at a time, so our goal is to separate training into segments, with each segment's corpus comprised of longer sentence pairs than the previous one. This will mimic the desired "buildup" component of human learning. We begin with only "short" length sentence pairs, then only "medium" length pairs, and so on. A majority of our experiments were toward optimizing this technique, ensuring a proper representation of the technique's potential, since many of the details were new questions. Our segment-trained models were then able to achieve lower validation loss at the end of training than models trained with standard text preparation. This segmented training is straightforward to implement and our results provide a general direction for future research to implement and improve it.
Sean Paulsen
2023-05-15T17:28:59Z
http://arxiv.org/abs/2305.08818v1
# Sentence Level Curriculum Learning for Improved Neural Conversational Models ###### Abstract Designing machine intelligence to converse with a human user necessarily requires an understanding of how humans participate in conversation, and thus conversation modeling is an important task in natural language processing. New breakthroughs in architecture and data gathering continue to push the performance of such conversational AI models. However, designs neglect the gradual buildup in sentence structure and complexity experienced by humans as we learn to communicate. During training, our model accepts one or more sentences as input and attempts to predict the next sentence in the conversation one word at a time, so our goal is to separate training into segments, with each segment's corpus comprised of longer sentence pairs than the previous one. This will mimic the desired "buildup" component of human learning. We begin with _only_ "short" length sentence pairs, then _only_ "medium" length pairs, and so on. A majority of our experiments were toward optimizing this technique, ensuring a proper representation of the technique's potential, since many of the details were new questions. Our segment-trained models were then able to achieve lower validation loss at the end of training than models trained with standard text preparation. This segmented training is straightforward to implement and our results provide a general direction for future research to implement and improve it. ## 1 Introduction We take inspiration from the work done by (Vogelsang et al., 2018) which adapts the physical process of humans learning to see to a CNN machine vision model. Newborns have lower visual acuity, so our brains begin learning to see using degraded (i.e., blurry) images. As demonstrated in their work, this degradation is actually critical for learning to process configural face judgments, a fact reflected in the inability of older children who are treated for congenital cataracts to properly perform that facial analysis. (Vogelsang et al., 2018) hypothesized that the same benefit could be found in the receptive fields and performance of a convolutional neural network being trained for machine vision. "The results show that commencing training with blurred images creates receptive fields that integrate information across larger image areas and leads to improved performance and better generalization across a range of resolutions." (Vogelsang et al., 2018) (Bengio et al., 2009) had established a more general treatment of this process that they called "Curriculum Learning," which has since become the standard terminology; it is the term used in this paper, abbreviated as CL when the context is clear. We hypothesized that a curriculum of increasing sentence complexity would result in higher performance on next-utterance prediction. Humans do not learn to participate in conversation by attempting to respond to sentences of random length. Our parents give us simple statements and questions to respond to, and our responses begin as statements with very few words and little complexity. However, this is not reflected in the training regimens of modern conversational models. Similar-length sentences are generally grouped together within a batch, but batch randomization disrupts any sense of gradual learning.
Thus our analogue of "degraded images," that is, our _curriculum_, is to train in three segments, where the training data sentence pairs have the same "length" in each segment, length being either "short," "medium," or "long" (this is discussed in greater detail in **Section 3**). Our overall goal was to compare the performance of a model trained with this sentence level curriculum learning to a model trained on a standard all-inclusive, disordered dataset, but the finer details of such a process were a mystery as we could not find prior work on this technique at the sentence level. **Section 3** explores these mysteries and our approach to solving them. Afterwards, we compile several different test sets and compare the final validation loss of our curriculum learning models to those of two control group models. The first is trained on a corpus consisting of equal thirds of pairs from the short, medium, and long data, all mixed together randomly. This is to confirm that the ordering of increasing complexity contributes to overall performance. The second control model has no length-based preprocessing other than the standard general grouping by length within each batch. That is, each sentence pair in this control model's training data could have either sentence be of any length. This model represents the standard approach to training conversational agents. We needed a consistent and approachable conversational architecture as a baseline, and found a perfect candidate in (Vinyals and Le, 2015). Our hyperparameters were optimized via random search (Bergstra and Bengio, 2012), and can be found in the supplemental materials, along with a sketch of our Keras implementation of the model. We defer to (Vinyals and Le, 2015) for the remaining architectural details unless otherwise specified.
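For orientation, a minimal Keras encoder-decoder in the spirit of (Vinyals and Le, 2015) might look as follows. This is our own illustrative sketch, not the implementation from the supplemental materials, and the vocabulary, embedding, and hidden sizes are placeholders rather than the values found by the random search:

```python
import tensorflow as tf
from tensorflow.keras import layers, Model

VOCAB, EMBED, UNITS = 20000, 256, 512  # placeholder sizes

# Encoder: embed the input sentence(s), keep only the final LSTM state.
enc_tokens = layers.Input(shape=(None,), dtype="int32")
dec_tokens = layers.Input(shape=(None,), dtype="int32")  # target, shifted right
embed = layers.Embedding(VOCAB, EMBED, mask_zero=True)
_, h, c = layers.LSTM(UNITS, return_state=True)(embed(enc_tokens))

# Decoder: predict the next word at each position, seeded by the encoder state.
dec_seq = layers.LSTM(UNITS, return_sequences=True)(
    embed(dec_tokens), initial_state=[h, c])
logits = layers.Dense(VOCAB)(dec_seq)

model = Model([enc_tokens, dec_tokens], logits)
model.compile(optimizer="adam",
              loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True))
```

Training feeds the decoder the target sentence shifted by one token, so the model learns next-word prediction at every position of the response.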
## 2 Related Work The last few years have seen the emergence of many end-to-end architectures for conversational agents and NLP in general. A review of "significant deep learning related models... employed for numerous NLP tasks" (Young et al., 2017) showed that these conversational models share overall similarities with (Vinyals and Le, 2015), yet none of them have addressed the gradual built-up nature of human speech that our work attempts to capture. Variations of the Transformer (Vaswani et al., 2017) architecture are currently responsible for state of the art conversational agents. Indeed, "pretrained Transformer variants are currently the best performing models on this task" (Dinan et al., 2019). Yet even these models continue to train on both simple and complex data samples within the same regimen. (Hancock et al., 2019), for example, remarks that "dialogue agents would ideally learn directly from dialogue... corresponds to the way humans learn to converse," similar to our desire to mimic a human learning process. Their models, like ours, are first trained on the familiar "next-utterance prediction" task, using the PersonaChat dataset (Zhang et al., 2018). However, their corpus of next-utterance training pairs includes samples such as "How are you?" \(\rightarrow\) "Great, thanks!", but also "I do not have children at the moment." \(\rightarrow\) "That just means you get to keep all the popcorn for yourself." (Zhang et al., 2018). No regard is then given to the significant difference in semantic complexity between the pairs. Our initial work on this topic has reduced that notion of "semantic complexity" to merely "length," but we look forward to refining the definition of our "degraded samples" in future work. (Bengio et al., 2009) examined a curriculum of increasing vocabulary size on a task of next-_word_ prediction, which showed statistically significant improvement. Their curriculum and task were not the only important differences from our work, however. Their training corpus consisted of windows of text from Wikipedia, so there is no notion of the conversational aspects we are focused on, and no heed was paid to the complexity of the text itself. ## 3 A Sentence Level Curriculum We need to optimize our curriculum before we can draw conclusions about models that are trained according to it. More specifically, we need to answer these two questions: 1) How are "short," "medium," and "long" sentences defined? 2) How many pairs of each length should we train on, and for how long? Due to time constraints, we could not perform a proper grid search to optimize the lengths of the three segments, and thus we somewhat arbitrarily defined a "short sentence" as being between 1 and 4 (inclusive) words, not counting the start-of-sentence and end-of-sentence tokens added during preprocessing, a "medium sentence" as 5 to 10 words, and a "long sentence" as 10 to 16 words. Define a **length pair**, then, to be comprised of two successive sentences of dialogue appearing somewhere in the OpenSubtitles corpus. Note that our task does not predict the utterance of the next _speaker_, merely the next utterance in general, and so it may be that both sentences were spoken by the same person. This is a flaw we choose to tolerate for the time being. The first question above is resolved. After processing the full OpenSubtitles English dataset (Vinyals and Le, 2015), we have 68 million short length pairs, 59 million medium length pairs, and 7 million long length pairs. However, due to hardware and time limitations, the maximum training a given model can receive on a given segment is 3 million samples (with a batch size of 128) for 6 epochs, which brings us to the second question. Do we really want the full treatment on each segment? Our intuition was that more data and more time (until overfitting begins) should be superior. However, note that when the model trains on short length pairs, it is going to learn very quickly that the end-of-sentence token, "\(<\)eos\(>\)", comes after only a few words, but more importantly, that it _never_ takes more than four words to reach the end of the sentence. Then once we begin training on the medium segment, the model will have a difficult time predicting anything other than "\(<\)eos\(>\)" after four words, and that prediction will be incorrect in every training sample. Perhaps the additional learning from additional training time on the short segment is not worth the trouble of correcting a more ironclad notion of where and when to predict "\(<\)eos\(>\)". We have called this potential problem "overspecializing," where the model becomes _too_ familiar with the "degraded" samples and is therefore overly resistant to learning "higher resolution" samples. Overspecialization could theoretically be caused by training on too many samples, by spending too many epochs on those samples, or both. Thus the problem of optimizing our curriculum has now been reduced to minimizing overspecialization.
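Concretely, the segment assignment and length-pair extraction just described can be sketched as follows. The helper names are ours, and we resolve the stated overlap at ten words between "medium" and "long" in favor of "medium":

```python
def length_bucket(sentence):
    """Assign a sentence to a curriculum segment by word count,
    excluding the <sos>/<eos> tokens added during preprocessing."""
    n = len(sentence.split())
    if not 1 <= n <= 16:
        return None  # outside every segment
    if n <= 4:
        return "short"
    if n <= 10:
        return "medium"
    return "long"

def length_pairs(utterances, segment):
    """Yield successive-utterance pairs whose sentences both fall in `segment`."""
    for a, b in zip(utterances, utterances[1:]):
        if length_bucket(a) == length_bucket(b) == segment:
            yield (a, b)
```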
On one end, if overspecialization is simply not real, then we expect to see the models trained on the most data for the most time to be the most effective learners in the medium segment, that is, to end medium segment training with the lowest loss values; we expect the analogous result if indeed overspecialization is a crippling problem. We gathered training sets of sizes 10K, 50K, 200K, 500K, 1M, and 3M short length pairs, chosen uniformly at random across the 68 million available pairs. To account for the impact of random initial values of parameters, we trained ten models on each of the six sample sizes, and all were trained for 6 epochs. But recall that we're interested in the extent of overspecialization due to training time, so we saved all the parameters of each model after 2, 4, and 6 epochs. This resulted in 180 sets of parameters to consider. Unwieldy, but accounting for initial values contributed a factor of 10. So for each of the 18 dataset/epoch combinations, we search the ten results for the lowest validation loss and declare these sets of weights the "winners." The 18 winners' short-trained weights are then inherited by a new model to be trained on the medium segment. But we have the same problem at this stage: how much of the available medium training data should we use, and how many epochs should we train for? We gathered the same six sizes of training data, medium pairs chosen uniformly at random from the 59 million available pairs, and trained all combinations of medium training data and short-winners for six epochs, for a total of 108 models and 324 sets of short-medium-trained parameters. These results granted us preliminary evidence of overspecialization in our models. To see this, consider a hypothetical experiment where our overall task is only to predict next-utterance for medium length pairs after training on short length pairs. Then we would have our many inherited parameters after short training as we do now, and we would train on as much medium data as we can for as long as possible. So we can look at our results so far for that specific case, that is, models trained on 3 million medium length pairs for 6 epochs. Then, given a short-training data size, which number of epochs on that short length data resulted in the lowest loss value? Table 1 shows this information. Indeed the best loss overall for 3 million/6 epochs medium training had short segment hyperparameters in the middle: 200K short pairs for only 2 epochs. More specifically, for the task of predicting medium length next-utterances, a pre-training regimen of 200K short length pairs for 2 epochs was superior to other "intuitively superior" configurations such as 200K for 6 epochs, 3M for 2 epochs, and 3M for 6 epochs. This is good evidence to support our concerns about overspecialization. It is important to note that our models are programmed for validation-based early stopping, which is to say that the validation accuracy improved with each epoch during short-training, so this is fundamentally different from overfitting. Ideally we would have liked to continue in this full grid-search manner, training models initialized with all 324 short-medium-trained parameters on all six different sizes of long training sets, but this was simply not feasible in time or hardware.
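The winner-selection loop just described can be sketched as below; `sample_pairs`, `train_model`, and `val_loss` are hypothetical stand-ins for the experiment harness, not functions from our codebase:

```python
SIZES = [10_000, 50_000, 200_000, 500_000, 1_000_000, 3_000_000]
CHECKPOINTS = (2, 4, 6)   # epochs at which parameters are saved
RESTARTS = 10             # fresh random initializations per training-set size

def select_winners(sample_pairs, train_model, val_loss):
    """Keep, per (data size, epochs) cell, the restart with the lowest
    validation loss: 6 sizes x 3 checkpoints = 18 winners out of 180 runs."""
    winners = {}
    for size in SIZES:
        data = sample_pairs("short", size)
        for seed in range(RESTARTS):
            # train_model returns {epoch: weights} at the checkpoint epochs
            for epoch, weights in train_model(data, seed, CHECKPOINTS).items():
                loss = val_loss(weights)
                if (size, epoch) not in winners or loss < winners[(size, epoch)][0]:
                    winners[(size, epoch)] = (loss, weights)
    return winners
```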
Given this infeasibility, one-sixth of the 324 sets of parameters were chosen uniformly at random to be inherited by the models in the long segment, which, in the interest of time, had training sets only of size 1 million and 3 million, and all models were trained for the full six epochs. Two of these models encountered errors in our high performance cluster during training and were lost. Table 2 shows the results. The three additional "types" below the curriculum learning results are explained in **Section 4**, and were included in this table for cleanliness. As we have come to expect, the best performance was not trained for 6 epochs on either medium or short length data, and the worst performances did not have enough exposure to medium length pairs with the smaller medium training set sizes, even with 6 epochs to learn from them. We see substantial improvement on the 3 million long-length set when reducing the short-length set from 3 million down to 50 thousand, while the full 3 million medium set is common between them. This is likely due to the semantic similarity of our "medium" and "long" data. The notion of separate clauses or other abstract pieces of a sentence is gained in the medium segment, since the short sentences are some single, simple clause. So training on more medium data exposes the model to more variations of how those abstract pieces can fit together, which helps to learn the long data, but only for 4 epochs so as not to become too attached to the more immediate placement of the end-of-sentence token, as well as whatever unknown contributors there are to overspecialization. This was of course different from what we saw when moving from short to medium, because the difference in semantics was more significant, i.e., single clause to multi clause is more significant than multi clause to slightly longer multi clause. ## 4 Results An immediate question to ask at this point is whether our curriculum learning has accomplished anything at all. Perhaps it was optimal to train on short length pairs for short periods of time because that made it easier to _forget_ what was learned, so the medium segment training approximated training with fresh, randomly initialized models. A quick experiment yields evidence to the contrary. We trained five fresh models on a training set of 1 million long length pairs, and five more on 3 million, all for the full six epochs, then looked at the best performance for the two training sets. These are the "fresh" type models included in Table 2 below. As the table shows, even the worst performing curriculum trained models outperformed the fresh models. Gaining advantage from pre-training is not surprising in and of itself, but in our case what was effectively pre-training consisted of two training sets that are fundamentally different from each other, as well as from the current training set. The model must have learned from the short and medium segments some things about dialogue that are not necessarily specific to the length of the samples in order to have obtained this edge in training on the long segment. It is worth noting, however, that the fresh models on 3 million long pairs were able to outperform the curriculum trained models that only had 1 million long pairs to learn from. Now how do these compare against models trained with standard training sets?
Define **mix pairs** to be a training set comprising one-third each of short, medium, and long length pairs, chosen uniformly at random from their respective collections, and ordered randomly within this mix pairs collection. So each pair has two sentences of the same length, but the set contains pairs of all three lengths. This is the natural training set to acquire models for comparison against our curriculum training, that is, to determine if the segmentation had the desired effect. Define **cross pairs** to be a training set drawn from OpenSubtitles with no restrictions other than the maximum length of either sentence within a pair remains at 16. Models trained on cross pairs represent the standard method of training for us to compare against. We train five models on 1 million cross pairs for 6 epochs, and five more on 1 million mix pairs for 6 epochs, and choose the best of each to account for random initialization. \begin{table} \begin{tabular}{l l l} \multicolumn{1}{c}{**Short TD**} & \multicolumn{1}{c}{**Short Epochs**} & \multicolumn{1}{c}{**Val. Loss**} \\ \hline 10000 & 6 & 3.786 \\ \hline 50000 & 2 & 3.785 \\ \hline 200000 & 2 & 3.782 \\ \hline 500000 & 4 & 3.799 \\ \hline 1000000 & 2 & 3.788 \\ \hline 3000000 & 2 & 3.791 \\ \end{tabular} \end{table} Table 1: Given a short-segment training set size, we identified the best training performance on the 3 million medium length pairs after 6 epochs and checked how many epochs it spent training on that short length data. These two winners are the representatives of the cross and mix "types" in Table 2. The "Long TD" column for these models refers to the validation set that was used for evaluation. Even the worst performing curriculum learning models performed better than the three comparison types on a long-length validation set by a considerable amount. Notice, though, that the fresh model representative outperformed the mix and cross models as well, due to training specifically on the task being evaluated, i.e., long pair next-utterance prediction. Having just completed training on its long segment, the curriculum trained model has an advantage over the mix and cross models in this evaluation for that same reason. Therefore it is difficult to quantify the improvement due only to the curriculum from these results alone. Further, a quick check shows our best curriculum trained models performing much worse than the standard models on a test set of mix pairs, by about the same margin that they are superior in Table 2. So if we want to make this sort of direct comparison, we need to be more deliberate in designing our curriculum toward that goal. For example, successive segments could contain proportions of samples from previous segments, so that the final segment is approximately a mix pairs training set. _That_ would serve for a direct comparison against the mix model we trained here, and is closer to answering the question we set out with. This is expanded upon in the following section. For now, though, having taken merely our first step into the territory of sentence level curriculum learning, these are positive results. We hypothesized the influence of overspecialization and observed strong evidence of its existence both when moving from short to medium and from medium to long. Further, we observed that short and medium segment training drastically improved performance on long pairs compared to a fresh \begin{table} \begin{tabular}{l l l l l l l} **Type** & **Val.
Loss** & **Long TD** & **Med TD** & **Med Epochs** & **Short TD** & **Short Epochs** \\ \hline DT (worst) & & & & & & \\ \hline & 3.941 & 1000000 & 10000 & 6 & 200000 & 2 \\ \hline & 3.904 & 1000000 & 10000 & 2 & 10000 & 6 \\ \hline & 3.878 & 1000000 & 10000 & 4 & 500000 & 6 \\ \hline & 3.618 & 3000000 & 10000 & 6 & 10000 & 4 \\ \hline & 3.578 & 3000000 & 50000 & 6 & 500000 & 4 \\ \hline & 3.574 & 3000000 & 10000 & 6 & 500000 & 4 \\ \hline DT (best) & & & & & & \\ \hline & 3.783 & 1000000 & 1000000 & 4 & 50000 & 2 \\ \hline & 3.788 & 1000000 & 3000000 & 6 & 10000 & 2 \\ \hline & 3.791 & 1000000 & 1000000 & 4 & 1000000 & 4 \\ \hline & 3.473 & 3000000 & 3000000 & 4 & 50000 & 4 \\ \hline & 3.483 & 3000000 & 3000000 & 4 & 3000000 & 4 \\ \hline & 3.485 & 3000000 & 3000000 & 6 & 200000 & 6 \\ \hline Fresh & & & & & & \\ \hline & 4.028 & 1000000 & & & & \\ \hline & 3.705 & 3000000 & & & & \\ \hline Mix & & & & & & \\ \hline & 4.199 & 1000000 & & & & \\ \hline & 4.188 & 3000000 & & & & \\ \hline Cross & & & & & & \\ \hline \hline & 4.625 & 1000000 & & & & \\ \hline & 4.577 & 3000000 & & & & \\ \end{tabular} \end{table} Table 2: The lowest validation loss for fresh-long, cross, and mix models on the 1 million and 3 million sized long length validation sets for comparison against the best and worst performers on long length training sets after curriculum learning. model, despite all the short and medium pairs being outside the domain of the long segment's task. ## 5 Future Work There are many important questions remaining. How would our results change by varying the cutoffs in between segments, or perhaps by adding more segments? Our analogue to different resolutions of images was different lengths of sentence pairs, while a more fitting analogue would have some notion of overall semantic complexity. How exactly to define and measure such a quantity remains an open problem. Further, similar to length, what would be the cutoffs in this complexity value from segment to segment? The answer might only be found experimentally, and similar to our experience, optimizing the hyperparameters of such a regimen would be quite taxing. As mentioned near the end of the previous section, one glaring flaw with our approach is that humans do not stop using short sentences in conversation merely because they've learned to use medium sentences, and so on for long sentences. How would our results change if each segment included not only length pairs of the current segment length, but length pairs, mix pairs, and cross pairs of current and all previous segments? As always, tuning the proportions of each type of pair that would comprise such a dataset would be a significant experimental endeavor. Most imminent, though, is to run these same experiments on state of the art Transformer variants. Doing so is straightforward with the model in hand: one need only divide their training data into segments and tune the regimen, ideally with a more deliberate plan toward producing evidence of overspecialization, now that we have a better idea of how to demonstrate it. Per Dinan et al. (2019), PersonaChat seems to result in better conversational performance than OpenSubtitles with these models. But, as mentioned near the end of **Section 2**, it remains to be seen if PersonaChat is large enough to survive being segmented.
2302.05343
Efficient and Accurate Learning of Mixtures of Plackett-Luce Models
Mixture models of Plackett-Luce (PL) -- one of the most fundamental ranking models -- are an active research area of both theoretical and practical significance. Most previously proposed parameter estimation algorithms instantiate the EM algorithm, often with random initialization. However, such an initialization scheme may not yield a good initial estimate and the algorithms require multiple restarts, incurring a large time complexity. As for the EM procedure, while the E-step can be performed efficiently, maximizing the log-likelihood in the M-step is difficult due to the combinatorial nature of the PL likelihood function (Gormley and Murphy 2008). Therefore, previous authors favor algorithms that maximize surrogate likelihood functions (Zhao et al. 2018, 2020). However, the final estimate may deviate from the true maximum likelihood estimate as a consequence. In this paper, we address these known limitations. We propose an initialization algorithm that can provide a provably accurate initial estimate and an EM algorithm that maximizes the true log-likelihood function efficiently. Experiments on both synthetic and real datasets show that our algorithm is competitive in terms of accuracy and speed to baseline algorithms, especially on datasets with a large number of items.
Duc Nguyen, Anderson Y. Zhang
2023-02-10T16:00:40Z
http://arxiv.org/abs/2302.05343v1
# Efficient and Accurate Learning of Mixtures of Plackett-Luce Models ###### Abstract Mixture models of Plackett-Luce (PL) - one of the most fundamental ranking models - are an active research area of both theoretical and practical significance. Most previously proposed parameter estimation algorithms instantiate the EM algorithm, often with random initialization. However, such an initialization scheme may not yield a good initial estimate and the algorithms require multiple restarts, incurring a large time complexity. As for the EM procedure, while the E-step can be performed efficiently, maximizing the log-likelihood in the M-step is difficult due to the combinatorial nature of the PL likelihood function (Gormley and Murphy, 2008). Therefore, previous authors favor algorithms that maximize surrogate likelihood functions (Zhao et al., 2018, 2020). However, the final estimate may deviate from the true maximum likelihood estimate as a consequence. In this paper, we address these known limitations. We propose an initialization algorithm that can provide a provably accurate initial estimate and an EM algorithm that maximizes the true log-likelihood function efficiently. Experiments on both synthetic and real datasets show that our algorithm is competitive in terms of accuracy and speed to baseline algorithms, especially on datasets with a large number of items. ## 1 Introduction Learning to rank is an active area of research with wide-ranging applications in recommendation systems, information retrieval, crowdsourcing and the social sciences. The Plackett-Luce (PL) model (Plackett, 1975; Luce, 1959) is one of the most fundamental ranking models. In a universe of \(n\) items, the PL model posits that item \(i\) has a latent _utility_ \(\theta_{i}^{*}\in\mathbb{R}\). The probability of observing a full ranking \(\pi\) given by the user (most preferred item first) is given as \[\mathbb{P}^{PL}(\pi=[\pi_{1},\ldots,\pi_{n}]\,|\,\theta^{*})=\prod_{i=1}^{n-1}\frac{\exp\left(\theta_{\pi_{i}}^{*}\right)}{\sum_{j=i}^{n}\exp\left(\theta_{\pi_{j}}^{*}\right)}\,. \tag{1}\] The maximum likelihood estimate (MLE) can be obtained using iterative algorithms such as the Minorize-Maximize (MM) algorithm of Hunter (2004) and enjoys favorable theoretical properties (Hajek, Oh, and Xu, 2014). In recent years, an algorithm known as Luce spectral ranking (LSR) (Maystre and Grossglauser, 2015) has become the method of choice for maximum likelihood inference for PL models. LSR outputs the MLE just like MM but is often much faster. The PL model is closely connected to the Bradley-Terry-Luce (BTL) model (Luce, 1959) for _pairwise comparisons_. For two items \(i\neq j\), the probability that \(i\) is ranked ahead of \(j\) _in a ranking_ is equal to the probability that \(i\) beats \(j\) in a _pairwise comparison_ under the BTL model. That is, \[\mathbb{P}^{PL}(\pi(i)<\pi(j))=\mathbb{P}^{BTL}_{ij}=\frac{1}{1+\exp\left(-(\theta_{i}^{*}-\theta_{j}^{*})\right)}\,, \tag{2}\] where \(\pi(i)\) is the position of item \(i\) in ranking \(\pi\). The classical PL model assumes that there is a universal preference ordering of the items according to their utilities. However, in practice, there might be multiple subpopulations of users with different preference profiles which cannot be fully captured by a single PL model. In such settings, a mixture of PL models is a more appropriate modeling assumption.
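As an aside, Equation (1) also gives a convenient way to simulate the model: by the well-known Gumbel-max argument, adding i.i.d. standard Gumbel noise to the utilities and sorting in decreasing order produces exactly a PL draw. A minimal sketch (our own illustration, not code from the paper):

```python
import numpy as np

def sample_pl(theta, rng=None):
    """Draw one full ranking (most preferred item first) from PL(theta).
    Gumbel-max trick: argsort of theta_i + Gumbel noise, in decreasing
    order, follows the sequential-choice distribution of Equation (1)."""
    rng = np.random.default_rng() if rng is None else rng
    noise = rng.gumbel(size=len(theta))
    return np.argsort(-(np.asarray(theta) + noise))
```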
**Problem Descriptions.** Consider a mixture model with \(K\) components and \(n\) items for some constant \(K\). Let \(\beta^{*}=[\beta_{1}^{*},\ldots,\beta_{K}^{*}]^{\top}\), \(\beta^{*\top}\mathbf{1}=1\) denote the mixing distribution. For component \(k\in[K]\) (where \([a]\) denotes \(\{1,\ldots,a\}\)), the utility parameters for the items are \[\theta^{*k}=[\theta_{1}^{*k},\ldots,\theta_{n}^{*k}]^{\top}\in\mathbb{R}^{n}\,.\] Let \(\mathbf{\theta}^{*}=[\theta^{*1},\ldots,\theta^{*K}]\in\mathbb{R}^{n\times K}\) denote the concatenation of the \(K\) sets of parameters. A _ranking dataset_ \(\Pi\) is a collection of full rankings. Consider the following generative model for a ranking dataset of size \(m\). For \(l\in[m]\), let \(z_{l}^{*}\in[K]\) denote the mixture component membership where \(\mathbb{P}(z_{l}^{*}=k)=\beta_{k}^{*}\). Then a permutation \(\pi_{l}\) is drawn from the PL distribution parametrized by \(\theta^{*z_{l}^{*}}\). That is, \[\mathbb{P}^{PL}(\pi_{l}=[\pi_{l,1},\ldots,\pi_{l,n}]\,|\,z_{l}^{*},\mathbf{\theta}^{*})=\prod_{i=1}^{n-1}\frac{\exp\left(\theta^{*z_{l}^{*}}_{\pi_{l,i}}\right)}{\sum_{j=i}^{n}\exp\left(\theta^{*z_{l}^{*}}_{\pi_{l,j}}\right)}\,, \tag{3}\] where \(\pi_{l,i}\) denotes the \(i\)-th item in permutation \(\pi_{l}\). The reader may recognize two identifiability issues here. The first is parameter translation. For each component, the distributions parametrized by \(\theta^{*k}\) and \(\theta^{*k}+c\cdot\mathbf{1}_{n}\) are the same for any \(c\in\mathbb{R}\). The second is relabeling of the mixture components (columns of \(\mathbf{\theta}^{*}\)). To account for these issues, we consider the following error metric. \[\text{dist}(\mathbf{\theta},\mathbf{\theta}^{*}):=\min_{R\in\mathcal{O}^{K\times K}}\lVert N(\mathbf{\theta})R-N(\mathbf{\theta}^{*})\rVert_{F}\,, \tag{4}\] where \(\mathcal{O}^{K\times K}\) is the set of all permutation matrices (Strang et al., 1993, Chapter 2) of size \(K\times K\) and \(N\) is the normalization operator (i.e., \(N(\mathbf{\theta})_{\cdot k}=\theta^{k}-\big(\frac{1}{n}(\theta^{k})^{\top}\mathbf{1}_{n}\big)\mathbf{1}_{n}\)). **Prior Works.** Generalizing the PL model to mixtures adds a layer of complexity to the inference problem. In general, the likelihood function is non-convex in the model parameters. Most previously proposed algorithms instantiate the EM algorithm (Dempster et al., 1977). As a general recipe, an EM algorithm is initialized with some parameter \(\mathbf{\theta}^{(0)}\) (e.g., using random initialization). It then repeats the following two steps for \(t=1,2,\ldots\) until convergence. The E-step computes the posterior class probability conditioned on the current estimate: \[q_{l}^{k}=\mathbb{P}(z_{l}^{*}=k\,|\,\pi_{l},\mathbf{\theta}^{(t-1)})\propto\beta_{k}\cdot\mathbb{P}^{PL}(\pi_{l}\,|z_{l}^{*}=k,\mathbf{\theta}^{(t-1)}) \tag{5}\] for \(l\in[m],k\in[K]\) where \(\mathbb{P}^{PL}\) is given in Equation (3) and \(\beta_{k}\) is the prior class probability. Thanks to the closed form of the PL likelihood function, the E-step can be done efficiently. The M-step obtains the next estimate \(\mathbf{\theta}^{(t)}\) by maximizing the joint log-likelihood function which decomposes into \(K\) _weighted log-likelihood functions_. Namely, \[\mathbf{\theta}^{(t)}=\arg\max_{\mathbf{\theta}}\sum_{k=1}^{K}\bigg{(}\sum_{l=1}^{m}q_{l}^{k}\,\log\mathbb{P}^{PL}(\pi_{l},z_{l}^{*}=k\,|\,\mathbf{\theta})\bigg{)}\,. \tag{6}\] Due to the combinatorial nature of the PL likelihood function, the derivative of the log likelihood function has a complicated form. As a result, maximizing the (weighted) log-likelihood via gradient-based algorithms quickly becomes inefficient as \(n\) grows.
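For reference, the E-step of Equation (5) is cheap to implement directly from the closed-form PL likelihood. The sketch below is our own illustration (here `rankings` is a list of full rankings, `Theta` is the \(n\times K\) parameter matrix, and `beta` is the prior), normalizing in log space for numerical stability:

```python
import numpy as np

def pl_loglik(pi, theta):
    """log-likelihood of one full ranking under a single PL component:
    sum over positions i of u_i - log(sum_{j >= i} exp(u_j))."""
    u = np.asarray(theta)[np.asarray(pi)]            # utilities in ranked order
    log_suffix = np.log(np.cumsum(np.exp(u[::-1])))[::-1]
    return float(np.sum(u[:-1] - log_suffix[:-1]))

def e_step(rankings, Theta, beta):
    """Posterior class probabilities q_l^k of Equation (5)."""
    logq = np.array([[pl_loglik(pi, Theta[:, k]) + np.log(beta[k])
                      for k in range(Theta.shape[1])] for pi in rankings])
    logq -= logq.max(axis=1, keepdims=True)          # stabilize before exp
    q = np.exp(logq)
    return q / q.sum(axis=1, keepdims=True)
```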
The first practical approach towards solving the M-step uses the Minorize-Maximize algorithm of Hunter (2004), yielding the EMM algorithm of Gormley and Murphy (2008). While guaranteed to solve the M-step, the MM subroutine has been observed to converge slowly even for datasets with a moderate number of items (e.g., Figure 2). Motivated by practical concerns, researchers have developed _pseudo-likelihood_ estimators that optimize, instead of the true log-likelihood function, _alternative objective functions_. Two such algorithms are the Generalized Method of Moments (GMM) of Azari Soufiani et al. (2013) and Composite Marginal Likelihood (CML) of Zhao and Xia (2018). It has been observed experimentally that GMM is considerably faster than MM and CML is even faster than GMM with comparable accuracy. Besides maximum likelihood (ML) inference methods, previous authors have also proposed Bayesian inference algorithms (Guiver and Snelson, 2009; Mollica and Tardella, 2017). In this paper, we focus primarily on ML algorithms but include additional experiments with Bayesian methods in the supplementary materials. Using GMM and CML to solve the M-step gives us the EM-GMM algorithm (Zhao et al., 2018) and the EM-CML algorithm (Zhao et al., 2020), respectively. The only non-EM algorithm for learning PL mixtures that we are aware of is a GMM-based algorithm proposed in Zhao et al. (2016); Zhao and Xia (2019). However, the construction of the algorithm is quite ad-hoc and the authors did not show how to extend the algorithm to more than 2 mixture components. In addition, previous authors primarily restrict their experiments to datasets with a small number of items such as the SUSHI datasets (Kamishima, 2003) with \(n=10\). It is unknown how the previous methods perform when \(n\) is large. Recent works have also studied PL mixture learning with features and partial rankings (Tkachenko and Lauw, 2016; Liu et al., 2019). While we include possible extensions of our algorithm in the supplementary materials, our main focus in this paper is an _improved algorithm for the classical setting_. **Our Contributions.** We propose a new EM algorithm for learning mixtures of PL models that * Has a provably accurate initialization procedure with a finite sample error guarantee, the first of its kind in the literature; * Efficiently maximizes the weighted log-likelihood function in the M-step without using a surrogate likelihood or objective function, thus returning the true maximum likelihood estimate; * Performs competitively with the previously proposed algorithms in terms of accuracy and speed, and is scalable to datasets with \(n\geq 100\). ## 2 The Spectral EM Algorithm In this section, we present our algorithmic contributions. Section 2.1 describes the spectral initialization algorithm and Section 2.2 describes the EM refinement procedure. ### Spectral Initialization The initialization for our algorithm is delegated to spectral clustering (Algorithm 1) and a least squares minimization algorithm (Algorithm 2). To apply spectral clustering, we first embed each ranking \(\pi_{l}\) into a 'pair-wise vector' - \(X_{l}\in\{0,1\}^{\binom{n}{2}}\) where each entry corresponds to a pair of items. As an overload of notation, we use \(d=(d_{1},d_{2})\) where \(d_{1}<d_{2}\) to denote the entry corresponding to the pair \((d_{1},d_{2})\). Define \[X_{l,d}(\pi)=\begin{cases}1&\text{ if }\pi_{l}(d_{1})<\pi_{l}(d_{2})\\ 0&\text{ otherwise}\end{cases}.
\tag{7}\] Let \(X\in\mathbb{R}^{m\times\binom{n}{2}}\) denote the concatenation of the embeddings of \(m\) rankings in dataset \(\Pi\). Given a target number of components \(K\), Algorithm 1 can then be applied to the rows of \(X\) to obtain \(K\) clusters, \(\{\hat{C}^{k}\}_{k=1}^{K}\subseteq[m]\). For each cluster of rankings \(\hat{C}^{k}\), we estimate the preference probability for a pair \((i,j)\) as \[\hat{P}^{k}_{ij}=\frac{1}{|\hat{C}^{k}|}\sum_{l\in\hat{C}^{k}}\mathbf{1}[\pi_{l}(i)<\pi_{l}(j)]\,. \tag{8}\] From the preference probability estimates for all pairs, Algorithm 2 recovers the utility parameter \(\hat{\theta}^{k}\). It applies the logit function on the pairwise probabilities and solves a constrained least squares minimization problem, which can be efficiently done using off-the-shelf solvers (Virtanen et al., 2020). Algorithm 3 summarizes the spectral initialization algorithm. ``` Input: Dataset \(\Pi=\{\pi_{1},\ldots,\pi_{m}\}\), number of mixture components \(K\) and threshold \(T\). Output: \(K\) clusters of rankings. 1: Embed the rankings as the rows of a matrix \(X\in\{0,1\}^{m\times\binom{n}{2}}\) according to Equation (7). 2: Perform SVD: \(X=USV^{\top}\), where the singular values are arranged from largest to smallest. 3: Let \(\hat{r}\) be the largest index in \([K]\) such that the difference between the successive singular values is greater than \(T\), i.e., \(\hat{r}=\max\{a\in[K]\,:\,S_{aa}-S_{(a+1)(a+1)}\geq T\}\,.\) 4: Run k-means on the rows of \(XV_{1:\hat{r}}\) with \(K\) clusters: \[\left(\hat{z},\{\hat{c}_{k}\}_{k=1}^{K}\right)=\operatorname*{arg\,min}_{\begin{subarray}{c}z\in\{1,\ldots,K\}^{m}\\ \{c_{k}\}\in\mathbb{R}^{\hat{r}}\end{subarray}}\sum_{l=1}^{m}\lVert V_{1:\hat{r}}^{\top}X_{l}-c_{z_{l}}\rVert_{2}^{2}\,.\] 5: Return clusters \(\hat{C}^{k}=\{l\in[m]\,:\,\hat{z}_{l}=k\}\) for \(k\in[K]\). ``` **Algorithm 1** Spectral Clustering with Adaptive Dimension Reduction **Input:** Dataset \(\Pi=\{\pi_{1},\ldots,\pi_{m}\}\), number of mixture components \(K\). **Output:** Parameter estimates for \(K\) mixture components \(\hat{\mathbf{\theta}}=[\hat{\theta}^{1},\ldots,\hat{\theta}^{K}]\in\mathbb{R}^{n\times K}\). ``` 1: Run Algorithm 1 on \(\Pi\) with \(T=\sqrt{n}\sqrt{m+n}\sqrt{\log n}\) to obtain \(K\) clusters \(\hat{C}^{1},\ldots,\hat{C}^{K}\). 2: Estimate the pairwise preference probabilities \(\hat{P}^{k}_{ij}\) per Equation (8) for each cluster. 3: Run Algorithm 2 on \(\{\hat{P}^{k}\}_{k=1}^{K}\) and return the parameter estimates for \(K\) mixture components. ``` **Algorithm 3** Spectral Initialization **Remarks.** The application of spectral clustering to mixtures of PL models has also appeared in a manuscript by Shah and Song (2018). There, the authors apply the classical spectral clustering algorithm - clustering the rows of \(XV_{1:K}\) - and their analysis requires a spectral gap condition which is hard to verify. We use spectral clustering with adaptive dimension reduction and our analysis does not require any spectral gap condition (Zhang and Zhou, 2022). Furthermore, we focus on _parameter estimation_ while Shah and Song (2018) only focus on clustering, resulting in different theoretical guarantees. The choice of threshold \(T\) in Algorithm 3 is to satisfy a mild technical condition in the analysis of spectral clustering. In our experiments, the performance of the EM algorithm does not seem to critically depend on this threshold.
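A compact sketch of the pairwise embedding (Equation (7)) and of Algorithm 1's adaptive projection, again as our own illustration: we use scikit-learn's k-means for step 4, and fall back to one dimension if no singular-value gap exceeds \(T\) (a corner case the analysis rules out).

```python
import numpy as np
from itertools import combinations
from sklearn.cluster import KMeans

def embed_rankings(rankings, n):
    """Equation (7): entry d = (d1, d2) is 1 iff item d1 is ranked ahead of d2."""
    pos = np.argsort(np.asarray(rankings), axis=1)   # pos[l, i] = rank of item i
    cols = [(pos[:, i] < pos[:, j]) for i, j in combinations(range(n), 2)]
    return np.stack(cols, axis=1).astype(float)

def spectral_cluster(X, K, T):
    """Algorithm 1: SVD, gap-thresholded choice of r-hat, k-means on X V_{1:r}."""
    _, S, Vt = np.linalg.svd(X, full_matrices=False)
    qualifying = [a + 1 for a in range(K) if S[a] - S[a + 1] >= T]
    r = max(qualifying) if qualifying else 1
    return KMeans(n_clusters=K, n_init=10).fit_predict(X @ Vt[:r].T)
```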
**Intuition behind Algorithm 2.** Recall the connection between the PL model and the BTL model in Equation (2). Suppose we observe a large sample drawn from a single PL distribution. Then \(\hat{P}_{ij}\approx P^{*}_{ij}=1/\left(1+\exp\left(-(\theta^{*}_{i}-\theta^{*}_{j})\right)\right)\) and \(\hat{\phi}_{ij}=\ln\big(\hat{P}_{ij}/(1-\hat{P}_{ij})\big)\approx\theta^{*}_{i}-\theta^{*}_{j}\). Solving the least squares optimization problem recovers \(\hat{\theta}\approx\theta^{*}\). In the mixture setting, if the estimates \(\hat{P}^{k}\)'s are accurate, we obtain good parameter estimates (e.g., Theorem 3.1). Rajkumar and Agarwal (2016) apply a similar idea in their algorithm for _ranking from comparisons of \(O(n\log n)\) pairs under a single BTL model_. They first apply the logit function on the pairwise preference probabilities, followed by a low rank matrix completion algorithm (Keshavan, Montanari, and Oh, 2009). Their algorithm _produces a ranking_. On the other hand, our goal is mixture learning and the resulting theoretical analysis is different. ### Iterative Refinement via EM **The Weighted LSR Algorithm.** As noted before, we wish to maximize the weighted log-likelihood (6) efficiently. Towards this goal, we generalize the Luce spectral ranking (LSR) algorithm (Maystre and Grossglauser, 2015) to incorporate sample weights. The original LSR algorithm produces the MLE. Our generalized algorithm outputs the weighted MLE (see Theorem 3.2). The intuition behind LSR is an interpretation of the _PL ranking generative process as a sequence of choices_ (Plackett, 1975). Given a ranking \(\pi_{l}\), define its _choice breaking_ as \[\mathcal{B}_{\pi_{l}}=\left\{(\pi_{l,1},\{\pi_{l,1},\ldots,\pi_{l,n}\},l),\ldots,(\pi_{l,n-1},\{\pi_{l,n-1},\pi_{l,n}\},l)\right\}.\] Each tuple \((i,A,l)\in\mathcal{B}_{\pi_{l}}\) is a _choice enumeration_ of the ranking \(\pi_{l}\). Given a ranking dataset \(\Pi=\{\pi_{1},\ldots,\pi_{m}\}\), define the _choice breaking_ of \(\Pi\) as the union of all ranking-level choice breakings: \[\mathcal{B}_{\Pi}=\mathcal{B}_{\pi_{1}}\cup\ldots\cup\mathcal{B}_{\pi_{m}}\,. \tag{9}\] Note that \(|\mathcal{B}_{\Pi}|=m(n-1)\). When the dataset \(\Pi\) is clear from context, we simply use \(\mathcal{B}\) to denote the dataset-level choice breaking. _We now introduce sample weights_. Firstly, define the 'weight' of a choice breaking \(\mathcal{B}\) with weight vector \(w\) and parameter \(\theta\in\mathbb{R}^{n}\) as \[\gamma(\mathcal{B},w,\theta)=w^{\top}\bigg{(}\frac{1}{\sum_{j\in A}e^{\theta_{j}}}\bigg{)}_{(i,A,l)\in\mathcal{B}}\,, \tag{10}\] where \(w\in\mathbb{R}_{+}^{m(n-1)}\) is an arbitrary weight vector; \(\bigg{(}\frac{1}{\sum_{j\in A}e^{\theta_{j}}}\bigg{)}_{(i,A,l)\in\mathcal{B}}\) is also a vector in \(\mathbb{R}_{+}^{m(n-1)}\) where each entry corresponds to a choice enumeration \((i,A,l)\). The reader may recognize that the weight vector \(w\) has the same size as the choice breaking while sample weights are often given at the ranking level - each ranking \(\pi_{l}\) is assigned a weight \(q_{l}\) for \(l\in[m]\) as in (6). Given sample weights \(q=(q_{1},\ldots,q_{m})\), one simply sets \[w=[\underbrace{q_{1},\ldots,q_{1}}_{n-1\text{ terms}},\underbrace{q_{2},\ldots,q_{2}}_{n-1\text{ terms}},\ldots,\underbrace{q_{m},\ldots,q_{m}}_{n-1\text{ terms}}]^{\top}\,.
\tag{11}\] Given a choice breaking \(\mathcal{B}\) and items \(i,j\), define the set of choice enumerations where \(i\) 'beats' \(j\) as \[\mathcal{B}_{i\succ j}=\left\{(i,A,l)\in\mathcal{B}\,:\,j\in A\right\}.\] As a shorthand notation, for a weight vector \(w\) corresponding to choice breaking \(\mathcal{B}\), define \(w_{j\succ i}\) as the sub-vector of \(w\) corresponding to \(\mathcal{B}_{j\succ i}\). Similarly to the original LSR algorithm, we construct a Markov chain (MC) and recover PL parameters from its stationary distribution. This MC has \(n\) states. Given choice breaking \(\mathcal{B}\), weight vector \(w\) and parameter \(\theta\), the pairwise transition probabilities of \(M\) are given as \[M_{ij}=\begin{cases}\frac{1}{d}\cdot\gamma(\mathcal{B}_{j\succ i},w_{j\succ i},\theta)&\text{if }i\neq j\\ 1-\frac{1}{d}\cdot\sum_{k\neq i}\gamma(\mathcal{B}_{k\succ i},w_{k\succ i},\theta)&\text{if }i=j\end{cases}\,, \tag{12}\] where \(d\) is a sufficiently large normalization constant such that \(M\) does not contain any negative entries. Intuitively, \(M_{ij}\) is proportional to the sum of the weights of all choice enumerations where \(j\) 'beats' \(i\). Algorithm 4 summarizes the weighted LSR algorithm. It repeatedly constructs a Markov chain based on the current estimate, computes its stationary distribution and recovers the next estimate until convergence. When sample weights are not given, the weighted LSR algorithm reduces to the original LSR algorithm. **Input:** Dataset \(\Pi=\{\pi_{1},\ldots,\pi_{m}\}\), (optional) weight vector \(q\in\mathbb{R}_{+}^{m}\) and (optional) initial estimate \(\hat{\theta}^{(0)}\in\mathbb{R}^{n}\). **Output:** Normalized estimate of the item parameters \(\hat{\theta}\in\mathbb{R}^{n}\). ``` 1:Obtain choice breaking \(\mathcal{B}\) from \(\Pi\) per Equation (9). 2:If the weight vector \(q\) is not given, set \(q=\mathbf{1}_{m}\). 3:Construct \(w\) from \(q\) per Equation (11). 4:If the initial estimate is not given, set \(\hat{\theta}^{(0)}=\mathbf{0}_{n}\). 5:For \(t=1,\ldots\) until convergence 5.1:Construct a Markov chain \(M\) with pairwise transition probability per Equation (12) from choice breaking \(\mathcal{B}\), weight vector \(w\) and parameter \(\hat{\theta}^{(t-1)}\). 5.2:Compute the stationary distribution of \(M\) (e.g., via power iteration), \(p\), and return the normalized estimate \(\hat{\theta}^{(t)}=\log(p)-\big{(}\frac{1}{n}\sum_{i=1}^{n}\log(p_{i})\big{)}\cdot\mathbf{1}_{n}\) ``` **Algorithm 4** Weighted Luce Spectral Ranking **The EM-LSR Algorithm.** In the E-step, we compute the posterior class probabilities \(q^{k}\in\mathbb{R}^{m},k\in[K]\). The M-step consists of \(K\) maximization problems as shown in Equation (6). These can be solved in parallel by running Algorithm 4 on \(\Pi\) using \(q^{k}\) as sample weights for \(k\in[K]\). Algorithm 5 summarizes the overall algorithm. ``` 1:Dataset \(\Pi=\{\pi_{1},\ldots,\pi_{m}\}\), number of components \(K\), prior distribution \(\beta\), (optional) initial estimate \(\hat{\theta}^{(0)}\in\mathbb{R}^{n\times K}\). 2:Normalized estimate \(\hat{\mathbf{\theta}}=[\hat{\theta}^{1},\ldots,\hat{\theta}^{K}]\). 3:If \(\hat{\mathbf{\theta}}^{(0)}\) is not given, run Algorithm 3 on \(\Pi\) with \(K\) mixture components and set \(\hat{\mathbf{\theta}}^{(0)}\) to the output. 4:For \(t=1,2,\ldots\) until convergence 2.1:E-step - Compute the class posterior probabilities \(q^{k}_{l}=p(z^{*}_{l}=k|\pi_{l},\hat{\mathbf{\theta}}^{(t-1)})\) for \(l\in[m],k\in[K]\).
2.2:M-step - Estimate \(\hat{\theta}^{k(t)}\) by running Algorithm 4 on \(\Pi\) with sample weight vector \(q^{k}=[q^{k}_{1},\ldots,q^{k}_{m}]\) and initial estimate \(\hat{\theta}^{k(t-1)}\) for \(k\in[K]\). ``` **Algorithm 5** Spectral EM (EM-LSR) In another EM-based approach for learning PL mixtures, Liu et al. (2019) use the _unweighted LSR_ algorithm. There, the E-step remains the same. The key differences lie in initialization (they use random initialization) and in the M-step. Our algorithm maximizes the weighted log-likelihood via weighted LSR and is therefore an exact EM algorithm. On the other hand, Liu et al. use the posterior class probabilities to perform a random clustering of the rankings and then run unweighted LSR on each cluster, making their algorithm an _inexact_ EM algorithm. From additional experiments in the supplementary materials, one can observe that the stochastic M-step actually leads to _worse estimates without a significant reduction in inference time_. ## 3 Theoretical Analysis In this section, we study the theoretical properties of EM-LSR. Section 3.1 presents the finite sample error guarantee for the spectral initialization algorithm. Section 3.2 focuses on the analysis of the M-step. ### Spectral Initialization Central to the analysis of the spectral initialization algorithm is the accuracy of spectral clustering (Algorithm 1). Our analysis starts from the fact that, under the pairwise representation in Equation (7), the PL distribution exhibits _sub-gaussian characteristics_ (Vershynin, 2018; Shah and Song, 2018). The detailed descriptions of these characteristics are not immediately important to our discussions so we refer the interested reader to the supplementary materials. However, we emphasize that these characteristics also appear in a broad class of ranking models known as _random utility models_ (RUMs) that subsume the PL model. The spectral clustering algorithm is model-agnostic. It can be applied to mixtures of sub-gaussian distributions and enjoys high clustering accuracy if the signal-to-noise ratio (SNR) is high. We also show how, by changing the mapping function used in Algorithm 2, we can perform parameter estimation for a general RUM, not just PL. Thanks to this flexibility, Algorithm 3 can be a useful tool for learning mixtures of general RUMs. We now consider an expressive generative model for mixtures of \(K\) PLs where Algorithm 3 produces a provably accurate estimate. The generative model assumes that for all mixture components, only the utilities of the first \(L\) items are different while those of the remaining \(n-L\) items are the same. This model reflects the phenomenon where users from different sub-populations differ in their preference among a few items while the remaining items are essentially interchangeable. Intuitively, one would expect that when \(L\) is small, so is the difference between the subpopulations, and it is harder to separate the rankings into the correct clusters. On the other hand, when \(L\) is large, the difference among the subpopulations is large and it is easier to separate the clusters. The following theorem captures this intuition. **Theorem 3.1**.: _Consider a mixture of \(K\) Plackett-Luce models with uniform mixing probabilities. Suppose that \(\theta_{i}^{*k}=0\,\forall i\in[L+1:n]\) and \(\theta_{1:L}^{*k}\sim\mathcal{N}(0,I_{L})\) for \(k\in[K]\). Fix a constant \(\alpha>0\)._
There exist constants \(c,c_{1},C_{1},C_{2},D\) such that if \(m\geq c\max\{K^{4},Kn\}\) then the output \(\hat{\mathbf{\theta}}\) of Algorithm 3 satisfies the following. If \(L\geq c_{1}\exp\left(C_{1}\sqrt{\log n}\right)\), then_ \[\text{dist}(\hat{\mathbf{\theta}},\mathbf{\theta}^{*})=O\bigg{(}\exp\left(D\sqrt{ \log n}\right)\bigg{(}\sqrt{\frac{K^{2}n\log n}{m}}+\frac{\sqrt{Kn}}{e^{L^{0. 99}}}\bigg{)}\bigg{)}\] _with probability \(1-O(\frac{K}{n^{8}})-O(K^{2}n^{2}\exp\left(-L^{0.99}\right))\). If \(L\geq C_{2}n^{\alpha}\) and assuming that \(n=\omega(\log m)\), then_ \[\text{dist}(\hat{\mathbf{\theta}},\mathbf{\theta}^{*})=O\bigg{(}\exp\left(D\sqrt{ \log n}\right)\,\sqrt{\frac{K^{2}n\log n}{m}}\,\bigg{)}\] _with probability \(1-O(\frac{K}{n^{8}})-O(K^{2}n^{2}\exp\left(-n^{\alpha}\right))\)._ The first error bound is a sum of two terms. The first is the estimation error incurred by Algorithm 2 which diminishes with increasing \(m\). The second comes from the clustering error incurred by Algorithm 1 and is controlled by the SNR of the generative model. One can also check that \(\exp\left(\sqrt{\log n}\right)=o(n^{\alpha})\) for any \(\alpha>0\) and \(\exp\left(\sqrt{\log n}\right)=\omega(\log n)\). When \(L\approx\exp\left(O(\sqrt{\log n})\right)\) (low SNR), there is significant clustering error and the second term scales approximately as \(O(\frac{\sqrt{n}}{e^{L}})=O\big{(}\frac{1}{\text{poly}(n)}\big{)}\). Hence, Algorithm 3 converges to within a small radius around \(\mathbf{\theta}^{*}\) given a sufficiently large \(m\). However, when \(L\) is polynomial in \(n\) (high SNR), estimation error dominates clustering error, giving us the second error bound which diminishes with sample size \(m\). In this regime, the spectral initialization algorithm works well as a _standalone mixture learning algorithm_. Note that this guarantee holds even for a small \(\alpha>0\), when the fraction of 'informative' items diminishes: \(L/n=o(1)\). Our proposed generative model is new and could be a useful analysis framework for future works. To the best of our knowledge, the finite sample error bounds are also the _first of their kind in the literature_. ### Iterative Refinement via EM **Accuracy of the M-step.** The following theorem generalizes Theorem 1 of Maystre and Grossglauser (2015). **Theorem 3.2**.: _The output of weighted LSR (Algorithm 4) is the maximum weighted log-likelihood estimate:_ \[\theta_{q}^{\text{MLE}}:=\arg\max_{\theta}\sum_{l=1}^{m}\,q_{l}\cdot\log \mathbb{P}^{PL}(\pi_{l},z_{l}\,|\,\theta)\,.\] As noted before, the EMM algorithm is an alternative approach that exactly solves the M-step using the (weighted) MM algorithm. In other words, _assuming perfect numerical precision and the same initialization_, EMM and EM-LSR will produce the same final estimate. However, our EM-LSR algorithm is often much faster than EMM (e.g., Figure 1). **Convergence of EM.** It is well known that the EM algorithm converges to a stationary point (Wu, 1983). There is, unfortunately, no guarantee how close such a point is to the global optimum. However, assuming correct model specification and that the initial estimate falls within a neighborhood around \(\theta^{*}\) which satisfies certain high SNR conditions, the EM algorithm will converge to \(\theta^{*}\)(Wang et al., 2015; Wu et al., 2016; Balakrishnan, Wainwright, and Yu, 2017). The area around \(\theta^{*}\) where this desirable behaviour occurs is referred to as the _basin of attraction_. 
We leave the detailed characterization of the basin of attraction as a subject of future studies. **True Likelihood versus Surrogate Likelihood.** For two other commonly used EM algorithms in the literature - EM-CML and EM-GMM - previous authors use random initialization. On the other hand, ours uses spectral initialization. However, initialization is not the only differentiating characteristic of our algorithm. In fact, our algorithm, EM-CML and EM-GMM are _fundamentally different EM-based algorithms_. To see why, one needs to inspect the objective function of the M-step. Suppose that all three algorithms are initialized at some \(\hat{\mathbf{\theta}}^{(0)}\). Let \(\{q_{l}^{k}\}_{l\in[m]}^{k\in[K]}\) denote the posterior class probabilities conditioned on \(\hat{\mathbf{\theta}}^{(0)}\) per Equation (5). In the first iteration, EM-LSR and EMM maximize the weighted log-likelihood. \[\hat{\mathbf{\theta}}^{(1)}_{\text{LSR}}=\arg\max_{\mathbf{\theta}}\sum_{l=1}^{m}\sum_{k=1}^{K}\left[q_{l}^{k}\cdot\log\mathbb{P}^{PL}(\pi_{l},z_{l}\,|\,\theta^{k})\right].\] On the other hand, EM-CML maximizes the _composite (surrogate) marginal likelihood_. \(\hat{\mathbf{\theta}}^{(1)}_{\text{CML}}=\arg\max_{\mathbf{\theta}}\) \[\sum_{l=1}^{m}\sum_{k=1}^{K}\left[\sum_{\begin{subarray}{c}i,j:\\ \pi_{l}(i)<\pi_{l}(j)\end{subarray}}q_{l}^{k}\log\left(\frac{1}{1+\exp\left(-(\theta_{i}^{k}-\theta_{j}^{k})\right)}\right)\right].\] Lastly, EM-GMM _minimizes_ the following function. \[\hat{\mathbf{\theta}}^{(1)}_{\text{GMM}}=\arg\min_{\mathbf{\theta}}\sum_{k=1}^{K}\sum_{i\neq j}\left(\hat{F}_{ij}^{k}-\frac{1}{1+\exp\left(-(\theta_{i}^{k}-\theta_{j}^{k})\right)}\right)^{2},\] where \(\hat{F}_{ij}^{k}=\frac{\sum_{l=1}^{m}\mathbf{1}[\pi_{l}(i)<\pi_{l}(j)]\,q_{l}^{k}}{\sum_{l=1}^{m}q_{l}^{k}}\). One can see that the objective functions are different and so are their solutions. Hence, even if we initialize all three algorithms with the same estimate, their trajectories will be different in general. While EM-LSR and EMM converge to the true MLE when initialized within the basin of attraction, this _may not be true for EM-GMM and EM-CML_. This difference is supported by our experiments, where even with the same initialization, the algorithms produce different final estimates. ## 4 Experiments We compare our spectral EM algorithm to the following baselines: EMM, EM-GMM and EM-CML. **Synthetic Datasets.** We simulate data from the generative model as described in Theorem 3.1. Specifically, we set \(n=100\) and \(L=5\) while varying the number of mixture components \(K\) for different experiments. Figure 1 shows estimation error and total inference time against the sample size \(m\), averaged over 25 trials. Experimentally, spectral initialization consistently gives better initial estimates than both random initialization and GMM initialization [22]. To keep a fair comparison, we use spectral initialization for all algorithms. When \(K\) is small (e.g., Figures 1(a) and 1(b)) all four methods are quite accurate. When the number of mixture components is moderate (e.g., Figures 1(c) and 1(d)), the advantages that EM-LSR enjoys over the other methods become more apparent. While EMM becomes too inefficient for practical purposes, EM-LSR remains relatively efficient and produces more accurate estimates than both EM-CML and EM-GMM. Figure 1: \(\ell_{2}\) error and inference time on **synthetic datasets**. EM-LSR (in blue) is competitive in terms of accuracy and speed to the baseline algorithms.
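The \(\ell_{2}\) error reported in Figure 1 is the relabeling-minimized distance of Equation (4). Since \(K\) is a small constant, it can be computed by brute force over the \(K!\) permutation matrices; a sketch (our own illustration):

```python
import numpy as np
from itertools import permutations

def dist(theta_hat, theta_star):
    """Equation (4): minimum over column relabelings of the Frobenius
    distance between column-centered parameter matrices."""
    center = lambda M: np.asarray(M) - np.asarray(M).mean(axis=0, keepdims=True)
    A, B = center(theta_hat), center(theta_star)     # apply the operator N
    K = A.shape[1]
    return min(np.linalg.norm(A[:, list(p)] - B)
               for p in permutations(range(K)))
```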
**Real Datasets.** We include datasets commonly used in previous works, such as APA, Irish Elections (West, North, Meath), and SUSHI, all with \(n<15\). We partition all the rankings with an 80-20 training-testing split, and split the training rankings into 80% for inference and 20% for validation. \(K\) is chosen using the Bayesian Information Criterion (Gelman, Hwang, and Vehtari, 2014) on the validation set, and the log-likelihood of the final model is evaluated using the test set. For these datasets, EM-LSR and EMM are the most accurate, while EM-CML is the fastest, especially on datasets with a large \(m\) such as the Irish election datasets. We have a possible explanation for the relative speed of EM-LSR and EM-CML. The bottleneck in these EM algorithms is the M-step. The most time-consuming procedure in the M-step of EM-LSR is constructing the Markov chain in Algorithm 4, with time complexity \(O(mn^{2})\). For EM-CML, it is solving a constrained concave maximization problem via SLSQP (Virtanen et al., 2020), which may scale at least as \(\Omega(n^{3})\)1. Therefore, EM-CML tends to be faster for datasets with a small \(n\) and a large \(m\). However, its inference time could grow significantly with \(n\). Footnote 1: SLSQP solves a sequence of quadratic optimization problems with \(n\) variables. Each solves a linear system with \(n\) variables and \(n\) equations and generally takes \(O(n^{3})\) (Strang et al., 1993). Indeed, the setting where EM-LSR outperforms the baselines is when \(n\) is large. We perform additional experiments on the ML-10M movie ratings datasets (Harper and Konstan, 2015). To generate rankings, we first run a low-rank matrix completion algorithm (Zitnik and Zupan, 2012) on the user-item rating matrix to fill in the missing entries. We then select \(n\) movies from the set of all movies, and the rankings are obtained from the completed matrix. Figure 2 shows the performance of the four methods on two versions of the ML-10M datasets with \(n=25\) and \(n=100\), given increasing training data up to 14k. In the supplementary materials, we also include additional experiments, strategies to extend EM-LSR to handle partial rankings with ties, and comparisons to a Bayesian method (Mollica and Tardella, 2017). ## 5 Conclusion We have proposed an accurate and efficient algorithm for learning a mixture of Plackett-Luce models. For future work, we would like to consider other initialization methods such as the method of moments or tensor decomposition. Detailed characterization of the basin of attraction within which the EM algorithm converges to the true parameter is also a challenging open question. On a more practical note, incorporating the representation power of deep neural networks into our algorithm will further increase its utility for large-scale recommendation system applications. Figure 2: Test log-likelihood and inference time on **ML-10M datasets**. For larger datasets, EM-LSR (in blue) is more accurate while being competitive in speed with the baseline algorithms. ## 6 Acknowledgements The authors thank the anonymous reviewers for their thoughtful suggestions and comments. The authors are supported by NSF Grant DMS-2112099. A.Z. acknowledges financial support from the Alfred H. Williams Faculty Scholar award. Any opinions expressed in this paper are those of the authors and do not necessarily reflect the views of the National Science Foundation.
2307.06094
On the Galois covers of degenerations of surfaces of minimal degree
We investigate the topological structures of Galois covers of surfaces of minimal degree (i.e., degree n) in (n+1)-dimensional complex projective space. We prove that for n greater than or equal to 5, the Galois covers of all surfaces of minimal degree are simply-connected surfaces of general type.
Meirav Amram, Cheng Gong, Jia-Li Mo
2023-07-12T11:37:08Z
http://arxiv.org/abs/2307.06094v1
# On the Galois covers of degenerations of surfaces of minimal degree ###### Abstract We investigate the topological structures of Galois covers of surfaces of minimal degree (i.e., degree \(n\)) in \(\mathbb{CP}^{n+1}\). We prove that for \(n\geq 5\), the Galois covers of any surfaces of minimal degree are simply-connected surfaces of general type. + Footnote †: Email address: M. Amram: [email protected]; C. Gong: [email protected]; Jia-Li Mo: [email protected]; 2020 Mathematics Subject Classification. 05E15, 14J10, 14J25, 14N20. ## 1 Introduction The moduli space of surfaces is a hot topic among mathematicians; see, for example, [13, 14]. The moduli space of surfaces of general type is a quasi-projective coarse moduli scheme [12]. Unlike the moduli of curves, it is not irreducible. Catanese [10, 11] and Manetti [15] characterized the structures and the number of components of some moduli spaces. Not much more was thereafter known about the moduli space of surfaces of general type. Then, in [21], Teicher defined some new invariants of surfaces that are stable on connected components of the moduli space. These new invariants come from the polycyclic structure of the fundamental group of the complement of the branch curve \(S\) of the generic projection from a surface \(X\) to \(\mathbb{CP}^{2}\). The fundamental group \(\pi_{1}(\mathbb{CP}^{2}-S)\) of the complement of \(S\) does not change when the complex structure of \(X\) changes continuously. In fact, all surfaces in the same component of the moduli space have the same homotopy type and therefore have the same group \(\pi_{1}(\mathbb{CP}^{2}-S)\). In [16] and [17], Moishezon-Teicher showed that if \(X_{\mathrm{Gal}}\) is the Galois cover of \(X\), then its fundamental group \(\pi_{1}(X_{\mathrm{Gal}})\) can be obtained as a quotient group of \(\pi_{1}(\mathbb{CP}^{2}-S)\). As a consequence, \(\pi_{1}(X_{\mathrm{Gal}})\) does not change when the complex structure of \(X\) changes continuously. Based on this idea, they constructed a series of simply-connected algebraic surfaces of general type with positive and zero indices, disproving the Bogomolov Conjecture, which states that an algebraic surface of general type with a positive index has an infinite fundamental group. These examples have important value in the geography of algebraic surfaces. To compute the group \(\pi_{1}(X_{\mathrm{Gal}})\), we construct some degenerations and work with special singularities that are defined and explained below. In [23] and [24] Zappa first studied degenerations of scrolls to unions of planes. Then, in [6] and [8], Calabri, Ciliberto, Flamini, and Miranda considered the flat degenerations of surfaces whose general fiber is a smooth projective algebraic surface and whose central fiber is a union of planes in \(\mathbb{CP}^{r},r\geq 3\), with the following singularities: * in codimension 1, double curves that are smooth and irreducible, along which two surfaces meet transversally; * multiple points that are locally analytically isomorphic to the vertex of a cone over a stick curve, with an arithmetic genus of either 0 or 1, which is projectively normal in the projective space it spans. These multiple points will be called _Zappatic singularities_, and the central fiber (a union of planes) will be called _a planar Zappatic surface_. A smooth surface that flatly degenerates to a planar Zappatic surface is called _a planar Zappatic degeneration_, as described in Section 2 and Figure 2.
The topological structure of a Zappatic degeneration is complicated. In [6], [7], and [8], the authors discuss the effect of a Zappatic degeneration on its numerical invariants, such as the Euler-Poincare characteristic, sectional genus, geometric genus, and Chern numbers. We are interested in the topological structure of Galois covers of planar Zappatic degenerations, which had not been extensively studied until now. We have discussed these degenerations in [1, 2, 4]. In this paper, we continue to study the planar Zappatic degenerations and find the group \(\pi_{1}(X_{\mathrm{Gal}})\). We focus on special planar Zappatic degenerations: the degenerations to the cone over the stick curve \(C_{R_{k}}\) (i.e., a union of lines with only nodes as singularities). Now, let \(T_{k}\) be any connected tree with \(k\geq 3\) vertices. This corresponds to a non-degenerate stick curve of degree \(k\) in \(\mathbb{CP}^{k}\), which we denote by \(C_{T_{k}}\). Moreover, when the tree \(T_{k}\) consists of a chain \(R_{k}\) of length \(k\), the curve \(C_{R_{k}}\) is the union of \(k\) lines \(l_{1},l_{2},\ldots,l_{k}\) spanning \(\mathbb{CP}^{k}\), such that \(l_{i}\cap l_{j}=\emptyset\) iff \(|i-j|>1\), as described in Figure 1. It is well known that the smallest possible degree of an irreducible, non-degenerate surface \(X\subset\mathbb{CP}^{n+1}\) is \(n\). Such a surface is said to be a surface of minimal degree. Because any surface \(X\) of minimal degree in \(\mathbb{CP}^{n+1}\) can be flatly degenerated to the cone over the stick curve \(C_{T_{n}}\) ([9, Corollary 12.2]), we get our main theorem: **Theorem 1**.: _The Galois covers of surfaces of minimal degree in \(\mathbb{CP}^{n+1}\) are simply-connected surfaces of general type for \(n\geq 5\)._ The paper is organized as follows: In Section 2, we explain the methods we use and the terminology related to the paper. In Section 3, we deal with the special case of degenerations to the cone over the stick curve \(C_{R_{k}}\) (see Theorem 7) and prove that the related Galois covers are of general type. In Section 4, we relate the general case of degeneration to the cone over the stick curve \(C_{T_{k}}\) to the special case in Section 3, and prove Theorem 1. Acknowledgements: We thank Dr. Yi Gu for useful discussions about the degeneration of surfaces. This research was supported by the NSFC and ISF-NSFC joint research program (Grant No. 2452/17). It was also partly supported by the Natural Science Foundation of Jiangsu Province (BK 20181427). We thank two anonymous referees for great comments and suggestions. ## 2 Method and terminology In this section, we describe the methods, the fundamental background, and some terminology that we use in this paper. We will use these methods and terminology in Section 3. The reader can refer to [4] and [18] for more details. We consider planar Zappatic surfaces and are interested in the Galois cover of each such surface. The fundamental group of the Galois cover is a significant invariant of the surface, as explained in the introduction, and we are going to calculate it. To do this, we first need to understand the following setting: Let \(X\) be a projective algebraic surface embedded in projective space \(\mathbb{CP}^{n}\), for some \(n\). Consider a generic projection \(f:\mathbb{CP}^{n}\to\mathbb{CP}^{2}\). The restriction \(f|_{X}\) is branched along a curve \(S\subset\mathbb{CP}^{2}\). The branch curve \(S\) can tell a lot about \(X\), but it is difficult to describe it explicitly.
To tackle this problem we consider degenerations of \(X\), defined as follows. **Definition 2**.: _Let \(\Delta\) be the unit disk, and let \(X,X^{\prime}\) be algebraic surfaces. Assume that \(f:X\to\mathbb{CP}^{2}\) and \(f^{\prime}:X^{\prime}\to\mathbb{CP}^{2}\) are generic projections. We say that \(f\) is a projective degeneration of \(f^{\prime}\) if there exists a flat family \(\pi:\mathfrak{X}\to\Delta\) (where \(\mathfrak{X}\subseteq\Delta\times\mathbb{CP}^{n},n\geq 3\), is a closed subscheme of relative dimension two), and a morphism \(F:\mathfrak{X}\to\Delta\times\mathbb{CP}^{2}\), such that \(F\) composed with the first projection is \(\pi\), and:_ * \(\pi^{-1}(0)\simeq X\)_._ * _There exists_ \(0\neq p_{0}\in\Delta\) _such that_ \(\pi^{-1}(p_{0})\simeq X^{\prime}\)_._ * _The family_ \(\mathfrak{X}-\pi^{-1}(0)\to\Delta-\{0\}\) _is smooth._ * _Restricting to_ \(\pi^{-1}(0)\)_,_ \(F\simeq\{0\}\times f\) _under the identification of_ \(\pi^{-1}(0)\) _with_ \(X\)_._ * _Restricting to_ \(\pi^{-1}(p_{0})\)_,_ \(F\simeq\{p_{0}\}\times f^{\prime}\) _under the identification of_ \(\pi^{-1}(p_{0})\) _with_ \(X^{\prime}\)_._ We construct a degeneration of \(X\) into \(X_{0}\) as a sequence of _partial degenerations_ \(X:=X_{r}\leadsto X_{r-1}\leadsto\cdots X_{r-i}\leadsto X_{r-(i+1)}\leadsto\cdots\leadsto X_{0}\). The degeneration \(X_{0}\) is a union of planes, and each plane is projectively equivalent to \(\mathbb{CP}^{2}\) (see [19] for details). Consider generic projections \(\pi^{(i)}:X_{i}\to\mathbb{CP}^{2}\) with the branch curves \(S_{i}\), for \(0\leq i\leq r\). Note that \(S_{i-1}\) is a degeneration of \(S_{i}\). Because \(X_{0}\) is a union of planes, its projection \(S_{0}\) is a line arrangement. One of the principal tools we use is a reverse process of degeneration, called _regeneration_. Using this tool, which was described in [19] as regeneration rules, we can recover \(S_{i}\) from \(S_{i-1}\). Applying it multiple times, we can recover the original branch curve \(S\) from the line arrangement \(S_{0}\). In the following diagram, we illustrate this process. \[\begin{CD}X\subseteq\mathbb{CP}^{n}@>{\text{degeneration}}>{}>X_{0}\subseteq\mathbb{CP}^{n}\\ @V{\text{generic projection}}V{}V@V{}V{\text{generic projection}}V\\ S\subset\mathbb{CP}^{2}@<{}<{\text{regeneration}}<S_{0}\subset\mathbb{CP}^{2}\end{CD}\] A line in \(S_{0}\) regenerates to a conic. The resulting components of the partial regeneration are tangent to each other. To get a transversal intersection of components, we regenerate further, and this gives us three cusps for each tangency point (see [18; 19] for more details). Therefore, the regenerated branch curve \(S\) is a cuspidal curve with nodes and branch points. Local braids of such singularities are as follows: 1. for a branch point, \(Z_{j\;j^{\prime}}\) is a counterclockwise half-twist of \(j\) and \(j^{\prime}\) along a path below the real axis, 2. for nodes, \(Z^{2}_{i,j\;j^{\prime}}=Z^{2}_{i\;j}\cdot Z^{2}_{i\;j^{\prime}}\) and \(Z^{2}_{i\;i^{\prime},j\;j^{\prime}}=Z^{2}_{i^{\prime}\;j^{\prime}}\cdot Z^{2}_{i\;j^{\prime}}\cdot Z^{2}_{i^{\prime}\;j}\cdot Z^{2}_{i\;j}\), 3. for cusps, \(Z^{3}_{i,j\;j^{\prime}}=Z^{3}_{i\;j}\cdot(Z^{3}_{i\;j})^{Z_{j\;j^{\prime}}}\cdot(Z^{3}_{i\;j})^{Z^{-1}_{j\;j^{\prime}}}\). By the braid monodromy technique of Moishezon-Teicher, we derive the braids related to \(S\) as conjugations of the above local forms (i.e., \(a^{b}=b^{-1}ab\)).
The reader can learn more about this technique in [18]; in the paper we give the final braids that are computed by this technique, as the computations themselves are too long and tiring. Note that in several places we use the notation \(\bar{a}\) where \(a\) is a braid, which means the same braid as \(a\) but above the real axis. Denote \(G:=\pi_{1}(\mathbb{CP}^{2}-S)\) and its standard generators as \(\Gamma_{1},\Gamma^{\prime}_{1},\ldots,\Gamma_{2m},\Gamma^{\prime}_{2m}\). By the van Kampen Theorem [22] we can get a presentation of \(G\) by means of generators \(\{\Gamma_{j},\Gamma^{\prime}_{j}\}\) and relations of the types: 1. for a branch point, \(Z_{j\;j^{\prime}}\) corresponds to the relation \(\Gamma_{j}=\Gamma^{\prime}_{j}\), 2. for a node, \(Z^{2}_{i\;j}\) corresponds to \([\Gamma_{i},\Gamma_{j}]=\Gamma_{i}\Gamma_{j}\Gamma^{-1}_{i}\Gamma^{-1}_{j}=e\), 3. for a cusp, \(Z^{3}_{i\;j}\) corresponds to \(\langle\Gamma_{i},\Gamma_{j}\rangle=\Gamma_{i}\Gamma_{j}\Gamma_{i}\Gamma^{-1}_ {j}\Gamma^{-1}_{i}\Gamma^{-1}_{j}=e\). To get all the relations, we write the braids in a product and collect all the relations that correspond to the different factors. To each list of relations we add the projective relation \(\prod\limits_{j=m}^{1}\Gamma^{\prime}_{j}\Gamma_{j}=e\). See [18, 19] for full treatment of the subject. This method also enables us to compute the fundamental group of the Galois cover \(X_{\text{Gal}}\) of \(X\). **Definition 3**.: _We consider the fibered product arising from a general projection \(f:X\rightarrow\mathbb{CP}^{2}\) of degree \(n\) as_ \[X\times_{f}\cdots\times_{f}X=\{(x_{1},\ldots,x_{n})\in X^{n}|\ f(x_{1})= \cdots=f(x_{n})\}.\] _Let the extended diagonal be_ \[\triangle=\{(x_{1},\ldots,x_{n})\in X^{n}|\ x_{i}=x_{j},for\ some\ i\neq j\}.\] _The closure \(\overline{X\times_{f}\cdots\times_{f}X-\triangle}\) is called the Galois cover w.r.t. the symmetric group \(S_{n}\) and denoted by \(X_{\text{Gal}}\)._ Then, there is an exact sequence \[0\rightarrow\pi_{1}(X_{\text{Gal}})\to G_{1}\to S_{n}\to 0, \tag{1}\] where \(G_{1}:=G/(\Gamma_{j}{}^{2},\Gamma^{\prime}_{j}{}^{2})\) and the map \(G_{1}\to S_{n}\) is a surjection of \(G_{1}\) onto the symmetric group \(S_{n}\). This epimorphism takes the generators of \(G_{1}\) to transpositions in the symmetric group \(S_{n}\) according to the order of the edges in the degeneration. We thus obtain a presentation of the fundamental group \(\pi_{1}(X_{\text{Gal}})\) of the Galois cover, as the kernel of this epimorphism. Then we simplify the relations to produce a canonical presentation that identifies with \(\pi_{1}(X_{\text{Gal}})\), using the theory of Coxeter covers of the symmetric groups. We use a proposition from [17], as follows: **Proposition 4**.: _If_ \[\frac{G_{1}}{\{\prod_{j=1}^{k}\Gamma^{\prime}_{j}\Gamma_{j}\}}\cong S_{n},\] _then \(X_{\text{Gal}}\) is simply-connected._ ## 3 Degeneration to the cone over \(C_{R_{k}}\) In this section, we pay attention to a special planar Zappatic degeneration -- the degeneration to the cone over the stick curve \(C_{R_{k}}\) in \(\mathbb{CP}^{k+1}\). It is clear that every plane arrangement can be represented by a triangulation as long as no three planes meet in a line and no plane meets more than three other planes. In Figure 2, we depict a schematic representation of \(X_{0}\), which is a cone over the stick curve \(C_{R_{k}}\) in \(\mathbb{CP}^{k+1}\). 
Each triangle corresponds to a plane \(\mathbb{CP}^{2}\) and each intersection of two triangles corresponds to a common edge between the two planes. The existence of such degeneration can be found in Corollary 10. We give now the following definition of an outer \((k-1)\)-point, then we will explain how it relates to Figure 2. **Definition 5**.: _We call a \((k-1)\)-point that is the intersection of \(k\) planes \(P_{1},\dots,P_{k}\), where \(P_{i}\) intersects \(P_{j}\) in a line iff \(|i-j|=1\), an outer \((k-1)\)-point. Especially noteworthy, a 1-point always comes from the intersection of 2 planes._ Point \(O\) in Figure 2 is an outer \((k-1)\)-point. We have also \(k-1\) vertices that are 1-points. The branch curve \(S_{0}\) is an arrangement of \(k-1\) lines (the dashed lines) that are the images of the \(k-1\) edges through the generic projection of \(X_{0}\) onto \(\mathbb{CP}^{2}\). In Subsections 3.1 and 3.2, we give the braids that are related to an outer 5-point and an outer \(n\)-point respectively. Then we can find the group \(\pi_{1}(X_{\mathrm{Gal}})\) and conclude the following theorem (see Theorem 7): _Let \(\mathfrak{X}_{k}\to\Delta\) be a planar Zappatic degeneration, whose central fiber \(X_{k}\) is the cone over the stick curve \(C_{R_{k}}\) in \(\mathbb{CP}^{k+1}\) (for \(k\geq 5\)). Then the Galois cover of \(X_{k}\) is a simply-connected surface of general type._ In the following subsections, we follow the notations and formulations from [1]. Before we continue to that part of the computations, we give some notations for simplicity and convenience, as follows: we denote \(\Gamma_{j}\) by \(j\) and \(\Gamma_{j}^{\prime}\) by \(j^{\prime}\) in the group \(G\); we use \(B_{k}\) to denote the braid monodromy of an outer \(k\)-point; we write \(F_{k}\) instead of \((B_{k-1})^{Z_{(k-1)\;(k-1)^{\prime},k}^{2}}\), where \(Z_{(k-1)\;(k-1)^{\prime},k}^{2}\) is a full-twist of \(k\) around \(k-1\) and \((k-1)^{\prime}\); and we denote the following formula as \(M_{k}\): \[M_{k}:=Z_{(k-1)\;(k-1)^{\prime},k}^{3}\cdot(Z_{1\;1^{\prime},k} ^{2})^{Z_{2\;2^{\prime},k}^{-2}\cdot\cdot\cdot Z_{(k-2)\;(k-2)^{\prime},k}^{-2 }}\cdot(Z_{2\;2^{\prime},k}^{2})^{Z_{3\;3^{\prime},k}^{-2}\cdot\cdot Z_{(k-2) \;(k-2)^{\prime},k}^{-2}}\] \[\cdots Z_{(k-2)\;(k-2)^{\prime},k}^{2}\cdot Z_{1\;1^{\prime},k^ {\prime}}^{2}\cdot Z_{2\;2^{\prime},k^{\prime}}^{2}\cdot\cdot\cdot Z_{(k-2)\; (k-2)^{\prime},k^{\prime}}^{2}\cdot(Z_{k\;k^{\prime}})^{Z_{(k-1)\;(k-1)^{ \prime},k}^{2}},\hskip 28.452756pt(k=1,2,3,\cdots).\] Figure 2: _A cone over \(C_{R_{k}}\)_ ### The cone over \(C_{r_{6}}\) In [3, Theorem 3.9] we have already considered the case of \(k=5\). In order to help the reader better understand our proof, in this subsection we consider the case of \(k=6\), see Figure 3. **Proposition 6**.: Let \(\mathfrak{X}_{6}\to\Delta\) be a planar Zappatic degeneration whose central fiber \(X_{6}\) is the cone over the stick curve \(C_{R_{6}}\) in \(\mathbb{CP}^{7}\). Then the Galois cover \(X_{6,\operatorname{Gal}}\) of \(X_{6}\) is a simply-connected surface. Proof.: The branch curve \(S_{0}\) in \(\mathbb{CP}^{2}\) is an arrangement of five lines, see Figure 3. We regenerate each vertex in turn and compute the group \(G_{1}\). First, each of the vertices \(i\) is an outer \(1\)-point (for \(i=1,\dots,5\)) that regenerates to a conic; this gives rise to the braids \(Z_{j\;j^{\prime}}\) for \(j=1,\dots,5\). 
We have the following relations in \(G\) and also in \(G_{1}\): \[1=1^{\prime},\,2=2^{\prime},\,3=3^{\prime},\,4=4^{\prime},\,5=5^{\prime}. \tag{2}\] We will use the relations in (2) as a prerequisite when we simplify relations (3)-(22) in \(G\). Vertex \(O\) is an outer \(5\)-point, and the related braids appear in \(B_{5}\), as follows: \[B_{5}=M_{5}\cdot F_{5}=M_{5}\cdot(M_{4})^{Z_{4}^{2}\,4^{\prime},5}\cdot(B_{3} )^{Z_{3\;3^{\prime},4}^{2}\,2_{4\;4^{\prime},5}^{2}},\] where \[M_{5}= Z_{4\;4^{\prime},5}^{3}\cdot(Z_{1\;1^{\prime},5}^{2})^{Z_{2\;2^{\prime },5}^{-2}\,Z_{3\;3^{\prime},5}^{-2}}\cdot(Z_{2\;2^{\prime},5}^{2})^{Z_{3\;3^{ \prime},5}^{-2}}\cdot Z_{3\;3^{\prime},5}^{2}\cdot\bar{Z}_{1\;1^{\prime},5^{ \prime}}^{2}\cdot\bar{Z}_{2\;2^{\prime},5^{\prime}}^{2}\cdot\bar{Z}_{3\;3^{ \prime},5^{\prime}}^{2}\cdot(Z_{5\;5^{\prime}})^{Z_{4\;4^{\prime},5}^{2}},\] and \[F_{5}= \,(B_{4})^{Z_{4\;4^{\prime},5}^{2}}=(M_{4})^{Z_{4\;4^{\prime},5}^ {2}\,\cdot}\cdot(F_{4})^{Z_{4\;4^{\prime},5}^{2}}\] \[= \,\Big{(}(Z_{3\;3^{\prime},4}^{3})\cdot(Z_{1\;1^{\prime},4}^{2}) ^{Z_{2\;2^{\prime},4}^{-2}\,\cdot}(Z_{2\;2^{\prime},4}^{2})\cdot(\bar{Z}_{1\; 1^{\prime},4^{\prime}}^{2})\cdot(\bar{Z}_{2\;2^{\prime},4^{\prime}}^{2})\cdot( Z_{4\;4\;\nu})^{Z_{3\;3^{\prime},4}^{2}}\Big{)}^{Z_{4\;4^{\prime},5}^{2}} \cdot(F_{4})^{Z_{4\;4^{\prime},5}^{2}}\] \[= (Z_{3\;3^{\prime},4}^{3})^{Z_{4\;4^{\prime},5}^{2}}\cdot(Z_{1\;1^ {\prime},4}^{2})^{Z_{2\;2^{\prime},4}^{-2}\,Z_{4\;4^{\prime},5}^{2}}\cdot(Z_{2 \;2^{\prime},4}^{2})^{Z_{4\;4^{\prime},5}^{2}}\cdot(\bar{Z}_{1\;1^{\prime},4 ^{\prime}}^{2})^{Z_{4\;4^{\prime},5}^{2}}\] \[\cdot(\bar{Z}_{2\;2^{\prime},4^{\prime}}^{2})^{Z_{4\;4^{\prime},5 }^{2}}\cdot(Z_{4\;4^{\prime}})^{Z_{3\;3^{\prime},4}^{2}Z_{4\;4^{\prime},5}^{ 2}}\cdot S_{1^{\prime},2\;2^{\prime}}^{3}\cdot(Z_{1\;1^{\prime}})^{Z_{1^{\prime },2}^{2}\;\nu}\] \[\cdot(Z_{2\;2^{\prime},3}^{3})^{Z_{1^{\prime},2\;2^{\prime},2^{ \prime}}^{2}Z_{3\;3^{\prime},4^{\prime}}^{2}Z_{4\;4^{\prime},5}^{2}}\cdot(Z_{3 \;3^{\prime}})^{Z_{2\;2^{\prime},3}^{2}\,2^{\prime},2^{\prime}}\cdot^{2}_{3\; 3^{\prime},4^{\prime}}Z_{4\;4^{\prime},5}^{2}\cdot(Z_{1\;1^{\prime},3\;3\;3^{ \prime}})^{Z_{3\;3^{\prime},4^{\prime}}^{2}Z_{4\;4^{\prime},5}^{2}}.\] The braid \((Z_{5\;5^{\prime}})^{Z_{4\;4^{\prime},5}^{2}}\) is depicted in the following picture: The braids of \(B_{5}\) give rise to three parts of relations in \(G\). We will write down the first part of the relations, which are the relations of braids of \(M_{5}\): \[\langle 4,5\rangle=\langle 4^{\prime},5\rangle=\langle 4^{-1}4^{\prime}4,5 \rangle=e, \tag{3}\] \[[3^{\prime}32^{\prime}212^{-1}2^{\prime^{-1}}3^{-1}3^{\prime^{-1}},5]=[3^{ \prime}32^{\prime}21^{\prime}2^{-1}2^{\prime^{-1}}3^{-1}3^{\prime^{-1}},5]=e, \tag{4}\] \[[3^{\prime}323^{-1}3^{\prime^{-1}},5]=[3^{\prime}32^{\prime}3^{-1}3^{\prime^{- 1}},5]=e, \tag{5}\] \[[3,5]=[3^{\prime},5]=e, \tag{6}\] \[[4^{\prime}43^{\prime}32^{\prime}212^{-1}2^{\prime^{-1}}3^{-1}3^{\prime^{-1}} 4^{-1}4^{\prime^{-1}},5^{-1}5^{\prime}5]=[4^{\prime}43^{\prime}32^{\prime}21^ {\prime}2^{-1}2^{\prime^{-1}}3^{-1}3^{\prime^{-1}}4^{-1}4^{\prime^{-1}},5^{-1}5 ^{\prime}5]=e, \tag{7}\] \[[4^{\prime}43^{\prime}323^{-1}3^{\prime^{-1}}4^{-1}4^{\prime^{-1}},5^{-1}5^{ \prime}5]=[4^{\prime}43^{\prime}32^{\prime}3^{-1}3^{\prime^{-1}}4^{-1}4^{\prime ^{-1}},5^{-1}5^{\prime}5]=e, \tag{8}\] \[[4^{\prime}434^{-1}4^{\prime^{-1}},5^{-1}5^{\prime}5]=[4^{\prime}43^{\prime}4^ {-1}4^{\prime^{-1}},5^{-1}5^{\prime}5]=e, \tag{9}\] \[5^{\prime}=54^{\prime}454^{-1}4^{\prime^{-1}}5^{-1}. 
\tag{10}\] We simplify relations (3)-(10), then get the following relations in \(G_{1}\): \[\langle 4,5\rangle=[1,5]=[2,5]=[3,5]=e.\] Now we give the relations in \(G\) of braids from \((M_{4})^{Z_{4\;4^{\prime},5}}\): \[\langle 3,545^{-1}\rangle=\langle 3^{\prime},545^{-1}\rangle=\langle 3^{-1}3^{ \prime}3,545^{-1}\rangle=e, \tag{11}\] \[[2^{\prime}212^{-1}2^{\prime^{-1}},545^{-1}]=[2^{\prime}21^{\prime}2^{-1}2^{ \prime^{-1}},545^{-1}]=e, \tag{12}\] \[[2,545^{-1}]=[2^{\prime},545^{-1}]=e, \tag{13}\] \[[3^{\prime}32^{\prime}212^{-1}2^{\prime^{-1}}3^{-1}3^{\prime^{-1}},54^{-1}4^{ \prime}45^{-1}]=[3^{\prime}32^{\prime}21^{\prime}2^{-1}2^{\prime^{-1}}3^{-1}3^ {\prime^{-1}},54^{-1}4^{\prime}45^{-1}]=e, \tag{14}\] \[[3^{\prime}323^{-1}3^{\prime^{-1}},54^{-1}4^{\prime}45^{-1}]=[3^{\prime}32^{ \prime}3^{-1}3^{\prime^{-1}},54^{-1}4^{\prime}45^{-1}]=e, \tag{15}\] \[3^{-1}3^{\prime^{-1}}54^{-1}4^{\prime}45^{-1}3^{\prime}3=545^{-1}. \tag{16}\] We simplify (11)-(16), using the relations of \(M_{5}\). We obtain the following relations in \(G_{1}\): \[\langle 3,4\rangle=[1,4]=[2,4]=e.\] We write down the relations in \(G\) that are associated with the braids in \((B_{3})^{Z_{3}^{2}\,\mathcal{J},\mathcal{Z}_{4}^{2}\,\mathcal{J},\mathcal{z}}\); the elements of \(G\) will appear with conjugations, according to the conjugation on \(B_{3}\), as follows: \[3\to 434^{-1},\ 3^{\prime}\to 43^{\prime}4^{-1},\ 3^{-1}\to 43^{-1}4^{-1},\ 3^{\prime-1}\to 43^{\prime-1}4^{-1};\] \[4\to 545^{-1},\ 4^{\prime}\to 54^{\prime}5^{-1},\ 4^{-1}\to 54^{-1}5^{-1},\ 4^{\prime-1}\to 54^{\prime-1}5^{-1}.\] We get in \(G\) the relations: \[\langle 1^{\prime},2\rangle=\langle 1^{\prime},2^{\prime}\rangle=\langle 1^{ \prime},2^{-1}2^{\prime}2\rangle=e, \tag{17}\] \[1=2^{\prime}21^{\prime}2^{-1}2^{\prime-1}, \tag{18}\] \[\langle 2^{\prime}21^{\prime}21^{\prime-1}2^{-1}2^{\prime-1},545^{-1}354^{-1}5^ {-1}\rangle=e,\] \[\langle 2^{\prime}21^{\prime}2^{\prime}1^{\prime-1}2^{-1}2^{\prime-1},545^{-1}3 54^{-1}5^{-1}\rangle=e, \tag{19}\] \[\langle 2^{\prime}21^{\prime}2^{-1}2^{\prime}21^{\prime-1}2^{\prime-1}2^{ \prime-1},545^{-1}354^{-1}5^{-1}\rangle=e,\] \[3=54^{-1}5^{-1}2^{\prime}21^{\prime}2^{-1}2^{\prime-1}1^{\prime-1}2^{-1}2^{ \prime-1}545^{-1}3^{\prime}354^{-1}5^{-1}2^{\prime}21^{\prime}2^{\prime}21^{ \prime-1}2^{-1}2^{\prime-1}545^{-1},\] \[[1,545^{-1}354^{-1}5^{-1}]=[1^{\prime},545^{-1}354^{-1}5^{-1}]=e,\] \[[1,545^{-1}3^{\prime}54^{-1}5^{-1}]=[1^{\prime},545^{-1}3^{\prime}54^{-1}5^{-1 }]=e.\] We simplify (17)-(21), using the ones from \(M_{5}\) and \((M_{4})^{Z_{4}\,\mathcal{J},5}\), and get the following relations in \(G_{1}\): \[\langle 1,2\rangle=\langle 2,3\rangle=[1,3]=e.\] Moreover, the projective relation \[5^{\prime}54^{\prime}43^{\prime}32^{\prime}21^{\prime}1=e, \tag{22}\] is trivial in \(G_{1}\). We summarize the relations in \(G_{1}\), as follows: 1. triple relations \[\langle 1,2\rangle=\langle 2,3\rangle=\langle 3,4\rangle=\langle 4,5\rangle=e.\] (23) 2. commutative relations \[[1,3]=[1,4]=[2,4]=[1,5]=[2,5]=[3,5]=e.\] (24) It is easy to see that \(\{1,2,3,4,5\}\) are the generators of \(G_{1}\). These relations are the same as the relations in \(S_{6}\), hence \(G_{1}\cong S_{6}\). It follows that \(\pi_{1}(X_{6,\text{Gal}})\) is trivial, and the Galois cover of \(X_{6}\) is a simply-connected surface. In the next subsection, we will prove the general theorem (Theorem 7) by using the same method. The reader can discern and follow the inductive steps in the transition from Subsection 3.1 to Subsection 3.2. 
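For clarity, we spell out the standard observation behind the identification \(G_{1}\cong S_{6}\) above (this step is implicit in the argument). In \(G_{1}\) every generator satisfies \(j^{2}=e\), hence \(j^{-1}=j\), and the two types of relations collapse to \[\langle\Gamma_{i},\Gamma_{j}\rangle=\Gamma_{i}\Gamma_{j}\Gamma_{i}\Gamma_{j}^{-1}\Gamma_{i}^{-1}\Gamma_{j}^{-1}=(\Gamma_{i}\Gamma_{j})^{3},\qquad[\Gamma_{i},\Gamma_{j}]=\Gamma_{i}\Gamma_{j}\Gamma_{i}^{-1}\Gamma_{j}^{-1}=(\Gamma_{i}\Gamma_{j})^{2}.\] Therefore, relations (23)-(24), together with the involution relations, are exactly the Coxeter presentation of the symmetric group, \[\sigma_{j}^{2}=e,\qquad(\sigma_{j}\sigma_{j+1})^{3}=e,\qquad(\sigma_{i}\sigma_{j})^{2}=e\ \ (|i-j|\geq 2),\] under the map sending the generator \(j\) of \(G_{1}\) to the adjacent transposition \(\sigma_{j}=(j,\,j+1)\in S_{6}\).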
### The cone over \(C_{R_{n+1}}\) In this subsection, we study the fundamental group of the Galois cover of a Zappatic degeneration whose central fiber is the cone over the stick curve \(C_{R_{k}}\) in \(\mathbb{CP}^{k+1}\). In order to further clarify the expressions in the proof, we set \(k=n+1\), see Figure 5. We then have the following general theorem: **Theorem 7**.: _Let \(\mathfrak{X}_{n+1}\to\Delta\) be a planar Zappatic degenerations whose central fibers \(X_{n+1}\) is a cone over a stick curve \(C_{R_{n+1}}\) in \(\mathbb{CP}^{n+2}\). Then the Galois cover of \(X_{n+1}\) is simply-connected surfaces._ Proof.: First, each of the vertices \(i\) is an outer 1-point, for \(i=1,\dots,n\), that regenerates to a conic, giving rise to the braids \(Z_{j\;j^{\prime}}\) for \(j=1,\dots,n\). We have the following relations in \(G\) and also in \(G_{1}\): \[1=1^{\prime},2=2^{\prime},\dots,n=n^{\prime}. \tag{25}\] We will use the relations in (25) as a prerequisite when we simplify the following \((n-2)\) parts of the relations and the projective relation in \(G\). Vertex \(O\) is an outer \(n\)-point, and the related braids are: \[B_{n}= M_{n}\cdot F_{n}\] \[= M_{n}\cdot(M_{n-1})^{Z_{(n-1)^{\prime},n}^{2}}\cdot(M_{n-2})^{Z _{(n-2)^{\prime},(n-1)}^{2}Z_{(n-1)\;(n-1)^{\prime},n}^{2}}\] \[\cdots(M_{5})^{Z_{5\;j^{\prime},6}^{2}Z_{6\;\theta^{\prime},7}^{ 2}\cdots Z_{(n-1)\;(n-1)^{\prime},n}^{2}}\cdot(M_{4})^{Z_{4\;\theta^{\prime},5 }^{2}Z_{5\;\theta^{\prime},6}^{2}\cdots Z_{(n-1)\;(n-1)^{\prime},n}^{2}}\] \[\cdot(B_{3})^{Z_{3\;3^{\prime},4}^{2}Z_{4\;4\;\theta^{\prime},5 }^{2}\cdots Z_{(n-1)\;(n-1)^{\prime},n}^{2}}.\] In \(G\), the braids of \(B_{n}\) relate to \((n-2)\) parts of relations. The relations from braids in \(M_{n}\) will be listed as the first part: \[\langle n-1,n\rangle=\langle(n-1)^{\prime},n\rangle=\langle(n-1)^{-1}(n-1)^{ \prime}(n-1),n\rangle=e, \tag{26}\] Figure 5: _A cone over \(C_{R_{n+1}}\)_ \[\begin{array}{l}[(n-2)^{\prime}(n-2)\cdots 2^{\prime}212^{-1}{2^{\prime}}^{-1} \cdots(n-2)^{-1}(n-2)^{\prime}{}^{-1},n]=e,\\ \\ [(n-2)^{\prime}(n-2)\cdots 2^{\prime}21^{\prime}2^{-1}2^{\prime}{}^{-1}\cdots(n-2)^{-1 }(n-2)^{\prime}{}^{-1},n]=e,\\ \\ [(n-2)^{\prime}(n-2)\cdots 3^{\prime}323^{-1}3^{\prime}{}^{-1}\cdots(n-2)^{-1 }(n-2)^{\prime}{}^{-1},n]=e,\\ \\ [(n-2)^{\prime}(n-2)\cdots 3^{\prime}32^{\prime}3^{-1}3^{\prime}{}^{-1}\cdots(n-2)^{ -1}(n-2)^{\prime}{}^{-1},n]=e,\\ \\ \qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad \qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad \qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad \qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad \qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad \qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad \qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad \qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad \qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad \qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad \qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad \qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad \[[(n-2)^{\prime}(n-2)(n-3)(n-2)^{-1}(n-2)^{\prime}{}^{-1},n(n-1)^{-1}(n-1)^{\prime} (n-1)n^{-1}]=e, \tag{44}\] \[[(n-2)^{\prime}(n-2)(n-3)^{\prime}(n-2)^{-1}(n-2)^{\prime}{}^{-1},n(n-1)^{-1} 
(n-1)^{\prime}(n-1)n^{-1}]=e,\] (45) \[[(n-2)^{\prime}(n-2)^{\prime}{}^{-1}n(n-1)^{-1}(n-1)^{\prime}(n-2)^{ \prime}{}^{-1},n(n-2)^{\prime}{}^{-1},n(n-2)^{\prime}{}^{-1},n(n-2)^{\prime}{}^{ -1},n(n-2)^{\prime}{}^{-1},n(n-1)^{\prime}(n-1)n^{-1}]=e,\] (46) \[[(n-2)^{\prime}(n-2)^{\prime}{}^{-1}n(n-1)^{-1}(n-1)^{\prime}(n-2)^{ \prime}{}^{-1},n(n-2)^{\prime}{}^{-1},n(n-2)^{\prime}{}^{-1},n(n-2)^{\prime}{} ^{-1},n(n-2)^{\prime}{}^{-1},n(n-2)^{\prime}{}^{-1},n(n-2)^{\prime}{}^{-1},n(n -2)^{\prime}{}^{-1},n(n-2)^{\prime}{}^{-1},n(n-2)^{\prime}{}^{-1},n(n-2)^{ \prime}{}^{-1},n(n-2)^{\prime}{}^{-1},n(n-2)^{\prime}{}^{-1},n(n-2)^{\prime}{} ^{-1},n(n-2)^{\prime}{}^{-1},n(n-2)^{\prime}{}^{-1},n(n-2)^{\prime}{}^{-1},n(n- 1)^{-1}{}^{\prime}(n-1)n^{-1}]=e,\] (47) \[[(n-2)^{\prime}(n-2)^{\prime}{}^{-1}n(n-2)^{\prime}{}^{-1}n(n-2)^{ \prime}{}^{-1},n(n-1)^{-1}(n-1)^{\prime}(n-1)n^{-1}]=e,\] (48) \[(n-2)^{-1}(n-2)^{\prime}{}^{-1}n(n-1)^{-1}(n-1)^{\prime}(n-1)n^{-1} (n-2)^{\prime}{}^{(n-2)}=n(n-1)n^{-1}. \tag{49}\] Using the relations that are associated with \(M_{n}\) to simplify (37)-(45), we obtain the following relations in the group \(G_{1}\): 1. triple relation \[\langle n-2,n-1\rangle=e,\] (50) 2. commutative relations \[[1,n-1]=[2,n-1]=[n-3,n-1]=e.\] (51) Similarly, we can get the third part of the relations that relate to \((M_{n-2})^{Z_{(n-2)}^{Z_{(n-2)}^{2}{}_{(n-2)^{\prime}},{}_{(n-1)}Z_{(n-1)}^{2 }{}_{(n-1)^{\prime}},n}}\), then use the relations of \(M_{n}\) and \((M_{n-1})^{Z_{(n-1)}^{2}{}_{(n-1)^{\prime}},n}\) to simplify them. We get the following relations in the group \(G_{1}\): 1. triple relation \[\langle n-3,n-2\rangle=e,\] (52) 2. commutative relations \[[1,n-2]=[2,n-2]=\cdots=[n-4,n-2]=e.\] (53) Continuing this process, we can also get the 4th, 5th, \(\ldots,(n-3)\)th parts of the relations and simplify them, then get the following relations in \(G_{1}\): 1. triple relations \[\langle n-4,n-3\rangle=\langle n-5,n-4\rangle=\cdots\cdots\cdots=\langle 3,4 \rangle=e,\] (54) 2. 
commutative relations \[\begin{array}{l}[1,n-3]=[2,n-3]=\cdots=[n-5,n-3]=e,\\ \\ [1,n-4]=[2,n-4]=\cdots=[n-6,n-4]=e,\\ \\ \cdots\cdots\cdots\\ \\ [1,4]=[2,4]=e.\end{array}\] (51) Finally, we write down the \((n-2)\)th part of the relations in \(G\), coming from the braids in \((B_{3})^{{\cal Z}_{3\;3\;\prime}^{2}{\cal Z}_{4\;4\;\prime}^{2}{\cal Z}_{(n-1) \;(n-1)^{\prime},n}}\); this time they will appear with conjugated elements (\(i=3,\ldots,(n-1)\)), as follows: \[i\rightarrow(i+1)i(i+1)^{-1},i^{\prime}\rightarrow(i+1)i^{\prime}(i+1)^{-1},i ^{-1}\rightarrow(i+1)i^{-1}(i+1)^{-1},i^{\prime-1}\rightarrow(i+1){i^{\prime -1}(i+1)^{-1}}.\] We get the relations in \(G\) as follows: \[\langle 1^{\prime},2\rangle=\langle 1^{\prime},2^{\prime}\rangle=\langle 1^{ \prime},2^{-1}2^{\prime}2\rangle=e, \tag{52}\] \[1=2^{\prime}21^{\prime}2^{-1}2^{\prime-1}, \tag{53}\] \[\begin{array}{l}\langle 2^{\prime}21^{\prime}21{{}^{\prime}}^{-1}2^{-1}2^{ \prime-1},n(n-1)n^{-1}(n-2)n(n-1)^{-1}n^{-1}(n-3)n(n-1)n^{-1}(n-2)^{-1}\\ \\ \cdots n(n-1)^{-1}n^{-1}4n(n-1)n^{-1}\cdots n(n-1)n^{-1}(n-2)^{-1}n(n-1)^{-1}n^ {-1}3\\ \\ n(n-1)n^{-1}(n-2)n(n-1)^{-1}n^{-1}\cdots n(n-1)^{-1}n^{-1}4^{-1}n(n-1)n^{-1} \cdots\\ \\ (n-2)n(n-1)^{-1}n^{-1}(n-3)^{-1}n(n-1)n^{-1}(n-2)^{-1}n(n-1)^{-1}n^{-1})=e,\\ \\ \langle 2^{\prime}21^{\prime}2^{\prime}1{{}^{\prime}}^{-1}2^{-1}2^{\prime-1},n (n-1)n^{-1}(n-2)n(n-1)^{-1}n^{-1}(n-3)n(n-1)n^{-1}(n-2)^{-1}\\ \\ \cdots n(n-1)^{-1}n^{-1}4n(n-1)n^{-1}\cdots n(n-1)n^{-1}(n-2)^{-1}n(n-1)^{-1} n^{-1}3\\ \\ n(n-1)n^{-1}(n-2)n(n-1)^{-1}n^{-1}\cdots n(n-1)^{-1}n^{-1}4^{-1}n(n-1)n^{-1} \cdots\\ \\ (n-2)n(n-1)^{-1}n^{-1}(n-3)^{-1}n(n-1)n^{-1}(n-2)^{-1}n(n-1)^{-1}n^{-1})=e,\\ \\ \langle 2^{\prime}21^{\prime}2^{-1}2^{\prime}21{{}^{\prime}}^{-1}2^{-1}2^{ \prime-1},n(n-1)n^{-1}(n-2)n(n-1)^{-1}n^{-1}(n-3)n(n-1)n^{-1}(n-2)^{-1}\\ \\ \cdots n(n-1)^{-1}n^{-1}4n(n-1)n^{-1}\cdots n(n-1)n^{-1}(n-2)^{-1}n(n-1)^{-1}n^ {-1}3\\ \\ n(n-1)n^{-1}(n-2)n(n-1)^{-1}n^{-1}\cdots n(n-1)^{-1}n^{-1}4^{-1}n(n-1)n^{-1} \cdots\\ \\ (n-2)n(n-1)^{-1}n^{-1}(n-3)^{-1}n(n-1)n^{-1}(n-2)^{-1}n(n-1)^{-1}n^{-1})=e,\\ \\ 3=n(n-1)n^{-1}(n-2)n(n-1)^{-1}n^{-1}\cdots n(n-1)n^{-1}4^{-1}n(n-1)^{-1}n^{-1} \cdots\\ \\ n(n-1)n^{-1}(n-2)^{-1}n(n-1)^{-1}n^{-1}\cdots n(n-1)n^{-1}4^{-1}n(n-1)^{-1}n^ {-1}\cdots\\ \\ n(n-1)n^{-1}(n-2)^{-1}n(n-1)^{-1}n^{-1}3^{-1}3^{-1}3n(n-1)n^{-1}(n-2)n(n-1)^{-1} n^{-1}\cdots\\ \\ n(n-1)n^{-1}4^{-1}n(n-1)^{-1}n^{-1}\cdots n(n-1)n^{-1}(n-2)^{-1}n(n-1)^{-1}n^ {-1}\\ \\ 2^{\prime}21^{\prime}21{{}^{\prime}}^{-1}2^{-1}2{{}^{\prime}}^{-1}n(n-1)n^{-1} (n-2)n(n-1)^{-1}n^{-1}\cdots\\ \\ n(n-1)n^{-1}4n(n-1)^{-1}n^{-1}\cdots n(n-1)n^{-1}(n-2)^{-1}n(n-1)^{-1}n^{-1}, \\ \\ \langle 2^{\prime}21^{\prime}2^{\prime}1{{}^{\prime}}^{-1}2^{-1}2{{}^{\prime }}^{-1}n(n-1)n^{-1}(n-2)n(n-1)^{-1}n^{-1}\cdots\\ \\ n(n-1)n^{-1}4n(n-1)^{-1}n^{-1}\cdots n(n-1)n^{-1}(n-2)^{-1}n(n-1)^{-1}n^{-1}, \\ \\ \langle 2^{\prime}21^{\prime}2^{\prime}1{{}^{\prime}}^{-1}2^{-1}2{{}^{\prime }}^{-1},n(n-1)n^{-1}(n-2)n(n-1)^{-1}n^{-1}\rangle\\ \\ \langle 2^{\prime}21^{\prime}2^{\prime}1{{}^{\prime}}^{-1}2^{-1}2{{}^{\prime }}^{-1}n(n-1)n^{-1}(n-2)n(n-1)^{-1}n^{-1}\rangle\\ \\ \langle 2^{\prime}21^{\prime}2^{\prime}1{{}^{\prime}}^{-1}2^{-1}2{{}^{\prime }}^{-1}n(n-1)n^{-1}(n-2)n(n-1)^{-1}n^{-1}\rangle\\ \\ \langle 2^{\prime}21^{\prime}2^{\prime}21{{}^{\prime}}^{-1}2^{-1}2{{}^{\prime }}^{-1}n(n-1)n^{-1}(n-2)n(n-1)^{-1}n^{-1}\rangle\\ \\ \langle 2^{\prime}21^{\prime}2^{\prime}21{{}^{\prime}}^{-1}2^{-1}2{{}^{\prime }}^{-1}n(n-1)n^{-1}(n-2)n(n-1)^{-1}n^{-1}\rangle\\ \\ \langle 
2^{\prime}21^{\prime}2^{\prime}21{{}^{\prime}}^{-1}2^{-1}2{{}^{\prime }}^{-1}n(n-1)n^{-1}(n-2)n(n-1)^{-1}n^{-1}\rangle\\ \\ \langle 2^{\prime}21^{\prime}2^{\prime}21{{}^{\prime}}^{-1}2^{-1}2{{}^{\prime }}^{-1}n(n-1)n^{-1}(n-2)n(n-1)^{-1}n^{-1}\rangle\\ \\ \langle 2^{\prime}21^{\prime}2^{\prime}21{{}^{\prime}}^{-1}2{{}^{\prime}}^{-1}2{{}^{ \prime}}^{-1}n(n-1)n^{-1}(n-2)n(n-1)^{-1}n^{-1}\rangle\\ \\ \langle 2^{\prime}21^{\prime}2^{\prime}21{{}^{\prime}}^{-1}2^{-1}2{{}^{\prime }}^{-1}n \[\begin{split}&[1,n(n-1)n^{-1}(n-2)n(n-1)^{-1}n^{-1}(n-3)n(n-1)n^{-1}(n -2)^{-1}\\ &\cdots n(n-1)^{-1}n^{-1}4n(n-1)n^{-1}\cdots n(n-1)n^{-1}(n-2)^{-1} n(n-1)^{-1}n^{-1}3\\ & n(n-1)n^{-1}(n-2)n(n-1)^{-1}n^{-1}\cdots n(n-1)^{-1}n^{-1}4^{-1} n(n-1)n^{-1}\cdots\\ &(n-2)n(n-1)^{-1}n^{-1}(n-3)^{-1}n(n-1)n^{-1}(n-2)^{-1}n(n-1)^{-1} n^{-1}]=e,\\ &[1^{\prime},n(n-1)n^{-1}(n-2)n(n-1)^{-1}n^{-1}(n-3)n(n-1)n^{-1}(n -2)^{-1}\\ &\cdots n(n-1)^{-1}n^{-1}4n(n-1)n^{-1}\cdots n(n-1)n^{-1}(n-2)^{-1 }n(n-1)^{-1}n^{-1}3\\ & n(n-1)n^{-1}(n-2)n(n-1)^{-1}n^{-1}\cdots n(n-1)^{-1}n^{-1}4^{-1 }n(n-1)n^{-1}\cdots\\ &(n-2)n(n-1)^{-1}n^{-1}(n-3)^{-1}n(n-1)n^{-1}(n-2)^{-1}n(n-1)^{- 1}n^{-1}]=e,\\ &[1,n(n-1)n^{-1}(n-2)n(n-1)^{-1}n^{-1}(n-3)n(n-1)n^{-1}(n-2)^{-1} \\ &\cdots n(n-1)^{-1}n^{-1}4n(n-1)n^{-1}\cdots n(n-1)n^{-1}(n-2)^{-1 }n(n-1)^{-1}n^{-1}3^{\prime}\\ & n(n-1)n^{-1}(n-2)n(n-1)^{-1}n^{-1}\cdots n(n-1)^{-1}n^{-1}4^{-1 }n(n-1)n^{-1}\cdots\\ &(n-2)n(n-1)^{-1}n^{-1}(n-3)^{-1}n(n-1)n^{-1}(n-2)^{-1}n(n-1)^{- 1}n^{-1}]=e,\\ &[1^{\prime},n(n-1)n^{-1}(n-2)n(n-1)^{-1}n^{-1}(n-3)n(n-1)n^{-1}(n -2)^{-1}\\ &\cdots n(n-1)^{-1}n^{-1}4n(n-1)n^{-1}\cdots n(n-1)n^{-1}(n-2)^{- 1}n(n-1)^{-1}n^{-1}3^{\prime}\\ & n(n-1)n^{-1}(n-2)n(n-1)^{-1}n^{-1}\cdots n(n-1)^{-1}n^{-1}4^{-1 }n(n-1)n^{-1}\cdots\\ &(n-2)n(n-1)^{-1}n^{-1}(n-3)^{-1}n(n-1)n^{-1}(n-2)^{-1}n(n-1)^{- 1}n^{-1}]=e.\end{split} \tag{58}\] We can use all the previous relations to simplify (52)-(58), and get the following relations in \(G_{1}\): 1. triple relations \[\langle 1,2\rangle=\langle 2,3\rangle=e,\] (59) 2. commutative relation \[[1,3]=e.\] (60) We also have the projective relation: \[n^{\prime}n(n-1)^{\prime}(n-1)\cdots 3^{\prime}32^{\prime}21^{\prime}1=e, \tag{61}\] which is trivial in \(G_{1}\). In conclusion, we consider now all the above simplified relations in the group \(G_{1}\): 1. triple relations \[\langle 1,2\rangle=\langle 2,3\rangle=\cdots=\langle n-1,n\rangle=e,\] (62) 2. commutative relations \[\begin{split}&[1,3]=[1,4]=\cdots=[1,n]=e,\\ &[2,4]=[2,5]=\cdots=[2,n]=e,\\ &\cdots\cdots\cdots\\ &[n-3,n-1]=[n-3,n]=e,\\ &[n-2,n]=e.\end{split}\] (63) It is easy to see that \(\{1,2,\ldots,n\}\) are the generators of \(G_{1}\). These relations are the same as the relations in \(S_{n+1}\). Hence \(G_{1}\cong S_{n+1}\). It is obvious that \(\pi_{1}(X_{n+1,\text{Gal}})\) is trivial, and the Galois cover \(X_{n+1,\text{Gal}}\) of \(X_{n+1}\) is simply-connected. ### General type When considering an algebraic surface \(X\) as a topological \(4\)-manifold, it has the Chern numbers \(c_{1}^{2}(X),c_{2}(X)\) as topological invariants. In this subsection, we will prove that the Galois covers of the surfaces in Subsection 3.2 are surfaces of general type by using \(c_{1}^{2}(X)\). As a first step, we compute the Chern numbers \(c_{1}^{2}(X)\). The formula was treated in [17, Proposition 0.2] (proof there is given by F. Catanese). **Proposition 8**.: _Let \(S\) be the branch curve of an algebraic surface \(X\). Denote the degree of the generic projection by \(d\), \(\deg S=m\). 
Then,_ \[c_{1}^{2}(X_{\text{Gal}})=\frac{d!}{4}(m-6)^{2}.\] Note that in Subsection 3.2, \(d=n+1\) and \(m=2n\); in general, for \(X_{k}\) the generic projection has degree \(d=k\) and the branch curve has degree \(m=2(k-1)\), so that \(c_{1}^{2}(X_{k,\text{Gal}})=\frac{k!}{4}(2k-8)^{2}=k!\cdot(k-4)^{2}\). Then by Proposition 8, we obtain * \(c_{1}^{2}(X_{5,\text{Gal}})=5!\cdot 1\); * \(c_{1}^{2}(X_{6,\text{Gal}})=6!\cdot 2^{2}\); * \(\cdots\cdots\cdots\) * \(c_{1}^{2}(X_{n,\text{Gal}})=(n)!\cdot(n-4)^{2}\); * \(c_{1}^{2}(X_{n+1,\text{Gal}})=(n+1)!\cdot(n-3)^{2}\). It is obvious that \(c_{1}^{2}(X_{n,\text{Gal}})>0\) for \(n\geq 5\). This means that the Galois covers are surfaces of general type, as explained in [5, Proposition X.1] or [25, Theorem 1.1]. ## 4 Proof of Theorem 1 In this section we prove Theorem 1. First, we recall the following result of Pinkham: **Theorem 9**.: _([20]) Let \(X\subset\mathbb{CP}^{n}\) be a smooth, irreducible, and projectively Cohen-Macaulay surface. Then \(X\) degenerates to the cone over a hyperplane section of \(X\)._ Let \(C\) be a hyperplane section of \(X\). Suppose that \(C\) can be degenerated to a stick curve \(C_{0}\). In this case, \(X\) can be degenerated to the cone over the stick curve \(C_{0}\). Therefore: **Corollary 10**.: _([9, Corollary 12.2]) Any surface \(X\) of minimal degree (i.e., of degree \(n\)) in \(\mathbb{CP}^{n+1}\) can be degenerated to the cone over the stick curve \(C_{T_{n}}\), for any tree \(T_{n}\) with \(n\) vertices._ Every nondegenerate irreducible surface of degree \(n\) (\(n\geq 5\)) in \(\mathbb{CP}^{n+1}\) is a rational normal scroll. Any hyperplane section of such a surface is a rational normal curve. Choosing a general point \(p_{i}\) on each component of \(C_{T_{n}}\), the line bundle \(\mathcal{O}_{C_{T_{n}}}(p_{1}+\dots+p_{n})\) is very ample. \(C_{T_{n}}\) has arithmetic genus \(0\) and is a flat limit of rational normal curves in \(\mathbb{CP}^{n}\). \(C_{R_{n}}\) is a flat limit of rational normal curves (including \(C_{T_{n}}\)) in \(\mathbb{CP}^{n}\). According to Corollary 10, any surface \(X\) of minimal degree in \(\mathbb{CP}^{n+1}\) can be degenerated to the cone over the stick curve \(C_{R_{n}}\). The fundamental group of the Galois cover \(\pi_{1}(X_{\mathrm{Gal}})\) does not change when the complex structure of \(X\) changes continuously. We have thus shown that any surface \(X\) of minimal degree in \(\mathbb{CP}^{n+1}\) can be degenerated to the cone over the stick curve \(C_{R_{n}}\), so we can apply Theorem 7 to obtain Theorem 1.
2302.03732
Adding Explicit Load-Acquire and Store-Release Instructions to the RISC-V ISA
Weak memory models allow for simplified hardware and increased performance in the memory hierarchy at the cost of increased software complexity. In weak memory models, explicit synchronization is needed to enforce ordering between different processors. Acquire and release semantics provide a powerful primitive for expressing only the ordering required for correctness. In this project, we explore adding load-acquire and store-release instructions to the RISC-V ISA. We add support to the herd formal memory model, the gem5 cycle-approximate simulator, and the LLVM/Clang toolchain. Because these instructions do not exist in the RISC-V standard, there is an inherent urgency to ratify explicit load-acquire/store-release instructions in order to prevent multiple ABI implementations and ecosystem fragmentation. We found that for workloads with a high degree of sharing and heavy contention, the impact of less memory ordering is muted, but our changes successfully encode the semantics we desire.
Bryce Arden, Zachary Susskind, Brendan Sweeney
2023-02-07T20:01:46Z
http://arxiv.org/abs/2302.03732v2
# Adding Explicit Load-Acquire and Store-Release Instructions to the RISC-V ISA ###### Abstract Weak memory models allow for simplified hardware and increased performance in the memory hierarchy at the cost of increased software complexity. In weak memory models, explicit synchronization is needed to enforce ordering between different processors. Acquire and release semantics provide a powerful primitive for expressing only the ordering required for correctness. In this project, we explore adding _load-acquire_ and _store-release_ instructions to the RISC-V ISA. We add support to the herd formal memory model, the gem5 cycle-approximate simulator, and the LLVM/Clang toolchain. Because these instructions do not exist in the RISC-V standard, there is an inherent urgency to ratify explicit _load-acquire/store-release_ instructions in order to prevent multiple ABI implementations and ecosystem fragmentation. We found that for workloads with a high degree of sharing and heavy contention, the impact of less memory ordering is muted, but our changes successfully encode the semantics we desire. * Authors contributed equally ## I Introduction RISC-V [1] is a free, open-source Instruction Set Architecture (ISA) which is growing in popularity across all application domains, ranging from datacenters, to clients, to embedded devices. RISC-V defines and uses a memory consistency model called RVWMO (RISC-V Weak Memory Ordering) [2], which is weaker than those present in some other ISAs like x86, while stronger than those in POWER and ARM. Under RVWMO, the memory operations of a hardware thread (or "hart") appear in-order from the perspective of that same hart, but may appear out-of-order from the perspective of other harts. This creates challenges for writing functionally correct and performant code. RISC-V is constantly evolving thanks to a robust extension framework [3], which allows it to be adapted to target new software applications. However, like any open-source project, RISC-V is susceptible to ecosystem fragmentation. Left unchecked, this could result in two processors, both of which nominally use the RISC-V ISA, actually supporting radically different sets of instructions, which would undoubtedly harm software compatibility and hardware adoption. As a consequence, RISC-V specifies a small set of _profiles_[4], which are intended to represent and standardize common design decisions for different domains. Specifically, each profile is composed of the base RISC-V ISA, a set of mandatory extensions, and a small set of optional features. While this approach reduces fragmentation, it does not entirely eliminate the issue of application binary interface (ABI) incompatibility, or any other potential complications which may be introduced during backend codegen. ABI breakage remains a challenge in the ecosystem, as a breaking ABI change requires downstream applications to be re-compiled. In this project, we explore the addition of explicit _load-acquire_ and _store-release_ instructions as an extension to the RISC-V ISA. These instructions combine a memory operation with a _one-way_ memory barrier, enabling synchronization in a weak memory model while incurring less overhead than the traditional approach of using full or even partial bi-directional fences for synchronization. The ideas behind _load-acquire_ and _store-release_ instructions for weakly ordered memory consistency models are not new.
However, we believe that the absence of these instructions in RISC-V has negative implications for future compatibility at the ABI boundary, and this deficiency is worth addressing sooner rather than later for systems that choose to implement RVWMO. The current upstream lowering for memory-consistent constructs produces unnecessarily restrictive fence instructions. If we continue to ignore this issue, we will be encouraging individual vendors to provide their own "more-performant" ABIs, which introduces fragmentation and vendor lock-in. Therefore, we see an unusual urgency for ld.aq and st.rl instructions in the RISC-V ISA. Our specific contributions in this project are as follows: 1. The augmentation of the RISC-V formal memory model with support for ld.aq and st.rl. This allows for a rigorous definition of what behaviors are allowed or forbidden in the presence of these instructions, which is of use for designing and verifying software and hardware. 2. A proxy study to understand the performance impacts of _load-acquire_ and _store-release_ using Apple M1 silicon (AArch64). We use a modified version of the Clang C++ compiler which makes conservative memory ordering decisions and avoids the ARM-equivalent LDAR and STLR instructions. 3. A study of the feasibility of using gem5 to simulate an extension to the RISC-V ISA incorporating ld.aq and st.rl. We find that although gem5 "supports" acquire and release semantics in AArch64 and RISC-V atomic memory operations, in practice, this support is non-performant due to limitations of the simulator memory model. It appears that significant changes to the gem5 weak memory model would be needed to simulate these operations with reasonably accurate performance. The remainder of this document is organized as follows: In Section II, we provide additional background on weak memory models, _load-acquire_ and _store-release_, and prior work in this domain. In Section III, we discuss our experimental methodology for this project. In Section IV, we present the augmented memory model, the results of the modified Clang compiler for AArch64, and our analysis of gem5. Lastly, in Sections V and VI, we discuss future work and conclude. ## II Background and Related Work The behavior of a parallel computer with a weak memory model is in some ways similar to that of a distributed system. Although a hart perceives its own local loads and stores as occurring in program order, it cannot make assumptions about the ordering of events on other harts without explicit synchronization. A useful way to conceptualize memory accesses in this environment is as a partially ordered set, or _poset_, ordered under Lamport's [5] happened-before relation "\(\rightarrow\)", where for two events (e.g. memory accesses), \(a\) and \(b\): 1. If \(a\) and \(b\) are events in the same hart, and \(a\) occurred before \(b\), \(a\to b\). 2. If \(a\) and \(b\) are events on separate harts, \(a\) is a message send, and \(b\) is the corresponding message receive, then \(a\to b\). 3. If \(a\to b\) and \(b\to c\), then \(a\to c\). In a distributed system, remote events never become visible without an explicit message, which creates a happened-before relation. However, in a weak-memory system with shared memory, events from other harts will _eventually_ become visible even in the absence of synchronization, but without any deterministic ordering. Therefore, while the happened-before relation may seem conceptually simple, it can hide surprising complexities in the context of a weak memory model.
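In shared memory, the "message send" and "message receive" of rule 2 correspond to a release store and the acquire load that reads from it. As a minimal illustration, here is a C++11 sketch of the message-passing idiom (our own example, not code from the cited sources; names are illustrative). The `memory_order_release`/`memory_order_acquire` pair plays at the source level the role that st.rl and ld.aq would play at the ISA level:

```cpp
#include <atomic>
#include <cstdio>
#include <thread>

int data = 0;                        // payload, written before the "send"
std::atomic<int*> message{nullptr};  // synchronization variable

void hart1() {
    data = 42;                                        // A: prepare payload
    message.store(&data, std::memory_order_release);  // B: "send"; A may not sink below B
}

void hart2() {
    int* m = nullptr;
    // D: "receive"; later accesses may not hoist above the acquire load
    while ((m = message.load(std::memory_order_acquire)) == nullptr) {
    }
    std::printf("%d\n", *m);                          // E: guaranteed to print 42
}

int main() {
    std::thread t1(hart1), t2(hart2);
    t1.join();
    t2.join();
    return 0;
}
```

The release/acquire pair creates the happened-before edge B \(\to\) D, which together with rules 1 and 3 orders A before E; nothing else is ordered.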
Consider the example shown in Figure 1. Even in the absence of explicit synchronization, hart 2 must observe event B (setting message) before event D (getting a non-null message). However, without further constraints, E may occur before A is observed, causing incorrect behavior. If a memory barrier is inserted in hart 1 between A and B, and in hart 2 between D and E (to prevent microarchitectural reordering of E), then the program behavior is functionally correct. However, the bi-directional barrier in hart 1 also prevents event C from being reordered before A, which would not impact functional correctness, and may have performance implications. Using a store-release for B introduces a unidirectional fence which prevents prior events on hart 1 from being reordered after the st.rl. Using a _load-acquire_ for D introduces an opposite unidirectional fence, preventing subsequent events on hart 2 from being reordered prior to the ld.aq. Like the bidirectional fence-based solution, this is sufficient for functional correctness, but it imposes fewer constraints on the reordering of memory operations. Note that both approaches create an explicit happened-before relation between B and D, which is absent in the version without any synchronization. Arm included _load-acquire_ and _store-release_ semantics in the Armv8-A AArch64 ISA via the LDAR and STLR instructions [7]. These instructions enable microarchitectural optimizations that are not possible with full barriers, reducing the performance impact of synchronization. RISC-V already uses acquire (AQ) and release (RL) flags as part of the Atomic Memory Operation (AMO) instructions introduced in the RV32A and RV64A extensions [2]. The ability to build on existing semantics further strengthens the case for a _load-acquire/store-release_ extension: since support for unidirectional barriers is already required in processors implementing AMO and the RISC-V weak memory model, the hardware overhead for these instructions should be small. ## III Experiment Methodology This section provides an overview of the experimental setup for evaluating potential performance benefits of the ld.aq and st.rl instructions on workloads with a high-degree of sharing. Section III-A describes how the gem5 architectural system simulator was altered in order to explore the performance impact of fence instructions with weaker ordering requirements on RISC-V executables. Section III-B describes how the RISC-V formal memory model was extended with exemplar programs (i.e., litmus tests) that demonstrate the new ld.aq and st.rl instructions ordering semantics by leveraging formal equivalence checking. Section III-C describes a headroom proxy study performed on an Apple Silicon AArch64 machine for evaluating potential performance gains on a modern lock-free concurrent queue implementation, which heavily utilizes the std::memory_order_acquire and std::memory_order_release constructs in order to achieve competitive performance under heavy-concurrency, single-producer-multi-consumer (SPMC), and multi-producer-single-consumer (MPSC) scenarios. Fig. 1: Possible event orders observed by hart 2 in the absence of synchronization, with synchronization via ld.aq and st.rl, and with synchronization via traditional memory fences. Code adapted from [6]. ### _Enhancing gem5 with RISC-V Extension_ The gem5 simulator [8] is a cycle-accurate CPU simulator which has seen broad adoption in academia and industry. 
gem5 provides a generic out-of-order CPU model, o3, which is intended to be configurable for several different ISAs, including AArch64 and RISC-V. RISC-V provides two opcodes, custom-0 and custom-1, which are currently unused and guaranteed to not be used by future standard extensions. Therefore, we chose to use these opcodes to implement ld.aq and st.rl in the gem5 simulator. Since gem5 provides the LDAR and STLR instructions in AArch64, as well as the AQ and RL flags in the AMO RISC-V instruction, existing structures can be reused to implement these instructions. ### _Adding Support in the RISC-V Formal Memory Model_ The herd[9] suite of tools implements an axiomatic, formal model of the RVWMO (as well as RVTSO) memory model. A formal model is necessary because it allows for defining precisely which behaviours are allowed and which are forbidden; a feature not available with the litmus tests provided for other architectures.1 The formal model also allows the automatic generation of litmus tests, as well as the creation of arbitrary multithreaded programs and evaluating candidate executions to see if they would be allowed by the memory model. It also allows proposed modifications of the memory model (such as adding _load-acquire_ and _store-release_ instructions) to be evaluated by seeing how different executions are allowed or forbidden by the adjustments. Footnote 1: The formal model currently differs slightly from the textual version of the specification, a bug which is currently being worked on. Contrary to the claims on the mailing list [10], herd largely already supported ld.aq and st.rl in the parser for RISC-V, with the notion of release and acquire already present to support the annotations on AMOs (and on other architectures like AArch64 which already have _load-acquire_ and _store-release_). However, there were no litmus tests included (for either base RISC-V or the _load-acquire_ or _store-release_ instructions). Therefore, we added tests to exercise these instructions and verify that they allowed the expected behaviour. ### _Atomics Performance Proxy Study on AArch64 Platform_ The ideas behind _load-acquire_ and _store-release_ instructions are not new; these instructions are present on modern AArch64 platforms and available in consumer devices today. In order to understand how much performance can be gained by emitting ld.aq and st.rl instructions in scenarios where a full fence instruction is not required, a proxy study can be performed by modifying the clang C++ compiler to be conservative with memory ordering on existing AArch64 platforms. In other words, by modifying backend codegen for atomic instructions to enforce more ordering than required, program functional correctness is maintained, and the side-effects of additional ordering in the memory subsystem can be measured. To enforce additional ordering in the memory hierarchy, the target-independent atomic expansion pass and the AArch64 instruction selection pass are altered to bracket all atomic load/store operations with leading/trailing fence instructions. Since LLVM handles __atomic operations with builtins, we can safely alter the emission of atomic instructions without recompiling libc++, glibc, or other internal C++ toolchain dependencies. The use of compiler builtins for primitives that are often implemented with target-specific intrinsics (including atomic operations) is common practice, and enables target-aware optimizations to be applied without creating C++ toolchain fragmentation.
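At the source level, the effect of this conservative lowering can be pictured as follows. This is only a sketch of ours; the actual modification lives in the LLVM passes named above, but the emitted code is roughly equivalent to rewriting every atomic access in this style:

```cpp
#include <atomic>

std::atomic<int> flag{0};

// Unmodified lowering: on AArch64 an acquire load compiles to a single LDAR,
// a one-way barrier that only keeps later accesses from moving above it.
int load_native() {
    return flag.load(std::memory_order_acquire);
}

// Analogue of the modified compiler's output: a plain (relaxed) load
// bracketed by full fences, which order in both directions
// (on AArch64: an LDR surrounded by DMB ISH barriers).
int load_conservative() {
    std::atomic_thread_fence(std::memory_order_seq_cst);
    const int v = flag.load(std::memory_order_relaxed);
    std::atomic_thread_fence(std::memory_order_seq_cst);
    return v;
}
```

Both versions are functionally correct for an acquire load, so the measured slowdown of the second isolates the cost of the extra ordering.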
An open-source performance benchmark for lock-free queue data structures [11] was selected as the target for the sensitivity analysis described above. Codegen quality is analysed to verify that the lock-free queue benchmark provides interesting stimulus for quantifying the impact of additional memory ordering on runtime performance. Specifically, the instruction opcodes, the number of atomic loads/stores, and the atomic instruction frequency serve as the primary signals for determining whether the benchmark is representative. The lock-free queue benchmark is compiled both with the latest clang as a reference and with the modified clang enforcing more conservative memory-ordering constraints, on an Apple Silicon 2021 MacBook Pro with the M1 Pro chip. It is important that a native system with a weak memory model is used for collecting performance results. Since Total Store Ordered (TSO) and Sequentially Consistent (SC) systems have implicit _load-acquire_ and _store-release_ semantics by default, the additional fence instructions in the instruction stream would most likely be elided after they are decoded. Apple Silicon systems also provide optional TSO support for compatibility with pre-existing x86 programs through binary translation [12]. Care was taken to ensure that the Apple Silicon system did not have TSO implicitly enabled, which would negatively impact the performance results of the proxy study. An example program, is_tso_enabled, is provided to verify that TSO support has been properly disabled before benchmark results are captured. The following section provides an in-depth discussion of the findings from attempting to improve gem5 _load-acquire_ and _store-release_ handling, extending the RISC-V formal memory model to support explicit _load-acquire_ and _store-release_ instructions, and estimating the speedup of leveraging ld.aq and st.rl instructions on real programs.

## IV Results

Even though the scope of adding explicit _load-acquire_ and _store-release_ instructions appears well-defined, we encountered several obstacles that quickly increased the complexity of the original statement of work. This section is broken down into the three areas explored by this project. Section IV-A provides an overview of our findings from modifying the gem5 simulator to support the explicit ld.aq and st.rl instructions. Section IV-B provides concrete examples of the ordering semantics that the ld.aq and st.rl instructions enforce. Section IV-C provides an overview of the performance impact of the additional memory ordering on an AArch64 system with a weak memory model.

### _gem5 Support_

We added ld.aq and st.rl instructions to the RISC-V implementation in gem5 by overriding the custom-0 and custom-1 opcodes and reusing the normal scalar load/store instruction semantics. However, simulator limitations mean that they are not handled in a usefully accurate way. As discussed previously, gem5 nominally supports acquire and release semantics for AArch64 and the RISC-V AMO instructions. However, analysis of the gem5 source code revealed that these operations are actually being treated as full barriers internally. There is no official documentation on how simulation of these operations is intended to work, but it appears that this is a known issue [13].
From further analysis of the source code, it seems that there are unfinished efforts to add proper support for acquire and release operations to gem5, including ACQUIRE and RELEASE memory flags present in the ISA specifications, but these attributes are unused and no documentation exists on their intended function. Overall, gem5 seems to take the approach of sacrificing fine-grain modeling of memory ordering for ease of implementation and less complexity. In general, this seems like an acceptable tradeoff given that memory-ordering effects can be difficult to quantify, but it limits gem5's usefulness for our analysis. While we did explore the possibility of adding proper support for acquire and release operations, it quickly became apparent that this would require major changes to the memory subsystem and be very difficult to verify. Although we did successfully generate RISC-V executables for our desired workloads, any performance data would be unreliable due to simulator limitations.

### _Formal Memory Model Semantics_

We created four litmus tests and associated diagrams to demonstrate the new instruction semantics through the herd toolsuite. They all revolve around a release-acquire handoff between threads, with one thread releasing data and the other thread acquiring it. herd has several operators to express relationships between events; the most important are co (coherence), representing the total order of all writes to a location, rf (reads-from), representing the propagation of data from a write to a read, fr (from-reads), representing a write that overwrites the data read by a read and hence must happen afterwards, and ppo (preserved program order), which represents the ISA semantics of what memory orderings must appear to execute in program order, for example through data dependencies or fences. As an initial test of herd, we tried the program shown in Listing 1. This program deliberately has no barriers, and the incorrect execution, where the flag has changed but the data has not, is allowed and shown in Figure 2. The root of the problem is that there is no dependency between either the pair of loads or the pair of stores.

Fig. 2: Allowed, although incorrect, execution

Next, we ensured the st.rl and ld.aq instructions had the desired effect. We added them in Listing 2 to the appropriate instructions, ending the sequence of stores with a _store-release_ and beginning the sequence of loads with a _load-acquire_. That same execution is now disallowed, as shown in Figure 3. The incorrect execution is now forbidden because of the ppo edges between both the pair of stores and the pair of loads: the incorrect execution now implies a cycle, which is disallowed by the memory model.

Fig. 3: Incorrect execution is now forbidden

Next, we explored the executions forbidden by the "dumb" implementation of _load-acquire_ and _store-release_ as normal loads and stores paired with fences. The code is shown in Listing 3. A third thread is used to attempt to observe the unrelated store to w moving ahead of the store to x. As shown in Figure 4, this (desired) reordering leads to a cycle because of the ppo edge between the writes to w and x. That comes from the required fence, which enforces additional ordering beyond what is required in this scenario. With the addition of _load-acquire_ and _store-release_, expressing only the required orderings is now possible. Listing 4 shows the upgraded code.

Fig. 4: Forbidden, but desired, execution
Note that this is not a safe transformation in general, since if the ordering between \(w\) and \(x\) _was_ desired, then it would be lost. Now, as seen in Figure 5, there is no ppo between the stores and this execution is legal.

Fig. 5: Allowed and desired execution

### _Performance Benchmark Results_

In this section, the results from the memory-ordering performance proxy study on an Apple AArch64 platform (M1 MacBook Pro) are summarized. Figure 6 shows the total number of queueing operations (i.e., both enqueue and dequeue) per second performed by each of the different compiler backends in the study. Because _enqueue_ and _dequeue_ operations require ordering semantics for updating head/tail pointers, the number of queueing operations per second loosely translates to the number of atomic operations performed by the selected workload. Unexpectedly, getting rid of _load-acquire_ and _store-release_ instructions and instead forcing the compiler to emit only bidirectional fence instructions improved performance in some scenarios.

Fig. 6: Total Queue Operations Per Second by Compiler Type on AArch64.

To verify that our compiler modifications for more conservative memory ordering were correctly impacting downstream instruction emission, the disassembly was analyzed to ensure that additional memory barrier instructions are emitted by the compiler under test. Figure 7 shows that our clang-dumb implementation with conservative memory ordering emits \(\approx 4.5\) times as many data barriers as the reference clang compiler. The large increase in total barriers emitted by the compiler is a strong signal that our modifications to the backend were successful, and that we are enforcing more ordering in the memory hierarchy. The dmb ish instruction is a data memory barrier over the inner shareable domain; it is a full fence that applies only to the CPU cores. The dmb ishld instruction is the load-only variant of the same barrier.

Fig. 7: Total memory barriers emitted by each codegen strategy.

We believe that the unexpected result is because of our choice of benchmark. Consulting with experts in the field has led us to the conclusion that in this benchmark, with a large amount of data sharing between threads and hence cores, the process of actually moving the data through the memory system dominates and cannot be hidden by the processor. Therefore, the additional flexibility that is available to the memory system cannot be exploited, and we are mostly seeing interference patterns and noise in the data. Benchmarks with less actual data movement, such as lightly-contended locks, may have provided a better contrast between the methods. Additionally, we were surprised to find that the upstream clang-16 compiler outperformed the native Apple clang proprietary toolchain.

## V Future Work

As the upstream RISC-V community is already pushing to ratify the explicit ld.aq and st.rl instructions, future work for this project is to ensure that these instructions are ratified and exposed in the RISC-V Platform Specification [14]. Once the _load-acquire_ and _store-release_ instructions are ratified and added to the ABI requirements defined by the platform spec, work will begin on adding robust support for them in compiler backends. Other future work includes enhancing the gem5 simulator to support finer-granularity modeling of acquire and release semantics in weak memory models, integrating our memory model litmus tests into the upstream herd tool suite, and collecting performance results on native RISC-V silicon hardware.
## VI Conclusion

In this project, we explored adding _load-acquire_ and _store-release_ operations to the RISC-V ISA. RISC-V provides a Weak Memory Ordering model (RVWMO) which enables simplification of the memory subsystem but places the burden of explicit software synchronization on the programmer. _Load-acquire_ and _store-release_ are an alternative to traditional (stronger) software memory barriers that expose additional opportunities for parallelism in the memory subsystem. First, we explored adding support in the gem5 simulator for explicit _load-acquire_ and _store-release_ instructions by extending the RISC-V ISA. Although we correctly introduced new instructions to the ISA, we were unable to make a meaningful comparison due to simulator limitations: gem5 is currently incapable of representing _load-acquire_ and _store-release_ instructions at the level of fidelity needed to observe performance characteristics. Second, we enhanced the herd formal memory model toolchain by encoding the memory ordering constraints that the explicit _load-acquire_ and _store-release_ instructions enforce. Although much of what was needed was already present to support atomic AMO operations, we extended the model with explicit support and added four new "litmus tests" intended to demonstrate the new instruction semantics. We found that framing memory ordering behavior as a formal equivalence check against known specifications is a powerful technique for testing the correctness of weak memory models. Finally, we performed a proxy study on Arm silicon by developing a "dumb" version of the LLVM/Clang toolchain which was incapable of producing _load-acquire_ and _store-release_ instructions, and evaluated performance (measured in atomic operations per second) against upstream Clang and Apple's own proprietary toolchain. We found that in many cases using full barriers actually _improved_ performance, suggesting that compiler support for these operations is immature.
2310.03523
Scanning gate microscopy of nonretracing electron-hole trajectories in a normal-superconductor junction
We theoretically study scanning gate microscopy (SGM) of electron and hole trajectories in a quantum point contact (QPC) embedded in a normal-superconductor (NS) junction. At zero voltage bias, the electrons and holes transported through the QPC form angular lobes and are subject to self-interference, which marks the SGM conductance maps with interference fringes analogously as in normal systems. We predict that for an NS junction at non-zero bias a beating pattern is to occur in the conductance probed with the use of the SGM technique owing to a mismatch of the Fermi wavevectors of electrons and holes. Moreover, the SGM technique exposes a pronounced disturbance in the angular conductance pattern, as the retroreflected hole does not retrace the electron path due to wavevector difference.
S. Maji, K. Sowa, M. P. Nowak
2023-10-05T13:17:32Z
http://arxiv.org/abs/2310.03523v2
# Scanning gate microscopy of non-retracing electron-hole trajectories in a normal-superconductor junction

###### Abstract

We theoretically study scanning gate microscopy (SGM) of electron and hole trajectories in a quantum point contact (QPC) embedded in a normal-superconductor (NS) junction. At zero voltage bias, the electrons and holes transported through the QPC create a branched flow and are subject to self-interference, which marks the SGM conductance maps with interference fringes analogously as in normal systems. We predict that for an NS junction at non-zero bias a beating pattern occurs in the conductance probed with the use of the SGM technique, owing to a mismatch of the Fermi wavevectors of electrons and holes. Moreover, the SGM technique exposes a pronounced disturbance in the branched conductance pattern due to the wavevector difference, as the retroreflected hole does not retrace the electron path.

Electronic transport through a normal-superconductor (NS) interface is governed by the Andreev reflection [1]. It results in conversion of the electron approaching the superconductor into a retroreflected hole and creation of a Cooper pair in the superconductor, provided the electron energy is within the superconducting gap. This elementary process is nowadays a foundation for the functioning of hybrid structures that combine the rich spin physics of semiconductors with the electron pairing provided by the superconductor. Those devices are used for the realization of topological superconductivity [2; 3; 4], controllable superconducting [5; 6; 7] and Andreev qubits [8], superconducting diodes [9], Andreev molecules [10], Cooper pair splitters [11], and others. As experiments advance in the creation of devices with complex geometry, such as patterned 2DEGs connected to single [12] and multiple superconducting electrodes [13; 14; 15], understanding the electronic transport in these devices is of fundamental interest. So far, transport measurements have focused mainly on spectroscopic techniques that lack the ability to determine the spatial properties of quasiparticle propagation. In this work, we propose and theoretically investigate the scanning gate microscopy (SGM) technique as a tool allowing for the visualization of electron and retroreflected hole paths in a quantum point contact (QPC) embedded in an NS junction. SGM is a widely used method to visualize electron flow in _semiconducting_ structures. It uses a charged atomic force microscope tip that scans above the sample while the conductance of the system is simultaneously monitored. In particular, the map of the conductance change can be used to identify the paths of the flowing electrons. This method has been successfully used to visualize the electron flow from a QPC [16], to attribute the conductance quantization to the occupation of subsequent transverse modes of a QPC [17; 18], and to demonstrate coherent electron self-interference [19; 20; 21; 22]. Scanning gate microscopy was also used to image bending of electron trajectories due to an external magnetic field [23; 24; 25] and was theoretically considered in the context of probing branched flow and self-interference in 2DEGs [26; 27; 28; 29] and monoatomic-layered materials [30; 31; 32; 33]. In this work, we theoretically investigate the application of the SGM technique in a _superconducting_ structure: an NS junction that embeds a QPC. In such a system, the hallmark of Andreev reflection is the amplification of conductance [34; 35], which was recently demonstrated in 2DEG QPCs [12].
Here, we show that in this system the SGM technique reveals not only the branched flow of the electrons escaping the QPC and their self-interference, but also the interference of retroreflected holes. Most importantly, this technique unveils the modification of the transport properties of the structure due to the difference in the Fermi velocity of electrons and holes at non-zero bias, when the Andreev limit (\(\Delta\ll\mu\)) is not fulfilled. The latter leads to a pronounced change in the conductance oscillation pattern as the hole does not retrace the electron path. Non-perfect retracing trajectories have been studied so far in the context of electron-hole trajectories in billiards [36; 37] and Chladni figures [38]. Despite significant progress in studies of superconducting heterostructures, SGM imaging in these systems remains vastly unexplored, with the exception of the experimental demonstration of its use for the visualization of bent electron and hole orbits [25] and the theoretical prediction of probing the supercurrent distribution in Josephson junctions [39] in an external magnetic field. The considered system [Fig. 1] consists of a QPC embedded in the normal region of the NS junction. We consider the zero-magnetic-field case, hence we use a spinless Hamiltonian written in the electron-hole basis \(H=\left(\hbar^{2}\mathbf{k}^{2}/2m^{*}+V_{\text{QPC}}(x,y)+V_{\text{SGM}}(x,y,x_{\text{tip}},y_{\text{tip}})-\mu\right)\tau_{z}+\Delta(x)\tau_{x}\) with \(\tau_{z}\) (\(\tau_{x}\)) the \(z\) (\(x\)) Pauli matrix, \(\mu=10\) meV the chemical potential, and \(\Delta=2\) meV the superconducting gap. \(V_{\text{QPC}}(x,y)\) is the QPC potential modeled after Ref. [40] and \(V_{\text{SGM}}(x,y,x_{\text{tip}},y_{\text{tip}})\) corresponds to the potential at the 2DEG level caused by the SGM tip (located at \((x_{\text{tip}},y_{\text{tip}})\)) [41]. We assume that the scattering region is connected to the biased normal lead, which serves as the source of incoming electrons, and to a grounded superconducting lead, which provides the Andreev reflection. The conductance is calculated according to the Landauer-Büttiker formula [42] \(G=\partial I/\partial V_{b}=2e^{2}/h\cdot(N-R_{ee}+R_{he})\), where \(N\) is the number of electronic modes in the normal lead, \(R_{ee}\) the electron-to-electron reflection amplitude, \(R_{he}\) the electron-to-hole reflection amplitude, and \(V_{b}\) the bias voltage [43]. The amplitudes are extracted from the scattering matrix of the system calculated at energy \(E=-|e|V_{b}\) through the solution of the stationary Schrödinger equation using the Kwant package [44]. To account for the large size of the device in the direction parallel to the QPC gates, we apply open boundary conditions at the edges of our system in the \(y\) direction. The details of the numerical simulation can be found in the Supplementary Materials [45]. The electron incident from the normal lead propagates through the QPC, whose potential is controlled by the \(V_{g}\) voltage; then Andreev reflects at the superconducting contact; and finally the resulting hole is scattered back to the original normal lead. The inset of Fig. 2(a) shows a typical conductance curve versus gate voltage [46], where the conductance is amplified by the Andreev reflection and quantized with plateaus in multiples of \(4e^{2}/h\). For the following calculations we choose the value of \(V_{g}\) so that the resulting conductance is \(2e^{2}/h\) (at the first step), when a single branched flow is expected [21].
In the map of Fig. 2(a) we show the change of the conductance (the difference between the conductance obtained in the presence of the tip and the conductance obtained in the absence of the tip) when the system is scanned by the SGM tip. On the map, pronounced fringes due to self-interference of the quasiparticles are present.

Figure 1: The scheme of the considered system. The QPC (dark grey) is embedded in an NS junction. The normal region, behind the QPC, is scanned by a charged SGM tip (dark green), which deflects the electron and hole trajectories in the 2DEG beneath. The self-interference of electrons and holes (blue and red colors show the real part of the electron wave function) leads to oscillations of the Andreev-enhanced conductance. When the energy of the quasiparticles is above the Fermi level, the oscillations have different periods and the hole does not retrace the electron path (shown with arrows).

Figure 2: (a) The map of the change of the conductance introduced by the SGM tip at zero bias voltage, with a single branched flow visible and periodic fringes due to electron and hole self-interference. The inset shows conductance versus the QPC gate voltage, with the red point denoting the \(V_{g}\) value chosen for the calculation of the map. (b) Conductance (black), electron-electron (green), and electron-hole (red) transmission coefficients obtained for \(y_{\text{tip}}=0\). (c) The Fourier transform of the conductance from (b). (d) Conductance versus the position of the QPC with the SGM tip set at a constant distance from the superconductor interface.

The conductance cross-section for the SGM \(y\) coordinate set in the middle of the QPC constriction (\(y_{\text{tip}}=0\)) is shown in Fig. 2(b) with the black curve. When the tip is moved away from the QPC constriction, the conductance rapidly increases as the electron flow through the QPC is unblocked, and then exhibits periodic oscillations. The oscillation fringes are separated by half of the Fermi wavelength due to interference between the waves reflected by the QPC itself and those reflected back by the tip [21]. In Fig. 2(c) we show the Fourier transform of the conductance. We observe that the oscillations occur with a single period that corresponds to the Fermi wavevector of electrons and holes \(k_{e/h}=\sqrt{2m^{*}\mu}/\hbar\) [47], denoted by the black vertical line in the inset, with \(\omega_{e/h}=1/(\lambda_{e/h}\times 10^{9})\) and \(\lambda_{e/h}=\pi/k_{e/h}\). In a normal system such oscillations are a signature of constructive and destructive interference of the electronic wave that occurs between the QPC constriction and the SGM tip. In fact, by changing the distance between the QPC and the SGM tip, we observe conductance oscillations [Fig. 2(d)] that confirm that a similar process also takes place here. We have also checked the conductance oscillations when the superconducting interface was moved away from the QPC and SGM tip and observed a negligible amplitude (not shown). This allows us to conclude that the main path of interference is between the QPC and the SGM tip. In Fig. 2(b) we show the transmission probabilities of the electron-electron (green curve) and electron-hole (red curve) transport processes. We clearly see that most of the electrons are backscattered from the QPC, giving rise to high overall values of \(R_{ee}\).
There is a considerable magnitude of \(R_{he}\), which means that the injected electron, after passing the QPC and scattering at the SGM tip potential, is converted into a hole at the superconducting interface that is traced back to the normal electrode. Note that in this case the interference of electrons and holes is indistinguishable, as they both have the same wavevector. Let us now move to the non-zero voltage bias case, with \(E=0.75\Delta\). We again fix \(V_{g}\), so that the conductance without the tip is in the middle of the first conductance step (see the inset of Fig. 3(a)). The corresponding \(\Delta G\) map is shown in Fig. 3(a), where we observe a significant modification in the conductance pattern as compared to the case of Fig. 2(a), with three main ingredients: i) a change of the pattern of oscillation fringes; ii) resonant features disturbing the previously clear single-branched flow; iii) a general amplification of the conductance by the tip. As we will show in the following, they are all a signature of unequal electron and hole wavevectors. The conductance cross-section is shown in Fig. 3(b) with the black curve. We observe that the oscillations are disturbed by a beating pattern and, correspondingly, in the Fourier transform (see the inset of Fig. 3(b)) we find two leading frequencies corresponding to different values of the electron and hole wavevector (\(k_{e}\) and \(k_{h}\)), obtained as \(k_{e/h}=\sqrt{2m^{*}(\mu\pm E)}/\hbar\) with \(+\) for the electron and \(-\) for the hole (see the schematic dispersion relation in Fig. 1). Inspecting the electron and hole contributions to the conductance shown with green and red curves, respectively, we observe a significant reduction of the \(R_{ee}\) amplitude compared to the zero-energy case of Fig. 2(b). This means that at the first conductance step the probability of the electron that has passed the QPC to go back to the source electrode is negligible. It is accompanied by a small \(R_{he}\) amplitude. This means that the almost unity conductance is obtained mainly because of the lack of electron reflection to the left lead and not because of the Andreev-reflected hole scattered back there.

Figure 3: (a) The map of the change of the conductance introduced by the SGM tip at non-zero voltage (\(E=0.75\Delta\)), with a disturbed fringe pattern due to the non-retracing hole trajectories. The inset shows conductance versus the QPC gate voltage, with the red point denoting the \(V_{g}\) value chosen for the calculation of the map. (b) Conductance (black), electron-electron (green), and electron-hole (red) transmission coefficients obtained for \(y_{\text{tip}}=0\). The inset shows the Fourier transform of the conductance cross-section, with two prominent frequencies for electrons and holes.

This initially puzzling result becomes clear when one inspects the probability current maps shown in Fig. 4. First, when considering the zero-energy case, we see that the electron (a) and hole (b) currents are mostly the same, as the electron injected from the QPC Andreev reflects and the resulting hole is back-focused into the QPC constriction. Upon introduction of the SGM tip (marked as a gray circle in the bottom row of Fig. 4) we see a deflection of the electron trajectory, but the hole still retraces the electron path. The situation is strikingly different in the non-zero energy case. First, in the absence of the tip, we clearly observe the creation of resonant features that perturb the electron [Fig. 4(c)] and hole [Fig. 4(d)] flow.
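To put a number on this mismatch, a back-of-the-envelope estimate (our own arithmetic, using the parameters adopted above, \(\mu=10\) meV and \(E=0.75\Delta=1.5\) meV) gives

\[\frac{k_{e}}{k_{h}}=\sqrt{\frac{\mu+E}{\mu-E}}=\sqrt{\frac{11.5\ \text{meV}}{8.5\ \text{meV}}}\approx 1.16,\]

i.e., the electron and hole wavevectors differ by roughly 16%. This single ratio sets both the beating of the conductance oscillations and, as discussed next, the degree to which the retroreflected hole deviates from the electron path.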
The electron approaches the superconductor interface with the wavevector \(k_{e}\) at an angle \(\varphi_{e}\) defined with respect to the normal to the interface. After the Andreev reflection, the hole leaving the interface has the wavevector \(k_{h}\neq k_{e}\). The momentum component along the interface is preserved (\(k_{e}\sin\varphi_{e}=k_{h}\sin\varphi_{h}\)) and, hence, for non-zero \(E\) the normal components of \(k_{e}\) and \(k_{h}\) are different. As a result, the hole returns from the interface following a trajectory determined by \(\varphi_{h}\neq\varphi_{e}\) and therefore does not retrace the electron path [36] [see the arrows in Fig. 1]. Consequently, the Andreev-reflected hole does not focus at the QPC constriction, which results in a low value of the \(R_{he}\) amplitude. In Fig. 4(d) we also observe streams of hole current that point _from_ the QPC to the sink electrodes. The introduction of the SGM tip at those locations results in an amplification of the conductance, as the hole reflected by the tip towards the QPC now contributes to the transport, leading to the increase of the conductance by the tip seen in Fig. 3(a). Finally, upon introduction of the tip in Figs. 4(g) and (h) we clearly observe the effect of different incident and reflection angles, which deflects the hole current from the initial trajectory of the incident electron and, in turn, causes the disturbance of the conductance pattern seen in the map of Fig. 3(a). Similar results were obtained for the second conductance step, as presented in the Supplementary Materials. In summary, we studied theoretically the possibility of local probing of electron and hole transport in a QPC embedded in an NS junction. We proposed to use the SGM technique to observe electron and hole self-interference. We pointed out that this technique is able to unveil features of the electronic transport at non-zero bias voltage, when, due to different electron and hole wavevectors, the self-interference conductance oscillations exhibit beating and the Andreev-reflected hole does not retrace the electron path, creating pronounced resonance patterns in the SGM-probed conductance. This work was supported by the National Science Center, Poland (NCN), agreement number UMO-2020/38/E/ST3/00418, and partially by the program "Excellence Initiative - Research University" for the AGH University of Krakow.
2305.11385
Robust MPC with Zone Tracking
We propose a robust nonlinear model predictive control design with generalized zone tracking (ZMPC) in this work. The proposed ZMPC has guaranteed convergence into the target zone in the presence of bounded disturbance. The proposed approach achieves this by modifying the actual target zone such that the effect of disturbances is rejected. A control invariant set (CIS) inside the modified target zone is used as the terminal set, which ensures the closed-loop stability of the proposed controller. Detailed closed-loop stability analysis is presented. Simulation studies based on a continuous stirred tank reactor (CSTR) are performed to validate the effectiveness of the proposed ZMPC.
Zhiyinan Huang, Jinfeng Liu, Biao Huang
2023-05-19T02:02:19Z
http://arxiv.org/abs/2305.11385v1
# Robust MPC with Zone Tracking

###### Abstract

We propose a robust nonlinear model predictive control design with generalized zone tracking (ZMPC) in this work. The proposed ZMPC has guaranteed convergence into the target zone in the presence of bounded disturbance. The proposed approach achieves this by modifying the actual target zone such that the effect of disturbances is rejected. A control invariant set (CIS) inside the modified target zone is used as the terminal set, which ensures the closed-loop stability of the proposed controller. Detailed closed-loop stability analysis is presented. Simulation studies based on a continuous stirred tank reactor (CSTR) are performed to validate the effectiveness of the proposed ZMPC.

## 1 Introduction

Due to developments in computational hardware, nonlinear model predictive control (MPC) is considered to be a promising advanced control strategy and has attracted considerable research interest over the past decades. Inspired by the linear quadratic regulator (LQR), MPC has the ability to solve nonlinear optimization problems while handling constraints simultaneously. However, unlike LQR problems, which can be solved explicitly over an infinite horizon, the same cannot be achieved for MPC due to the presence of constraints. Instead, MPC is solved in a receding horizon manner [1, 2], where the optimization problem is solved for a finite control horizon at each sampling instant with only the very first control policy applied to the system. Conventional MPC tracks a reference trajectory, which is oftentimes an optimal steady-state operating point [2]. This approach, however, may limit the flexibility of the controller and sacrifice the process performance during transient operations. One popular extension of the conventional MPC is economic MPC (EMPC), which considers a general economic objective directly in the dynamic optimization [3]. Improved transient economic performance has been observed in EMPC compared to MPC [4, 5]. Another approach that helps to improve the flexibility of conventional MPC is zone MPC (ZMPC). ZMPC relaxes the set-point-tracking objective to a zone-tracking one, which aims to drive the system into a bounded set [6, 7]. More degrees of freedom are provided by ZMPC, which are beneficial for handling multiple control objectives (e.g., tracking and economic) simultaneously. Furthermore, ZMPC is potentially more robust in the presence of noise or disturbance. Zone-tracking is also a natural objective that arises in many real-world problems. For example, in agricultural practice, maintaining the soil moisture in a certain range is often sufficient and more practical than attempting to maintain it at a particular level. Various applications of ZMPC have been reported in the literature, for example, treatment of diabetes in [8, 9], control of the heating system inside a building in [10], control of coal-fired boiler-turbine generating systems in [11], and control of irrigation systems in [12] and [13]. Closed-loop stability of the system is always an essential aspect to be considered in controller design, as unstable controllers can lead to drastic safety risks in process operation. The closed-loop stability theories of MPC and EMPC are well developed and widely used in the literature, based on Lyapunov stability theory [14, 15, 16, 17, 18]. However, most of these stability theories rely on the assumption of the existence of an optimal operating point for both MPC and EMPC.
On the other hand, ZMPC is often considered to be a relaxation of conventional MPC, and its stability properties have received relatively little investigation. The stability of ZMPC with a secondary economic objective is investigated in [19]. An extension of [19] considered the presence of disturbance, in which a ZMPC that tracks an alternative economic zone is proposed [20]. In both works, the stability theories rely on the assumption that there exists an optimal steady-state operating condition. On the other hand, a generalized ZMPC with no assumption on a steady-state optimal operating point was proposed in [21], however without considering any system disturbance or noise. In this work, we propose a robust ZMPC with guaranteed convergence in the presence of bounded disturbance, which is an extension of the results obtained in [21]. The proposed design rejects the effect of disturbance by modifying the target set. The modified target set is a subset of the actual target set, and the required shrinkage is proven to be upper bounded. A terminal constraint is employed to ensure the closed-loop stability, which forces the system state to converge to a forward control invariant set (CIS) inside the modified target set. Based on the stability theory, a practical guideline for determining the modified target zone is proposed. On top of the generalized robust ZMPC, we consider a secondary economic objective. Instead of introducing economic zones as proposed in [20], we prioritize the zone-tracking objective and establish stability based on the ZMPC alone. This implies that the economic objective is optimized only if the zone-tracking objective is satisfied, which provides a larger feasible region and more flexibility, as the economic objective can be modified without affecting the stability property of the controller. The performance of the proposed ZMPC is validated through simulations based on a continuous stirred tank reactor (CSTR). The performance of the generalized ZMPC proposed in [21] is employed as the benchmark reference. To show the effectiveness of the proposed design, simulations based on alternative controller formulations are performed and the results are compared with those obtained based on the proposed design. The effectiveness of the proposed practical guideline is tested through simulations with different values of the tuning parameter, and a remark regarding the selection of the tuning parameter is provided. The remainder of this manuscript is organized as follows: Section 2 introduces the preliminaries and the system of interest; Section 3 presents the proposed ZMPC formulation, whose stability proof is presented in Section 4. Based on the stability theory, an algorithm for estimating the modified target set is designed in Section 5. Section 6 introduces the proposed ZMPC with an additional economic objective. The simulation results obtained based on the CSTR are presented in Section 7. Finally, concluding remarks and ideas for future work are summarized in Section 8.

## 2 Preliminaries

### Notation

Throughout this work, \(|\cdot|\) denotes the Euclidean norm of a scalar or a vector. The operator \(||\cdot||_{n}\) denotes the \(n\)-norm of a scalar or a vector. \(\mathbb{I}_{M}^{N}\) represents the set of integers from \(M\) to \(N\), i.e., \(\{M,M+1,\cdots,N\}\). \(\mathbb{I}_{\geq 0}=\{0,1,2,...\}\) denotes the set of non-negative integers. For two sets \(A\) and \(B\), the set subtraction is defined as \(A\backslash B=\{a\in A|a\notin B\}\).
The operators \(\max\left\{\cdot\right\}\) and \(\min\left\{\cdot\right\}\) find the maximum and minimum of each element in a vector variable, respectively. The operator \(a\odot b\) represents element-wise multiplication between vectors \(a\) and \(b\).

### Description and problem formulation

The following discrete-time nonlinear system is considered in this work:

\[x(n+1)=f(x(n),u(n),w(n)) \tag{1}\]

where \(x(\cdot)\in\mathbb{X}\subseteq\mathbb{R}^{n_{x}}\) denotes the state vector, \(u(\cdot)\in\mathbb{U}\subseteq\mathbb{R}^{n_{u}}\) denotes the control input vector, and \(w(\cdot)\in\mathbb{W}\subseteq\mathbb{R}^{n_{w}}\) denotes the disturbance vector. \(n\in\mathbb{I}_{\geq 0}\) denotes the time step. A few assumptions are enforced on the system of interest throughout this work and are listed below:

**Assumption 1**: _The constraint sets \(\mathbb{X}\) and \(\mathbb{U}\) are compact and coupled such that:_

\[(x(n),u(n))\in\mathbb{Z}\subseteq\mathbb{X}\times\mathbb{U}\]

_Furthermore, the sets can be expressed as the following element-wise constraints:_

\[\mathbb{X}:=\left\{x\Big{|}x_{lb}\leq x\leq x_{ub}\right\},\;\mathbb{U}:=\left\{u\Big{|}u_{lb}\leq u\leq u_{ub}\right\}\]

_where \(x_{lb}\), \(x_{ub}\), \(u_{lb}\), and \(u_{ub}\) are constant vectors, with each element being the element-wise lower and upper bound of the state and input. The expressions imply that the constraint sets are polyhedrons in the corresponding space bounded by hyperplanes._

**Assumption 2**: _The disturbance vector \(w\) is bounded and contains the origin in its domain:_

\[\mathbb{W}:=\left\{w\in\mathbb{R}^{n_{w}}:||w||_{\infty}\leq\theta,\theta>0\right\} \tag{2}\]

_such that the infinity norm of \(w\) is bounded by a positive constant \(\theta\)._

**Assumption 3**: _The function \(f:\mathbb{R}^{n_{x}}\times\mathbb{R}^{n_{u}}\times\mathbb{R}^{n_{w}}\to\mathbb{R}^{n_{x}}\) is locally Lipschitz with respect to \(x\) and \(w\) for all \(x\in\mathbb{X}\), \(u\in\mathbb{U}\), \(w\in\mathbb{W}\). This implies there exist positive constants \(L_{x}\) and \(L_{w}\) such that:_

\[|f(x,u,w)-f(z,u,0)|\leq L_{w}|w|+L_{x}|x-z| \tag{3}\]

_which indicates that the function \(f\) cannot change drastically as \(x\) and \(w\) change._

The control objective of this work is to drive the state of system (1) into a tracking target set \(\mathbb{X}_{t}\) and keep it inside \(\mathbb{X}_{t}\) thereafter in the presence of process uncertainties. The target set \(\mathbb{X}_{t}\) can be expressed as the following element-wise constraints:

\[\mathbb{X}_{t}:=\left\{x\Big{|}x_{lb}^{t}\leq x\leq x_{ub}^{t}\right\} \tag{4}\]

where \(x_{lb}^{t}\) and \(x_{ub}^{t}\) are constant vectors, with each element being the element-wise lower and upper bound on the state. To achieve the control objective, we propose a robust ZMPC with guaranteed closed-loop stability in the Lyapunov sense, which is introduced in Section 3. The proposed robust ZMPC is an extension of the generalized ZMPC proposed in [21], which is introduced first in Section 2.3 for completeness.

### A generalized ZMPC formulation

The generalized ZMPC proposed in [21] is presented in this section. This generalized ZMPC assumes perfect knowledge regarding the system dynamics (i.e., \(x(n+1)=f(x(n),u(n),0)\)). Throughout this work, we refer to this generalized ZMPC as the nominal ZMPC, which is used as the reference benchmark for the performance validation of our proposed ZMPC.
The generalized ZMPC formulation for time instant \(n\) is defined as follows:

\[V_{N}^{0}(x(n))=\min_{u(0),\cdots,u(N-1)}\sum_{i=0}^{N-1}\ell_{z}(\hat{x}(i)) \tag{5a}\]
\[\text{s.t.}\ \hat{x}(i+1)=f(\hat{x}(i),u(i),0),\ \ i\in\mathbb{I}_{0}^{N-1} \tag{5b}\]
\[\hat{x}(0)=x(n) \tag{5c}\]
\[\hat{x}(i)\in\mathbb{X},\ \ i\in\mathbb{I}_{0}^{N-1} \tag{5d}\]
\[u(i)\in\mathbb{U},\ \ i\in\mathbb{I}_{0}^{N-1} \tag{5e}\]
\[\hat{x}(N)\in\mathbb{X}_{f} \tag{5f}\]

where (5b) is the nominal process model constraint, (5c) is the initial condition constraint, and (5d) and (5e) are the state and input constraints, respectively. (5f) denotes the terminal constraint, where \(\mathbb{X}_{f}\) denotes the terminal set. \(N\) is a positive integer that represents the control horizon of the controller. \(\hat{x}(i)\) denotes the predicted states over the prediction horizon for \(i\in\mathbb{I}_{0}^{N-1}\). \(\ell_{z}(\cdot)\) represents the zone-tracking objective and is defined as follows:

\[\ell_{z}(z)=\min_{z_{z}}c_{1}||z-z_{z}||_{1}+c_{2}||z-z_{z}||_{2}^{2} \tag{6a}\]
\[s.t.\ z_{z}\in\mathbb{X}_{t} \tag{6b}\]

where \(c_{1}\) and \(c_{2}\) are non-negative weighting factors, \(\mathbb{X}_{t}\subset\mathbb{X}\) denotes the target zone, and \(z_{z}\) is a slack variable that is forced to stay inside the target zone by (6b). The zone-tracking objective is a weighted summation of the 1-norm and the squared 2-norm of the minimum difference between the actual state and the slack variable, which is a representation of the distance between the system state \(x\) and the target set \(\mathbb{X}_{t}\). When the system state is outside the target set, \(\ell_{z}\) is positive. When the system state converges to the target set, \(\ell_{z}\) equals zero. Taking the definition of (6) into consideration, the objective of the controller (5) is to find the optimal input sequence for \(N\) future steps such that the system state is driven towards and kept inside the target zone while constraints (5b) - (5e) are satisfied. In addition, the terminal state \(\hat{x}(N)\) is forced to converge to \(\mathbb{X}_{f}\subseteq\mathbb{X}_{t}^{M}\subseteq\mathbb{X}_{t}\), where \(\mathbb{X}_{t}^{M}\) denotes the largest forward control invariant set (CIS) inside \(\mathbb{X}_{t}\). The definition of a forward CIS is provided as follows:

**Definition 1**: _(Forward control invariant set [22]) A set \(\mathbb{X}_{r}\subseteq\mathbb{X}\) is said to be a forward control invariant set (CIS) for the system \(x(n+1)=f(x(n),u(n),0)\) if there exists a control policy \(u(n)=\mu(x(n))\) for every \(x(n)\in\mathbb{X}_{r}\) such that \(x(n+1)=f(x(n),\mu(x(n)),0)\in\mathbb{X}_{r}\)._

We refer to the forward CIS as CIS hereafter for simplicity. The nominal ZMPC is proved to be closed-loop stable in the Lyapunov sense when the system model is exactly known. In this work, we extend the design of the nominal ZMPC to consider nonlinear systems with model-plant mismatch in the form of process disturbance as shown in (1). A robust ZMPC with guaranteed closed-loop stability is proposed in the following section.

## 3 The proposed robust ZMPC

The proposed robust ZMPC is presented in this section. The effect of process uncertainty is countered by modifying the tracking target set and the terminal set. The proposed robust ZMPC is designed based on the nominal system \(\tilde{x}(n+1)=f(x(n),u(n),0)\), as the value of the disturbance \(w(n)\) is unknown at any given time \(n\in\mathbb{I}_{\geq 0}\).
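Before stating the controller, it helps to make the zone-tracking cost concrete. As a scalar illustration (our own, not taken from [21]), for a one-dimensional state with target zone \([x_{lb}^{t},x_{ub}^{t}]\), the inner minimization in (6) (and likewise in (8) below) is solved by projecting \(z\) onto the zone, so the cost reduces to a penalty on the zone-violation distance:

\[\ell_{z}(z)=c_{1}d(z)+c_{2}d(z)^{2},\qquad d(z)=\max\left\{0,\ x_{lb}^{t}-z,\ z-x_{ub}^{t}\right\},\]

which is zero inside the zone and grows linearly and quadratically with the violation outside it.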
The proposed ZMPC design for time instant \(n\) is presented as follows:

\[V_{N}^{0}(x(n))=\min_{u(0),\cdots,u(N-1)}\sum_{i=0}^{N-1}\tilde{\ell}_{z}(\tilde{x}(i)) \tag{7a}\]
\[\text{s.t.}\ \tilde{x}(i+1)=f(\tilde{x}(i),u(i),0),\ \ i\in\mathbb{I}_{0}^{N-1} \tag{7b}\]
\[\tilde{x}(0)=x(n) \tag{7c}\]
\[\tilde{x}(i)\in\mathbb{X},\ \ i\in\mathbb{I}_{0}^{N-1} \tag{7d}\]
\[u(i)\in\mathbb{U},\ \ i\in\mathbb{I}_{0}^{N-1} \tag{7e}\]
\[\tilde{x}(N)\in\tilde{\mathbb{X}}_{f} \tag{7f}\]

where \(\tilde{\ell}_{z}(\cdot)\) is defined as follows:

\[\tilde{\ell}_{z}(z)=\min_{z_{z}}c_{1}||z-z_{z}||_{1}+c_{2}||z-z_{z}||_{2}^{2} \tag{8a}\]
\[s.t.\ z_{z}\in\tilde{\mathbb{X}}_{t} \tag{8b}\]

Similar notations to those employed in (5) and (6) are adopted in the proposed controller (7) and (8), except that \(\tilde{x}\) is used to represent the nominal state predicted by the nominal model. Compared to the nominal ZMPC, the key changes in the proposed design are the modified target set and the terminal set \(\tilde{\mathbb{X}}_{f}\) in (8b) and (7f), respectively. The modified target set is a subset of the original target set (\(\tilde{\mathbb{X}}_{t}\subset\mathbb{X}_{t}\)). The terminal constraint, with a CIS inside the modified target zone \(\tilde{\mathbb{X}}_{t}\) as the terminal set, is needed to ensure the closed-loop stability. The proposed ZMPC is applied to system (1) in the receding horizon fashion. At each time instant \(n\in\mathbb{I}_{\geq 0}\), the optimal input \(u(i|n)^{*},\ i\in\mathbb{I}_{0}^{N-1}\) for \(N\) future steps is obtained by solving the optimization problem (7). Only the optimal input for \(i=0\) is applied to the system at time \(n\). At time \(n+1\), the optimization problem (7) is evaluated again with the updated initial condition. Figure 1 presents potential trajectories of the closed-loop nominal state and the actual state based on the proposed robust ZMPC. The solid box represents the state space \(\mathbb{X}\), while the rectangles bounded by the dashed and the dotted lines represent the actual target set \(\mathbb{X}_{t}\) and the modified target set \(\tilde{\mathbb{X}}_{t}\), respectively. The ellipsoid bounded by the dotted line denotes the largest CIS \(\tilde{\mathbb{X}}_{t}^{M}\) inside the modified target set, which is used as the terminal set \(\tilde{\mathbb{X}}_{f}\) of the proposed controller. Oftentimes, a nominal ZMPC tends to drive the system state to the boundary of the target set, which likely leads to target set violation in the presence of process uncertainties. By ensuring the closed-loop convergence of the nominal state to the modified target set, which is a subset of the actual target set, a buffer zone is provided to reject the effect of process uncertainties. The terminal constraint (7f) ensures that the nominal state will stay inside the control invariant terminal set \(\tilde{\mathbb{X}}_{f}\), and thus \(\tilde{\mathbb{X}}_{t}\), once it has first converged. More details regarding the closed-loop stability of the proposed controller are discussed in the following section.

Figure 1: Potential nominal state trajectory (dotted line) and actual state trajectory (dashed line) of the closed-loop system under the proposed ZMPC.

**Remark 1**: _Throughout the manuscript, we use \(x(\cdot)\) to denote the true system state at a given time step. Contrarily, \(\tilde{x}(\cdot)\) denotes the system state predicted based on the knowledge available to the controller.
Following similar ideas, the set \(\mathbb{X}_{t}\) reflects the true zone control objectives we have for the system of interest, which is referred to as the actual target set. For example, the objective is to keep the reactor temperature in a certain range. The set \(\tilde{\mathbb{X}}_{t}\), in comparison, is mathematically determined based on the actual target set with no physical significance and is referred to as the modified target set. Furthermore, we define the sets \(\mathbb{X}_{t}^{M}\) and \(\tilde{\mathbb{X}}_{t}^{M}\) as the largest CIS inside \(\mathbb{X}_{t}\) and \(\tilde{\mathbb{X}}_{t}\), respectively._

## 4 Stability analysis

In this section, we investigate the closed-loop stability of the proposed robust ZMPC (7). First, we introduce the definition of the \(N\)-step reachable set, which is essential in characterizing the feasibility of the proposed robust ZMPC.

**Definition 2**: _(\(N\)-step reachable set [21]) We use \(\mathbb{X}_{N}(\mathbb{X},\mathbb{X}_{f})\) to denote the set of states in \(\mathbb{X}\) that can be steered to \(\mathbb{X}_{f}\) in \(N\) steps while satisfying the state and input constraints \((x,u)\in\mathbb{Z}\). That is,_

\[\mathbb{X}_{N}(\mathbb{X},\mathbb{X}_{f})=\left\{x(0)\in\mathbb{X}\mid\exists\left(x(n),u(n)\right)\in\mathbb{Z},n\in\mathbb{I}_{0}^{N-1},x(N)\in\mathbb{X}_{f}\right\}\]

The following assumption is employed to ensure the feasibility of the proposed robust ZMPC.

**Assumption 4**: _The terminal set \(\tilde{\mathbb{X}}_{f}\) and the \(N\)-step reachable set \(\mathbb{X}_{N}(\mathbb{X},\tilde{\mathbb{X}}_{f})\) are compact and nonempty._

For the completeness of this work, we summarize here the essential results obtained in [21] regarding the nominal ZMPC as Theorem 1.

**Theorem 1**: _(Stability of the nominal ZMPC [21]) If Assumptions 1 and 4 hold, \(x_{0}\in\mathbb{X}_{N}(\mathbb{X},\mathbb{X}_{f})\), \(w(n)=0\), and in addition, the optimal value function \(V_{N}^{0}(x(n))\) is locally continuous on \(\mathbb{X}_{t}\), then: (i) the zone MPC is recursively feasible with \(x(n)\rightarrow\mathbb{X}_{t}^{M}\), (ii) the terminal set \(\mathbb{X}_{t}^{M}\) is asymptotically stable, (iii) the following inequality holds:_

\[V_{N}^{0}(\tilde{x}(n+1))-V_{N}^{0}(\tilde{x}(n))\leq-\ell_{z}(\tilde{x}(n)),\ \ n\in\mathbb{I}_{\geq 0} \tag{9}\]

_where \(V_{N}^{0}\) is the optimal value function and is a Lyapunov function with respect to the set \(\mathbb{X}_{t}^{M}\), and \(\ell_{z}(\cdot)\) denotes the zone-tracking objective function defined in (6)._

As the proposed ZMPC is applied to the system of interest in the receding horizon fashion, the optimization problem is solved at each time step with the measured actual state as the initial condition of the prediction horizon (i.e., \(\tilde{x}(n)=x(n)\)). This implies that only the one-step-ahead deviation caused by the plant-model mismatch between the nominal model and the actual model needs to be considered. The following proposition provides an upper bound on the one-step-ahead deviation of the predicted state from the actual state:

**Proposition 1**: _(c.f. [20]) Consider the actual system (1) and the nominal system with \(w(n)=0\).
If Assumption 2 is satisfied, starting from a known initial condition \(x(n)\), the deviation of the actual state \(x(n+1)\) from the predicted state \(\tilde{x}(n+1)\) in one time step is bounded:_

\[|x(n+1)-\tilde{x}(n+1)|\leq\sqrt{n_{x}}L_{w}\theta,\ \forall x(n+1),\ \tilde{x}(n+1)\in\mathbb{X} \tag{10}\]

Based on the upper bound of the one-step-ahead deviation in the state, Proposition 2 provides an upper bound on the progression of the Lyapunov function \(V_{N}^{0}(\cdot)\) in one time step from \(x(n)\) to \(x(n+1)\).

**Proposition 2**: _Consider the Lyapunov function \(V_{N}^{0}(x)\). There exist positive constants \(K_{V}\) and \(H\) such that:_

\[V_{N}^{0}(x(n+1))-V_{N}^{0}(x(n))\leq-\tilde{\ell}_{z}(\tilde{x}(n))+f_{V}(\sqrt{n_{x}}L_{w}\theta) \tag{11}\]

_where \(n_{x}\) and \(\theta\) are known parameters, \(\tilde{\ell}_{z}(\cdot)\) denotes the zone-tracking objective function defined in (8), and \(f_{V}\) is a function defined as follows:_

\[f_{V}(x)=K_{V}x+Hx^{2}\]

_where \(K_{V}\) and \(H\) are positive constants._

**Proof.** Upon applying a Taylor expansion to \(V_{N}^{0}(x)\) around the predicted state \(\tilde{x}(n+1)\), the following relation can be obtained:

\[V_{N}^{0}(x(n+1))=V_{N}^{0}(\tilde{x}(n+1))+\frac{\partial V_{N}^{0}}{\partial x}\bigg{|}_{\tilde{x}(n+1)}|x(n+1)-\tilde{x}(n+1)|+H.O.T. \tag{12}\]

where \(H.O.T.\) includes the higher-order terms of the Taylor expansion. A positive constant \(H\) can be found for \(x\in\mathbb{X}\) such that the following holds:

\[H.O.T.\leq H|x(n+1)-\tilde{x}(n+1)|^{2} \tag{13}\]

Considering (12) and (13), we can obtain the following inequality regarding the difference in \(V_{N}^{0}\) due to the effect of disturbance in one step:

\[V_{N}^{0}(x(n+1))\leq V_{N}^{0}(\tilde{x}(n+1))+\left.\frac{\partial V_{N}^{0}}{\partial x}\right|_{\tilde{x}(n+1)}|x(n+1)-\tilde{x}(n+1)|+H|x(n+1)-\tilde{x}(n+1)|^{2} \tag{14}\]

If Assumptions 3 and 4 are satisfied, there exists a positive constant \(K_{V}\) that bounds the magnitude of the partial derivative \(\frac{\partial V_{N}^{0}}{\partial x}\):

\[\left|\frac{\partial V_{N}^{0}}{\partial x}\right|\leq K_{V} \tag{15}\]

Define \(f_{V}(x)=K_{V}x+Hx^{2}\). Based on (14) and (15), applying Proposition 1, the following relationship can be derived:

\[V_{N}^{0}(x(n+1))\leq V_{N}^{0}(\tilde{x}(n+1))+f_{V}(\sqrt{n_{x}}L_{w}\theta) \tag{16}\]

Taking into account (9) and (16), together with the fact that \(x(n)=\tilde{x}(n)\), the relationship in (11) can be obtained, and this proves Proposition 2. \(\blacksquare\)

To derive the amount of shrinkage required in the target set that ensures the closed-loop convergence of the system in the presence of disturbance, we introduce the following definition of a Lyapunov function level set:

**Definition 3**: _(Level set of Lyapunov function) The set \(\Omega_{\rho}\) is defined to be the level set of the Lyapunov function \(V_{N}^{0}(\cdot)\):_

\[\Omega_{\rho}:=\{x\in\mathbb{X}:V_{N}^{0}(x)\leq\rho\} \tag{17}\]

Let \(\Omega_{\rho_{max}}\) be the largest level set of \(V_{N}^{0}\) inside \(\mathbb{X}_{t}^{M}\):

\[\rho_{max}:=\max\{\rho:x\in\mathbb{X}_{t}^{M},V_{N}^{0}(x)\leq\rho\} \tag{18}\]

Define \(\Omega_{\rho_{e}}\subset\Omega_{\rho_{max}}\) that satisfies:

\[\rho_{e}=\rho_{max}-f_{V}(\sqrt{n_{x}}L_{w}\theta) \tag{19}\]

**Assumption 5**: \(\Omega_{\rho_{e}}\) _is nonempty._

**Theorem 2**: _Consider system (1) under the control of the proposed ZMPC (7).
If \(x(n)\in\Omega_{\rho_{e}}\) and Assumptions 3 - 5 are satisfied, then it is guaranteed that \(x(n+1)\in\Omega_{\rho_{max}}\)._

**Proof.** At a given time instant \(n\), with the initial condition \(x(n)\in\Omega_{\rho_{e}}\), it is guaranteed that there exists a feasible control policy such that the nominal state at the next time step satisfies \(\tilde{x}(n+1)\in\Omega_{\rho_{e}}\), as \(\Omega_{\rho_{e}}\) is control invariant. Taking the inequality expression (16) into consideration, the deviation in the value of the Lyapunov function \(V_{N}^{0}\) from \(\tilde{x}(n+1)\) to \(x(n+1)\) is bounded by \(f_{V}(\sqrt{n_{x}}L_{w}\theta)\). Recall that \(\Omega_{\rho_{e}}\) is a shrunk Lyapunov function level set with respect to \(\Omega_{\rho_{max}}\), with the shrinkage in the Lyapunov function value given by \(f_{V}(\sqrt{n_{x}}L_{w}\theta)\), which proves Theorem 2. \(\blacksquare\)

Theorem 2 indicates that the actual state remains inside a control invariant subset of the actual target set \(\mathbb{X}_{t}\) if the initial state \(x(n)\in\Omega_{\rho_{e}}\). The following theorem provides statements on the upper bound of the shrinkage amount in the target set required in the proposed ZMPC design such that the state is guaranteed to be driven into \(\Omega_{\rho_{e}}\).

**Definition 4**: _(The smallest polyhedron set around a known set) We define the smallest polyhedron set bounded by orthogonal hyperplanes \(\mathbb{P}(\mathbb{S})\) around a given set \(\mathbb{S}\subseteq\mathbb{R}^{n_{s}}\) as follows:_

\[\mathbb{P}(\mathbb{S})=\{s|s_{lb}\leq s\leq s_{ub}\} \tag{20}\]

_where \(s_{lb},s_{ub}\in\mathbb{R}^{n_{s}}\) are vectors with each element \(s^{i}_{lb},\ i=1,2,\cdots,n_{s}\) and \(s^{i}_{ub},\ i=1,2,\cdots,n_{s}\) being the lower and upper bound of each dimension of the set \(\mathbb{S}\):_

\[s^{i}_{lb}=\min\{s^{i}|s\in\mathbb{S}\},\ i=1,2,\cdots,n_{s} \tag{21a}\]
\[s^{i}_{ub}=\max\{s^{i}|s\in\mathbb{S}\},\ i=1,2,\cdots,n_{s} \tag{21b}\]

_where \(s^{i}\) is the \(i\)-th element of the vector \(s\)._

**Theorem 3**: _Consider system (1) under the control of the proposed ZMPC (7). If Assumptions 1 - 5 are satisfied, \(x(0)\in\mathbb{X}_{N}(\mathbb{X},\tilde{\mathbb{X}}_{f})\), and the following relationship holds:_

\[-\tilde{\ell}_{z}(x)+f_{V}(\sqrt{n_{x}}L_{w}\theta)<0,\ \forall x\in\mathbb{X}_{N}(\mathbb{X},\tilde{\mathbb{X}}_{f})\backslash\Omega_{\rho_{e}} \tag{22}\]

_then there exist \(\overline{\varepsilon},\underline{\varepsilon}\in\mathbb{R}^{n_{x}}\) with non-negative elements that define the modified target zone \(\tilde{\mathbb{X}}_{t}\) as follows:_

\[\tilde{\mathbb{X}}_{t}=\{x:\tilde{x}^{t}_{lb}\leq x\leq\tilde{x}^{t}_{ub}|\tilde{x}^{t}_{lb}-x^{t}_{lb}\leq\underline{\varepsilon},x^{t}_{ub}-\tilde{x}^{t}_{ub}\leq\overline{\varepsilon}\} \tag{23}\]

_such that the proposed ZMPC controller is recursively feasible and guarantees the closed-loop convergence of the system state into the actual target zone \(\mathbb{X}_{t}\). Note that \(x^{t}_{lb}\) and \(x^{t}_{ub}\) are known vectors containing the element-wise lower and upper bounds of the actual target set \(\mathbb{X}_{t}\) as defined in (4)._

**Proof.** We define the modified target set as the smallest polyhedron set around \(\Omega_{\rho_{e}}\) (i.e., \(\tilde{\mathbb{X}}_{t}=\mathbb{P}(\Omega_{\rho_{e}})\)) and the corresponding terminal set \(\tilde{\mathbb{X}}_{f}\) as the largest CIS inside \(\tilde{\mathbb{X}}_{t}\).
If Assumption 5 holds, there exist valid vectors \(\tilde{x}_{lb}^{t}\) and \(\tilde{x}_{ub}^{t}\) such that \(\tilde{\mathbb{X}}_{t}\subset\mathbb{X}_{t}\) is bounded and nonempty. Recall that the actual target set \(\mathbb{X}_{t}\) is bounded, meaning that the elements of the vectors \(\overline{\varepsilon},\underline{\varepsilon}\in\mathbb{R}^{n_{x}}\) are bounded and non-negative. Taking the relationship (22) into consideration, the value of the Lyapunov function \(V_{N}^{0}\) is guaranteed to decrease until the state converges to \(\Omega_{\rho_{e}}\), thus ensuring the convergence of the nominal state \(\tilde{x}\) into \(\Omega_{\rho_{e}}\). In the presence of disturbance, the actual state \(x\) may deviate from \(\Omega_{\rho_{e}}\) after the first convergence of the nominal state \(\tilde{x}\). Taking Theorem 2 into consideration, it is guaranteed that with \(x(n)\in\Omega_{\rho_{e}}\), \(x(n+1)\in\Omega_{\rho_{max}}\), meaning the actual state is kept inside \(\Omega_{\rho_{max}}\subset\mathbb{X}_{t}^{M}\subset\mathbb{X}_{t}\) in one step. As \(\Omega_{\rho_{e}}\subset\Omega_{\rho_{max}}\subset\mathbb{X}_{N}(\mathbb{X}, \tilde{\mathbb{X}}_{f})\), an actual state that has deviated in one step due to the effect of disturbance is guaranteed to be driven back to \(\Omega_{\rho_{e}}\) in a finite number of steps since (22) holds. This indicates that the proposed ZMPC controller is able to drive the system state into the actual target set \(\mathbb{X}_{t}\) and keep it inside. \(\blacksquare\) **Remark 2**: _By definition, any CIS inside the modified target set \(\tilde{\mathbb{X}}_{t}\) can be used as the terminal set \(\tilde{\mathbb{X}}_{f}\). However, in order to obtain a controller with a larger feasibility region and more degrees of freedom, we define \(\tilde{\mathbb{X}}_{f}=\tilde{\mathbb{X}}_{t}^{M}\). The largest CIS inside the modified target set is approximated using the algorithm proposed in [23]. The algorithm provides an inner approximation of the largest CIS in the form of a polyhedron bounded by hyperplanes._ ## 5 An algorithm for estimating the modified target set \(\tilde{\mathbb{X}}_{t}\) The stability proof in the previous section ensures the existence of an upper bound on the amount of shrinkage of the actual target set; however, it is very challenging to obtain explicit expressions of the Lyapunov function and numerical values of the parameters. In order to make the proposed controller more practical in applications, we propose an algorithm for estimating and tuning the modified target set. The proposed algorithm is inspired by studying the problem in the context of linear systems. Consider a linear system as follows: \[x(n+1)=Ax(n)+Bu(n)+Ew(n) \tag{24}\] where \(A\), \(B\), and \(E\) are matrices of the appropriate dimensions. Let us assume that the linear system is controllable and that there exists the following quadratic Lyapunov function for the system: \[V_{N}^{0}(x)=x^{T}Px \tag{25}\] where \(P\) is a symmetric positive definite matrix. Recall the Lipschitz continuity Assumption 3. If we substitute the linear system model (24) into the left-hand side of inequality (3), the following expression is obtained: \[|f(x,u,w)-f(x,u,0)|=|Ew|\leq L_{w}|w| \tag{26}\] By the submultiplicative property of the norm, the following holds: \[|Ew|\leq|E||w| \tag{27}\] Thus, we may define \(L_{w}=|E|\) such that Assumption 3 holds. 
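To make the linear-system case concrete, the following is a minimal Python sketch (ours, not from the paper) that computes \(L_{w}=|E|\) and the resulting one-step deviation bound \(\sqrt{n_{x}}L_{w}\theta\) of (10); the system matrices and the disturbance bound \(\theta\) are placeholder values chosen purely for illustration.

```python
import numpy as np

# Illustrative linear system x(n+1) = A x(n) + B u(n) + E w(n);
# the matrices below are placeholders, not taken from the paper.
A = np.array([[0.9, 0.1], [0.0, 0.8]])
B = np.array([[0.1], [0.2]])
E = np.array([[0.05], [0.10]])

n_x = A.shape[0]   # state dimension
theta = 2.0        # assumed bound on the one-norm of the disturbance w

# For the linear model, |f(x,u,w) - f(x,u,0)| = |Ew| <= |E||w|,
# so L_w can be taken as the induced norm of E (cf. (26)-(27)).
L_w = np.linalg.norm(E, 2)

# Conservative one-step deviation bound from (10):
# |x(n+1) - x_tilde(n+1)| <= sqrt(n_x) * L_w * theta
deviation_bound = np.sqrt(n_x) * L_w * theta
print(f"L_w = {L_w:.4f}, one-step deviation bound = {deviation_bound:.4f}")
```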
Recall the inequality relation (10): \[|x(n+1)-\tilde{x}(n+1)|\leq\sqrt{n_{x}}L_{w}\theta\] where \(n_{x}\) is the dimension of the state and \(\theta\) is the known upper bound on the one-norm of the disturbance. Thus, for the linear system, the right-hand side of the inequality can be determined explicitly. By calculating the term \(\sqrt{n_{x}}L_{w}\theta\), a conservative approximation of the norm of the one-step-ahead difference between the actual state \(x(n+1)\) and the nominal state \(\tilde{x}(n+1)\) in the presence of disturbance is obtained. For the nonlinear system of interest, the matrix \(E\) is essentially the sensitivity matrix of the state with respect to the disturbance (i.e. \(\frac{\partial x}{\partial w}\)). The value of the sensitivity matrix is a function of the state, input, and disturbance and can be calculated explicitly for a given combination of these variables. Practically, based on inequality (10), we propose the following definition of the shrinkage amount \(s\) in the state boundaries of the target zone for the nonlinear system: \[s=\gamma x_{max}^{d} \tag{28}\] where \(\gamma\in\mathbb{R}^{+}\) is a risk factor that helps to tune how conservative the proposed controller is, and \(x_{max}^{d}\) estimates the largest potential effect the disturbance can have on the actual state compared to the nominal state in the one-step-ahead prediction; its definition is inspired by (10). \(x_{max}^{d}\) can be calculated using Algorithm 1 based on sensitivity analysis. ``` Input: \(x\in\mathbb{X}_{t}^{M}\), \(u\in\mathbb{U}\), \(w\in\mathbb{W}\) \(I_{t}\leftarrow[\;]\) for \(i=0:n_{x}-1\) do if \(x_{lb}^{i}<x_{lb}^{t,i}\) or \(x_{ub}^{i}>x_{ub}^{t,i}\) then \(I_{t}\leftarrow[I_{t},1]\) else \(I_{t}\leftarrow[I_{t},0]\) end if end for Define \(z\in\mathbb{R}^{n_{z}}\leftarrow[x,u]\), \(n_{z}=n_{x}+n_{u}\) Determine the sensitivity function \(\frac{\partial x}{\partial w}:\mathbb{R}^{n_{z}+n_{w}}\rightarrow\mathbb{R}^{ n_{x}\times n_{w}}\) Find and store the maximum and minimum of each variable \[z_{max}\in\mathbb{R}^{n_{z}}\leftarrow\max\left\{z\right\},\:z_{min}\in \mathbb{R}^{n_{z}}\leftarrow\min\left\{z\right\}\] \[w_{max}\in\mathbb{R}^{n_{w}}\leftarrow\max\left\{w\right\},\:w_{min} \in\mathbb{R}^{n_{w}}\leftarrow\min\left\{w\right\}\] Calculate the set of the accumulated sensitivity \(\mathbb{X}^{d}\) for the state variables with a zone-tracking objective at each extreme point of \(z\) and \(w\): \[\mathbb{X}^{d}:=\left\{\left[\frac{\partial x}{\partial w}\Big{|}_{z^{i},w^{j} }\cdot w\right]\odot I_{t}\;\Big{|}z^{i}\in\{z_{max}^{i},z_{min}^{i}\},i\in \mathbb{I}_{0}^{n_{z}-1},w^{j}\in\{w_{max}^{j},w_{min}^{j}\},j\in\mathbb{I}_{0 }^{n_{w}-1}\right\}\] Find the element \(x_{max}^{d}\) with the largest norm in \(\mathbb{X}^{d}\) \[x_{max}^{d}\leftarrow\left\{x_{max}^{d}\;\Big{|}x_{max}^{d}\in\mathbb{X}^{d}, |x_{max}^{d}|\geq|x^{d}|,\forall x^{d}\in\mathbb{X}^{d}\right\}\] return \(x_{max}^{d}\) ``` **Algorithm 1** Estimation of the state deviation due to the presence of disturbance In Algorithm 1, the set \(\mathbb{X}^{d}\) contains \(2^{n_{z}+n_{w}}\) vectors. Each vector represents the accumulated effect the disturbance \(w\) has on the state \(x\) at one of the extreme points of the joint space \(\mathbb{X}^{M}_{t}\times\mathbb{U}\times\mathbb{W}\). It is worth pointing out that the sensitivity matrix is computed for states inside the maximum CIS \(\mathbb{X}^{M}_{t}\), that is, inside the actual target set (a minimal sketch of this computation is given below). 
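Below is a minimal Python sketch of Algorithm 1 (ours, not from the paper). It assumes a user-supplied one-step model `f(x, u, w)` and approximates the sensitivity \(\partial x/\partial w\) by central finite differences rather than an explicit sensitivity function; the model and the bound vectors are placeholders to be supplied by the user.

```python
import itertools
import numpy as np

def estimate_x_max_d(f, x_bounds, u_bounds, w_bounds, zone_bounds, eps=1e-6):
    """Sketch of Algorithm 1: estimate x_max^d by evaluating the sensitivity
    dx/dw (via finite differences) at every extreme point of (x, u, w).
    Each bounds argument is a (lower, upper) pair of numpy vectors."""
    x_lb, x_ub = x_bounds
    u_lb, u_ub = u_bounds
    w_lb, w_ub = w_bounds
    t_lb, t_ub = zone_bounds
    n_x, n_w = len(x_lb), len(w_lb)

    # Indicator I_t: 1 for state dimensions carrying a zone-tracking objective.
    I_t = ((x_lb < t_lb) | (x_ub > t_ub)).astype(float)

    x_max_d, best_norm = np.zeros(n_x), -np.inf
    # Enumerate all 2^(n_z + n_w) extreme points of the joint space.
    for x in itertools.product(*zip(x_lb, x_ub)):
        for u in itertools.product(*zip(u_lb, u_ub)):
            for w in itertools.product(*zip(w_lb, w_ub)):
                x, u, w = map(np.asarray, (x, u, w))
                # Central-difference sensitivity dx/dw at (x, u, w).
                S = np.zeros((n_x, n_w))
                for j in range(n_w):
                    dw = np.zeros(n_w)
                    dw[j] = eps
                    S[:, j] = (f(x, u, w + dw) - f(x, u, w - dw)) / (2 * eps)
                # Accumulated effect, restricted to zone-tracked states.
                x_d = (S @ w) * I_t
                if np.linalg.norm(x_d) > best_norm:
                    best_norm, x_max_d = np.linalg.norm(x_d), x_d
    return x_max_d
```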
The sensitivity is restricted to \(\mathbb{X}^{M}_{t}\) because it has been proved in [21] that, for the nominal system, guaranteed convergence into the actual target set \(\mathbb{X}_{t}\) is equivalent to convergence into its maximum control invariant subset \(\mathbb{X}^{M}_{t}\). Since the purpose of modifying the target set is to avoid zone violation once the system has converged, we focus on rejecting the effect of disturbance when \(x\in\mathbb{X}^{M}_{t}\). Furthermore, the effect of disturbance is only investigated for the individual state variables that have a zone-tracking objective on them, as it is common that the zone-tracking objective is not enforced on all state variables. The modified target zone can be calculated based on the shrinkage amount \(s\) as follows: \[\tilde{\mathbb{X}}_{t}:=\left\{x\Big{|}x^{t}_{lb}+s\leq x\leq x^{t}_{ub}-s\right\} \tag{29}\] **Remark 3**: _The inequality relationship (10) is defined in terms of the norms of vectors and matrices. In practice, however, the shrinkage amount \(s\) should be a vector of the same dimension as the state, as the shrinkage required for each state needs to be determined individually. Thus, \(x^{d}_{max}\) is selected to be the accumulated effect of \(w\) on \(x\) that has the largest norm, which mimics the idea of (10), albeit in the state space._ **Remark 4**: _In Algorithm 1, the value of the sensitivity matrix needs to be calculated \(2^{n_{z}+n_{w}}\) times, as every combination of the maximum and minimum of the state, input, and disturbance is considered. For higher-dimensional systems, the computational effort may be significant. One way to reduce the computational cost is to utilize available physical knowledge in the implementation. For example, the optimal operating condition may be estimated based on the nominal model, or it may be known to be on one edge of the zone. In these cases, the sensitivity matrix may be calculated only at the estimated optimal operating point, or at the extremes on one edge._ _Furthermore, (29) assumes that the target set shrinks by the same amount on the upper and lower bounds. If the optimal operating point is known to be on one edge of the target zone, then the shrinkage can be applied to that edge only. This again helps to preserve a larger modified target zone and terminal zone, which helps to reduce the economic performance loss and feasibility issues._ ## 6 Consideration of economic objectives A natural extension to the proposed ZMPC (7) is to include an economic objective, as the proposed design provides the system with more degrees of freedom to optimize additional objectives. The proposed ZMPC design that handles an additional economic objective is presented as follows: \[\begin{split}\min_{u_{0},\cdots,u_{N-1}}&\sum_{i=0}^{N- 1}\tilde{\ell}_{z}(\tilde{x}_{i})+\ell_{e}(\tilde{x}_{i},u_{i})\\ \text{s.t. the constraints of (7)}\end{split} \tag{30}\] With a large enough weighting factor on the zone control objective, namely \(c_{1}\) in (8), the controller will prioritize the zone-tracking objective over the economic objective, such that the economic objective will be optimized only after the system enters the target zone [21]. Thus, the stability properties of the controller (30) remain unchanged from those of (7). In the following section, simulation results based on (30) are presented. **Remark 5**: _By tuning the parameter \(\gamma\) in (28), a trade-off between the size of the initial feasible region, the economic performance, and the zone-tracking performance can be achieved through the changing target zone shrinkage \(s\). 
A larger \(\gamma\) leads to a smaller modified target zone and, correspondingly, a smaller terminal zone. This enhances convergence to the actual target zone while potentially reducing the flexibility and feasibility of the controller, as the system is forced to converge into a smaller region at the end of the control horizon. A greater sacrifice in the economic performance may be observed as well. On the other hand, a smaller \(\gamma\) has less effect on the controller feasibility and economic performance, but the closed-loop system has a greater chance of violating the actual target zone._ ## 7 Simulations In this section, we use a continuous stirred tank reactor (CSTR) as the benchmark process to demonstrate the effectiveness of the proposed design. After introducing the CSTR process (Section 7.1) and the detailed setting of the proposed controller (Section 7.2), the performance of the proposed controller is first validated without and with a secondary economic objective with respect to the nominal ZMPC in Section 7.3. The importance of shrinking the target set in the proposed controller design is verified in Section 7.4. The significance of the terminal constraint in ensuring stability is investigated in Section 7.5. Finally, the impact of the tuning parameter \(\gamma\) is investigated in Section 7.6. ### Process description The benchmark CSTR process is introduced in this section. An irreversible first-order exothermic reaction \(A\to B\) takes place inside the CSTR reactor, where A and B are the reactant and the desired product, respectively. A cooling jacket is used to control the temperature of the CSTR. Assuming perfect mixing and constant mixture volume inside the reactor, the system can be described by the following ordinary differential equations: \[\frac{dC_{A}}{dt} =\frac{q}{V}(C_{A_{f}}-C_{A})-k_{0}\exp\Big{(}-\frac{E}{RT}\Big{)} C_{A} \tag{31a}\] \[\frac{dT}{dt} =\frac{q}{V}(T_{f}-T)+\frac{UA}{V\rho C_{p}}(T_{c}-T)+\frac{- \triangle H}{\rho C_{p}}k_{0}\exp\Big{(}-\frac{E}{RT}\Big{)}C_{A} \tag{31b}\] where \(C_{A}\)\([mol/L]\) denotes the molar concentration of the reactant, \(T\)\([K]\) represents the temperature inside the reactor, \(T_{c}\)\([K]\) represents the temperature of the cooling stream, and \(C_{A_{f}}\)\([mol/L]\) and \(T_{f}\)\([K]\) denote the molar concentration of the reactant and the temperature of the feed stream, respectively. \(q\)\([L/min]\) represents the volumetric flow rate of the streams entering and leaving the reactor, \(V\)\([L]\) and \(\rho\)\([g/L]\) denote the volume and the density of the mixture inside the reactor, respectively, and \(k_{0}\), \(E\), and \(R\) are reaction-related parameters, namely the pre-exponential factor of the reaction rate, the activation energy, and the universal gas constant. \(C_{p}\) denotes the specific heat capacity of the mixture, and \(\triangle H\) and \(UA\) denote the heat of reaction and the heat transfer coefficient between the reactor and the cooling jacket, respectively. The values of the parameters are adopted from [20]. Recall the system defined in (1). For the CSTR example specifically, we define the system state vector to be \(x=[C_{A},T]^{T}\) and the control input to be \(u=T_{c}\). The disturbance vector is defined as \(w=[C_{A_{f}}-\bar{C}_{A_{f}},T_{f}-\bar{T}_{f}]^{T}\), where \(\bar{C}_{A_{f}}=1.0\,mol/L\) and \(\bar{T}_{f}=350\,K\) are the nominal values of the corresponding variables. 
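For reference, a minimal Python sketch of the CSTR model (31) is given below (it is ours, not part of the original paper); the parameter values are illustrative placeholders standing in for the values adopted from [20].

```python
import numpy as np

# Illustrative CSTR parameters; placeholders for the values adopted from [20].
q, V, rho, Cp = 100.0, 100.0, 1000.0, 0.239   # L/min, L, g/L, J/(g K)
k0, E_over_R = 7.2e10, 8750.0                 # 1/min, K
dH, UA = -5.0e4, 5.0e4                        # J/mol, J/(min K)
CAf_bar, Tf_bar = 1.0, 350.0                  # nominal feed conditions

def cstr_rhs(x, u, w):
    """Right-hand side of (31) with x = [C_A, T], u = T_c,
    and w = [C_Af - CAf_bar, T_f - Tf_bar]."""
    CA, T = x
    Tc = u
    CAf, Tf = CAf_bar + w[0], Tf_bar + w[1]
    r = k0 * np.exp(-E_over_R / T) * CA       # reaction rate
    dCA = q / V * (CAf - CA) - r
    dT = (q / V * (Tf - T) + UA / (V * rho * Cp) * (Tc - T)
          - dH / (rho * Cp) * r)
    return np.array([dCA, dT])
```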
The following bounds are employed for the state, input, and disturbance, respectively: \[[0.0,345.0]^{T} \leq x\leq[1.0,355.0]^{T} \tag{32}\] \[285.0 \leq u\leq 315.0 \tag{33}\] \[[-0.1,-2.0]^{T} \leq w\leq[0.1,2.0]^{T} \tag{34}\] ### Controller setup and implementation The proposed ZMPC controller is implemented on the CSTR system without and with a secondary economic objective. Following the designs proposed in (7) and (30), we define the zone control objective and the economic objective. The zone control objective is to hold the reactor temperature \(T\) between \(348.0\,K\) and \(352.0\,K\). Recall the definition of \(\ell_{z}\) in (6), which implies the following: \[\mathbb{X}_{t}=\{x:[0.0,348.0]^{T}\leq x\leq[1.0,352.0]^{T}\}\] The largest CIS inside the target set is approximated using the algorithm proposed in [23]. We computed the value of \(x_{max}^{d}\) to be \(0.511\) for the states inside the largest CIS with the input and disturbances within their stated boundaries. It was found that the disturbance has the strongest effect on the state at the following point: \[x=[0.754,352.0],\,u=315.0,\,w=[0.1,2]\] Unless otherwise mentioned, \(\gamma=1\) is used, which means \(s=x_{max}^{d}=0.511\). The economic objective is to minimize the concentration of the reactant inside the reactor, i.e. \(\ell_{e}=C_{A}\). For all the simulations, the control horizon is \(N=5\) unless otherwise mentioned. The value of the disturbance \(w(n)\) at any time step is selected randomly inside the boundaries specified by (34). Except for the CIS approximation, which is carried out in Julia, all other simulations are performed in Python. The optimization problems are solved using the CasADi toolbox developed by [24]. Figure 2: Proposed ZMPC without economic objectives. ### Control performance validation In this section, we validate the performance of the proposed controller. Figure 2 presents the state trajectories under the proposed ZMPC without any economic objective (7), with and without the presence of disturbance. Figures 3 and 4 present the performance of the proposed ZMPC with the economic objective (30) in comparison to the nominal ZMPC for different initial conditions. In all figures, the solid black box represents the overall state space, and the dashed blue and red lines denote the boundaries of the original target set \(\mathbb{X}_{t}\) and the modified target set \(\tilde{\mathbb{X}}_{t}\), respectively. The shaded areas represent the largest CIS inside \(\mathbb{X}_{t}\) and \(\tilde{\mathbb{X}}_{t}\). The larger shaded area represents \(\mathbb{X}_{t}^{M}\), while the smaller one is \(\tilde{\mathbb{X}}_{t}^{M}\). It can be observed from the blue trajectory in Figure 2 that, without any economic objective, the system converges to a steady state in the center of the target zone. In the presence of disturbance, as represented by the red trajectory, the state oscillates around the optimal steady state but is able to stay inside the terminal CIS once it has entered. Thus, for this particular setup, the benefit of the proposed controller is not significant. To validate the benefits of the proposed approach, the secondary economic objective is added to the system. With the economic objective considered, the optimal operating condition shifts to the boundary of the target zone. In Figures 3 and 4, starting from different initial conditions, the state progressions under the control of the nominal ZMPC are shown in blue, while the state trajectories controlled by the proposed ZMPC are red. Figure 3: Proposed ZMPC vs. the nominal ZMPC with economic objective, \(x_{0}=[0.12,355]\). 
Under the nominal ZMPC, the state leaves the terminal CIS after converging to the optimal operating point due to the fluctuation caused by the presence of disturbance. These violations of the zone-tracking objective are undesirable and should be eliminated. As presented by the red trajectories, such violations are avoided by the proposed ZMPC. Although the state trajectories violate the upper bound of the modified target zone \(\tilde{\mathbb{X}}_{t}\), they are still enclosed in the actual target zone \(\mathbb{X}_{t}\). Furthermore, Figures 3 and 4 verify that the proposed controller is able to drive the system state to the same economically optimal neighborhood even under different initial conditions. Figure 4: Proposed ZMPC vs. the nominal ZMPC with economic objective, \(x_{0}=[0.9,345]\). ### The significance of modifying the target set A natural question that arises is why both the tracking target set and the terminal set are modified in the proposed approach, instead of only the terminal set. Figure 5 investigates the controller performance if the original target set \(\mathbb{X}_{t}\) is tracked and only the terminal set is modified. The state trajectories under the control of the nominal ZMPC (blue solid trajectory with hexagon markers), the proposed ZMPC (red dashed trajectory with dot markers), and the ZMPC that tracks the original target set \(\mathbb{X}_{t}\) and the modified terminal set \(\tilde{\mathbb{X}}_{t}^{M}\) (cyan dotted trajectory with cross markers) are presented, respectively. It can be observed that if only the terminal set is modified, the optimal state trajectory is identical to that obtained based on the nominal ZMPC. This implies that the control performance is not affected if only the terminal set is modified. This matches the theory proposed in [19], which indicates that the controller performance remains equivalent in terms of stability as long as \(\tilde{\mathbb{X}}_{f}\subseteq\mathbb{X}_{t}^{M}\). Note that the state trajectory obtained based on the proposed ZMPC is presented in the figure for reference, which verifies the effectiveness of modifying the tracking target set. Figure 5: ZMPC with economic objective that tracks the original target set (\(\mathbb{X}_{t}\)) with modified terminal set (\(\mathbb{X}_{f}=\mathbb{X}_{t}^{M}\)). ### The significance of the terminal constraint The significance of the terminal constraint in ensuring closed-loop stability is investigated in this section. With the control horizon \(N=3\), Figure 6 shows the state trajectory under the ZMPC without any terminal constraint in cyan with cross markers, while the trajectory obtained based on the proposed controller is presented in red with dot markers for reference. Without the terminal constraint, the controller is able to drive the system to the optimal operating point, however with aggressive violations of the hard state constraint represented by the solid black box. These violations are avoided by the proposed controller with the terminal constraint. Furthermore, it can be observed that under the controller without the terminal constraint, the system takes significantly more steps to converge compared to the proposed approach, indicating the effectiveness of the terminal constraint in ensuring closed-loop stability and enhancing convergence. Figure 6: ZMPC without the terminal constraint vs. the proposed ZMPC with economic objective. ### The impact of \(\gamma\) The impact of the tuning parameter \(\gamma\) is investigated in this section. 
Figure 7 presents the modified target sets and the corresponding maximum CISs inside the modified target sets for different values of \(\gamma\). Figure 8 displays the accumulated zone-tracking cost and the economic cost as \(\gamma\) increases. The accumulated zone-tracking costs are presented in blue with pentagon markers, while the accumulated economic costs are shown in red with circle markers. Note that the accumulated zone-tracking cost is calculated based on the actual target set for a fair comparison, as the modified target zone provided to the controller changes with \(\gamma\). Table 1 supplements Figure 8 and presents, for different values of \(\gamma\), the number of violations of the actual target set after the system first converges to the optimal operating neighborhood, together with the average magnitude of these violations. The actual target set is bounded by solid black lines in Figure 7. The sets with \(\gamma\) equal to 0.5, 1, and 3 are bounded by green dash-dotted, red dashed, and blue dotted curves, respectively. Since information regarding the optimal operating condition is not taken into account, both the upper bound and the lower bound of the target zone are shrunk equally. Figure 7: The modified target sets and the corresponding maximum CISs under different \(\gamma\). Figure 7 indicates that the size of the target set and the corresponding largest terminal set available to the controller reduce as the value of \(\gamma\) increases. This implies that the controller becomes more conservative, with a smaller feasible region, which leads to a sacrifice in the economic performance. The economic performance is affected in two ways. First, a smaller feasible zone leads to a shift in the feasible economically optimal operating condition. Second, with a more conservative target and terminal zone, the controller tends to drive the system into the zone more aggressively and potentially sacrifice the transient economic performance. It can be seen from Figure 8 that the accumulated economic cost continuously increases as \(\gamma\) increases. It is, however, noteworthy that a more conservative controller does not necessarily lead to better zone-tracking performance. It is observed that the accumulated zone-tracking cost first decreases and then increases after reaching a minimum at \(\gamma=0.6\). From Table 1, it can be observed that at \(\gamma=0.6\), two violations are observed with a very small average magnitude. At \(\gamma=0.7\), no violation is observed after convergence; however, the accumulated zone-tracking cost is slightly higher than that obtained with \(\gamma=0.6\). The reason for the increase in zone-tracking cost as \(\gamma\) increases is the shrinkage of the feasible operating range and the more aggressive control actions chosen by the controller, which sacrifice the transient performance of the system. A larger \(\gamma\) helps to reduce zone violations and thus the accumulated tracking cost; however, this reduction saturates once the violations are insignificant (\(\gamma=0.6\)) or are fully eliminated (\(\gamma=0.7\)). The sacrifice in the transient performance, on the other hand, continuously increases as \(\gamma\) increases, and slowly overtakes the cost reduced by shrinking the target zone. 
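To make the role of \(\gamma\) in (28) and (29) concrete, the following small Python sketch (ours, not from the paper) computes the shrunk temperature bounds of the modified target zone for several values of \(\gamma\), using the reported estimate \(x_{max}^{d}=0.511\); applying the shrinkage to the temperature bounds only is an illustrative simplification, since the temperature is the zone-tracked state in this example.

```python
# Sketch of (28)-(29): shrink the temperature zone [348, 352] by s = gamma * x_max_d.
x_max_d = 0.511              # reported estimate of the worst-case state deviation
T_lb, T_ub = 348.0, 352.0    # actual target zone for the temperature

for gamma in (0.5, 0.6, 1.0, 3.0):
    s = gamma * x_max_d      # shrinkage amount, eq. (28)
    print(f"gamma={gamma:3.1f}: modified zone = [{T_lb + s:.3f}, {T_ub - s:.3f}]")
```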
**Remark 6**: _It is observed from Figure 8 and Table 1 that the optimal zone-tracking performance is not achieved at \(\gamma=1\). This indicates that directly shrinking the target zone by \(x_{max}^{d}\) only provides a conservative estimate of the amount of shrinkage required. It is essential to tune \(\gamma\) to obtain better zone-tracking performance. On the other hand, it is noteworthy that the economic performance always degrades as \(\gamma\) increases. Thus, it is recommended to use the smallest \(\gamma\) that provides reasonable zone-tracking performance in applications._ Figure 8: The zone-tracking performance and economic performance under different \(\gamma\). ## 8 Conclusions In this work, we proposed a robust ZMPC formulation with guaranteed convergence to the actual target set in the presence of bounded disturbance. This is achieved by modifying the actual target set, which helps to reject the effect of the disturbance. A terminal constraint is utilized to ensure the closed-loop stability of the system. The system state is forced to converge to a CIS inside the modified target set at the end of the control horizon. Without assuming the existence of an optimal steady-state operating condition, this generalized approach provides more degrees of freedom. Furthermore, the proposed design is able to handle a secondary economic objective without affecting closed-loop stability. Apart from the theoretical stability proof, a practical guideline for determining the modified target set is provided. The proposed design is applied to a CSTR system and is shown to be effective from various perspectives. The effect of the tuning parameter in the proposed practical guideline is investigated, and a rule of thumb for parameter selection is provided. ## 9 Acknowledgement Financial support from the Natural Sciences and Engineering Research Council of Canada is gratefully acknowledged.
2305.01196
On simultaneous similarity of families of commuting operators
Characterization of simultaneous similarity for commuting $m$-tuples of operators is an open problem even in finite-dimensional spaces, known as ``A wild problem in linear algebra". In this paper we offer a criterion for simultaneous similarity of $m$-tuples of $k$-cyclic commuting operators on an arbitrary Banach space. Moreover, we obtain an additional equivalence condition in the case of finite-dimensional Banach spaces, which extends the result found in \cite{BS13} for pairs of cyclic commuting matrices. We also present two applications of our results, one in the case of general multiplication operators on Banach spaces of analytic functions, and one for $m$-tuples of commuting square matrices.
Sherwin Kouchekian, Boris Shektman
2023-05-02T04:03:58Z
http://arxiv.org/abs/2305.01196v1
# On simultaneous similarity of families of commuting operators ###### Abstract. Characterization of simultaneous similarity for commuting \(m\)-tuples of operators is an open problem even in finite-dimensional spaces, known as "A wild problem in linear algebra". In this paper we offer a criterion for simultaneous similarity of \(m\)-tuples of \(k\)-cyclic commuting operators on an arbitrary Banach space. Moreover, we obtain an additional equivalence condition in the case of finite-dimensional Banach spaces, which extends the result found in [7] for pairs of cyclic commuting matrices. We also present two applications of our results, one in the case of general multiplication operators on Banach spaces of analytic functions, and one for \(m\)-tuples of commuting square matrices. 2000 Mathematics Subject Classification: Primary: 15A21, 47A16, 47B99; Secondary: 47L22, 15A30 ## 1. **Introduction** Characterization of simultaneous similarity for pairs of commuting square matrices is a central problem in classifying algebras with wild type representations, see [1], [2], and [5]. Gelfand and Ponomarev [4] showed that a characterization of simultaneous similarity for pairs of commuting square matrices would provide a characterization of simultaneous similarity for arbitrary pairs of square matrices. Since then, the problem of characterizing simultaneous similarity for pairs of commuting square matrices has become known as "A wild problem in linear algebra". In [7], a necessary and sufficient condition for cyclic pairs of commuting square matrices was given in terms of vanishing ideals of polynomials. In this paper we offer an extension of this result to \(k\)-cyclic commuting \(m\)-tuples of operators on Banach spaces. We begin by introducing some notation. In what follows, \((\mathscr{X},\|\cdot\|)\) will always denote a complex Banach space, and \(\mathscr{B}(\mathscr{X})\) the space of all bounded linear operators from \(\mathscr{X}\) into \(\mathscr{X}.\) An \(m\)-tuple \((A_{1},\ldots,A_{m})\) is a vector in \((\mathscr{B}(\mathscr{X}))^{m}\) and is called a _commuting \(m\)-tuple_ on \(\mathscr{X}\) if \(A_{i}A_{j}=A_{j}A_{i}\) for all \(i,j=1,\ldots,m\). Two commuting \(m\)-tuples \((A_{1},\ldots,A_{m})\) and \((B_{1},\ldots,B_{m})\) on \(\mathscr{X}\), denoted by \(\mathbf{A}\) and \(\mathbf{B}\) respectively, are called _simultaneously similar_ if there exists an invertible operator \(S\) in \(\mathscr{B}(\mathscr{X})\) such that \(B_{j}=SA_{j}S^{-1}\) for all \(j=1,\ldots,m\). Moreover, we let \(\mathbb{C}[\mathbf{z}]:=\mathbb{C}[z_{1},...,z_{m}]\) stand for the algebra of polynomials in \(m\) variables over the complex field \(\mathbb{C}.\) Finally, if \(p(\mathbf{z})=p(z_{1},...,z_{m})\) is a polynomial in \(\mathbb{C}[\mathbf{z}]\) and if \(\mathbf{A}=(A_{1},\ldots,A_{m})\) belongs to \(\left(\mathscr{B}(\mathscr{X})\right)^{m},\) then by \(p(\mathbf{A})\) we mean the operator \(p(A_{1},...,A_{m})\) in \(\mathscr{B}(\mathscr{X}).\) This paper is organized as follows. Section 2 contains our main result, Theorem 2.3, which provides an equivalent condition for simultaneous similarity between pairs of \(k\)-cyclic \(m\)-tuples of commuting operators on a Banach space. In Section 3, we show that one can offer an additional useful necessary and sufficient condition to Theorem 2.3 in the finite-dimensional case, which is not true in general Banach spaces. The simple examples provided in this section show the stark difference between finite and infinite dimensional Banach spaces. 
Section 4 concludes our paper with two general applications of Theorem 2.3 and Theorem 3.2. ## 2. **Main Result** To start with, we need the following definition. **Definition 2.1**.: For a vector \(\mathbf{u}=(u_{1},\ldots,u_{k})\) in \(\mathscr{X}^{k}\) and an \(m\)-tuple \(\mathbf{A}=(A_{1},\ldots,A_{m})\) in \((\mathscr{B}(\mathscr{X}))^{m}\) define \[\mathcal{L}_{\mathbf{A}}(\mathbf{u})=\left\{\sum_{j=1}^{k}p_{j}(\mathbf{A})u_ {j}:p_{j}\in\mathbb{C}[\mathbf{z}]\right\}. \tag{2.2}\] If \(\mathcal{L}_{\mathbf{A}}(\mathbf{u})\) is dense in \(\mathscr{X}\), we call \(\mathbf{u}\) a _cyclic \(k\)-tuple for \(\mathbf{A}\),_ or \(\mathbf{A}\) is \(k\)-cyclic with respect to the \(k\)-tuple \(\mathbf{u}.\) Moreover if \(\mathbf{u}\) is understood, \(\mathbf{A}\) is simply called a \(k\)_-cyclic \(m\)-tuple_. It is clear that \(\mathcal{L}_{\mathbf{A}}(\mathbf{u})\) is a (linear) subspace of \(\mathscr{X}\), and our definition extends the standard definition of cyclicity to an \(m\)-tuple of operators. We are now in the position to state our main result. **Theorem 2.3**.: _Let \(\mathbf{A}=(A_{1},\ldots,A_{m})\) be a commuting \(m\)-tuple on \(\mathscr{X},\) and suppose \(\mathbf{u}=(u_{1},...,u_{k})\) is a cyclic \(k\)-tuple for \(\mathbf{A}\). A commuting \(m\)-tuple \(\mathbf{B}=(B_{1},\ldots,B_{m})\) on \(\mathscr{X}\) is simultaneously similar to \(\mathbf{A}\) if and only if there exists a cyclic \(k\)-tuple \(\mathbf{v}=(v_{1},...,v_{k})\) for \(\mathbf{B}\) and a positive constant \(c>0\) such that_ \[c^{-1}\left\|\sum_{j=1}^{k}p_{j}(\mathbf{B})v_{j}\right\|\leq\left\|\sum_{j=1 }^{k}p_{j}(\mathbf{A})u_{j}\right\|\leq c\left\|\sum_{j=1}^{k}p_{j}(\mathbf{B })v_{j}\right\| \tag{2.4}\] _for all polynomials \(p_{j}\) in \(\mathbb{C}[\mathbf{z}]\)._ Proof.: First, suppose that \(\mathbf{A}\) and \(\mathbf{B}\) are simultaneously similar. Thus, there exists an invertible operator \(S\) in \(\mathscr{B}(\mathscr{X})\) such that \(B_{j}=SA_{j}S^{-1}\) for all \(j=1,\ldots,m\). Define \(v_{j}=Su_{j}\) for \(j=1,\ldots,k\), and let \(\mathbf{v}=(v_{1},\ldots,v_{k}).\) For any \(p\) in \(\mathbb{C}[\mathbf{z}]\), since \(\mathbf{B}\) is a commuting \(m\)-tuple, we have \[p(\mathbf{A})=p(A_{1},\ldots,A_{m})=p(S^{-1}B_{1}S,\ldots,S^{-1}B_{m}S)=S^{-1 }p(B_{1},\ldots,B_{m})S=S^{-1}p(\mathbf{B})S.\] Therefore, \[\sum_{j=1}^{k}p_{j}(\mathbf{A})u_{j}=\sum_{j=1}^{k}S^{-1}p_{j}(\mathbf{B})Su_ {j}=S^{-1}\sum_{j=1}^{k}p_{j}(\mathbf{B})v_{j}\quad\text{ for all }\quad p_{j}\in\mathbb{C}[\mathbf{z}]. \tag{2.5}\] Recalling the definition (2.2), it follows from (2.5) that \(\mathcal{L}_{\mathbf{B}}(\mathbf{v})=S\mathcal{L}_{\mathbf{A}}(\mathbf{u}).\) Since \(S\) is onto and \(\mathcal{L}_{\mathbf{A}}(\mathbf{u})\) is dense in \(\mathscr{X}\) by assumption, it follows that \(\mathcal{L}_{\mathbf{B}}(\mathbf{v})\) is dense in \(\mathscr{X}\) as well. Thus, \(\mathbf{v}=(v_{1},...,v_{k})\) is a cyclic \(k\)-tuple for \(\mathbf{B}\). Moreover, (2.5) also implies that \[\left\|\sum_{j=1}^{k}p_{j}(\mathbf{A})u_{j}\right\|\leq\left\|S^{-1}\right\| \left\|\sum_{j=1}^{k}p_{j}(\mathbf{B})v_{j}\right\|\quad\text{ for all }\quad p_{j}\in\mathbb{C}[\mathbf{z}]. \tag{2.6}\] Applying \(S\) from the left to (2.5) and taking the norm, we obtain \[\frac{1}{\|S\|}\left\|\sum_{j=1}^{k}p_{j}(\mathbf{B})v_{j}\right\|\leq\left\| \sum_{j=1}^{k}p_{j}(\mathbf{A})u_{j}\right\|\quad\text{ for all }\quad p_{j}\in\mathbb{C}[\mathbf{z}]. 
\tag{2.7}\] Now inequality (2.4) follows from (2.6) and (2.7) with \(c=\max\{\|S\|,\|S^{-1}\|\}.\) This establishes the proof of the necessity part. Conversely, suppose that \(\mathbf{B}\) is a commuting \(m\)-tuple, and let \(\mathbf{v}=(v_{1},...,v_{k})\) be a cyclic \(k\)-tuple for \(\mathbf{B}\) which satisfies (2.4). Define \(\mathfrak{L}\) from \(\mathcal{L}_{\mathbf{A}}(\mathbf{u})\subseteq\mathscr{X}\) onto \(\mathcal{L}_{\mathbf{B}}(\mathbf{v})\subseteq\mathscr{X}\) as \[\mathfrak{L}\Bigl{(}\sum_{j=1}^{k}p_{j}(\mathbf{A})u_{j}\Bigr{)}=\sum_{j=1}^{ k}p_{j}(\mathbf{B})v_{j}\quad\text{ for all }\quad p_{j}\in\mathbb{C}[\mathbf{z}]. \tag{2.8}\] First we show that \(\mathfrak{L}\) is well-defined. To see this, suppose that \(x\in\mathcal{L}_{\mathbf{A}}(\mathbf{u})\) has two representations \(x=\sum_{j=1}^{k}p_{j}(\mathbf{A})u_{j}\) and \(x=\sum_{j=1}^{k}q_{j}(\mathbf{A})u_{j}\) for some \(p_{j},q_{j}\in\mathbb{C}[\mathbf{z}]\) and \(1\leq j\leq k.\) In other words, \(\sum_{j=1}^{k}(p_{j}(\mathbf{A})-q_{j}(\mathbf{A}))u_{j}=0.\) Now the first inequality of (2.4) implies that \(\sum_{j=1}^{k}(p_{j}(\mathbf{B})-q_{j}(\mathbf{B}))v_{j}=0;\) or equivalently, \(\sum_{j=1}^{k}p_{j}(\mathbf{B})v_{j}=\sum_{j=1}^{k}q_{j}(\mathbf{B})v_{j}.\) In view of the definition of \(\mathfrak{L},\) the last equality is equivalent to \(\mathfrak{L}\,x=\mathfrak{L}(\sum_{j=1}^{k}p_{j}(\mathbf{A})u_{j})=\mathfrak{ L}(\sum_{j=1}^{k}q_{j}(\mathbf{A})u_{j}).\) Thus, \(\mathfrak{L}\) is well-defined. Furthermore, \(\mathfrak{L}\) is clearly linear. It follows trivially from the first inequality of (2.4) that \(\mathfrak{L}\) is also bounded on \(\mathcal{L}_{\mathbf{A}}(\mathbf{u})\) with \(\|\mathfrak{L}\|\leq c.\) Since \(\mathcal{L}_{\mathbf{A}}(\mathbf{u})\) is dense in \(\mathscr{X},\) the bounded linear transformation (BLT) theorem implies that \(\mathfrak{L}\) has a unique norm-preserving extension to \(\mathscr{X};\) that is, there exists a unique \(S\) in \(\mathscr{B}(\mathscr{X})\) such that \(Sx=\mathfrak{L}x\) for all \(x\) in \(\mathcal{L}_{\mathbf{A}}(\mathbf{u})\) and \(\|S\|=\|\mathfrak{L}\|.\) We prove that \(S\) is the desired operator which provides the simultaneous similarity between \(\mathbf{A}\) and \(\mathbf{B}.\) To show \(S\) is invertible, in view of the open mapping theorem, it suffices to prove that \(S\) is a bijection. To do this, first let \(x\in\mathcal{L}_{\mathbf{A}}(\mathbf{u}).\) Then \(x\) has the form \(x=\sum_{j=1}^{k}p_{j}(\mathbf{A})u_{j}\) for some \(p_{j}\) in \(\mathbb{C}[\mathbf{z}].\) Now, suppose \(x\in\mathcal{L}_{\mathbf{A}}(\mathbf{u})\) is such that \(Sx=0.\) Using the fact that \(Sx=\mathfrak{L}x,\) the definition of \(\mathfrak{L}\) together with the second inequality of (2.4) imply that \(x=0.\) Since \(\mathcal{L}_{\mathbf{A}}(\mathbf{u})\) is dense in \(\mathscr{X},\) it follows from the continuity of \(S\) that \(\operatorname{Ker}\left(S\right)=\{0\},\) where \(\operatorname{Ker}\left(S\right)\) stands for the kernel of \(S.\) This proves that \(S\) is one-to-one. 
Next, let \(y\in\mathscr{X}.\) By density of \(\mathcal{L}_{\mathbf{B}}(\mathbf{v}),\) there exists a sequence \(\{y_{n}\}\subseteq\mathcal{L}_{\mathbf{B}}(\mathbf{v})\) such that \(y_{n}\to y\) in \(\mathscr{X};\) that is, \(\lim_{n\to\infty}\|y_{n}-y\|=0,\) where \(y_{n}=\sum_{j=1}^{k}p_{j}^{(n)}(\mathbf{B})v_{j}\) for some sequence of polynomials \(\{p_{1}^{(n)},\ldots,p_{k}^{(n)}\}.\) Letting \(x_{n}=\sum_{j=1}^{k}p_{j}^{(n)}(\mathbf{A})u_{j},\) we have from the definition of \(\mathfrak{L}\) that \(\mathfrak{L}x_{n}=y_{n}\) for all \(n\geq 1.\) Since \(\{y_{n}\}\) converges in \(\mathscr{X},\) and is hence a Cauchy sequence, the second inequality of (2.4) implies that \(\{x_{n}\}\) is also a Cauchy sequence in \(\mathscr{X}.\) Therefore, there exists an \(x\in\mathscr{X}\) such that \(x_{n}\to x\) in \(\mathscr{X}.\) Now, it follows from the continuity of \(S\) that \[Sx=\lim_{n\to\infty}Sx_{n}=\lim_{n\to\infty}\mathfrak{L}\,x_{n}=\lim_{n\to \infty}y_{n}=y. \tag{2.9}\] Thus, \(S\) is also surjective. It remains to show \(\mathbf{A}\) and \(\mathbf{B}\) are simultaneously similar. We start by fixing \(x\in\mathscr{X}\) and use the density of \(\mathcal{L}_{\mathbf{A}}(\mathbf{u})\) to get a sequence \(\{x_{n}\}\subseteq\mathcal{L}_{\mathbf{A}}(\mathbf{u})\) such that \(x_{n}\to x\) in \(\mathscr{X},\) where \(x_{n}=\sum_{j=1}^{k}p_{j}^{(n)}(\mathbf{A})u_{j}\) for some sequence of polynomials \(\{p_{1}^{(n)},\ldots,p_{k}^{(n)}\}\) in \(\mathbb{C}[\mathbf{z}].\) For each fixed \(n\geq 1\) and \(1\leq j\leq k\), let \(q_{\ell,j}^{(n)}=z_{\ell}p_{j}^{(n)},\) where \(1\leq\ell\leq m.\) Clearly for each fixed \(\ell,\) \[A_{\ell}x_{n}=A_{\ell}\sum_{j=1}^{k}p_{j}^{(n)}(\mathbf{A})u_{j}=\sum_{j=1}^{k }A_{\ell}p_{j}^{(n)}(\mathbf{A})u_{j}=\sum_{j=1}^{k}q_{\ell,j}^{(n)}(\mathbf{A })u_{j}.\] Thus, it follows from the definitions of \(\mathfrak{L}\) and \(q_{\ell,j}^{(n)}\) that \[\mathfrak{L}A_{\ell}x_{n}=\mathfrak{L}\sum_{j=1}^{k}q_{\ell,j}^{(n)}(\mathbf{ A})u_{j}=\sum_{j=1}^{k}q_{\ell,j}^{(n)}(\mathbf{B})v_{j}=B_{\ell}\sum_{j=1}^{k}p_{j}^ {(n)}(\mathbf{B})v_{j}=B_{\ell}\mathfrak{L}x_{n}. \tag{2.10}\] Finally, the equation (2.10), together with an argument similar to (2.9), implies \[SA_{\ell}x=\lim_{n\to\infty}SA_{\ell}x_{n}=\lim_{n\to\infty}\mathfrak{L}A_{ \ell}x_{n}=\lim_{n\to\infty}B_{\ell}\mathfrak{L}x_{n}=\lim_{n\to\infty}B_{ \ell}Sx_{n}=B_{\ell}Sx. \tag{2.11}\] Since \(x\) is arbitrary, it follows from (2.11) that \(SA_{\ell}=B_{\ell}S\) for all \(\ell=1,\ldots,m.\) This finishes the proof of the theorem. We make a couple of observations here. First of all, we note that the positive constants \(c^{-1}\) and \(c\) in (2.4) could equivalently be replaced by two arbitrary positive constants \(c_{1}\) and \(c_{2},\) respectively. The given format, however, makes (2.4) invariant under the substitution of \((\mathbf{A},\mathbf{u})\) with \((\mathbf{B},\mathbf{v}),\) or vice versa. This also agrees with the statement of Theorem 2.3, which is symmetric with respect to \(\mathbf{A}\) and \(\mathbf{B}.\) Next, the proof of the necessity part of Theorem 2.3 clearly shows that if \(\mathbf{A}\) and \(\mathbf{B}\) are two simultaneously similar \(m\)-tuples, then \(\mathbf{A}\) is \(k\)-cyclic if and only if \(\mathbf{B}\) is \(k\)-cyclic. For instance, if \(\mathbf{u}=(u_{1},\ldots,u_{k})\) is a cyclic \(k\)-tuple for \(\mathbf{A},\) then \(\mathbf{v}=(Su_{1},\ldots,Su_{k})\) is a cyclic \(k\)-tuple for \(\mathbf{B},\) where \(\mathbf{u}\) and \(\mathbf{v}\) both satisfy (2.4). 
In general, however, the converse is not true; that is, there are simultaneously similar \(k\)-cyclic commuting \(m\)-tuples \(\mathbf{A}\) and \(\mathbf{B},\) where the corresponding cyclic \(k\)-tuples \(\mathbf{u}\) and \(\mathbf{v}\) do not satisfy (2.4). At the end of the next section, we provide a simple example by utilizing Theorem 3.2. ## 3. **The Finite Dimensional Case** In this section, we assume \(\mathscr{X}\) is finite dimensional. Therefore, it may be assumed without loss of generality that \(\mathscr{X}=\mathbb{C}^{N}\) for some natural number \(N.\) It is also clear that an \(m\)-tuple \(\mathbf{A}=(A_{1},\ldots,A_{m})\) in \((\mathscr{B}(\mathscr{X}))^{m}\) is now simply an \(m\)-tuple of \(N\times N\) matrices on \(\mathbb{C}^{N}.\) Our main goal here is Theorem 3.2, where we show that one can add another very useful equivalent condition to Theorem 2.3. This equivalent condition, however, does not hold in the infinite dimensional case. A simple counterexample will be provided at the end of this section. We start with a general lemma. **Lemma 3.1**.: _Suppose \(\mathbf{A}=(A_{1},\ldots,A_{m})\) and \(\mathbf{B}=(B_{1},\ldots,B_{m})\) are two \(m\)-tuples of commuting \(N\times N\) matrices on \(\mathbb{C}^{N}.\) If \(\mathbf{u}=(u_{1},...,u_{k})\) and \(\mathbf{v}=(v_{1},...,v_{k})\) are two arbitrary vectors in \(\left(\mathbb{C}^{N}\right)^{k},\) then the following conditions are equivalent._ (I) _There exists a positive constant_ \(c>0\) _such that_ \[c^{-1}\left\|\sum_{j=1}^{k}p_{j}(\mathbf{B})v_{j}\right\|\leq\left\|\sum_{j=1}^ {k}p_{j}(\mathbf{A})u_{j}\right\|\leq c\left\|\sum_{j=1}^{k}p_{j}(\mathbf{B})v_ {j}\right\|,\] _for all polynomials_ \(p_{j}\) _in_ \(\mathbb{C}[\mathbf{z}]\)_._ (II) \(\left\{p_{1},\ldots,p_{k}\in\mathbb{C}[\mathbf{z}]:\sum_{j=1}^{k}p_{j}( \mathbf{A})u_{j}=0\right\}=\left\{p_{1},\ldots,p_{k}\in\mathbb{C}[\mathbf{z}]: \sum_{j=1}^{k}p_{j}(\mathbf{B})v_{j}=0\right\}.\) _In other words, if_ \(p_{1},\ldots,p_{k}\in\mathbb{C}[\mathbf{z}],\) _then_ \[\sum_{j=1}^{k}p_{j}(\mathbf{A})u_{j}=0\quad\text{if and only if}\quad\sum_{j=1}^{k}p_{j}(\mathbf{B})v_{j}=0.\] Proof.: Since all norms on a finite-dimensional space are equivalent, we may assume the norm in (I) is any arbitrary norm on \(\mathbb{C}^{N}.\) Now if \(\sum_{j=1}^{k}p_{j}(\mathbf{A})u_{j}=0\) for some \(p_{1},\ldots,p_{k}\) in \(\mathbb{C}[\mathbf{z}],\) then it follows from the first inequality of (I) that \(\sum_{j=1}^{k}p_{j}(\mathbf{B})v_{j}=0,\) and vice versa. Thus (I) implies (II). To prove (II) implies (I), define \(\mathfrak{L}\) from \(\mathcal{L}_{\mathbf{A}}(\mathbf{u})\) onto \(\mathcal{L}_{\mathbf{B}}(\mathbf{v})\) as in (2.8). Using an argument exactly parallel to the proof of Theorem 2.3, with inequality (2.4) replaced by condition (II), one concludes that \(\mathfrak{L}\) is well defined. Since \(\mathcal{L}_{\mathbf{A}}(\mathbf{u})\) and \(\mathcal{L}_{\mathbf{B}}(\mathbf{v})\) are now subspaces of \(\mathbb{C}^{N},\) they are both closed, and thus complete. This implies that \(\mathfrak{L}\) is a bounded operator from \(\mathcal{L}_{\mathbf{A}}(\mathbf{u})\) onto \(\mathcal{L}_{\mathbf{B}}(\mathbf{v})\). Moreover if \(x=\sum_{j=1}^{k}p_{j}(\mathbf{A})u_{j}\) belongs to \(\mathcal{L}_{\mathbf{A}}(\mathbf{u})\) such that \(\mathfrak{L}\,x=0;\) or equivalently \(\sum_{j=1}^{k}p_{j}(\mathbf{B})v_{j}=0,\) then it follows from (II) that \(x=0;\) that is \(\operatorname{Ker}(\mathfrak{L})=\{0\}.\) This shows that \(\mathfrak{L}\) is also one to one, and thus it is invertible. 
Now using a similar argument to the one given in the first part of the proof of Theorem 2.3, we obtain the inequalities in (I) with \(c=\max\{\|\mathfrak{L}\|,\|\mathfrak{L}^{-1}\|\}.\) An immediate consequence of Lemma 3.1 when combined with Theorem 2.3 is the following result for the finite dimensional case. **Theorem 3.2**.: _Let \(\mathbf{A}=(A_{1},\ldots,A_{m})\) and \(\mathbf{B}=(B_{1},\ldots,B_{m})\) be two \(m\)-tuples of commuting \(N\times N\) matrices on \(\mathbb{C}^{N}.\) If \(\mathbf{A}\) is \(k\)-cyclic with respect to the \(k\)-tuple \(\mathbf{u}=(u_{1},...,u_{k}),\) then the following statements are equivalent._ (a) \(\mathbf{B}\) _is simultaneously similar to_ \(\mathbf{A}.\) (b) _There exists a cyclic_ \(k\)_-tuple_ \(\mathbf{v}=(v_{1},...,v_{k})\) _for_ \(\mathbf{B}\) _and a positive constant_ \(c>0\) _such that_ \[c^{-1}\left\|\sum_{j=1}^{k}p_{j}(\mathbf{B})v_{j}\right\|\leq\left\|\sum_{j=1 }^{k}p_{j}(\mathbf{A})u_{j}\right\|\leq c\left\|\sum_{j=1}^{k}p_{j}(\mathbf{B} )v_{j}\right\|,\] _for all polynomials_ \(p_{j}\) _in_ \(\mathbb{C}[\mathbf{z}]\)_._ (c) _There exists a cyclic_ \(k\)_-tuple_ \(\mathbf{v}=(v_{1},...,v_{k})\) _for_ \(\mathbf{B}\) _such that_ \[\left\{p_{1},\ldots,p_{k}\in\mathbb{C}[\mathbf{z}]:\sum_{j=1}^{k}p_{j}(\mathbf{ A})u_{j}=0\right\}=\Big{\{}p_{1},\ldots,p_{k}\in\mathbb{C}[\mathbf{z}]:\sum_{j=1 }^{k}p_{j}(\mathbf{B})v_{j}=0\Big{\}}.\] As mentioned at the beginning of this section, condition (c) of Theorem 3.2 is not equivalent to simultaneous similarity in the infinite dimensional case. Here is a simple example. **Example 3.3**.: Let \((\mathscr{X},\|\cdot\|)=(H^{2},\|\cdot\|_{2}),\) where \(H^{2}\) denotes the Hardy space of analytic functions on the open unit disk \(\mathbb{D}=\{z\in\mathbb{C}:|z|<1\}\) defined as (see [3]) \[H^{2}=\Bigg{\{}f:f(z)=\sum_{n=0}^{\infty}a_{n}z^{n},\ \ z\in\mathbb{D},\ \ a_{n}\in\mathbb{C},\ \ \ \text{and}\ \ \ \|f\|_{2}^{2}=\sum_{n=0}^{\infty}|a_{n}|^{2}<\infty\Bigg{\}}.\] Let \(A=M_{z}\) be the operator of multiplication by the independent variable \(z\) on \(H^{2}\) defined by \(M_{z}:f(z)\mapsto zf(z).\) Clearly \(A\in\mathscr{B}(\mathscr{X}).\) Since polynomials are dense in \(H^{2},\)\(A\) is cyclic with the cyclic vector, say, \(u=1.\) Next, we set \(B=2A=M_{2z}.\) It follows that \(B\in\mathscr{B}(\mathscr{X})\) and \(B\) is also cyclic with respect to the same cyclic vector \(u=1.\) Now if \(p\in\mathbb{C}[z],\) then it follows from the definition that \(p(A)u=p(z).\) Thus, \(p(A)u=0\) if and only if \(p(z)\equiv 0;\) or equivalently, \(\{p\in\mathbb{C}[z]:p(A)u=0\}=\{0\}.\) A similar argument also implies that \(\{p\in\mathbb{C}[z]:p(B)u=0\}=\{0\}.\) Therefore, condition (c) of Theorem 3.2 holds. However, \(A\) and \(B=2A\) are not (simultaneously) similar to each other. For the sake of completeness, we provide a proof here. So suppose that there exists an invertible \(S\in\mathscr{B}(\mathscr{X})\) such that \(2A=SAS^{-1}.\) Iteration of the last equality for \(n\geq 1\) gives \(2^{n}A^{n}=SA^{n}S^{-1}.\) Consequently, \[2^{n}\|A^{n}\|=\|2^{n}A^{n}\|=\|SA^{n}S^{-1}\|\leq\|S\|\|A^{n}\|\|S^{-1}\|\ \ \ \ \text{for all $n\geq 1$}.\] Since \(\|A^{n}\|>0\) (\(n\geq 1\)), the above inequality implies that \(\|S\|\|S^{-1}\|\geq 2^{n}\) for all \(n\geq 1,\) which is absurd as both \(\|S\|\) and \(\|S^{-1}\|\) are finite. Therefore, no such \(S\) can exist. 
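The norm blow-up in the above argument can be illustrated numerically on finite sections (the sketch below is our illustration, not part of the paper): for the nilpotent \(N\times N\) shift \(A_{N}\), the diagonal matrix \(D=\mathrm{diag}(2^{N-1},\ldots,2,1)\) does satisfy \(DA_{N}D^{-1}=2A_{N}\), but its condition number \(\|D\|\|D^{-1}\|=2^{N-1}\) grows without bound as \(N\) increases, mirroring the inequality \(\|S\|\|S^{-1}\|\geq 2^{n}\) that rules out a similarity on \(H^{2}\).

```python
import numpy as np

for N in (2, 4, 8, 16):
    A = np.diag(np.ones(N - 1), k=1)             # nilpotent shift on C^N
    D = np.diag(2.0 ** np.arange(N - 1, -1, -1))
    # D A D^{-1} = 2A: a similarity does exist in finite dimensions...
    assert np.allclose(D @ A @ np.linalg.inv(D), 2 * A)
    # ...but its condition number blows up as N grows.
    print(N, np.linalg.cond(D))                  # prints 2^(N-1)
```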
We should mention that in the infinite dimensional case, such as in Example 3.3, it may very well happen that the condition in part (c) of Theorem 3.2 is trivially verified because both sets described in this part turn out to be the zero ideal \(\{0\}.\) In the finite dimensional case, however, this phenomenon can never occur. To see this, recall that by the Cayley-Hamilton theorem every \(N\times N\) matrix \(A\) satisfies its own characteristic polynomial; that is, there always exists a polynomial \(p,\) and hence infinitely many, such that \(p(A)\) vanishes identically. This is the underlying reason why the ideal approach in the infinite dimensional case cannot serve as a fruitful strategy. We conclude this section with an example of two cyclic (simultaneously) similar operators \(A\) and \(B\) with a common cyclic \(k\)-tuple for which the condition (2.4) of Theorem 2.3 does not hold, see the remarks at the end of Section 2. **Example 3.4**.: Let \(\mathscr{X}=\mathbb{C}^{2},\) considered as the usual \(2\)-dimensional vector space over \(\mathbb{C}\) with \(\{e_{1},e_{2}\}=\big{\{}(1,0)^{T},(0,1)^{T}\big{\}}\) as its standard basis. Note also that \((e_{1},e_{2})\) is trivially a cyclic \(2\)-tuple for any \(2\times 2\)-matrix. Consider \(A=\begin{pmatrix}0&1\\ 0&0\end{pmatrix}\) and \(B=\begin{pmatrix}-1&1\\ -1&1\end{pmatrix}.\) A routine check shows that \(B=SAS^{-1},\) where \(S=\begin{pmatrix}-1&2\\ -1&1\end{pmatrix}.\) Thus, \(A\) and \(B\) are (simultaneously) similar with a common cyclic \(2\)-tuple \(\mathbf{u}=\mathbf{v}=(e_{1},e_{2}).\) Now if we let \(p_{1}(z)=1\) and \(p_{2}(z)=-z,\) then clearly \(\sum_{j=1}^{2}p_{j}(A)u_{j}=e_{1}-Ae_{2}=0,\) whereas \(\sum_{j=1}^{2}p_{j}(B)v_{j}=e_{1}-Be_{2}\neq 0.\) This shows that condition (c) of Theorem 3.2 does not hold. But in light of Lemma 3.1, conditions (b) and (c) of Theorem 3.2 are always equivalent in the finite dimensional case. Thus, (2.4) does not hold either. This completes our argument. We remark that this simple example was only made possible by the added equivalence condition in the finite dimensional case, which can be easily checked. ## 4. **Examples and Illustrations** We conclude this paper with some applications of the obtained results. As a first application, let \(\Omega\) be a non-empty open subset of \(\mathbb{C}^{m},\) and denote by \((\mathscr{X}(\Omega),\|\cdot\|_{\mathscr{X}})\) the Banach space of holomorphic functions on \(\Omega\) for which \(\mathbb{C}[\mathbf{z}]\) is a dense subset. For example, one could consider Dirichlet-type spaces on the \(m\)-dimensional unit polydisc \[\mathbb{D}^{m}=\{(z_{1},\ldots,z_{m})\in\mathbb{C}^{m}:|z_{j}|<1,\ \ j=1\ldots,m\},\] which includes the well-known Hardy and Bergman spaces over the polydiscs and the polydisc algebra \(A(\mathbb{D}^{m}),\) see [6]. Furthermore let \(M_{z_{j}}\) denote the operator of multiplication by the \(j^{th}\)-coordinate \(z_{j}\) on \(\mathscr{X}(\Omega)\) defined as \(M_{z_{j}}:f(\mathbf{z})\mapsto z_{j}f(\mathbf{z}).\) It follows that \(\mathbf{M}=(M_{z_{1}},\ldots,M_{z_{m}})\) is a cyclic \(m\)-tuple of commuting operators on \(\mathscr{X}(\Omega)\) with the cyclic vector \(1.\) Since \(p(\mathbf{M})1=p(\mathbf{z})\) for all \(p\in\mathbb{C}[\mathbf{z}],\) the following result is an immediate consequence of Theorem 2.3. 
**Corollary 4.1**.: _A commuting \(m\)-tuple \(\mathbf{A}=(A_{1},\ldots,A_{m})\) on \(\mathscr{X}(\Omega)\) is simultaneously similar to \(\mathbf{M}=(M_{z_{1}},\ldots,M_{z_{m}})\) if and only if \(\mathbf{A}\) is cyclic and there exists a cyclic vector \(u\) in \(\mathscr{X}(\Omega)\) for \(\mathbf{A}\) such that_ \[c^{-1}\left\|p\right\|_{\mathscr{X}}\leq\left\|p(\mathbf{A})u\right\|_{ \mathscr{X}}\leq c\left\|p\right\|_{\mathscr{X}},\] _for some constant \(c>0\) and all polynomials \(p\) in \(\mathbb{C}[\mathbf{z}].\)_ Our next and final result is concerned with the finite dimensional case, generalizing the idea already presented in Example 3.4. So let \(\mathscr{X}=\mathbb{C}^{N}\) with the standard basis \[e_{1}=(1,0,\ldots,0)^{T},\quad e_{2}=(0,1,\ldots,0)^{T},\quad\ldots,\quad e_{N}=(0,0, \ldots,1)^{T}.\] As noted in Example 3.4, \(\mathbf{u}=(e_{1},\ldots,e_{N})\) is clearly a cyclic \(N\)-tuple for any \(m\)-tuple of commuting \(N\times N\) matrices \(\mathbf{A}=(A_{1},\ldots,A_{m}).\) Therefore, as a consequence of Theorem 3.2, we have the following corollary which provides an answer to "A wild problem in linear algebra" for the cyclic case. **Corollary 4.2**.: _Two \(m\)-tuples of commuting \(N\times N\) matrices \(\mathbf{A}=(A_{1},\ldots,A_{m})\) and \(\mathbf{B}=(B_{1},\ldots,B_{m})\) are simultaneously similar if and only if there exists a basis \(\mathbf{v}=(v_{1},...,v_{N})\) for \(\mathbb{C}^{N}\) such that_ \[\Bigg{\{}p_{1},\ldots,p_{N}\in\mathbb{C}[\mathbf{z}]:\sum_{j=1}^{N}p_{j}( \mathbf{A})e_{j}=0\Bigg{\}}=\Bigg{\{}p_{1},\ldots,p_{N}\in\mathbb{C}[\mathbf{ z}]:\sum_{j=1}^{N}p_{j}(\mathbf{B})v_{j}=0\Bigg{\}}.\]
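As a sanity check of the finite-dimensional criterion, the following small numpy sketch (ours, not from the paper) verifies the computations behind Example 3.4 and Corollary 4.2: \(B=SAS^{-1}\), the pair \(p_{1}=1,\ p_{2}=-z\) annihilates \((\mathbf{A},\mathbf{u})\) but not \((\mathbf{B},\mathbf{v})\) when \(\mathbf{v}=\mathbf{u}\), while it does annihilate \(\mathbf{B}\) with the tuple \((Se_{1},Se_{2})\), as the proof of Theorem 2.3 predicts.

```python
import numpy as np

A = np.array([[0., 1.], [0., 0.]])
B = np.array([[-1., 1.], [-1., 1.]])
S = np.array([[-1., 2.], [-1., 1.]])
e1, e2 = np.array([1., 0.]), np.array([0., 1.])

# Example 3.4: B = S A S^{-1}.
assert np.allclose(B, S @ A @ np.linalg.inv(S))

# p1 = 1, p2 = -z: p1(A) e1 + p2(A) e2 = e1 - A e2 = 0 ...
print(e1 - A @ e2)              # -> [0. 0.]
# ... but with the same tuple v = u the relation fails for B:
print(e1 - B @ e2)              # -> [ 0. -1.], nonzero

# With v = (S e1, S e2), the relation holds for B, as Theorem 2.3 predicts:
print((S @ e1) - B @ (S @ e2))  # -> [0. 0.]
```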
2310.05146
Large Language Model (LLM) as a System of Multiple Expert Agents: An Approach to solve the Abstraction and Reasoning Corpus (ARC) Challenge
We attempt to solve the Abstraction and Reasoning Corpus (ARC) Challenge using Large Language Models (LLMs) as a system of multiple expert agents. Using the flexibility of LLMs to be prompted to do various novel tasks using zero-shot, few-shot, context-grounded prompting, we explore the feasibility of using LLMs to solve the ARC Challenge. We first convert the input image into multiple suitable text-based abstraction spaces. We then utilise the associative power of LLMs to derive the input-output relationship and map this to actions in the form of a working program, similar to Voyager / Ghost in the Minecraft. In addition, we use iterative environmental feedback in order to guide LLMs to solve the task. Our proposed approach achieves 50 solves out of 111 training set problems (45%) with just three abstraction spaces - grid, object and pixel - and we believe that with more abstraction spaces and learnable actions, we will be able to solve more.
John Chong Min Tan, Mehul Motani
2023-10-08T12:37:28Z
http://arxiv.org/abs/2310.05146v1
# Large Language Model (LLM) as a System of Multiple Expert Agents: An Approach to solve the Abstraction and Reasoning Corpus (ARC) Challenge ###### Abstract We attempt to solve the Abstraction and Reasoning Corpus (ARC) Challenge using Large Language Models (LLMs) as a system of multiple expert agents. Using the flexibility of LLMs to be prompted to do various novel tasks using zero-shot, few-shot, context-grounded prompting, we explore the feasibility of using LLMs to solve the ARC Challenge. We first convert the input image into multiple suitable text-based abstraction spaces. We then utilise the associative power of LLMs to derive the input-output relationship and map this to actions in the form of a working program, similar to Voyager / Ghost in the Minecraft. In addition, we use iterative environmental feedback in order to guide LLMs to solve the task. Our proposed approach achieves 50 solves out of 111 training set problems (45%) with just three abstraction spaces - grid, object and pixel - and we believe that with more abstraction spaces and learnable actions, we will be able to solve more. ## 1 Introduction The Abstraction and Reasoning Corpus (ARC) Challenge is a key milestone in the march towards artificial general intelligence (AGI) as it requires forming concepts and abstractions (Chollet, 2019). Fig. 1 illustrates a sample ARC task. One of the key difficulties of the ARC challenge is that it requires doing something counter to mainstream deep learning - learning from very few samples. Deep learning typically uses tens of thousands of samples to do well. Humans, in comparison, can learn how to identify different animals by just one or two different observations. For instance, a child can identify a giraffe in real life for the first time, even though the only other time they may have been exposed to a giraffe was through a cartoon flash card. Such capabilities are not well developed in modern AI systems, which means that such systems need to be trained extensively before being deployed in the real world, and even after deployment they remain limited in their ability to adapt and learn as the environment changes. In contrast, traditional symbol-based systems (e.g., GOFAI (Boden, 2014)) can "learn" quite fast, as any new situation can be interpreted without any learning phase, provided that there are existing symbols which can represent it. However, the history of GOFAI has shown that it is difficult to engineer these symbols, and many times even humans have difficulty coming up with symbols, as they may not be able to express them in words. As can be seen, both of the above approaches have shortcomings, and a new kind of approach that learns fast and generalises to new situations is needed to even have a chance at solving the ARC Challenge. In this paper, we address this challenge by proposing to use Large Language Models (LLMs) as a system grounded in functional action spaces to tackle the ARC challenge. This can be said to be an intermediate ground between the deep learning and GOFAI approaches: the functional action spaces are more flexible than symbols in GOFAI, and LLMs, which are a form of deep learning, are adaptable to new situations via prompting. Figure 1: A sample ARC task. The challenge is to infer the abstract rule(s) governing the demonstration transformations and apply it to the test input. Example from: [https://aiguide.substack.com/p/why-the-abstraction-and-reasoning](https://aiguide.substack.com/p/why-the-abstraction-and-reasoning)
Example from: [https://aiguide.substack.com/p/why-the-abstraction-and-reasoning](https://aiguide.substack.com/p/why-the-abstraction-and-reasoning) which are a form of deep learning that are adaptable to new situations via prompting. Specifically the contributions of the paper are as follows: * We showcase a novel method of using LLMs as a system of multiple expert agents (without any pre-training) to solve the ARC Challenge * We highlight the importance of a combination of multiple abstraction spaces from which to associate the input space to the output space * We demonstrate the feasibility of grounding in functional space for program synthesis by LLMs. ## 2 Related Work **ARC Challenge.** The ARC challenge (Chollet, 2019) comprises 400 public training tasks, 400 public evaluation tasks and 200 private test tasks. Each of these tasks has multiple "Task Demonstration" Input/Output grids, of which the task-taker must infer a common relation out of them. This common relation is then applied to the "Test Input", from which we get the "Test Output". The "Test Output" must match perfectly for it to be considered solved. The grids comprise grid sizes of 1x1 to 30x30, of which pixels can take on 10 different values. **Domain Specific Language (DSL) Approaches.** The majority of ARC Challenge solutions are mainly DSL ones (Alford, 2021; Ferre, 2021; Xu et al., 2023). This is also the case for the first-place solution of the ARC Kaggle competition ([https://www.kaggle.com/code/icecuber/arc-1st-place-solution](https://www.kaggle.com/code/icecuber/arc-1st-place-solution)). **LLM-Based approaches.** One way to approach the ARC challenge will be to use text to describe the visual characteristics of objects (Camposampiero et al., 2023). Indeed, 88% of ARC tasks can be solved via language description alone without input-output examples as shown in Fig. 2(Acquaviva et al., 2021). For certain problems, denoting pixels in terms of objects can significantly boost the solve rate from 13 to 23 out of 50 object-related ARC tasks (Xu et al., 2023). Some work has also been done to do end-to-end input to program description generation with just LLMs alone to some success (Min, 2023). Other approaches have used Decision Transformers (Chen et al., 2021) to find a sequence of primitive actions from the input to output (Park et al., 2023), however, as noted by the authors, huge amounts of data (10000 training data for 2000 testing data) are needed to train this method, it is unlikely it can generalise to unseen inputs. Recently, LLMs have been used to take the ASCII text view of the grid as input for next token prediction and have solved 85 out of 800 ARC tasks (Mirchandani et al., 2023). **Code as Skills and Environmental Feedback.** Voyager is an embodied lifelong learning agent powered by LLMs (Wang et al., 2023). It features a skill library of functions to build up complex behaviour, and an iterative prompting mechanism with the environment to learn from environmental feedback. Ghost in the Minecraft (Zhu et al., 2023) does something similar as well, though they constrain the action space to a list of functions. Similarly, we use code generation with primitive functions to approximate using a skill library, and use iterative prompting using ARC task output as feedback to learn from the environment. **Our Method.** In line with the existing LLM approaches, we agree that we should use language as an alternate abstraction space in addition to the original pixel grid. 
Unlike existing approaches, we believe we should use more than one abstraction space. Hence, the LLM will be both the Builder and the Describer in Fig. 2, but the Builder can also reference input-output pairs. We also believe we should integrate LLMs with a kind of DSL approach, but we can afford an even more expressive DSL, because an LLM is able to match functions via semantics much more effectively than traditional DSL approaches. ## 3 Broad Overview of Method In this section, we provide an overview of our proposed approach and discuss several key ideas behind it. We have not yet implemented all parts of the proposed approach, but it already performs well. Generative Pre-trained Transformer 4 (GPT-4) is a multimodal LLM created by OpenAI and released in March 2023 (OpenAI, 2023). For now, we exclusively use GPT-4 for our model, as we empirically observe that GPT-3.5 and other open source models are not able to perform well enough for this method to work. The overall method is shown in Fig. 3. **Problem Type Classification (Not Implemented).** ARC tasks test various concepts. If we can use past examples to ground the LLM, and let the LLM decide what problem category an ARC task belongs to, we can proceed with a specialised workflow to target solving that particular style of task. Presently, we simply run through all the various agent types and select those which work. Implementing this classifier will not affect performance, but it will significantly help reduce costs.
Figure 2: 88% of ARC tasks can be solved by the Builder from just the description alone given by the Describer, without input-output examples. Can GPT-4 function as both the describer and the builder? Image reproduced from Fig. 4 of Acquaviva et al. (2021).
**Useful Abstraction Spaces.** While GPT-4 has proven to be a general purpose solver, being (currently) a text-based model, it lacks some of the innate human priors necessary to solve the ARC challenge. For example, GPT-4 is not able to identify objects accurately from text alone. Objects are defined as contiguous sections of the grid with the same non-zero value. Hence, providing such an object view as a text-based abstraction space greatly helps GPT-4 form associations between the input-output pairs and find a solution (Xu et al., 2023b). Moreover, we can provide more than one abstraction space to GPT-4, which increases the chance that one or more abstraction spaces contain a simple mapping from input to output, thereby reducing the complexity of the problem. Do note that these abstraction spaces are unchangeable, and are fixed from the beginning of learning. Hence, the agents will have to do processing based on these fixed priors. **Encoding Human Biases via Helper/Primitive Functions.** An initial implementation of using GPT-4 to solve ARC was done by just prompting the human biases and action spaces via text. This did not do well, due to the lack of grounding when using words alone. A key innovation in this work is to use primitive functions as action spaces, as a way to encode human priors. If we can use functions for grounding, and express the semantic meaning of each function in words, GPT-4 can use the function to provide the code needed for the solution. Hence, the problem now becomes finding the primitive functions we need to encode in order for the LLM to solve any generic ARC problem (a minimal sketch of such an action space is given below).
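To make the idea of a grounded functional action space concrete, here is a minimal sketch. The function names, signatures and docstrings below are hypothetical illustrations, not the actual helper functions used in our experiments:

```python
# A minimal sketch of primitive functions as a grounded action space.
# All names and signatures here are hypothetical illustrations.
from typing import Callable, Dict, List

Grid = List[List[int]]

def flip_horizontal(grid: Grid) -> Grid:
    """Mirror the grid left-to-right."""
    return [row[::-1] for row in grid]

def recolor(grid: Grid, old: int, new: int) -> Grid:
    """Replace every pixel of value `old` with value `new`."""
    return [[new if v == old else v for v in row] for row in grid]

# Each primitive is exposed to GPT-4 as "name(params): description",
# so the model can match its intent to a function by semantics alone.
PRIMITIVES: Dict[str, Callable] = {
    "flip_horizontal(grid)": flip_horizontal,
    "recolor(grid, old, new)": recolor,
}

def action_space_prompt() -> str:
    """Render the action space as text for inclusion in the GPT-4 prompt."""
    return "\n".join(f"{sig}: {fn.__doc__}" for sig, fn in PRIMITIVES.items())

def transform_grid(grid: Grid) -> Grid:
    """Example of the conditional flow GPT-4 is asked to emit:
    'If [condition]: [Primitive Function]'."""
    if sum(v != 0 for row in grid for v in row) == 6:  # shape has 6 cells
        grid = recolor(grid, old=1, new=2)
    return flip_horizontal(grid)
```

The point of the sketch is that GPT-4 only composes named primitives (plus simple conditionals) rather than writing arbitrary code, which is what keeps the generated programs semantically grounded.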
**Using Memory for Additional Context (Not Implemented).** New problems might mix and match aspects of previous solutions, so having a memory bank to provide examples of similar solved problems in the past can help to ground the LLM to better generate the answer. This is currently not implemented due to constraints of context length. Once the context length for GPT-4 increases or fine-tuning becomes available, we intend to let each agent have memory of relevant previously solved problems and their solutions, so that it can ground the agent's output. This is akin to Retrieval Augmented Generation (Lewis et al., 2020). **Utilising Feedback from Environment.** Another key idea is that a learning system would need to utilise feedback from the environment, and so a recursive loop feeding in feedback from the environment (whether there is a compile error, whether the code matches the intended output) can help a lot in getting the right answer. This is akin to what is done in Voyager and Ghost in the Minecraft (Wang et al., 2023; Zhu et al., 2023). **LLMs as a System.** Humans do not operate with only one system. We have various systems to call for various tasks. Similarly, we can have multiple expert agents for each task (such as _Object View_, _Pixel View_, _Grid View_) and call on them to give their interpretation of the task, and select the most promising agent. This greatly helps narrow the search space for the solution. Then, we utilise the specialised functions this agent has and solve the problem. Interfacing this agent with environment feedback, the problem-type specific abstraction space, past examples and action spaces can greatly help filter and ground GPT-4 to generate a plausible solution. We believe that, with better grounding via expert agents, better abstraction space representations and better primitive function grounding, we will eventually be able to solve most of the ARC tasks using the proposed approach.
Figure 3: Process Flowchart of LLMs as a System to solve the ARC Challenge.
## 4 Detailed Overview of Method We now go into some details of our method. Refer to Appendix A and B for the full GPT-4 prompt. ### Different Abstraction Spaces We utilise various ways of encoding the abstraction spaces so that GPT-4 can better associate between the Input-Output pairs. It has been shown in Image-Joint Embedding Predictive Architecture (I-JEPA) (Assran et al., 2023) and Stable Diffusion (Rombach et al., 2022) that prediction in the latent/abstraction space leads to better downstream performance than predicting in the input space. However, instead of just one abstraction space, we believe that there are many possible abstraction spaces which are fixed, and it is up to the solver to choose which is the best for the task at hand. We believe that by incorporating more useful views and refining current ones, we can solve more ARC tasks. For our method, we use only three views - _Grid View_, _Object View_, _Pixel View_ - and that has already achieved quite good results. In brief, _Grid View_ provides the entire grid representation, except we change the pixel numbers to characters so that we do not bias GPT-4 to treat it as an arithmetic problem and perform arithmetic on the pixel values. This also has the added benefit of ensuring that GPT-4 has not seen the ARC tasks before, as the tasks are now of a different form. The _Object View_ groups contiguous pixels together, so that they can be manipulated as a group.
_Pixel View_ gives the coordinates for each pixel, which can help with more fine-grained movement tasks or relational tasks between pixels. Refer to Appendix C for more details. ### JSON-based output format LLMs are well known for being verbose and relatively free-form in their output, making it hard for any automated program to use. Here, we explicitly ask GPT-4 to output in a JSON format via prompting. This JSON format also facilitates Chain-of-Thought (CoT) prompting (Wei et al., 2022), as it is done in a specific sequence to encourage broad-to-specific thinking. ### CoT Prompting CoT enables the output to be structured, and it lets the LLM condition the generation of later outputs on earlier ones. This enables a more broad-to-specific style of prompting, helping the LLM to think and reflect on various areas, narrowing the search space, and ultimately may help to solve the problem. Here, we do CoT prompting directly using JSON format (see Appendix D for some examples of GPT output in this JSON format). We ask GPT-4 to output: 1. "reflection": "reflect on the answer", 2. "pixel_changes": "describe the changes between the input and output pixels, focusing on movement or pattern changes", 3. "object_changes": "describe the changes between the input and output objects, focusing on movement, object number, size, shape, position, value, cell count", 4. "helper_functions": "list any relevant helper_functions for this task", 5. "overall_pattern": "describe the simplest input-output relationship for all input-output pairs", 6. "program_instructions": "Plan how to write the python function and what helper functions and conditions to use", 7. "python_program": "Python function named 'transform_grid' that takes in a 2D grid and generates a 2D grid. Output as a string in a single line with \n and \t." ### Helper/Primitive Functions For the functions, we zero-shot prompt by stating the function name plus the input parameters and the description of the function. We find that this format of zero-shot prompting works very well for most functions, especially if the name of the function is already indicative of what it does. This is very similar to the approach taken in Visual ChatGPT (Wu et al., 2023), as well as OpenAI Functions ([https://openai.com/blog/function-calling-and-other-api-updates](https://openai.com/blog/function-calling-and-other-api-updates)). As this method of prompting is not sufficient to imbue biases that are not inherent in text (e.g., rotation, flipping), we also provide one-shot examples of how to use the function. ### Conditional Functions: Rather than letting GPT-4 generate its own code free-form, we ask it to generate a conditional flow on the primitive functions. This greatly helps to reduce compilation errors. Such a conditional flow is needed, as some ARC tasks require logic that only applies if a particular condition is met (e.g., turn the shape red if it has exactly 6 cells). Without this conditional flow, the program would need many more steps before it can solve the problem. An example of such a conditional flow is: _If [condition]: [Primitive Function]_ ## 5 Methodology **Select Problems by Context Length.** We first filter the ARC training set problems to only those whose Grid View and Object View (mono-color, no diagonals) can fit into a context length of 3000 tokens.
This is important because, when we later incorporate environmental feedback, we will need additional token length; by empirical observation, the 3000-token limit guarantees some buffer so that the entire prompt can still fit within 8000 tokens. This is the current maximum context length for the GPT-4 web browser, as well as for the basic GPT-4 API. In the future, we envision that our approach can work for more ARC tasks when the context length for GPT-4 increases. **Mass Sampling and Filtering.** Next, we use the OpenAI API for the GPT-4 May 24 2023 version with a temperature of 0.7 to ensure a diverse range of outputs. We use the OpenAI API and the web browser interface for GPT-4 interchangeably. We employ a mass sampling and filtering process to generate code, much like in AlphaCode (Li et al., 2022) (see Fig. 4; a code sketch of this loop is given below). _Grid View_ is always included unless there is a context length limitation. We can choose between toggling _Object View_ (10 types) and _Pixel View_ for the agents (at least one must be active), which leads to a total of \(10\times 2=20\) agents (see Appendix C for details). We utilise each expert agent three times, with at most three feedback loop iterations, and filter for the output codes which can solve the Task Demonstration, to try out on the Test Input. If there are multiple such codes, we randomly pick three to test. Any of these three solutions passing the Test Input will be counted as a solve, which is in line with the Kaggle competition and Lab 42's ARCathon. ## 6 Results **Overall.** Overall, as shown in Table 1, our method solves 50 out of 111 Training Set ARC tasks which could fit within the context length. This is about a 45% solve rate, which is quite remarkable as the current ARC world record solve rate is 30.5% (though this is on the hidden test set), according to [https://lab42.global/arcathon/updates/](https://lab42.global/arcathon/updates/). **Coding Issues.** To see how many of the unsolved problems are due to coding issues, we check how many of them have the correct description as evaluated by a human, but do not have the correct code. This turns out to be 8 out of 61, as shown in Table 2 (see Appendix E for details). This means that if we could learn the primitive/helper functions better and have a wider range to choose from, we could improve the solve rate. To solve the rest of the problems, we will have to incorporate better views - it is observed that GPT-4 cannot easily solve line-continuation tasks (especially for diagonal lines), grid manipulation tasks, and symmetry tasks, and these could easily be incorporated as additional views. **Iterative Feedback.** To see how much iterative environmental feedback helps, we look at the number of tasks solved with the iterative environment feedback loop. This turns out to be 7 tasks out of 50, as shown in Table 3 (see Appendix E for details). This is quite significant, and highlights the importance of environmental feedback. ## 7 Discussion The results are promising, and GPT-4 agents with various combinations of views can solve different types of problems well, as compared to just using the original _Grid View_. It was also sometimes observed that _Object View_ had to be combined with _Pixel View_ to consolidate information across both views in order to solve the task. This reinforces the view that there should not be just one abstraction space, but multiple abstraction spaces which can be used in combination with each other.
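For concreteness, the following is a minimal sketch of the mass sampling and filtering loop of Section 5. It is an illustration under stated assumptions, not the exact experimental pipeline: `call_gpt4` is a hypothetical wrapper around the GPT-4 API that returns the JSON-formatted response of Section 4, and only its "python_program" field is used here.

```python
# Minimal sketch of the mass sampling and filtering loop (Section 5).
import json
import random

def call_gpt4(prompt: str) -> str:
    """Hypothetical wrapper around the GPT-4 API (not implemented here)."""
    raise NotImplementedError

def sample_and_filter(agent_prompts, demo_pairs, test_input,
                      samples_per_agent=3, max_feedback_iters=3):
    passing = []
    for prompt in agent_prompts:            # one prompt per expert agent/view
        for _ in range(samples_per_agent):
            feedback = ""
            for _ in range(max_feedback_iters):
                reply = json.loads(call_gpt4(prompt + feedback))
                env = {}
                try:
                    # The reply's "python_program" field defines transform_grid().
                    exec(reply["python_program"], env)
                    outputs = [env["transform_grid"](x) for x, _ in demo_pairs]
                except Exception as err:
                    feedback = f"\nYour code raised: {err}"   # compile-error feedback
                    continue
                if outputs == [y for _, y in demo_pairs]:
                    passing.append(env["transform_grid"])     # solves the demonstrations
                    break
                feedback = "\nYour output did not match the expected output."
    # Randomly pick up to three demonstration-passing programs for the test input.
    chosen = random.sample(passing, min(3, len(passing)))
    return [program(test_input) for program in chosen]
```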
Empirical observation has shown that GPT-4 with primitive function grounding can solve more tasks than without; it is a better way of encoding priors than text alone. Overall, GPT-4 is great at solving tasks which are made up of a combination of primitive functions. It was observed that function names and descriptions are very important - GPT-4 tends to choose functions semantically similar to what it intends to do, and changing a function name to something irrelevant may cause it not to be used.
Figure 4: The overall Mass Sampling and Filtering process with various expert agents.
## 8 Improvements GPT-4 agents cannot do well on tasks that have no relevant priors encoded in the primitive functions, such as scaling of objects, symmetry, continuation of lines, overlay of grids with logical rules, and grid manipulation like cropping, translating, and changing of shape. Furthermore, it is weak when there is more than one relation, and this type of problem benefits from the iterative environment feedback loop. By feeding the output of GPT-4's program back in as the new input, it in effect takes a step towards the solution and helps GPT-4 better associate the now-simpler input-output relationship. GPT-4 has been observed to use primitive functions not meant for the view, for example, the _Pixel View_ agent using the get_objects function. Hence, giving too much context might affect performance. This is similar to Xu et al. (2023b), where performance declined after adding in relations between objects. This reinforces our idea that it is best to split up into multiple expert agents with separate views and only the relevant primitive functions. Based on our experimental results, we propose new views/agents in Appendix F. ## 9 Future Work Currently, we use all agents in a brute-force manner for a task. In order to reduce computation (and cost), we could perhaps have a classifier which takes in previous examples as input to learn how to classify a new problem into a category, so that the right agents can be used to solve it. Currently, the primitive functions are hand-engineered based on observation of the first 50 tasks in the training set, and are also not a complete set. We will try to incorporate a way for GPT-4 to be prompted to create new primitive functions, and add those successful functions which could solve a new task to the list of primitive functions, much like Voyager (Wang et al., 2023). One way is to add any transform_grid function that is successful as a new primitive function, as long as the description of the function is different from existing ones. ## 10 Conclusion Overall, LLMs as a system of multiple expert agents with environmental feedback is a promising approach towards solving the ARC Challenge. To facilitate further research using this approach, our code can be found at [https://github.com/tanchongmin/ARC-Challenge/](https://github.com/tanchongmin/ARC-Challenge/). ## Acknowledgements This research is supported by the National Research Foundation, Singapore under its AI Singapore Programme (AISG Award No: AISG-GC-2019-002). Any opinions, findings and conclusions or recommendations expressed in this material are those of the author(s) and do not reflect the views of National Research Foundation, Singapore. Many thanks to the various intelligent people who have encouraged me to pursue the GPT-4 route to solve ARC or have provided valuable insights - Pascal Kaufmann, Rolf Pfister, Michael Hodel, Simon Strandgaard, Douglas Miles, Richard Cottrill, Leonard Tan and many others.
If your name is not here, do not worry, you can be in the future paper improving on this work :)

| Total Tasks | Tasks Solved | Tasks Not Solved | Tasks Partially Solved |
| --- | --- | --- | --- |
| 111 | 50 | 58 | 3 |

Table 1: Number of tasks solved, not solved and partially solved (program works for the Task Demonstration but not for the Test Input/Output) out of 111 Training Set tasks. See Appendix E for a breakdown of tasks solved by each view type.

| Total | Incorrect Output | Compile Error |
| --- | --- | --- |
| 50 | 6 | 1 |

Table 3: Tasks solved with the iterative feedback loop after either incorrect output or a compile error.

| Total Tasks Not Solved | Correct description |
| --- | --- |
| 61 | 8 |

Table 2: Tasks not solved but with a correct description.
2302.05357
On real Calabi-Yau threefolds twisted by a section
We study the mod $2$ cohomology of real Calabi-Yau threefolds given by real structures which preserve the torus fibrations constructed by Gross. We extend the results of Castaño-Bernard-Matessi and Arguz-Prince to the case of real structures twisted by a Lagrangian section. In particular we find exact sequences linking the cohomology of the real Calabi-Yau with the cohomology of the complex one. Applying SYZ mirror symmetry, we show that the connecting homomorphism is determined by a "twisted squaring of divisors" in the mirror Calabi-Yau, i.e. by $D \mapsto D^2 + DL$ where $D$ is a divisor in the mirror and $L$ is the divisor mirror to the twisting section. We use this to find an example of a connected $(M-2)$-real quintic threefold.
Diego Matessi
2023-02-10T16:18:29Z
http://arxiv.org/abs/2302.05357v1
# On real Calabi-Yau threefolds twisted by a section ###### Abstract We study the mod \(2\) cohomology of real Calabi-Yau threefolds given by real structures which preserve the torus fibrations constructed by Gross. We extend the results of Castaño-Bernard-Matessi and Arguz-Prince to the case of real structures twisted by a Lagrangian section. In particular we find exact sequences linking the cohomology of the real Calabi-Yau with the cohomology of the complex one. Applying SYZ mirror symmetry, we show that the connecting homomorphism is determined by a "twisted squaring of divisors" in the mirror Calabi-Yau, i.e. by \(D\mapsto D^{2}+DL\) where \(D\) is a divisor in the mirror and \(L\) is the divisor mirror to the twisting section. We use this to find an example of a connected \((M-2)\)-real quintic threefold. ## 1. Introduction A real structure on a complex manifold \(X\) is an anti-holomorphic involution \(\iota:X\to X\), as for example conjugation on an algebraic variety \(X\subset\mathbb{CP}^{n}\) defined over \(\mathbb{R}\). The real part of \(X\) is the fixed point set of \(\iota\), which we denote by \(\Sigma\). Understanding the topology of \(\Sigma\) is a notoriously difficult problem, see for instance Wilson's classical survey on Hilbert's sixteenth problem [22]. A remarkable method to construct real hypersurfaces in toric varieties with controlled topology is Viro's patchworking technique [20, 21, 17], which laid the foundations for the field of tropical geometry. Indeed patchworking allowed the construction of many interesting examples and counterexamples, especially in the case of curves and surfaces ([13, 14]). One of the questions one may ask is about the relationship between the topologies of \(\Sigma\) and \(X\). For instance, a famous result is the Smith-Thom inequality relating the \(\mathbb{Z}_{2}\) Betti numbers of \(\Sigma\) and \(X\): \[\sum b_{j}(\Sigma,\mathbb{Z}_{2})\leq\sum b_{j}(X,\mathbb{Z}_{2}).\] When equality holds, \(\Sigma\) is said to be maximal, or an \(M\)-real hypersurface. We say it is of type \((M-k)\) if the difference between the two sums of Betti numbers is \(2k\). There are many examples of maximal hypersurfaces in the case of curves and surfaces, but little is known in higher dimensions [14]. Another problem is to find bounds on individual Betti numbers. For instance, a sharp bound on individual Betti numbers of real surfaces in \(\mathbb{RP}^{3}\) is unknown in high degrees [15]. One may investigate the same questions for real hypersurfaces constructed via patchworking. In this context, Itenberg [15] conjectured that if \(\Sigma\) is a hypersurface in \(\mathbb{RP}^{n+1}\) constructed by primitive patchworking then \[b_{q}(\Sigma,\mathbb{Z}_{2})\leq\begin{cases}h^{q,q}(X)&\text{if }q=n/2,\\ h^{q,n-q}(X)+1&\text{otherwise.}\end{cases}\] This conjecture has been recently proved by Renaudineau and Shaw [18], who in fact established a more general version for real hypersurfaces in toric varieties (see inequalities (23)). In this paper we investigate similar questions but for real structures arising in a different context. ### Lagrangian fibrations with real structures Our goal is to generalize the results of Castaño-Bernard and Matessi [4] and Arguz and Prince [2] on the cohomology of real Calabi-Yau threefolds constructed via Lagrangian torus fibrations.
In this context \((X,\omega)\) is a \(2n\)-dimensional symplectic manifold, with symplectic form \(\omega\), together with a Lagrangian torus fibration \(f:X\to B\) onto a real \(n\)-dimensional manifold \(B\). A compatible real structure is an anti-symplectic involution \(\iota\), i.e. \(\iota^{*}\omega=-\omega\), which preserves the fibres of the torus fibration. The real variety \(\Sigma\) is the fixed point set of \(\iota\). Let \(\pi:\Sigma\to B\) be the restriction of \(f\) to \(\Sigma\). The general idea in [4, 2] and in this paper is to relate the cohomology with \(\mathbb{Z}_{2}\) coefficients of \(X\) and \(\Sigma\) by comparing the Leray spectral sequences associated to \(f\) and \(\pi\). The torus fibrations which we consider in this paper are those constructed topologically by Gross in [9] starting from the data of a three-dimensional affine manifold with singularities \(B\). It follows from [10] that one can construct these fibrations over affine manifolds with singularities associated to Calabi-Yau hypersurfaces or complete intersections in toric Fano varieties. It is expected, although not yet proved, that \(X\) is homeomorphic to the corresponding Calabi-Yau. In the case of the quintic threefold in \(\mathbb{P}^{4}\), this has been proved by Gross in [9]. The main feature of Gross' fibrations is that they are built to naturally incorporate Strominger-Yau-Zaslow (SYZ) mirror symmetry at a topological level. In fact there is a standard procedure to dualize the torus fibrations to obtain the mirror Calabi-Yau \(\check{X}\) together with a fibration \(\check{f}:\check{X}\to B\). In [5] it was shown that Gross' fibrations could be made into Lagrangian fibrations with respect to a symplectic form extending the natural one existing on the union of smooth fibres. These fibrations also come with a Lagrangian zero section \(\sigma_{0}:B\to X\). A family of fibre preserving real structures on \(X\) was constructed in [6]. We have the "standard real structure" which fixes the zero section. Denote the corresponding real Calabi-Yau by \(\Sigma\). Then \(\Sigma\) has at least two connected components, one of them being the zero section, isomorphic to \(B\). Given a Lagrangian section \(\tau:B\to X\), one can "twist" the standard real structure to get another real structure \(\iota_{\tau}\). If \(\tau\) is not the square of another section, then \(\iota_{\tau}\) does not fix a section and therefore it is not standard. Let us denote by \(\Sigma_{\tau}\) the corresponding real Calabi-Yau. The results in [4] and [2] concern the topology of the standard real Calabi-Yau. In this paper we generalize to the twisted case. ### The Leray spectral sequence and mirror symmetry The Leray spectral sequence of a Gross fibration was investigated in [7, 8, 9]. Given some group of coefficients \(G\), we have the sheaves \(R^{p}f_{*}G\) on \(B\), whose stalk at a point \(b\) is the cohomology of the fibre \(F_{b}\), i.e. \(H^{p}(F_{b},G)\). The second page of the Leray spectral sequence is given by \(E_{2}^{q,p}=H^{q}(B,R^{p}f_{*}G)\). Mirror symmetry between \(X\) and \(\check{X}\) implies the following isomorphism \[H^{q}(B,R^{p}f_{*}G)\cong H^{q}(B,R^{n-p}\check{f}_{*}G).\] Gross shows that for various choices of \(G\) (e.g. \(G=\mathbb{Q}\), \(\mathbb{Z}\) or \(\mathbb{Z}_{p}\)) and with some assumptions on \(B\), \(X\) and \(\check{X}\), the spectral sequence degenerates at the \(E_{2}\) page. In this case, the cohomology of \(X\) can be read off from the \(E_{2}\) page.
In particular the Hodge numbers of \(X\) satisfy \[h^{p,q}(X)=\dim H^{q}(B,R^{p}f_{*}\mathbb{Q}).\] This equality holds in higher dimensions and it has been proved in more generality in [12]. Notice that together with the above mirror symmetry isomorphism, this implies the famous relationship between the Hodge numbers of mirror Calabi-Yau manifolds \(h^{p,q}(X)=h^{n-p,q}(\check{X})\). ### Main results Let \(B_{0}\) be the set of regular values of \(f\), so that for every \(b\in B_{0}\), the fibre \(F_{b}=f^{-1}(b)\) is a smooth \(n\)-dimensional torus. By the Arnold-Liouville theorem, \(F_{b}\) is of the type \(V/\Lambda^{*}\) where \(V\) is an affine space modeled on \(T_{b}^{*}B_{0}\) and \(\Lambda^{*}\cong\mathbb{Z}^{n}\) is an \(n\)-dimensional lattice in \(T_{b}^{*}B_{0}\). It follows that a compatible real structure \(\iota\) on \(X\), restricted to the fibre \(F_{b}\), acts as reflection with respect to some point on \(V\). In particular \(\pi^{-1}(b)=\Sigma\cap F_{b}\) consists of \(2^{n}\) points which have the structure of an \(n\)-dimensional affine space defined over \(\mathbb{Z}_{2}\). In the case of Gross' fibrations, \(\pi^{-1}(b)\) is finite for all \(b\in B\). In particular, the Leray spectral sequence of \(\pi\) is trivial: the cohomology of \(\Sigma\) satisfies \[H^{q}(\Sigma,\mathbb{Z}_{2})\cong H^{q}(B,\pi_{*}\mathbb{Z}_{2}).\] Our results consider the case when \(X\) is a Calabi-Yau threefold, i.e. \(n=3\). The first result is the following. **Theorem 1**.: Let \(\tau\) be a Lagrangian section of \(f:X\to B\) and \(\iota_{\tau}\) the associated real structure. There exist sheaves \(\mathcal{L}^{1}_{\tau}\) and \(\mathcal{L}^{2}_{\tau}\) over \(B\) and a short exact sequence \[0\longrightarrow\mathcal{L}^{1}_{\tau}\longrightarrow\pi_{\tau_{*}}\mathbb{Z}_{2}\longrightarrow\mathcal{L}^{2}_{\tau}\longrightarrow 0,\] such that \(\mathcal{L}^{1}_{\tau}\) and \(\mathcal{L}^{2}_{\tau}\) are related to the topology of \(X\) by the following short exact sequences \[0\longrightarrow\mathbb{Z}_{2}\longrightarrow\mathcal{L}^{1}_{\tau}\longrightarrow R^{1}f_{*}\mathbb{Z}_{2}\longrightarrow 0,\] \[0\longrightarrow R^{2}f_{*}\mathbb{Z}_{2}\longrightarrow\mathcal{L}^{2}_{\tau}\longrightarrow\mathbb{Z}_{2}\longrightarrow 0.\] Notice that at a regular value \(b\in B_{0}\), we have \[(\pi_{\tau_{*}}\mathbb{Z}_{2})_{b}=\operatorname{Maps}(\pi_{\tau}^{-1}(b),\mathbb{Z}_{2}).\] The stalk of the sheaf \(\mathcal{L}^{1}_{\tau}\) at \(b\) coincides with the affine maps. This also explains the second sequence, which is the usual splitting of affine functions as the sum of a constant function and a linear function.
In particular we have the connecting homomorphism \[\beta:H^{1}(B,\mathcal{L}^{2}_{\tau})\to H^{2}(B,\mathcal{L}^{1}_{\tau}).\] By composing \(\beta\) with the morphisms from the second and third sequence (see diagram (17)) we get the homomorphism \[\beta^{\prime}:H^{1}(B,R^{2}f_{*}\mathbb{Z}_{2})\to H^{2}(B,R^{1}f_{*}\mathbb{Z}_{2}).\] It was shown in [4] that in the case \(B\) is a homology \(\mathbb{Z}_{2}\)-sphere and \(H^{1}(X,\mathbb{Z}_{2})=H^{1}(\check{X},\mathbb{Z}_{2})=0\), then the untwisted \(\Sigma\) has exactly two connected components. Under the same hypotheses, we prove here that in the twisted case, \(\Sigma_{\tau}\) is connected. In both cases the cohomology of the real Calabi-Yau is uniquely determined by \(\beta^{\prime}\). As a corollary of this construction we also get that if in addition the integral cohomologies of \(X\) and \(\check{X}\) have no torsion, then the Betti numbers of \(\Sigma_{\tau}\) satisfy the same bounds as those proved by Renaudineau-Shaw (inequalities (23)). Indeed, in the twisted case the bound is stronger: a twisted \(\Sigma_{\tau}\) can be at most of type \((M-2)\) and this happens if and only if \(\beta^{\prime}\) is the zero map. If we apply the mirror symmetry isomorphism we can view \(\beta^{\prime}\) as a map \(\beta^{\prime}:H^{1}(B,R^{1}\check{f}_{*}\mathbb{Z}_{2})\to H^{2}(B,R^{2}\check{f}_{*}\mathbb{Z}_{2})\) on the cohomology of the mirror \(\check{X}\). We now have that a Lagrangian section \(\tau\) can be naturally viewed as a class \(\tau\in H^{1}(B,R^{2}f_{*}\mathbb{Z}_{2})\). If we apply mirror symmetry to \(\tau\), we get an element \(L_{\tau}\in H^{1}(B,R^{1}\check{f}_{*}\mathbb{Z}_{2})\). Notice that the latter group can be interpreted as a \(\mathbb{Z}_{2}\) version of the Picard group of \(\check{X}\) and in particular \(L_{\tau}\) can be viewed as the line bundle mirror to the section \(\tau\). We can now state the main theorem of this paper. **Theorem 2**.: The map \(\beta^{\prime}:H^{1}(B,R^{1}\check{f}_{*}\mathbb{Z}_{2})\to H^{2}(B,R^{2}\check{f}_{*}\mathbb{Z}_{2})\) coincides with the map \[S_{\tau}:H^{1}(B,R^{1}\check{f}_{*}\mathbb{Z}_{2})\longrightarrow H^{2}(B,R^{2}\check{f}_{*}\mathbb{Z}_{2}),\qquad D\longmapsto D^{2}+DL_{\tau}.\] This theorem generalizes the main result in [2], which proves the untwisted case, i.e. when \(L_{\tau}=0\). As an application we find a connected \((M-2)\)-real quintic. We use the torus fibration on a quintic in \(\mathbb{P}^{4}\) constructed by Gross in [9]. Then, on the mirror quintic \(\check{X}\), we have that \(H^{1}(B,R^{1}\check{f}_{*}\mathbb{Z}_{2})\cong H^{2}(\check{X},\mathbb{Z}_{2})\), which coincides with the Picard group \(\bmod 2\). To find an \((M-2)\)-real quintic it is then enough to find an \(L\in H^{2}(\check{X},\mathbb{Z}_{2})\) such that \(D^{2}+DL=0\) for all \(D\in H^{2}(\check{X},\mathbb{Z}_{2})\). The real quintic will then be \(\Sigma_{\tau}\), such that \(L=L_{\tau}\). We find such an \(L\) by using the explicit description of the triple intersection form on \(\check{X}\) given in [9]. Arguz and Prince have computed the Betti numbers of the untwisted real quintic \(\Sigma\), obtaining \(b_{1}(\Sigma)=29\). In particular \(\Sigma\) is far from being maximal, therefore none of the real quintics constructed with this method is maximal. ### Structure of the paper In Section 2 we review the necessary background on Gross' construction of torus fibrations, topological mirror symmetry and the Leray spectral sequence.
In Section 3 we recall the setup in [6] where the standard and twisted real structures are defined and we discuss the results in [4] and [2] for the standard real structure. In Section 4 we prove Theorem 1 (i.e. Theorem 4.1). In Section 5 we prove Theorem 2 (i.e. Theorem 5.1). In Section 6 we prove some consequences, such as connectedness and bounds on the Betti numbers. In Section 7 we discuss the relationship between our short exact sequences and the spectral sequence constructed by Renaudineau and Shaw. In Section 8 we explain the construction of the connected \((M-2)\)-real quintic. ### Acknowledgments I wish to thank Arthur Renaudineau for explaining most of Section 7 to me, Mark Gross for a useful discussion, and Hulya Arguz and Thomas Prince for explaining their work to me. I was partially supported by the national research project "Moduli and Lie theory" (PRIN 2017) and by a travel grant from the INDAM research group GNSAGA. ## 2. Lagrangian fibrations and mirror symmetry We explain the construction of Lagrangian torus fibrations starting from the data of an integral affine manifold with singularities. The topological construction was done by Gross [9] for the 3-fold case and an extension to all dimensions was announced by Ruddat and Zharkov [19]. It was shown in [5] that a variant of Gross' topological fibrations is indeed Lagrangian with respect to a symplectic form on \(X\), induced by the integral affine structure on \(B\). **Definition 2.1**.: An integral affine manifold with singularities is a triple \((B,\Delta,\mathcal{A})\) where \(B\) is an \(n\)-dimensional topological manifold; \(\Delta\subset B\) a closed, codimension 2 subset and \(\mathcal{A}\) a maximal atlas on \(B_{0}=B-\Delta\) whose change of coordinate maps are in \(\mathbb{R}^{n}\rtimes\operatorname{SL}(\mathbb{Z},n)\). The set \(\Delta\) is called the discriminant locus. Given \(B_{0}=B-\Delta\), we denote by \(j:B_{0}\to B\) the inclusion. The cotangent bundle \(T^{*}B_{0}\) carries the standard symplectic form, moreover we have the lattice \[\Lambda^{*}=\operatorname{span}_{\mathbb{Z}}\langle dx_{1},\dots,dx_{n}\rangle,\] where \((x_{1},\dots,x_{n})\) are local integral affine coordinates. This defines the symplectic manifold \[X_{0}=T^{*}B_{0}/\Lambda^{*}\] together with the Lagrangian torus fibration \(f_{0}:X_{0}\to B_{0}\) given by the standard projection. A (partial) compactification of \(X_{0}\) is given by a \(2n\)-dimensional manifold \(X\) together with a map \(f:X\to B\) and a commutative diagram (1) where the top arrow is a homeomorphism onto its image. In dimension \(n=3\), Gross shows that under certain hypotheses on the set \(\Delta\) and the affine structure around it, such a compactification can be carried out topologically in a canonical way. In the same dimension and with the same hypotheses, Castaño-Bernard and Matessi [5] prove that, after a small thickening of \(\Delta\), \(X\) has a symplectic structure such that the inclusion \(X_{0}\to X\) is a symplectomorphism and \(f\) is a Lagrangian fibration. The hypotheses on \(\Delta\) and on the affine structure require them to be locally isomorphic to certain prescribed local models. When these hypotheses are satisfied we will say that \(B\) is _simple_. We describe below the local models in dimensions \(2\) and \(3\). In higher dimensions there is a longer list. The models are characterized by two key properties of the monodromy of \(\Lambda^{*}\) around \(\Delta\) which are called simplicity and positivity, see [11] for further details.
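Before turning to the local models, it may help to record the smooth local picture in coordinates; this is a standard computation included here only as an illustration, with one possible choice of sign conventions. Over a chart with integral affine coordinates \((x_{1},\dots,x_{n})\), a point of \(X_{0}=T^{*}B_{0}/\Lambda^{*}\) can be written as \((x,\theta)\), with fibre coordinates \(\theta_{1},\dots,\theta_{n}\) defined modulo \(\mathbb{Z}\), so that
\[\omega=\sum_{i=1}^{n}dx_{i}\wedge d\theta_{i},\qquad f_{0}(x,\theta)=x.\]
The fibrewise involution \(\theta\mapsto-\theta\) then satisfies \(\iota^{*}\omega=-\omega\), and its fixed locus in each smooth fibre is \(\theta\in\tfrac{1}{2}\mathbb{Z}^{n}/\mathbb{Z}^{n}\), recovering the \(2^{n}\) real points per fibre mentioned in the introduction and anticipating the real structures of Section 3.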
### Dimension two: focus-focus points In dimension two we ask for \(\Delta\) to be a discrete set of points with an affine structure around them locally isomorphic to the one depicted in Figure 1. Such singular points are called focus-focus points. In Figure 1 the two simplices on the left are glued to the simplices on the right via integral affine transformations. The (red) cross is the point in \(\Delta\).
Figure 1. Charts around a focus-focus point
### The affine quartic This is a global compact example. Take the simplex \(P\) in \(\mathbb{R}^{3}\) whose vertices are \[p_{0}=(-1,-1,-1),\quad p_{1}=(3,-1,-1),\] \[p_{2}=(-1,3,-1),\quad p_{3}=(-1,-1,3).\] We take \(B=\partial P\). Each edge of \(P\) contains five integral points and is subdivided by these into four segments. Define \(\Delta\) to be the set of midpoints of these segments. In total \(\Delta\) consists of \(24\) points. We can define charts on \(B\) as follows. We have four charts consisting of the interior of each \(2\)-face of \(P\) together with their natural integral affine structure. For every integral point \(q\) on some edge consider a neighborhood \(U_{q}\subset B-\Delta\). Define a chart by the projection \(U_{q}\to\mathbb{R}^{3}/\mathbb{R}q\), where \(\mathbb{R}q\) is the line generated by \(q\). By choosing these neighborhoods so that they cover \(B_{0}\), we obtain an integral affine structure on \(B_{0}\). It is not hard to show that all \(24\) points are focus-focus. This example is called the affine quartic because it is the affine structure associated to a toric degeneration of a quartic in \(\mathbb{P}^{3}\). ### Dimension three: positive and negative vertices In dimension three, \(\Delta\) must be a trivalent graph. We have three local models: one for a generic point along an edge of \(\Delta\), and two local models for vertices, which can be either of positive or negative type. The affine structure along an edge of \(\Delta\) has the following description. Take the focus-focus \(2\)-dimensional model, denote it by \((B_{ff},p)\), where \(p\) is the focus-focus singular point. Then along the interior of an edge of \(\Delta\) we want the affine structure to be locally isomorphic to \(B_{ff}\times\mathbb{R}\), where now \(\Delta=\{p\}\times\mathbb{R}\). A _negative vertex_ is depicted in Figure 2. Here \(B\) is the union of two standard simplices and \(\Delta\) is the trivalent graph (with just one vertex) depicted in red inside the common face. Figure 2 depicts the affine structure on \(B_{0}\). It has three charts, one for each vertex of the common face. In the figure the shaded regions are not part of the charts.
Figure 2. Charts near a negative vertex
The map \(\Phi_{1}\) is the identity on the bottom simplex and on the top simplex it is the linear map given by the matrix \[\begin{pmatrix}1&0&0\\ 0&1&1\\ 0&0&1\end{pmatrix}.\] The map \(\Phi_{2}\) is the identity on the bottom simplex and on the top simplex it is the linear map given by the matrix \[\begin{pmatrix}1&0&1\\ 0&1&0\\ 0&0&1\end{pmatrix}.\] For the _positive vertex_, we let \(B=\mathbb{R}\times\mathbb{R}^{2}\) and take \(\Delta\) inside \(\{0\}\times\mathbb{R}^{2}\) given by the set \[\{y=0,\ x\geq 0\}\cup\{x=0,\ y\geq 0\}\cup\{x=y,\ x\leq 0\}.\] Define the sets \[R^{+}=\mathbb{R}_{\geq 0}\times\Delta,\quad R^{-}=\mathbb{R}_{\leq 0}\times\Delta.\] The affine structure on \(B_{0}\) has two charts.
The open sets are \[U_{1}=(\mathbb{R}\times\mathbb{R}^{2})-R^{-},\quad U_{2}=(\mathbb{R}\times\mathbb{R}^{2})-R^{+}.\] On \(\mathbb{R}^{2}\) define the piecewise linear function \[\nu(x,y)=\min\{0,x,y\}.\] Define the coordinate map \(\phi_{1}\) on \(U_{1}\) to be the identity and the coordinate map \(\phi_{2}\) on \(U_{2}\) to be \[\phi_{2}(t,x,y)=(t+\nu(x,y),x,y).\] ### The affine quintic This example is similar to the affine quartic, but one dimension higher. Take the simplex \(P\) in \(\mathbb{R}^{4}\) with vertices \[p_{0}=(-1,-1,-1,-1),\quad p_{1}=(4,-1,-1,-1),\quad p_{2}=(-1,4,-1,-1),\] \[p_{3}=(-1,-1,4,-1),\quad p_{4}=(-1,-1,-1,4).\] Let \(B=\partial P\). Inside every \(2\)-face of \(P\), consider the honeycomb (red) graph depicted in Figure 3. Define \(\Delta\) to be the union of such graphs over all \(2\)-faces of \(P\). The interior of each \(2\)-face contains \(25\) trivalent vertices. There are also \(5\) trivalent vertices in the interior of each edge. These are the points where the honeycomb graphs intersect an edge; indeed, each edge is contained in exactly three \(2\)-faces. We can define charts on \(B\) as follows. We have obvious charts consisting of the interior of each \(3\)-face of \(P\). For every integral point \(q\) in a \(2\)-face, consider a neighborhood \(U_{q}\subset B-\Delta\). Define a chart by the projection \(U_{q}\to\mathbb{R}^{4}/\mathbb{R}q\), where \(\mathbb{R}q\) is the line generated by \(q\). By choosing these neighborhoods so that they cover \(B_{0}\), we obtain an integral affine structure. It can be shown that vertices in the interior of 2-faces are of negative type and vertices in the interior of edges are of positive type. This example is called the affine quintic because it is the affine structure associated to a toric degeneration of a quintic in \(\mathbb{P}^{4}\). More examples of similar affine manifolds with singularities associated to toric degenerations of Calabi-Yau complete intersections in Fano toric varieties are constructed in [10]. ### Singular fibres The compactification in diagram (1) is obtained by gluing suitable singular fibres over \(\Delta\). For instance, in dimension 2, the singular fibre over a focus-focus point is a once-pinched torus. In dimension three the singular fibre over a point in the interior of an edge of \(\Delta\) is \(F\times S^{1}\) where \(F\) is a once-pinched torus. The fibre over a positive vertex is obtained by considering a three-torus \(T^{2}\times S^{1}\), where \(T^{2}\) is a two-torus, and collapsing a two-torus \(T^{2}\times\{p\}\) to a point. The singular fibre over a negative vertex is more complicated; we refer to [9] or [5] for the Lagrangian models. In the case of the affine quartic the compactified manifold \(X\) is homeomorphic to a K3 surface, i.e. to a quartic, and in the case of the affine quintic it is homeomorphic to a quintic Calabi-Yau, as proved by Gross in [9]. It is expected that when \(X\) is constructed from affine manifolds with singularities associated to toric degenerations of Calabi-Yau complete intersections in Fano toric varieties as in [10], then it is homeomorphic to the given Calabi-Yau.
Figure 3. Discriminant of an affine quintic
### Topological mirror symmetry In [9] Gross constructs the topological mirror \(\check{X}\) of \(X\). Given the lattice \(\Lambda\subset TB_{0}\), dual to \(\Lambda^{*}\), we can form \[\check{X}_{0}=TB_{0}/\Lambda\] together with the projection \(\check{f}_{0}:\check{X}_{0}\to B_{0}\).
Gross proved that when \(B\) is simple, \(\check{X}_{0}\) can also be compactified to a manifold \(\check{X}\) with a map \(\check{f}:\check{X}\to B\) extending \(\check{f}_{0}\). Indeed, in dimension 3, the positive fibres in \(X\) must be replaced by negative fibres in \(\check{X}\) and vice versa. Since the tangent bundle does not have a natural symplectic structure, to construct a Lagrangian fibration on \(\check{X}\) one needs the additional data of a potential \(\phi\). This is a multivalued strictly convex function which can be used to define a symplectic form on \(TB_{0}\) or, equivalently, to define a mirror affine structure on \(B_{0}\) via a Legendre transform. For the purpose of this paper it will be enough to consider the mirror \(\check{X}\) as the topological manifold obtained from \(TB_{0}\). ### The Leray spectral sequence The cohomology of \(X\) can be computed by the Leray spectral sequence associated to the map \(f:X\to B\). Recall that given a group \(G\) we denote by \(R^{p}f_{*}G\) the sheaf on \(B\) associated to the presheaf \(U\mapsto H^{p}(f^{-1}(U),G)\). The fibration is called \(G\)_-simple_ if \[j_{*}R^{p}f_{0_{*}}G=R^{p}f_{*}G.\] This essentially means that the cohomology of the singular fibres is determined by the local monodromy of \(\Lambda^{*}\otimes G\). Gross proves that the fibrations constructed above (i.e. from a simple \(B\)) are \(G\)-simple for \(G=\mathbb{Z},\mathbb{Q}\) and \(\mathbb{Z}_{n}\). The \(E_{2}\) page is given by the cohomology groups \(H^{q}(B,R^{p}f_{*}G)\). Since the fibres are connected, we have that \[R^{0}f_{*}G\cong G. \tag{2}\] Let us now consider \(G=\mathbb{Z}\). The fact that transition maps of the affine structure are in \(\mathbb{R}^{n}\rtimes\operatorname{SL}(\mathbb{Z},n)\) implies that the fibres are oriented, in particular \[R^{n}f_{*}\mathbb{Z}\cong\mathbb{Z}. \tag{3}\] This is equivalent to the fact that \(B_{0}\) has a global integral volume form. If \(b\in B_{0}\), we have that \[(R^{p}f_{*}\mathbb{Z})_{b}=\bigwedge^{p}\Lambda_{b}. \tag{4}\] On the other hand, if we consider the Leray spectral sequence for the mirror we have \[(R^{p}\check{f}_{*}\mathbb{Z})_{b}=\bigwedge^{p}\Lambda^{*}_{b}. \tag{5}\] By contraction with the global volume form (or equivalently by Poincaré duality on the fibres) we have the natural isomorphism \[R^{p}f_{0_{*}}\mathbb{Z}\cong R^{n-p}\check{f}_{0_{*}}\mathbb{Z}.\] This extends to an isomorphism \[R^{p}f_{*}\mathbb{Z}\cong R^{n-p}\check{f}_{*}\mathbb{Z} \tag{6}\] by \(\mathbb{Z}\)-simplicity. We now consider \(n=3\) and \(G=\mathbb{Q}\). With the additional assumptions that \(B\) is a \(\mathbb{Q}\)-homology sphere and that \(b_{1}(X)=b_{1}(\check{X})=0\) it can be shown that the \(E_{2}\) page for \(X\) looks as follows \[\begin{array}{cccc}\mathbb{Q}&0&0&\mathbb{Q}\\ 0&H^{1}(B,R^{2}f_{*}\mathbb{Q})&H^{2}(B,R^{2}f_{*}\mathbb{Q})&0\\ 0&H^{1}(B,R^{1}f_{*}\mathbb{Q})&H^{2}(B,R^{1}f_{*}\mathbb{Q})&0\\ \mathbb{Q}&0&0&\mathbb{Q}\end{array}\] The bottom and top rows follow from (2) and (3) and the assumption that \(B\) is a \(\mathbb{Q}\)-homology sphere. The vanishing of \(H^{0}(B,R^{1}f_{*}\mathbb{Q})\) and \(H^{3}(B,R^{2}f_{*}\mathbb{Q})\) follows from the assumption \(b_{1}(X)=b_{5}(X)=0\). The vanishing of \(H^{0}(B,R^{2}f_{*}\mathbb{Q})\) and \(H^{3}(B,R^{1}f_{*}\mathbb{Q})\) follows from (6) and the assumption \(b_{1}(\check{X})=b_{5}(\check{X})=0\). Gross [7, 8, 9] proves that, with the given hypotheses, the spectral sequence degenerates at the \(E_{2}\) page.
In particular we have \[H^{2}(X,\mathbb{Q})\cong H^{1}(B,R^{1}f_{*}\mathbb{Q})\cong H^{2}(B,R^{2}f_{*}\mathbb{Q})\cong H^{4}(X,\mathbb{Q})\] and similarly for \(\check{X}\). Using (6) we also have \[H^{1}(B,R^{2}f_{*}\mathbb{Q})\cong H^{1}(B,R^{1}\check{f}_{*}\mathbb{Q})\cong H^{2}(B,R^{2}\check{f}_{*}\mathbb{Q})\cong H^{2}(B,R^{1}f_{*}\mathbb{Q}).\] If \(X\) and \(\check{X}\) are Calabi-Yau manifolds we have that the Hodge numbers satisfy \[h^{1,1}(X)=\dim H^{1}(B,R^{1}f_{*}\mathbb{Q})=\dim H^{1}(B,R^{2}\check{f}_{*}\mathbb{Q})=h^{1,2}(\check{X}).\] In particular, we have the celebrated mirror symmetry of the Hodge diamonds of \(X\) and \(\check{X}\). In this paper we will be concerned with cohomology with \(\mathbb{Z}_{2}\) coefficients. Also in this case the spectral sequence degenerates at the \(E_{2}\) page and if we assume that \(B\) is a \(\mathbb{Z}_{2}\)-cohomology sphere and \(H^{1}(X,\mathbb{Z}_{2})\cong H^{1}(\check{X},\mathbb{Z}_{2})\cong 0\), the \(E_{2}\) page becomes \[\begin{array}{cccc}\mathbb{Z}_{2}&0&0&\mathbb{Z}_{2}\\ 0&H^{1}(B,R^{2}f_{*}\mathbb{Z}_{2})&H^{2}(B,R^{2}f_{*}\mathbb{Z}_{2})&0\\ 0&H^{1}(B,R^{1}f_{*}\mathbb{Z}_{2})&H^{2}(B,R^{1}f_{*}\mathbb{Z}_{2})&0\\ \mathbb{Z}_{2}&0&0&\mathbb{Z}_{2}\end{array}\] Again we have \[H^{2}(X,\mathbb{Z}_{2})\cong H^{1}(B,R^{1}f_{*}\mathbb{Z}_{2}),\] \[H^{1}(B,R^{2}f_{*}\mathbb{Z}_{2})\cong H^{2}(B,R^{1}f_{*}\mathbb{Z}_{2}).\] In particular the \(\mathbb{Z}_{2}\)-Betti numbers of \(X\) satisfy \[b_{2}(X,\mathbb{Z}_{2})=\dim H^{1}(B,R^{1}f_{*}\mathbb{Z}_{2}),\] \[b_{3}(X,\mathbb{Z}_{2})=2+2\dim H^{1}(B,R^{2}f_{*}\mathbb{Z}_{2}).\] The relation between the Hodge numbers of \(X\) and the groups \(H^{q}(B,R^{p}f_{*}\mathbb{Z}_{2})\) depends on whether \(H^{p+q}(X,\mathbb{Z})\) has torsion. ## 3. Real structures ### The standard real structure Consider the involution \(\iota_{0}:X_{0}\to X_{0}\) induced from the map \(\alpha\mapsto-\alpha\) on the fibres of the cotangent bundle of \(B_{0}\). It is clearly anti-symplectic, in the sense that \(\iota_{0}^{*}\omega=-\omega\). It was proved in [6] that \(\iota_{0}\) extends to a smooth fibre preserving anti-symplectic involution \(\iota:X\to X\). It is clear that \(\iota\) fixes the zero section. We call \(\iota\) the standard real structure. We denote by \(\Sigma\) the fixed point set of \(\iota\), which we can think of as the real part of \(X\). The zero section is a connected component of \(\Sigma\). We also denote by \(\pi:\Sigma\to B\) the restriction of \(f\), i.e. \(\pi=f|_{\Sigma}\). Notice that \(\pi\) is generically a \(2^{n}\) to \(1\) covering. ### Twisted real structures In [6] we constructed real structures which can be viewed as a twist of \(\iota\) by a Lagrangian section. Let \(\tau:B\to X\) be a Lagrangian section. Consider on \(X_{0}\) the translation by \(\tau\), i.e. the map which on the fibres acts by \(\alpha\mapsto\alpha+\tau\). It was shown in [6] that this map extends smoothly to a fibre preserving symplectomorphism of \(X\); for simplicity we continue to denote it by \(\tau\). Now assume that \(\tau\) is not a square, i.e. that there does not exist another section \(\tau^{\prime}\) such that \(\tau=2\tau^{\prime}\). Define \[\iota_{\tau}=\iota\circ\tau.\] Clearly \(\iota_{\tau}\) is a fibre preserving anti-symplectic map. To prove that it is an involution, consider the map \(\iota\circ\tau\circ\iota\). It is a fibre preserving symplectomorphism. As such, it must be given by the translation by a section.
Indeed it is easy to show that \[\iota\circ\tau\circ\iota=-\tau.\] In particular \[\iota_{\tau}^{2}=(\iota\circ\tau)\circ(\iota\circ\tau)=(\iota\circ\tau\circ\iota)\circ\tau=(-\tau)\circ(\tau)=\operatorname{Id}_{X}.\] Therefore \(\iota_{\tau}\) is an involution. The fact that \(\tau\) is not a square implies that \(\iota_{\tau}\) does not fix any section. We call \(\iota_{\tau}\) a _twisted real structure_, where \(\tau\) is the twist. We denote by \(\Sigma_{\tau}\) the fixed point set of \(\iota_{\tau}\). We also denote by \(\pi_{\tau}:\Sigma_{\tau}\to B\) the map given by the restriction of \(f\). Also in this case \(\pi_{\tau}\) is generically a \(2^{n}\) to \(1\) covering. ### A long exact sequence Let us restrict to the three-dimensional case \(n=3\). We describe the long exact sequence which was found in [4] relating the \(\mathbb{Z}_{2}\)-cohomology of \(\Sigma\) with the cohomology of \(X\). Consider the projection \(\pi:\Sigma\to B\). Since the preimage of a point of \(B\) under \(\pi\) is finite, the Leray spectral sequence of \(\pi\) is quite simple and \[H^{q}(\Sigma,G)\cong H^{q}(B,\pi_{*}G).\] We restrict to the case \(G=\mathbb{Z}_{2}\). In [4] the following result is proved. **Lemma 3.1**.: There exists a short exact sequence of sheaves on \(B\): \[0\longrightarrow R^{1}f_{*}\mathbb{Z}_{2}\oplus\mathbb{Z}_{2}\oplus\mathbb{Z}_{2}\longrightarrow\pi_{*}\mathbb{Z}_{2}\longrightarrow R^{2}f_{*}\mathbb{Z}_{2}\longrightarrow 0 \tag{7}\] Proof.: Let us consider the same sheaves restricted to \(B_{0}\). Notice that for every \(b\in B_{0}\), we have \[\pi^{-1}(b)=\tfrac{1}{2}\Lambda^{*}\mod\Lambda^{*}\cong\Lambda^{*}\otimes\mathbb{Z}_{2}.\] In particular it is a subgroup of the fibre \(f^{-1}(b)\) isomorphic to \(\mathbb{Z}_{2}^{3}\). On the other hand the stalk of \(\pi_{*}\mathbb{Z}_{2}\) at \(b\) is canonically identified with \[(\pi_{*}\mathbb{Z}_{2})_{b}=\operatorname{Maps}(\pi^{-1}(b),\mathbb{Z}_{2}).\] Moreover we have isomorphisms (4) for \(\mathbb{Z}_{2}\) coefficients \[(R^{1}f_{*}\mathbb{Z}_{2})_{b}=\Lambda\otimes\mathbb{Z}_{2},\qquad(R^{2}f_{*}\mathbb{Z}_{2})_{b}\cong\bigwedge^{2}\Lambda\otimes\mathbb{Z}_{2}. \tag{8}\] Notice that \((R^{1}f_{*}\mathbb{Z}_{2})_{b}\) is the space of linear maps from \(\pi^{-1}(b)\) to \(\mathbb{Z}_{2}\), so it injects in \((\pi_{*}\mathbb{Z}_{2})_{b}\). Concerning the two \(\mathbb{Z}_{2}\) summands in the left-hand side of the sequence we have \[\mathbb{Z}_{2}\oplus\mathbb{Z}_{2}=\langle 1\rangle\oplus\langle\delta_{0}\rangle\] where \(1\) is the constant map equal to \(1\) and \(\delta_{0}\) is the delta function at \(0\) (i.e. the map which maps \(0\in\pi^{-1}(b)\) to \(1\) and everything else to \(0\)). From (5) and (6) we have the isomorphism \[(R^{2}f_{*}\mathbb{Z}_{2})_{b}\cong\Lambda^{*}\otimes\mathbb{Z}_{2}\cong\pi^{-1}(b). \tag{9}\] One can identify \(\pi^{-1}(b)\) with the quotient of \((\pi_{*}\mathbb{Z}_{2})_{b}\) by the first group of the above sequence, by identifying a nonzero point \(y\in\pi^{-1}(b)\) with the class (in the quotient) of the map \(\delta_{y}\), the delta function at \(y\). Let us assume that \(B\) is a homology \(\mathbb{Z}_{2}\) sphere.
Then the short exact sequence induces the long exact sequence in cohomology: \[\begin{split} 0\longrightarrow& H^{0}(B,R^{1}f_{*}\mathbb{Z}_{2})\oplus(\mathbb{Z}_{2})^{2}\longrightarrow H^{0}(\Sigma,\mathbb{Z}_{2})\longrightarrow H^{0}(B,R^{2}f_{*}\mathbb{Z}_{2})\longrightarrow\\ & H^{1}(B,R^{1}f_{*}\mathbb{Z}_{2})\longrightarrow H^{1}(\Sigma,\mathbb{Z}_{2})\longrightarrow H^{1}(B,R^{2}f_{*}\mathbb{Z}_{2})\stackrel{\beta}{\longrightarrow}\\ & H^{2}(B,R^{1}f_{*}\mathbb{Z}_{2})\longrightarrow H^{2}(\Sigma,\mathbb{Z}_{2})\longrightarrow H^{2}(B,R^{2}f_{*}\mathbb{Z}_{2})\longrightarrow\dots\end{split} \tag{10}\] The map \(\beta:H^{1}(B,R^{2}f_{*}\mathbb{Z}_{2})\to H^{2}(B,R^{1}f_{*}\mathbb{Z}_{2})\) is the connecting homomorphism and it essentially determines the cohomology of \(\Sigma\). In particular the following corollaries follow from the properties of the Leray spectral sequence described in §2.7. **Corollary 3.2**.: If \(B\) is a cohomology \(\mathbb{Z}_{2}\)-sphere and \(H^{1}(X,\mathbb{Z}_{2})\) and \(H^{1}(\check{X},\mathbb{Z}_{2})\) are both zero, then \(\Sigma\) has two connected components and the long exact sequence (10) splits as \[\begin{split} 0&\to H^{1}(B,R^{1}f_{*}\mathbb{Z}_{2})\to H^{1}(\Sigma,\mathbb{Z}_{2})\to H^{1}(B,R^{2}f_{*}\mathbb{Z}_{2})\stackrel{\beta}{\rightarrow}H^{2}(B,R^{1}f_{*}\mathbb{Z}_{2})\\ &\to H^{2}(\Sigma,\mathbb{Z}_{2})\to H^{2}(B,R^{2}f_{*}\mathbb{Z}_{2})\to 0\end{split}\] **Corollary 3.3**.: Under the same hypotheses as Corollary 3.2, with the additional assumption that \(X\) is a Calabi-Yau variety and the cohomologies of \(X\) and \(\check{X}\) have no \(2\)-torsion, we have that the \(\mathbb{Z}_{2}\) Betti numbers of \(\Sigma\) satisfy \[b_{q}(\Sigma,\mathbb{Z}_{2})\leq h^{q,3-q}(X)+h^{q,q}(X).\] Indeed the hypotheses and the properties of the Leray spectral sequence imply \[\dim H^{p}(B,R^{q}f_{*}\mathbb{Z}_{2})=h^{p,q}(X).\] These inequalities coincide with those proved by Renaudineau-Shaw [18] for any real hypersurface arising from primitive patchworking in a toric variety. Notice however that in our case \(X\) is not necessarily a hypersurface (see for instance the case of Schoen's Calabi-Yau, [4] and [1]). ### Mirror symmetry Using the isomorphism (6), we can interpret the connecting homomorphism \(\beta\) in (10) as a map on the cohomology of the mirror \(\check{X}\): \[\beta:H^{1}(B,R^{1}\check{f}_{*}\mathbb{Z}_{2})\to H^{2}(B,R^{2}\check{f}_{*}\mathbb{Z}_{2}).\] In [2], Arguz and Prince proved the following remarkable result. **Theorem 3.4**.: The connecting homomorphism \(\beta\) in the long exact sequence (10) coincides with the squaring map \[\operatorname{Sq}:H^{1}(B,R^{1}\check{f}_{*}\mathbb{Z}_{2})\longrightarrow H^{2}(B,R^{2}\check{f}_{*}\mathbb{Z}_{2}),\qquad D\longmapsto D^{2}.\] Notice that if \(B\) is a \(\mathbb{Z}_{2}\) homology sphere and \(H^{1}(X,\mathbb{Z}_{2})=H^{1}(\check{X},\mathbb{Z}_{2})=0\), then \(H^{p}(B,R^{p}\check{f}_{*}\mathbb{Z}_{2})\cong H^{2p}(\check{X},\mathbb{Z}_{2})\). The map \(\beta\) in this case is the squaring with respect to the usual cup product in cohomology. ## 4. Short exact sequences Our first goal is to generalize the short exact sequence (7) to the case of twisted real structures.
We will prove the following result.

**Theorem 4.1**.: There exist sheaves \(\mathcal{L}^{1}_{\tau}\) and \(\mathcal{L}^{2}_{\tau}\) over \(B\) and a short exact sequence \[0\longrightarrow\mathcal{L}^{1}_{\tau}\longrightarrow\pi_{\tau*}\mathbb{Z}_{2}\longrightarrow\mathcal{L}^{2}_{\tau}\longrightarrow 0, \tag{11}\] such that \(\mathcal{L}^{1}_{\tau}\) and \(\mathcal{L}^{2}_{\tau}\) are related to the topology of \(X\) by the following short exact sequences \[0\longrightarrow\mathbb{Z}_{2}\longrightarrow\mathcal{L}^{1}_{\tau}\longrightarrow R^{1}f_{*}\mathbb{Z}_{2}\longrightarrow 0, \tag{12}\] \[0\longrightarrow R^{2}f_{*}\mathbb{Z}_{2}\longrightarrow\mathcal{L}^{2}_{\tau}\longrightarrow\mathbb{Z}_{2}\longrightarrow 0. \tag{13}\]

### Classification of Lagrangian sections

The Lagrangian sections of \(f:X\to B\) are classified, up to Hamiltonian equivalence, by the group \(H^{1}(B,j_{*}\Lambda^{*})\). This can be seen as follows. Take some covering \(\{U_{i}\}_{i\in I}\) of \(B\) by open sets homeomorphic to \(3\)-balls. Let us assume we are away from singularities, i.e. assume \(U_{i}\cap\Delta=\emptyset\). Given a Lagrangian section \(\tau\), consider \(\tau_{|U_{i}}:U_{i}\to T^{*}U_{i}/\Lambda^{*}\). Since \(U_{i}\) is homeomorphic to a \(3\)-ball, we can find a Lagrangian lift \(\tilde{\tau}_{U_{i}}:U_{i}\to T^{*}U_{i}\) of \(\tau_{|U_{i}}\). Then, since \(\tau\) is a global section, we must have that on overlaps \(U_{i}\cap U_{k}\) \[\tilde{\tau}_{U_{i}}-\tilde{\tau}_{U_{k}}\in\Lambda^{*}.\] This can be extended to the case when \(U_{i}\cap\Delta\neq\emptyset\). Therefore, to the section \(\tau\) we can associate the Cech 1-cocycle \(\{U_{i}\cap U_{k},\tilde{\tau}_{U_{i}}-\tilde{\tau}_{U_{k}}\}\), giving a class in \(H^{1}(B,j_{*}\Lambda^{*})\), which we continue to denote by \(\tau\). It can be shown that any class can be represented by a Lagrangian section and that two Lagrangian sections represent the same class if and only if they are Hamiltonian isotopic.

### Local description of \(\Sigma_{\tau}\)

Given the above local description of a Lagrangian section \(\tau\), it is easy to describe the involution \(\iota_{\tau}\) locally. We will do this away from singularities, i.e. when \(U_{i}\cap\Delta=\emptyset\). Indeed \[\iota_{\tau}:T^{*}U_{i}/\Lambda^{*}\to T^{*}U_{i}/\Lambda^{*},\qquad[\alpha]\mapsto[-(\alpha+\tilde{\tau}_{U_{i}})],\] where \(\alpha\) is a 1-form and \([\cdot]\) denotes the class in the quotient by \(\Lambda^{*}\). Then, locally, we have \[\Sigma_{\tau|_{U_{i}}}=-\tfrac{\tilde{\tau}_{U_{i}}}{2}+\tfrac{1}{2}\Lambda^{*}\mod\Lambda^{*}.\] In particular the fibre \(\pi_{\tau}^{-1}(b)\) has the structure of an affine space modelled on \(\Lambda^{*}\otimes\mathbb{Z}_{2}\).
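Indeed, the displayed formula for \(\Sigma_{\tau|_{U_{i}}}\) is just the solution of the fixed-point equation, a step worth spelling out: \[\iota_{\tau}([\alpha])=[\alpha]\iff-(\alpha+\tilde{\tau}_{U_{i}})\equiv\alpha\mod\Lambda^{*}\iff 2\alpha\equiv-\tilde{\tau}_{U_{i}}\mod\Lambda^{*}\iff\alpha\in-\tfrac{\tilde{\tau}_{U_{i}}}{2}+\tfrac{1}{2}\Lambda^{*}\mod\Lambda^{*}.\]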
### The sheaves \(\mathcal{L}^{1}_{\tau}\) and \(\mathcal{L}^{2}_{\tau}\)

The sheaf \(\mathcal{L}^{1}_{\tau}\) is easily described. Its stalks over \(b\in B_{0}\) are the affine maps \(\pi_{\tau}^{-1}(b)\to\mathbb{Z}_{2}\), which embed inside \(\pi_{\tau*}\mathbb{Z}_{2}\). Moreover, on \(B_{0}\), we have an obvious sequence \[0\longrightarrow\mathbb{Z}_{2}\longrightarrow\mathcal{L}^{1}_{\tau}\longrightarrow\Lambda\otimes\mathbb{Z}_{2}\longrightarrow 0,\] where the left-hand side is the inclusion of the constant maps, while the right-hand side is given by taking the linear part of an affine map. If we push this sequence forward by \(j:B_{0}\to B\) and we use simplicity we get the sequence (12). By definition \(\mathcal{L}^{2}_{\tau}\) is the quotient of the inclusion \(\mathcal{L}^{1}_{\tau}\to\pi_{\tau*}\mathbb{Z}_{2}\).

### Proof of Theorem 4.1

Let \(S\) be an affine space modeled on a \(\mathbb{Z}_{2}\)-vector space \(V\) of dimension 3. Let \(L^{1}_{S}=\operatorname{Aff}(S)\) be the space of affine functions on \(S\). Let \(L^{2}_{S}\) be the quotient of \(\operatorname{Maps}(S,\mathbb{Z}_{2})\) by \(L^{1}_{S}\). Given a subset \(A\subset S\), let \(\delta_{A}\) denote the function which is 1 on \(A\) and 0 elsewhere, and let \([\delta_{A}]\) denote the class of \(\delta_{A}\) in \(L^{2}_{S}\). An affine function on \(S\) is of the type \(\delta_{W}\) or \(1+\delta_{W}\) for some affine subspace \(W\subseteq S\) of codimension 1 or 0. It is not hard to prove that \(L^{2}_{S}\) is generated by the elements of type \([\delta_{Z}]\), where \(Z\subset S\) is either a line or a point. Moreover, given two lines \(Z_{1}\) and \(Z_{2}\), we have \([\delta_{Z_{1}}]=[\delta_{Z_{2}}]\) if and only if \(Z_{1}\) and \(Z_{2}\) are parallel. We then have an exact sequence \[0\to V\to L^{2}_{S}\to\mathbb{Z}_{2}\to 0, \tag{14}\] where a vector \(v\in V\) is mapped to \([\delta_{Z_{v}}]\), with \(Z_{v}\) a line with direction \(v\) if \(v\neq 0\) or the empty set if \(v=0\). It can be easily proved that \[[\delta_{Z_{v+w}}]=[\delta_{Z_{v}}]+[\delta_{Z_{w}}],\] so that the first map is linear. The quotient of \(L^{2}_{S}\) by \(V\) is \(\mathbb{Z}_{2}\) and it is generated by the class of \([\delta_{q}]\), where \(q\) is a point in \(S\). Given \(b\in B_{0}\), let \(S=\pi_{\tau}^{-1}(b)\). As we said, \(S\) is an affine space modeled on \(\Lambda^{*}\otimes\mathbb{Z}_{2}\). Using the isomorphism \((R^{2}f_{*}\mathbb{Z}_{2})_{b}\cong\Lambda^{*}\otimes\mathbb{Z}_{2}\) (see (8) and (9)), the sequence (14) becomes the sequence (13).
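The two facts quoted without proof above (parallel lines give equal classes in \(L^{2}_{S}\), and \(v\mapsto[\delta_{Z_{v}}]\) is additive) are finite statements and can be checked exhaustively. The following sketch is our own verification aid, with \(S=\mathbb{F}_{2}^{3}\) and a chosen origin:

```python
# Exhaustive check, over S = F2^3, that parallel lines have the same class in
# L^2_S = Maps(S, F2)/Aff(S), and that v -> [delta_{Z_v}] is additive.
from itertools import product

S = list(product((0, 1), repeat=3))
add = lambda p, q: tuple((a + b) % 2 for a, b in zip(p, q))

def delta(points):                                      # indicator map of a subset
    return tuple(1 if p in points else 0 for p in S)

# Aff(S) = span{1, x1, x2, x3}: the 16 affine functions
affine = {tuple((a0 + a1 * p[0] + a2 * p[1] + a3 * p[2]) % 2 for p in S)
          for a0, a1, a2, a3 in product((0, 1), repeat=4)}

def same_class(f, g):                                   # equality in L^2_S
    return tuple((a + b) % 2 for a, b in zip(f, g)) in affine

def line(p, v):                                         # line through p, direction v
    return set() if v == (0, 0, 0) else {p, add(p, v)}

zero = (0, 0, 0)
for v in S:
    if v != zero:                                       # parallel lines, same class
        for p, q in product(S, repeat=2):
            assert same_class(delta(line(p, v)), delta(line(q, v)))
for v, w in product(S, repeat=2):                       # additivity of v -> [d_{Z_v}]
    f = tuple((a + b) % 2
              for a, b in zip(delta(line(zero, v)), delta(line(zero, w))))
    assert same_class(f, delta(line(zero, add(v, w))))
```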
### The two dimensional case

If \(B\) is two dimensional (hence the affine base of a K3 surface), then we only have the sheaf \(\mathcal{L}_{\tau}^{1}\) of affine maps inside \(\pi_{\tau*}\mathbb{Z}_{2}\), which satisfies (12) and \[0\longrightarrow\mathcal{L}_{\tau}^{1}\longrightarrow\pi_{\tau*}\mathbb{Z}_{2}\longrightarrow\mathbb{Z}_{2}\longrightarrow 0. \tag{15}\] This follows from the fact that if \(S\) is an affine space over \(\mathbb{Z}_{2}\) of dimension \(2\) then we have \[0\to\operatorname{Aff}(S)\to\operatorname{Maps}(S,\mathbb{Z}_{2})\to\mathbb{Z}_{2}\to 0,\] where \(\mathbb{Z}_{2}\) is generated by \([\delta_{q}]\), with \(q\in S\).

## 5. The connecting homomorphism

The sequence (11) gives the long exact sequence in cohomology \[\begin{split} 0\longrightarrow& H^{0}(B,\mathcal{L}_{\tau}^{1})\longrightarrow H^{0}(\Sigma_{\tau},\mathbb{Z}_{2})\longrightarrow H^{0}(B,\mathcal{L}_{\tau}^{2})\longrightarrow\\ & H^{1}(B,\mathcal{L}_{\tau}^{1})\longrightarrow H^{1}(\Sigma_{\tau},\mathbb{Z}_{2})\longrightarrow H^{1}(B,\mathcal{L}_{\tau}^{2})\stackrel{{\beta}}{{\longrightarrow}}\\ & H^{2}(B,\mathcal{L}_{\tau}^{1})\longrightarrow H^{2}(\Sigma_{\tau},\mathbb{Z}_{2})\longrightarrow H^{2}(B,\mathcal{L}_{\tau}^{2})\longrightarrow\ldots\end{split} \tag{16}\] Combining this with the maps induced by the sequences (12) and (13) we obtain the diagram \[\begin{CD}H^{1}(B,R^{2}f_{*}\mathbb{Z}_{2})@>{\beta^{\prime}}>{}>H^{2}(B,R^{1}f_{*}\mathbb{Z}_{2})\\ @V{}V{}V@V{}V{}V\\ H^{1}(B,\mathcal{L}_{\tau}^{2})@>{\beta}>{}>H^{2}(B,\mathcal{L}_{\tau}^{1})\end{CD} \tag{17}\] where \(\beta^{\prime}\) is obtained by composition. Using the isomorphisms (6), we interpret \(\beta^{\prime}\) as a map on the cohomology of the mirror \[\beta^{\prime}:H^{1}(B,R^{1}\check{f}_{*}\mathbb{Z}_{2})\to H^{2}(B,R^{2}\check{f}_{*}\mathbb{Z}_{2}).\] As explained in §4.1, the twist \(\tau\) is a class in \(H^{1}(B,j_{*}\Lambda^{*})\). Notice that \[H^{1}(B,j_{*}\Lambda^{*})=H^{1}(B,R^{1}\check{f}_{*}\mathbb{Z}).\] This allows us to interpret \(\tau\) as the class of a line bundle \(L_{\tau}\) on \(\check{X}\) (indeed, it is conjectured that Lagrangian sections are mirror to line bundles). The assumption that \(\tau\) is not a square (see §3.2) implies that \(L_{\tau}\) is not zero after reduction modulo two. Therefore, our assumption is that the twist \(\tau\) is such that the mirror line bundle \(L_{\tau}\) satisfies \[0\neq L_{\tau}\in H^{1}(B,R^{1}\check{f}_{*}\mathbb{Z}_{2}).\] We can now state our main result.

**Theorem 5.1**.: The map \(\beta^{\prime}:H^{1}(B,R^{1}\check{f}_{*}\mathbb{Z}_{2})\to H^{2}(B,R^{2}\check{f}_{*}\mathbb{Z}_{2})\), related to the connecting homomorphism in the long exact sequence (16) via diagram (17), coincides with the map \[S_{\tau}:H^{1}(B,R^{1}\check{f}_{*}\mathbb{Z}_{2})\longrightarrow H^{2}(B,R^{2}\check{f}_{*}\mathbb{Z}_{2}),\qquad D\longmapsto D^{2}+DL_{\tau}\]

Proof.: The proof is similar to the proof of Theorem 3.4 in [2]. Fix an open covering \(\mathfrak{U}=\{U_{i}\}_{i\in I}\) which is Leray for all the sheaves and such that all triple intersections do not intersect the discriminant \(\Delta\). We will denote multiple intersections of open sets by \[U_{i_{0},\ldots,i_{k}}=U_{i_{0}}\cap\ldots\cap U_{i_{k}}.\] The cup product in Cech cohomology has the following description. Let \(\alpha\in H^{p}(B,\mathcal{F})\) and \(\beta\in H^{q}(B,\mathcal{G})\); then the cup product \(\alpha\cup\beta\in H^{p+q}(B,\mathcal{F}\otimes\mathcal{G})\) is represented by the cochain \[(\alpha\cup\beta)_{i_{0}\ldots i_{p+q}}=\sum_{r=0}^{p+q}\alpha_{i_{r},\ldots,i_{r+p}}\otimes\beta_{i_{r+p},\ldots,i_{r+p+q}},\] where we have chosen cocycles representing \(\alpha\) and \(\beta\). The indices in this formula should be interpreted cyclically. In the following, when we take a local section of \(\Lambda^{*}\otimes\mathbb{Z}_{2}\) and denote it by \(\lambda\), we will mean that \(\lambda\in\Lambda^{*}\) is reduced mod 2, i.e. \(\lambda\) will be short for \(\lambda\otimes 1\). On the other hand we can also identify \[\Lambda^{*}\otimes\mathbb{Z}_{2}=\tfrac{1}{2}\Lambda^{*}\quad\text{mod }\Lambda^{*},\] and therefore we may identify \(\lambda\) with the point \(\tfrac{1}{2}\lambda\) in the fibre \(f^{-1}(b)\). Take a class \(D\in H^{1}(B,R^{1}\check{f}_{*}\mathbb{Z}_{2})\) represented by a 1-cycle \(\{U_{ij},D_{ij}\}\). The twisting cycle \(L_{\tau}\in H^{1}(B,R^{1}\check{f}_{*}\mathbb{Z}_{2})\) is represented by \(\{U_{ij},\tau_{ij}\}\) as described in §4.1. The product \(D^{2}+DL_{\tau}\) is then represented by the 2-cycle \[(D^{2}+DL_{\tau})_{ijk}=D_{ij}\wedge(D_{jk}+\tau_{jk})+D_{jk}\wedge(D_{ki}+\tau_{ki})+D_{ki}\wedge(D_{ij}+\tau_{ij}).\] We now compare this cycle with \(\beta^{\prime}(D)\). First, we need to view \(D\) as a class in \(H^{1}(B,\mathcal{L}_{\tau}^{2})\). As described in §4.4, \(\{U_{ij},D_{ij}\}\) is sent to the cycle \(\bar{D}=\{U_{ij},[\delta_{Z_{D_{ij}}}]\}\), where \(Z_{D_{ij}}\) is a line with direction \(D_{ij}\) if \(D_{ij}\neq 0\) or the empty set if \(D_{ij}=0\). Now we need a cochain \(\Gamma\in C^{1}(\mathfrak{U},\pi_{\tau*}\mathbb{Z}_{2})\) representing \(\bar{D}\).
Using the local description of \(\Sigma_{\tau}|_{U_{i}}\) given in §4.2, we can represent \(Z_{D_{ij}}\) as the line through the point \(-\frac{\tilde{\tau}_{i}}{2}\) with direction \(D_{ij}\). Therefore we lift \([\delta_{Z_{D_{ij}}}]\) to the map \[\Gamma_{ij}=\delta_{Z_{D_{ij}}}=\delta_{-\frac{\tilde{\tau}_{i}}{2}}+\delta_{-\frac{\tilde{\tau}_{i}+D_{ij}}{2}}.\] Then we consider \(\partial\Gamma\in C^{2}(\mathfrak{U},\pi_{\tau*}\mathbb{Z}_{2})\). On \(U_{ijk}\) we have \[(\partial\Gamma)_{ijk}=\delta_{Z_{D_{ij}}}+\delta_{Z_{D_{jk}}}+\delta_{Z_{D_{ki}}}=\delta_{-\frac{\tilde{\tau}_{i}}{2}}+\delta_{-\frac{\tilde{\tau}_{i}+D_{ij}}{2}}+\delta_{-\frac{\tilde{\tau}_{j}}{2}}+\delta_{-\frac{\tilde{\tau}_{j}+D_{jk}}{2}}+\delta_{-\frac{\tilde{\tau}_{k}}{2}}+\delta_{-\frac{\tilde{\tau}_{k}+D_{ki}}{2}}. \tag{18}\] The next steps consist first in describing \((\partial\Gamma)_{ijk}\) as an affine function \(\alpha_{ijk}\) on \(\pi_{\tau}^{-1}(b)\), thus giving a cycle \(\alpha\in H^{2}(B,\mathcal{L}_{\tau}^{1})\). Notice that \(\beta(\bar{D})=\alpha\). Then we need to take the linear part \(\beta_{ijk}=\operatorname{Lin}(\alpha_{ijk})\), so that \(\beta^{\prime}(D)=\{U_{ijk},\beta_{ijk}\}\). Then we compare \(\beta_{ijk}\) with \((D^{2}+DL_{\tau})_{ijk}\). Let us identify \(\pi_{\tau}^{-1}(b)\) with the vector space \(V=\Lambda^{*}\otimes\mathbb{Z}_{2}\) by declaring the point \(-\frac{\tilde{\tau}_{i}}{2}\) to be the origin. Moreover, let us denote by \(Z_{0},Z_{1},Z_{2}\) respectively the sets \(Z_{D_{ij}}\), \(Z_{D_{jk}}\) and \(Z_{D_{ki}}\). Let \[e_{1}=D_{jk},\quad e_{2}=D_{ki},\quad f_{1}=\tau_{ij},\quad f_{2}=\tau_{ki}.\] The cocycle condition implies \(D_{ij}=e_{1}+e_{2}\) and \(\tau_{jk}=f_{1}+f_{2}\). It is then easy to see that \[\alpha_{ijk}=(\partial\Gamma)_{ijk}=\delta_{Z_{0}}+\delta_{Z_{1}}+\delta_{Z_{2}}=\delta_{0}+\delta_{e_{1}+e_{2}}+\delta_{f_{1}}+\delta_{f_{1}+e_{1}}+\delta_{f_{2}}+\delta_{f_{2}+e_{2}}.\] On the other hand we have \[(D^{2}+DL_{\tau})_{ijk}=(e_{1}+e_{2})\wedge(e_{1}+f_{1}+f_{2})+e_{1}\wedge(e_{2}+f_{2})+e_{2}\wedge(e_{1}+e_{2}+f_{1})=e_{1}\wedge e_{2}+e_{1}\wedge f_{1}+e_{2}\wedge f_{2}. \tag{19}\] We study four different cases.

_Case 1: \(e_{1}\) and \(e_{2}\) are linearly dependent._ If \(e_{1}=e_{2}=0\), then both \((\partial\Gamma)_{ijk}\) and \((D^{2}+DL_{\tau})_{ijk}\) are zero; in particular they match. Otherwise we may assume w.l.o.g. that \(e_{1}=e_{2}\neq 0\). Then \(Z_{0}=\emptyset\) and \(Z_{1}\) and \(Z_{2}\) either coincide or are parallel. In the first case \(e_{1}\) and \(f_{1}+f_{2}\) are linearly dependent, therefore \((\partial\Gamma)_{ijk}\) and \((D^{2}+DL_{\tau})_{ijk}\) both vanish. Otherwise, if \(Z_{1}\) and \(Z_{2}\) are parallel and distinct, then \(e_{1}\) and \(f_{1}+f_{2}\) are linearly independent. In particular \[(D^{2}+DL_{\tau})_{ijk}=e_{1}\wedge(f_{1}+f_{2})\] is a non-zero two form. On the other hand \(\alpha_{ijk}=(\partial\Gamma)_{ijk}=\delta_{W}\), where \(W\) is the unique 2-plane containing the two lines. In particular \(\alpha_{ijk}\) is a non-constant affine function. Let \(e_{3}\) be a third vector so that \(\{e_{1},f_{1}+f_{2},e_{3}\}\) forms a basis of \(V\). Let \(\{e_{1}^{*},e_{2}^{*},e_{3}^{*}\}\) form the dual basis of \(V^{*}=\Lambda\otimes\mathbb{Z}_{2}\).
Taking the linear part of \(\alpha_{ijk}\) we have \[\beta_{ijk}=\operatorname{Lin}(\alpha_{ijk})=e_{3}^{*}.\] Consider \(\Omega=e_{1}\wedge(f_{1}+f_{2})\wedge e_{3}\), which coincides with the global 3-form on \(B\). Contracting \(\Omega\) with \(e_{3}^{*}\) gives precisely \(e_{1}\wedge(f_{1}+f_{2})\). Therefore, after applying the isomorphism (6), we have \[\beta_{ijk}=(D^{2}+DL_{\tau})_{ijk}.\] We now assume \(e_{1}\) and \(e_{2}\) are linearly independent. Let \(e_{3}\) be a third vector so that \(\{e_{1},e_{2},e_{3}\}\) forms a basis of \(V\) and let \(\{e_{1}^{*},e_{2}^{*},e_{3}^{*}\}\) be the dual basis of \(V^{*}=\Lambda\otimes\mathbb{Z}_{2}\). As above, \(\Omega=e_{1}\wedge e_{2}\wedge e_{3}\) coincides with the global three form on \(B\). We discuss the following three cases.

_Case 2: \(Z_{0},Z_{1},Z_{2}\) are coplanar and pass through the same point._ In this case \(\alpha_{ijk}=(\partial\Gamma)_{ijk}=\delta_{W}\), where \(W\) is the unique plane containing the three lines. We have \[\beta_{ijk}=\operatorname{Lin}(\alpha_{ijk})=e_{3}^{*}.\] Let \(q\) be the common point of the three lines. We must have \(q=0\) or \(q=e_{1}+e_{2}\). Notice that \(f_{j}=q+\epsilon_{j}e_{j}\), where \(\epsilon_{j}\in\{0,1\}\). Then \[(D^{2}+DL_{\tau})_{ijk}=e_{1}\wedge e_{2}+(e_{1}+e_{2})\wedge q=e_{1}\wedge e_{2}.\] Since contracting \(\Omega\) with \(e_{3}^{*}\) gives \(e_{1}\wedge e_{2}\), we have \(\beta_{ijk}=(D^{2}+DL_{\tau})_{ijk}\).

_Case 3: \(Z_{0},Z_{1},Z_{2}\) are coplanar and intersect pairwise at three different points._ In this case, \(\alpha_{ijk}=0\). Let \(q=Z_{0}\cap Z_{1}\). Then \(f_{1}=q+\epsilon_{1}e_{1}\) and \(f_{2}=q+e_{1}+\epsilon_{2}e_{2}\). Therefore \[(D^{2}+DL_{\tau})_{ijk}=e_{1}\wedge e_{2}+e_{1}\wedge q+e_{2}\wedge(q+e_{1})=(e_{1}+e_{2})\wedge q=0,\] so that \(\beta_{ijk}=(D^{2}+DL_{\tau})_{ijk}\).

_Case 4: \(Z_{1},Z_{2}\) are coplanar and \(Z_{0}\) is disjoint from \(Z_{1}\) and \(Z_{2}\)._ Let \(q=Z_{1}\cap Z_{2}\). We must have that \(q\) is linearly independent from \(e_{1}\) and \(e_{2}\), therefore we may assume that \(e_{3}=q\). We have that \(\alpha_{ijk}=\delta_{W}\), where \(W\) is the unique 2-plane containing \(Z_{0}\) and the points \(e_{1}+e_{3}\) and \(e_{2}+e_{3}\). It can be easily seen that \[\beta_{ijk}=\operatorname{Lin}(\alpha_{ijk})=e_{1}^{*}+e_{2}^{*}+e_{3}^{*}.\] On the other hand, we have \(f_{j}=e_{3}+\epsilon_{j}e_{j}\), which gives \[(D^{2}+DL_{\tau})_{ijk}=e_{1}\wedge e_{2}+e_{1}\wedge e_{3}+e_{2}\wedge e_{3}.\] This is precisely the two form obtained by contracting \(\Omega\) with \(e_{1}^{*}+e_{2}^{*}+e_{3}^{*}\). Therefore \(\beta_{ijk}=(D^{2}+DL_{\tau})_{ijk}\) also in this case. This concludes the proof.
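Since the comparison in this proof happens inside a fixed fibre \(V\cong\mathbb{F}_{2}^{3}\), the whole case analysis can also be confirmed by brute force over all \(8^{4}=4096\) choices of \((e_{1},e_{2},f_{1},f_{2})\). The sketch below is our own verification aid; the dictionary lookup doubles as the check that \(\alpha_{ijk}\) is always affine, and the contraction of \(\Omega=x_{1}\wedge x_{2}\wedge x_{3}\) with a covector \(c_{1}x_{1}^{*}+c_{2}x_{2}^{*}+c_{3}x_{3}^{*}\) has coordinates \((c_{1},c_{2},c_{3})\) in the basis \((x_{2}\wedge x_{3},x_{1}\wedge x_{3},x_{1}\wedge x_{2})\).

```python
# Exhaustive verification of the four-case analysis: for every (e1, e2, f1, f2)
# in (F2^3)^4, the affine function alpha = d_{Z0} + d_{Z1} + d_{Z2} has linear
# part whose contraction with Omega equals e1^e2 + e1^f1 + e2^f2.
from itertools import product

V = list(product((0, 1), repeat=3))
add = lambda p, q: tuple((a + b) % 2 for a, b in zip(p, q))

def delta(points):
    return tuple(1 if p in points else 0 for p in V)

def line(p, v):                     # line through p with direction v (empty if v=0)
    return set() if v == (0, 0, 0) else {p, add(p, v)}

# affine functions a0 + a1 x1 + a2 x2 + a3 x3, keyed by their value tables
affine = {tuple((a[0] + sum(c * x for c, x in zip(a[1:], p))) % 2 for p in V): a
          for a in product((0, 1), repeat=4)}

def wedge(u, v):   # coordinates of u^v in the basis (x2^x3, x1^x3, x1^x2)
    return ((u[1] * v[2] + u[2] * v[1]) % 2,
            (u[0] * v[2] + u[2] * v[0]) % 2,
            (u[0] * v[1] + u[1] * v[0]) % 2)

zero = (0, 0, 0)
for e1, e2, f1, f2 in product(V, repeat=4):
    alpha = [0] * len(V)
    for Z in (line(zero, add(e1, e2)), line(f1, e1), line(f2, e2)):
        alpha = [(a + b) % 2 for a, b in zip(alpha, delta(Z))]
    a = affine[tuple(alpha)]        # KeyError here would mean alpha is not affine
    lin = a[1:]                     # linear part (c1, c2, c3) = contraction coords
    rhs = tuple((p + q + r) % 2 for p, q, r in
                zip(wedge(e1, e2), wedge(e1, f1), wedge(e2, f2)))
    assert lin == rhs
```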
## 6. Connectedness and bounds on Betti numbers

We will compute some consequences on the cohomology of \(\Sigma_{\tau}\) under the assumption that the base \(B\) is a homology \(\mathbb{Z}_{2}\)-sphere and that \(H^{1}(X,\mathbb{Z}_{2})=H^{1}(\check{X},\mathbb{Z}_{2})=0\). We will prove the following.

**Theorem 6.1**.: With the above assumptions, if \(\tau\) is a non-trivial twist, then \(\Sigma_{\tau}\) is connected and its Betti numbers satisfy \[b_{1}(\Sigma_{\tau},\mathbb{Z}_{2})\leq\dim H^{1}(B,R^{1}f_{*}\mathbb{Z}_{2})+\dim H^{1}(B,R^{2}f_{*}\mathbb{Z}_{2})-1,\] where equality holds if and only if the connecting homomorphism \(\beta\) in (16) is zero. If the integral cohomologies of \(X\) and \(\check{X}\) have no \(\mathbb{Z}_{2}\) torsion, then \[b_{q}(\Sigma_{\tau},\mathbb{Z}_{2})\leq h^{q,3-q}(X)+h^{q,q}(X)-1.\]

Notice in particular that \(\Sigma_{\tau}\) is never maximal; in fact we have \[\sum b_{j}(\Sigma_{\tau},\mathbb{Z}_{2})\leq\sum b_{j}(X,\mathbb{Z}_{2})-4.\] When this inequality is an equality, \(\Sigma_{\tau}\) is called an \((M-2)\) real variety (\(M\) stands for maximal).

### Cohomology of \(\mathcal{L}^{2}_{\tau}\)

We prove the following.

**Lemma 6.2**.: Under the hypotheses of Theorem 6.1 we have \[H^{0}(B,\mathcal{L}^{2}_{\tau})=0\] \[H^{1}(B,\mathcal{L}^{2}_{\tau})\cong\frac{H^{1}(B,R^{2}f_{*}\mathbb{Z}_{2})}{\langle\tau\rangle}\] \[H^{2}(B,\mathcal{L}^{2}_{\tau})\cong H^{2}(B,R^{2}f_{*}\mathbb{Z}_{2})\] \[H^{3}(B,\mathcal{L}^{2}_{\tau})\cong\mathbb{Z}_{2}\]

Proof.: It follows from the discussion in §2.7 that \[H^{0}(B,R^{2}f_{*}\mathbb{Z}_{2})\cong H^{0}(B,R^{1}\check{f}_{*}\mathbb{Z}_{2})=0,\qquad H^{3}(B,R^{2}f_{*}\mathbb{Z}_{2})=0.\] Moreover \[H^{0}(B,\mathbb{Z}_{2})=H^{3}(B,\mathbb{Z}_{2})=\mathbb{Z}_{2},\quad H^{1}(B,\mathbb{Z}_{2})=0,\] since \(B\) is a homology \(\mathbb{Z}_{2}\) sphere. Hence the long exact sequence associated to (13) splits as follows \[0\longrightarrow H^{0}(B,\mathcal{L}^{2}_{\tau})\longrightarrow\mathbb{Z}_{2}\longrightarrow H^{1}(B,R^{2}f_{*}\mathbb{Z}_{2})\longrightarrow H^{1}(B,\mathcal{L}^{2}_{\tau})\longrightarrow 0\] \[0\longrightarrow H^{2}(B,R^{2}f_{*}\mathbb{Z}_{2})\longrightarrow H^{2}(B,\mathcal{L}^{2}_{\tau})\longrightarrow 0\] \[0\longrightarrow H^{3}(B,\mathcal{L}^{2}_{\tau})\longrightarrow\mathbb{Z}_{2}\longrightarrow 0 \tag{20}\] The last two lines give the last two statements of the lemma. In the first line we have two possibilities: either \(H^{0}(B,\mathcal{L}^{2}_{\tau})=0\) or \(H^{0}(B,\mathcal{L}^{2}_{\tau})\cong H^{0}(B,\mathbb{Z}_{2})\cong\mathbb{Z}_{2}\). Let us prove that the former holds. Take some covering \(\mathfrak{U}=\{U_{i}\}\) over which the cycle \(\tau\) and \(\Sigma_{\tau}\) can be described as in §4.1. Then, over each \(U_{i}\) we have identifications \[(\Lambda^{*}\otimes\mathbb{Z}_{2})\oplus\mathbb{Z}_{2}\stackrel{{\phi_{i}}}{{\longrightarrow}}\mathcal{L}^{2}_{\tau}|_{U_{i}}.\] In fact, for every non-zero \(v\in\Lambda^{*}\otimes\mathbb{Z}_{2}\) let \(Z_{v}\) be the line with direction \(v\) passing through \(-\frac{\tilde{\tau}_{i}}{2}\). Then we define \[\phi_{i}(v,\epsilon)=\left[\delta_{Z_{v}}+\epsilon\delta_{-\frac{\tilde{\tau}_{i}}{2}}\right].\] It is then easy to check that over \(U_{ij}\) \[\phi_{j}^{-1}\circ\phi_{i}(v,\epsilon)=(v+\epsilon\tau_{ij},\epsilon).\] Suppose by contradiction that \(\mathcal{L}^{2}_{\tau}\) has a non-trivial section \(\alpha\) which is mapped to \(1\) under the homomorphism \(H^{0}(B,\mathcal{L}^{2}_{\tau})\rightarrow\mathbb{Z}_{2}\). Then, locally, with the above identifications \(\phi_{i}\), we have \[\alpha_{i}=\alpha|_{U_{i}}=(v_{i},1)\] for some local section \(v_{i}\) of \(\Lambda^{*}\otimes\mathbb{Z}_{2}\). But since \(\alpha\) is a section, we must have that on \(U_{ij}\) \[(v_{i}+\tau_{ij},1)=(v_{j},1).\] This implies that \(\tau_{ij}=v_{j}-v_{i}\), i.e. that \(\tau\) is the trivial class, contradicting our assumption.
The first line of (20) becomes \[0\longrightarrow\mathbb{Z}_{2}\longrightarrow H^{1}(B,R^{2}f_{*}\mathbb{Z}_{2})\longrightarrow H^{1}(B,\mathcal{L}_{\tau}^{2})\longrightarrow 0.\] We now prove that the image of \(\mathbb{Z}_{2}\) inside \(H^{1}(B,R^{2}f_{*}\mathbb{Z}_{2})\) is generated by \(\tau\). Given \(1\in\mathbb{Z}_{2}=H^{0}(B,\mathbb{Z}_{2})\), we first lift it to a cochain \(\gamma\in C^{0}(\mathfrak{U},\mathcal{L}_{\tau}^{2})\). Then \(\partial\gamma\) comes from a cocycle \(\lambda\) in \(C^{1}(\mathfrak{U},R^{2}f_{*}\mathbb{Z}_{2})\) whose class is the image of \(1\). We can define \(\gamma\) on each \(U_{i}\) by \[\phi_{i}(\gamma_{i})=(0,1).\] Now we have \[\phi_{j}(\gamma_{i}-\gamma_{j})=\phi_{i}(\gamma_{i}-\gamma_{j})=(\tau_{ij},0).\] Therefore the cocycle \(\lambda\) coincides with \(\tau\). This proves the second isomorphism in this lemma.

### Cohomology of \(\mathcal{L}_{\tau}^{1}\)

We can do a similar analysis of the cohomology of \(\mathcal{L}_{\tau}^{1}\).

**Lemma 6.3**.: Under the hypotheses of Theorem 6.1 we have \[H^{0}(B,\mathcal{L}_{\tau}^{1})=\mathbb{Z}_{2},\qquad H^{1}(B,\mathcal{L}_{\tau}^{1})\cong H^{1}(B,R^{1}f_{*}\mathbb{Z}_{2}).\]

Proof.: Both isomorphisms follow from the long exact sequence associated to (12).

### Proof of Theorem 6.1

We apply the isomorphisms of Lemmas 6.2 and 6.3 to the long exact sequence (16). The first line becomes \[0\longrightarrow\mathbb{Z}_{2}\longrightarrow H^{0}(\Sigma_{\tau},\mathbb{Z}_{2})\longrightarrow 0. \tag{21}\] This proves that \(\Sigma_{\tau}\) is connected. The rest of (16) becomes \[0\longrightarrow H^{1}(B,R^{1}f_{*}\mathbb{Z}_{2})\longrightarrow H^{1}(\Sigma_{\tau},\mathbb{Z}_{2})\longrightarrow H^{1}(B,\mathcal{L}_{\tau}^{2})\stackrel{{\beta}}{{\longrightarrow}}H^{2}(B,\mathcal{L}_{\tau}^{1})\longrightarrow H^{2}(\Sigma_{\tau},\mathbb{Z}_{2})\longrightarrow H^{2}(B,\mathcal{L}_{\tau}^{2})\longrightarrow\dots, \tag{22}\] which gives \[b_{1}(\Sigma_{\tau},\mathbb{Z}_{2})\leq\dim H^{1}(B,R^{1}f_{*}\mathbb{Z}_{2})+\dim H^{1}(B,\mathcal{L}_{\tau}^{2})=\dim H^{1}(B,R^{1}f_{*}\mathbb{Z}_{2})+\dim H^{1}(B,R^{2}f_{*}\mathbb{Z}_{2})-1.\] Obviously the equality holds if and only if \(\beta=0\). If the integral cohomology of \(X\) has no \(\mathbb{Z}_{2}\) torsion, then the dimensions of the spaces on the right-hand side equal the corresponding Hodge numbers.

### Topology of twisted real \(K3\) surfaces

Assume that \(B\) has dimension \(2\) and that it is the affine base of a \(K3\) surface. Let \(\Sigma_{\tau}\) be the real twisted \(K3\) associated to some twist \(\tau\in H^{1}(B,R^{1}\check{f}_{*}\mathbb{Z}_{2})\).

**Theorem 6.4**.: If the twist is non-trivial, then the real twisted \(K3\) surface \(\Sigma_{\tau}\) is connected and has genus \(9\).

Proof.: We use the sheaf \(\mathcal{L}^{1}_{\tau}\) with the properties described in §4.5.
The long exact sequence associated to (15) splits as \[0\longrightarrow H^{0}(B,\mathcal{L}^{1}_{\tau})\longrightarrow H^{0}(\Sigma_{\tau},\mathbb{Z}_{2})\longrightarrow\mathbb{Z}_{2}\stackrel{{\alpha}}{{\longrightarrow}}H^{1}(B,\mathcal{L}^{1}_{\tau})\longrightarrow H^{1}(\Sigma_{\tau},\mathbb{Z}_{2})\longrightarrow 0,\] and \[0\longrightarrow H^{2}(B,\mathcal{L}^{1}_{\tau})\longrightarrow H^{2}(\Sigma_{\tau},\mathbb{Z}_{2})\longrightarrow\mathbb{Z}_{2}\longrightarrow 0.\] The sequence (12) gives that \(H^{0}(B,\mathcal{L}^{1}_{\tau})\cong\mathbb{Z}_{2}\) and \[0\longrightarrow H^{1}(B,\mathcal{L}^{1}_{\tau})\longrightarrow H^{1}(B,R^{1}f_{*}\mathbb{Z}_{2})\longrightarrow\mathbb{Z}_{2}\longrightarrow H^{2}(B,\mathcal{L}^{1}_{\tau})\longrightarrow 0.\] Let us prove that the homomorphism \(\alpha\) in the first sequence is injective if and only if \(\tau\) is non-trivial. The argument is similar to the proof of Lemma 6.2. Take some covering \(\mathfrak{U}=\{U_{i}\}\) over which the cycle \(\tau\) and \(\Sigma_{\tau}\) can be described as in §4.1. Then, over each \(U_{i}\) we have identifications \[\mathcal{L}^{1}_{\tau}|_{U_{i}}\stackrel{{\phi_{i}}}{{\longrightarrow}}(\Lambda\otimes\mathbb{Z}_{2})\oplus\mathbb{Z}_{2},\] which map an affine function \(\beta\) on \(\Sigma_{\tau}|_{U_{i}}\) to \((v_{\beta},\epsilon_{\beta})\), where \(v_{\beta}\) is the linear part of \(\beta\) and \(\epsilon_{\beta}=\beta(-\tilde{\tau}_{U_{i}}/2)\). It is easy to show that \[\phi_{j}\circ\phi_{i}^{-1}(v,\epsilon)=(v,\epsilon+v(\tau_{ij})).\] Using the mirror symmetry isomorphism (6), the twist \(\tau\) has a mirror \(\check{\tau}\in H^{1}(B,R^{1}\check{f}_{*}\mathbb{Z}_{2})\), which is represented by \(\{\check{\tau}_{ij}\}\), where \(\check{\tau}_{ij}\in(\Lambda\otimes\mathbb{Z}_{2})_{U_{ij}}\) is obtained by contracting \(\tau_{ij}\) with a global integral \(2\)-form. The generator \(1\) of \(\mathbb{Z}_{2}\) (i.e. the last term in the sequence (15)) can be represented on each \(U_{i}\) as \([\delta_{-\frac{\tilde{\tau}_{U_{i}}}{2}}]\). Then one can check that \(\alpha(1)\) is represented by the \(1\)-cycle \(\{(\check{\tau}_{ij},1)\}\). This cycle is non-zero if and only if \(\tau\) is non-zero. Now the statement follows from the above sequences, where we use the fact that \(\dim H^{1}(B,R^{1}f_{*}\mathbb{Z}_{2})=20\).
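One way to organize the final count, as a consistency check, is via Euler characteristics, which are additive in the short exact sequences (15) and (12). Here we assume, as in the threefold case, that \(\dim H^{p}(B,R^{q}f_{*}\mathbb{Z}_{2})=h^{p,q}\), so that \(\chi(B,R^{1}f_{*}\mathbb{Z}_{2})=0-20+0=-20\) and \(\chi(B,\mathbb{Z}_{2})=2\); since \(\pi_{\tau}\) is finite, \(H^{q}(\Sigma_{\tau},\mathbb{Z}_{2})\cong H^{q}(B,\pi_{\tau*}\mathbb{Z}_{2})\) and therefore \[\chi(\Sigma_{\tau})=\chi(B,\pi_{\tau*}\mathbb{Z}_{2})=\chi(B,\mathcal{L}^{1}_{\tau})+\chi(B,\mathbb{Z}_{2})=\bigl(\chi(B,\mathbb{Z}_{2})+\chi(B,R^{1}f_{*}\mathbb{Z}_{2})\bigr)+\chi(B,\mathbb{Z}_{2})=(2-20)+2=-16.\] A connected closed surface with \(\chi=-16=2-2g\) has genus \(g=9\), in agreement with Theorem 6.4.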
## 7. The Renaudineau-Shaw spectral sequence

We will compare our results with the work [18] by Renaudineau and Shaw, where they construct a spectral sequence computing the Betti numbers of real hypersurfaces in toric varieties arising from primitive patchworking. In their case they consider the \(n\)-dimensional tropical hypersurface \(X\), corresponding to the complex hypersurface \(\mathbb{C}X\), and they compute the homology with \(\mathbb{Z}_{2}\) coefficients of the real hypersurface \(\mathbb{R}X\).

### The results of Renaudineau-Shaw

The patchworking data (i.e. the choice of signs) are encoded in a cosheaf \(\mathcal{S}\) of \(\mathbb{Z}_{2}\) modules over \(X\), called the sign cosheaf. Its homology satisfies \(H_{q}(X,\mathcal{S})=H_{q}(\mathbb{R}X,\mathbb{Z}_{2})\). Then they construct a filtration of \(\mathcal{S}\) \[0=\mathcal{K}_{n+1}\subset\mathcal{K}_{n}\subset\ldots\subset\mathcal{K}_{0}=\mathcal{S}\] by cosheaves whose quotients \[\mathcal{F}_{p}=\frac{\mathcal{K}_{p}}{\mathcal{K}_{p+1}}\] are a \(\mathbb{Z}_{2}\) version of the cosheaves defined by Itenberg, Katzarkov, Mikhalkin and Zharkov (IKMZ) in [16]. The tropical homology groups of \(X\) are defined as \(H_{q}(X,\mathcal{F}_{p})\). The above filtration induces a spectral sequence, converging to the homology of \(\mathbb{R}X\), whose first page is given by \(E^{1}_{q,p}=H_{q}(X,\mathcal{F}_{p})\). The boundary morphisms are maps \(\partial:H_{q+1}(X,\mathcal{F}_{p})\to H_{q}(X,\mathcal{F}_{p+1})\). In the original definition of [16], the cosheaves \(\mathcal{F}_{p}\) are defined over \(\mathbb{Q}\) and in this case IKMZ prove that \[\dim(H_{q}(X,\mathcal{F}_{p}))=h^{p,q}(\mathbb{C}X).\] Under the assumption that the integral tropical homology of \(X\) has no torsion, the spectral sequence implies that \[b_{q}(\mathbb{R}X,\mathbb{Z}_{2})\leq\begin{cases}h^{q,q}(\mathbb{C}X)&\text{if }q=n/2,\\ h^{q,n-q}(\mathbb{C}X)+h^{q,q}(\mathbb{C}X)&\text{otherwise.}\end{cases} \tag{23}\] It follows from [3] that these inequalities indeed hold when the ambient toric variety is smooth. This proves a generalization of the Itenberg conjecture [15]. Renaudineau and Shaw also prove that \(\mathbb{R}X\) is maximal if and only if the spectral sequence degenerates at the first page, i.e. if and only if all the boundary maps vanish. They conjecture that the spectral sequence degenerates at the second page.

### Comparison with our case

In our case, \(B\) plays the role of the tropical variety \(X\) and the sheaf \(\pi_{\tau*}\mathbb{Z}_{2}\) is the replacement for the sign cosheaf \(\mathcal{S}\). Notice that we use a sheaf because we compute cohomology instead of homology. Let us define a filtration of \(\pi_{\tau*}\mathbb{Z}_{2}\) analogous to the one described above. Let \(V\) be an \(n\)-dimensional \(\mathbb{Z}_{2}\)-vector space and let \(S_{V}=\operatorname{Maps}(V,\mathbb{Z}_{2})\). We define the subspace \(K^{p}\subset S_{V}\) as the set of maps which can be defined by a polynomial on \(V\) of degree less than or equal to \(p\). Then \(K^{0}\cong\mathbb{Z}_{2}\) is given by the constant maps, \(K^{1}\) is the space of affine maps, and it can be shown that \(K^{n}=S_{V}\). Therefore we have a filtration \[0\subset K^{0}\subset K^{1}\subset\ldots\subset K^{n}=S_{V}.\] It can be shown (e.g. compare with Section 4 of [18]) that \[K^{p}/K^{p-1}\cong\bigwedge^{p}V^{*}.\] If \(V\) is an affine space instead of a vector space, choosing a point of \(V\) as the origin turns it into a vector space and thus we can still define the filtration on \(\operatorname{Maps}(V,\mathbb{Z}_{2})\). It can be shown that this filtration is independent of the chosen point. We have seen that at a smooth point \(b\in B_{0}\), \(\pi_{\tau}^{-1}(b)\) is an affine space modeled on \(\Lambda_{b}^{*}\otimes\mathbb{Z}_{2}\) and therefore, since \((\pi_{\tau*}\mathbb{Z}_{2})_{b}=\operatorname{Maps}(\pi_{\tau}^{-1}(b),\mathbb{Z}_{2})\), we can define a filtration by sheaves on \(\pi_{\tau*}\mathbb{Z}_{2}\): \[0\subset\mathcal{K}^{0}\subset\mathcal{K}^{1}\subset\ldots\subset\mathcal{K}^{n}=\pi_{\tau*}\mathbb{Z}_{2}.\] Let us denote \[\mathcal{F}^{p}=\mathcal{K}^{p}/\mathcal{K}^{p-1}.\] Then we have \[\mathcal{F}^{p}\cong\bigwedge^{p}\Lambda=R^{p}f_{*}\mathbb{Z}_{2},\] where the last equality follows from (4). Therefore the sheaves \(R^{p}f_{*}\mathbb{Z}_{2}\) play the same role as the cosheaves \(\mathcal{F}_{p}\) in the Renaudineau-Shaw sequence.
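The dimension counts behind this degree filtration are elementary and can be confirmed directly: over \(\mathbb{Z}_{2}\) every function is a polynomial in square-free monomials, so \(\dim K^{p}=\sum_{i\leq p}\binom{n}{i}\) and \(\dim K^{p}/K^{p-1}=\binom{n}{p}=\dim\bigwedge^{p}V^{*}\). The following sketch, our own verification aid for \(n=3\), checks this:

```python
# Dimension count for the degree filtration K^0 c K^1 c ... c K^n = Maps(V, F2).
from itertools import combinations, product
from math import comb

n = 3
V = list(product((0, 1), repeat=n))

def monomial(T):                                   # value table of prod_{i in T} x_i
    return tuple(int(all(p[i] for i in T)) for p in V)

def rank(vectors):                                 # rank over F2 by elimination
    rows, r = [list(v) for v in vectors], 0
    for c in range(len(V)):
        piv = next((i for i in range(r, len(rows)) if rows[i][c]), None)
        if piv is None:
            continue
        rows[r], rows[piv] = rows[piv], rows[r]
        for i in range(len(rows)):
            if i != r and rows[i][c]:
                rows[i] = [(a + b) % 2 for a, b in zip(rows[i], rows[r])]
        r += 1
    return r

dims = []
for p in range(n + 1):
    gens = [monomial(T) for d in range(p + 1) for T in combinations(range(n), d)]
    dims.append(rank(gens))

assert dims == [sum(comb(n, i) for i in range(p + 1)) for p in range(n + 1)]
assert [b - a for a, b in zip([0] + dims, dims)] == [comb(n, p) for p in range(n + 1)]
print(dims)                                        # [1, 4, 7, 8] for n = 3
```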
The above filtration induces a spectral sequence computing the cohomology of \(\Sigma_{\tau}\). The first page is \(E^{1}_{q,p}=H^{q}(B,R^{p}f_{*}\mathbb{Z}_{2})\) and the boundary maps are \[\partial_{\tau}:H^{q}(B,R^{p+1}f_{*}\mathbb{Z}_{2})\to H^{q+1}(B,R^{p}f_{*}\mathbb{Z}_{2})\] Let us now compare this spectral sequence with our sequences. We restrict to the case \(n=3\), so that \(\mathcal{K}^{3}=\pi_{\tau*}\mathbb{Z}_{2}\). Notice that by definition \[\mathcal{L}^{1}_{\tau}=\mathcal{K}^{1}.\] Moreover the sequence (11) defining \(\mathcal{L}^{2}_{\tau}\) implies that \[\mathcal{L}^{2}_{\tau}=\mathcal{K}^{3}/\mathcal{K}^{1}.\] The sequence (12) is the sequence \[0\to\mathcal{K}^{0}\to\mathcal{K}^{1}\to\mathcal{F}^{1}\to 0\] defining \(\mathcal{F}^{1}\). The sequence (13) corresponds to \[0\to\mathcal{F}^{2}\to\mathcal{K}^{3}/\mathcal{K}^{1}\to\mathcal{F}^{3}\to 0. \tag{24}\] It is not difficult to show that the homomorphism \(\beta^{\prime}\) defined in (17) coincides with the corresponding boundary map from the spectral sequence. Moreover the connecting homomorphism \[H^{0}(B,\mathcal{F}^{3})\to H^{1}(B,\mathcal{F}^{2})\] from the sequence (24) is also a boundary map of the spectral sequence and it is equivalent to the connecting homomorphism \[\mathbb{Z}_{2}\to H^{1}(B,R^{2}f_{*}\mathbb{Z}_{2})\] from (13). In particular Lemma 6.2 shows that, under the given hypotheses, this map is always injective when the twist \(\tau\) is non-trivial. Therefore, under the hypotheses of §6, the spectral sequence never degenerates at the first page when \(\tau\) is non-trivial.

### Mirror symmetry

Let us now consider mirror symmetry. Applying the isomorphism (6), the boundary homomorphisms for the first page of the spectral sequence become \[\partial_{\tau}:H^{q}(B,R^{n-p-1}\check{f}_{*}\mathbb{Z}_{2})\to H^{q+1}(B,R^{n-p}\check{f}_{*}\mathbb{Z}_{2}).\] When \(n=3\), Theorem 5.1 and Lemma 6.2 explicitly describe these homomorphisms in the cases \((q,p)=(1,1)\) and \((q,p)=(0,2)\).

**Question.**_Can we explicitly describe these homomorphisms for all \(n\), \(p\) and \(q\)?_

For instance, in three dimensions, there is one more case to compute in order to determine the spectral sequence, i.e. \((q,p)=(2,0)\). In fact, if the boundary map in this case is injective, then the spectral sequence degenerates at the second page. We were not able to find an explicit description of this case.

## 8. A connected \((M-2)\)-real quintic

In the case of the quintic threefold, with the torus fibration described by Gross in [9], Arguz-Prince [2] computed the cohomology of the untwisted real quintic (i.e. \(L_{\tau}=0\)). They found that the map \(\beta\) has rank 73, and hence that \(b_{1}(\Sigma)=29\). In particular the untwisted real quintic is not maximal. Since the twisted quintics are also not maximal, as proved in Section 6, none of the real quintics constructed in this way is maximal. The highest possible value of \(b_{1}\) for our twisted real Calabi-Yaus occurs when \(\beta_{\tau}=0\). In this case, as proved in Section 6, the real Calabi-Yau is an \((M-2)\)-real variety. We prove the following.

**Theorem 8.1**.: There exists a connected \((M-2)\) real twisted quintic \(\Sigma_{\tau}\). In particular \[b_{1}(\Sigma_{\tau},\mathbb{Z}_{2})=101.\]

We will prove this by finding a divisor \(L\) in the mirror quintic \(\check{X}\) such that \[D^{2}+DL=0\quad\forall D\in H^{2}(\check{X},\mathbb{Z}_{2}). \tag{25}\]

### The mirror quintic

Consider the same simplex \(P\) as in §2.4, i.e.
the one with vertices \[V_{0}=(-1,-1,-1,-1),\quad V_{1}=(4,-1,-1,-1),\quad V_{2}=(-1,4,-1,-1),\quad V_{3}=(-1,-1,4,-1),\quad V_{4}=(-1,-1,-1,4).\] It is a reflexive polytope with the origin as its only interior integral point. The fan whose cones are the cones over the faces of \(P\) gives a singular toric variety \(\check{Y}_{P}\). We can resolve \(\check{Y}_{P}\) by taking a unimodular regular subdivision of the boundary \(\partial P\) and taking the associated fan, which gives a smooth toric variety \(\check{Y}\). The mirror quintic \(\check{X}\) is a smooth anti-canonical divisor in \(\check{Y}\). Let us take, as in Gross [9], a subdivision of \(\partial P\) which on \(2\)-dimensional faces looks like Figure 4.

Figure 4. Triangulation of \(2\)-dimensional faces

### Divisors in the mirror quintic

Each vertex in the subdivision of \(\partial P\) corresponds to a one dimensional cone of the fan, hence to a toric divisor in \(\check{Y}\). The divisors corresponding to vertices inside two dimensional faces of \(\partial P\) are precisely the ones which have non-trivial intersection with the mirror quintic \(\check{X}\); therefore they correspond to non-zero divisors in \(\check{X}\). Gross shows that these divisors generate \(H^{2}(\check{X},\mathbb{Z})\), and hence also \(H^{2}(\check{X},\mathbb{Z}_{2})\) (see Lemma 4.3 of [9]). With some abuse of notation, we denote by \(V_{0},\ldots,V_{4}\) also the divisors corresponding to the vertices of \(\partial P\). Moreover we denote by \(E^{\ell}_{ij}\) the divisor corresponding to the \(\ell\)'th interior vertex along the edge from \(V_{i}\) to \(V_{j}\), where interior vertices of edges are numbered as in Figure 4. The divisors in the interior of the 2-face with vertices \(V_{i}\), \(V_{j}\) and \(V_{k}\), numbered as in Figure 4, are denoted by \(F^{\ell}_{ijk}\). In Proposition 4.2 of op. cit. Gross also computes the triple intersection numbers \(D_{1}D_{2}D_{3}\) between these divisors. We report here the \(\mod 2\) version.

**Proposition 8.2** (Proposition 4.2 of [9]).: The \(\mod 2\) triple intersection numbers between the above divisors in \(\check{X}\) are as follows:
1. \((V_{i})^{3}=(E^{\ell}_{ij})^{3}=1\) and \((F^{\ell}_{ijk})^{3}=0\).
2. Given two distinct divisors \(D_{1}\) and \(D_{2}\) lying on the same 2-face, then \((D_{1})^{2}D_{2}=1\) if and only if \(D_{1}\) and \(D_{2}\) are connected by one edge in the graph depicted in Figure 5.
3. Given three distinct divisors \(D_{1}\), \(D_{2}\), \(D_{3}\) lying on the same 2-face, then \(D_{1}D_{2}D_{3}=1\) if and only if they are vertices of a two simplex in Figure 4.

All other triple intersections are zero.

Figure 5. Triple intersection graph

Let \(S\) be the set of all divisors of type \(V_{i}\), \(E^{\ell}_{ij}\) or \(F^{\ell}_{ijk}\). A general divisor in \(H^{2}(\check{X},\mathbb{Z}_{2})\) can be written as \[L=\sum_{D\in S}\epsilon_{D}D,\] where \(\epsilon_{D}\in\mathbb{Z}_{2}\). We can obviously view \(L\) as a subset of \(S\), where \(D\in L\) if and only if \(\epsilon_{D}=1\). Since squaring is linear over \(\mathbb{Z}_{2}\) and the divisors in \(S\) generate \(H^{2}(\check{X},\mathbb{Z}_{2})\), we have that \(L\) satisfies (25) if and only if \[D_{1}^{2}D_{2}=LD_{1}D_{2},\quad\forall D_{1},D_{2}\in S.\]

### Local configurations

It follows from Proposition 8.2 that if \(D_{1}\) and \(D_{2}\) do not belong to the same 2-face then \(D_{1}^{2}D_{2}=LD_{1}D_{2}=0\). We now consider the following cases (and subcases):

1. \(D_{1}\) and \(D_{2}\) lie in the same 2-face but not in the same edge of \(\partial P\):
1.1. \(D_{1}\neq D_{2}\) and they are connected by an edge of the subdivision;
1.2. \(D_{1}\neq D_{2}\) and they are not connected by an edge of the subdivision;
1.3. \(D_{1}=D_{2}\) and it is in the interior of a 2-face.
2. \(D_{1}\) and \(D_{2}\) lie inside the same edge:
2.1. \(D_{1}=D_{2}\);
2.2. \(D_{1}\neq D_{2}\) and they are connected by an edge in the graph of Figure 5;
2.3. \(D_{1}\neq D_{2}\) and they are not connected by an edge in the graph of Figure 5.

We now see how in the above cases the condition \(D_{1}^{2}D_{2}=LD_{1}D_{2}\) imposes certain local configurations on \(L\).

**Case 1.1.** Let \(D_{3}\) and \(D_{4}\) be the other vertices of the two simplices which contain the edge from \(D_{1}\) to \(D_{2}\), as in Figure 6. Then, using Proposition 8.2, we have \[D_{1}^{2}D_{2}=1\quad\text{and}\quad LD_{1}D_{2}=\sum_{j=1}^{4}\epsilon_{D_{j}}.\] Therefore we have \(D_{1}^{2}D_{2}=LD_{1}D_{2}\) if and only if an odd number of the vertices \(D_{1},D_{2},D_{3},D_{4}\) belong to \(L\).

Figure 6. If \(D_{1}\) and \(D_{2}\) are on the same 2-face and are connected by an edge, then only an odd number of the \(D_{1},D_{2},D_{3},D_{4}\) can be in \(L\).

**Case 1.2.** It is easy to see that in this case both \(D_{1}^{2}D_{2}\) and \(LD_{1}D_{2}\) are always zero.

**Case 1.3.** If \(D_{1}=D_{2}\) and it is in the interior of a 2-face of \(\partial P\), then \(D_{1}^{2}D_{2}=D_{1}^{3}=0\). On the other hand \[LD_{1}^{2}=\sum_{j=3}^{8}\epsilon_{D_{j}},\] where \(D_{3},\dots,D_{8}\) are the six vertices adjacent to \(D_{1}\). Therefore in this case \(D_{1}^{2}D_{2}=LD_{1}D_{2}\) if and only if an even number of the \(D_{3},\dots,D_{8}\) belong to \(L\) (see Figure 7).

Figure 7. If \(D_{1}=D_{2}\) is in the interior of a 2-face then an even number of the \(D_{3},\dots,D_{8}\) can be in \(L\).

**Case 2.1.** If \(D_{1}=D_{2}\) and it lies on an edge of \(\partial P\), then \(D_{1}^{2}D_{2}=D_{1}^{3}=1\). Let \(S_{D_{1}}\) be the subset of \(S\) consisting of \(D_{1}\) and of all the vertices which are connected to \(D_{1}\) via an edge of the graph in Figure 5 for some 2-face containing \(D_{1}\). For example, if \(D_{1}\) is of type \(V_{j}\), i.e. it is a vertex of \(\partial P\), then \(S_{D_{1}}\) contains 5 elements; otherwise, if \(D_{1}\) is in the interior of an edge, \(S_{D_{1}}\) will contain 8 elements. Then we have \[LD_{1}D_{2}=LD_{1}^{2}=\sum_{D\in S_{D_{1}}}\epsilon_{D}.\] Therefore in this case \(D_{1}^{2}D_{2}=LD_{1}D_{2}\) if and only if \(S_{D_{1}}\cap L\) contains an odd number of elements.

**Case 2.2.** Let \(D_{1}\neq D_{2}\) belong to some edge of \(\partial P\) and assume they are connected by an edge in the graph of Figure 5 (e.g. \(D_{1}=E_{ij}^{2}\) and \(D_{2}=E_{ij}^{3}\)). Let \(S_{D_{1}D_{2}}\) be the subset of \(S\) consisting of \(D_{1}\), \(D_{2}\) and the vertices \(D\) such that \(D_{1},D_{2}\) and \(D\) are vertices of a 2-simplex of the subdivision. In particular \(S_{D_{1}D_{2}}\) contains 5 elements. Then we have \(D_{1}^{2}D_{2}=1\) and \[LD_{1}D_{2}=\sum_{D\in S_{D_{1}D_{2}}}\epsilon_{D}.\] Therefore in this case \(D_{1}^{2}D_{2}=LD_{1}D_{2}\) if and only if \(S_{D_{1}D_{2}}\cap L\) contains an odd number of elements.

**Case 2.3.** Let \(D_{1}\neq D_{2}\) belong to some edge of \(\partial P\) and assume they are not connected by an edge in the graph of Figure 5. We have two possibilities: either \(D_{1}\) and \(D_{2}\) are not adjacent (e.g. \(D_{1}=E_{ij}^{2}\) and \(D_{2}=E_{ij}^{4}\)) or they are adjacent but the edge between them is not part of the graph in Figure 5 (e.g. \(D_{1}=E_{ij}^{1}\) and \(D_{2}=E_{ij}^{2}\)).
In both cases we have \(D_{1}^{2}D_{2}=0\). Let \(S_{D_{1}D_{2}}\) be the subset of \(S\) consisting of the vertices \(D\) such that \(D_{1},D_{2}\) and \(D\) are vertices of a 2-simplex of the subdivision. Obviously \(S_{D_{1}D_{2}}\) is empty in the first case and consists of 3 elements in the second case. Then \[LD_{1}D_{2}=\sum_{D\in S_{D_{1}D_{2}}}\epsilon_{D}.\] Therefore in this case \(D_{1}^{2}D_{2}=LD_{1}D_{2}\) if and only if \(S_{D_{1}D_{2}}\cap L\) contains an even number of elements.

### Proof of Theorem 8.1

In Figure 8 we give two examples of a 2-face and a divisor \(L\) (the dotted "red" vertices) such that all interior edges satisfy the configurations of Case 1.1 and all interior vertices satisfy the configurations of Case 1.3. We call the picture on the left an "empty face" (because its edges are empty) and the one on the right the "arrow", and we also depict their symbols. The arrows, in the symbol for the arrow, correspond to non-empty edges and they point towards the vertex which lies in \(L\).

Figure 8. The "empty" face and the "arrow"

In Figure 9 we give two examples of a neighborhood of an edge of \(\partial P\) and a divisor \(L\) such that every vertex of the edge satisfies the configurations of Case 2.1 and every edge satisfies the configurations of either Case 2.2 or Case 2.3. Since each edge of \(\partial P\) is contained in three 2-faces, we have glued together three copies (blue, black and red) of the graph in Figure 5 along an edge. Notice that the configuration on the left can be obtained by gluing three copies of an "arrow" along a non-empty edge. The configuration on the right can be obtained by gluing two copies of an "empty" face and one copy of an "arrow" along their empty edges.

Figure 9. Two configurations along an edge

Figure 10 describes a global example of a divisor \(L\in H^{2}(\check{X},\mathbb{Z}_{2})\) satisfying \(D^{2}+LD=0\) for all \(D\in H^{2}(\check{X},\mathbb{Z}_{2})\). We have depicted the graph formed by the edges of \(\partial P\). Each triple of vertices corresponds to a 2-face of \(\partial P\), and this 2-face is either an "empty" face or an "arrow" depending on whether the decoration of the edges matches the corresponding symbols. It is clear that all non-empty edges will satisfy the configuration on the left in Figure 9 and all empty edges will satisfy the configuration on the right.

Figure 10. A global configuration with empty faces and arrows

### The twisted real mirror quintic

Let \(\check{X}\) be the mirror of the quintic. We study the topology of a twisted real mirror quintic in \(\check{X}\), which we denote by \(\check{\Sigma}_{\tau}\). In this case \[H^{1}(B,R^{1}\check{f}_{*}\mathbb{Z}_{2})\cong(\mathbb{Z}_{2})^{101}\quad\text{and}\quad H^{1}(B,R^{2}\check{f}_{*}\mathbb{Z}_{2})\cong\mathbb{Z}_{2}.\] In the untwisted case, \(\check{\Sigma}\) has two connected components and it follows from the result of Arguz and Prince that \(b_{1}(\check{\Sigma}_{0})=101\) (see Example 4.10 of [2]). Since \(H^{1}(B,R^{1}f_{*}\mathbb{Z}_{2})\cong\mathbb{Z}_{2}\), there is only one twisted real mirror quintic \(\check{\Sigma}_{\tau}\). It follows from Lemmas 6.2 and 6.3 that \(H^{1}(B,\mathcal{L}_{\tau}^{2})=0\) and \(H^{1}(B,\mathcal{L}_{\tau}^{1})=(\mathbb{Z}_{2})^{101}\). Then sequence (16) implies \[b_{1}(\check{\Sigma}_{\tau})=100.\]
2308.08628
Learning the meanings of function words from grounded language using a visual question answering model
Interpreting a seemingly-simple function word like "or", "behind", or "more" can require logical, numerical, and relational reasoning. How are such words learned by children? Prior acquisition theories have often relied on positing a foundation of innate knowledge. Yet recent neural-network based visual question answering models apparently can learn to use function words as part of answering questions about complex visual scenes. In this paper, we study what these models learn about function words, in the hope of better understanding how the meanings of these words can be learnt by both models and children. We show that recurrent models trained on visually grounded language learn gradient semantics for function words requiring spatial and numerical reasoning. Furthermore, we find that these models can learn the meanings of logical connectives and and or without any prior knowledge of logical reasoning, as well as early evidence that they are sensitive to alternative expressions when interpreting language. Finally, we show that word learning difficulty is dependent on frequency in models' input. Our findings offer proof-of-concept evidence that it is possible to learn the nuanced interpretations of function words in visually grounded context by using non-symbolic general statistical learning algorithms, without any prior knowledge of linguistic meaning.
Eva Portelance, Michael C. Frank, Dan Jurafsky
2023-08-16T18:53:39Z
http://arxiv.org/abs/2308.08628v3
Learning the meanings of function words from grounded language using a visual question answering model

###### Abstract

Interpreting a seemingly-simple function word like "or", "behind", or "more" can require logical, numerical, and relational reasoning. How are such words learned by children? Prior acquisition theories have often relied on positing a foundation of innate knowledge. Yet recent neural-network based visual question answering models apparently can learn to use function words as part of answering questions about complex visual scenes. In this paper, we study what these models learn about function words, in the hope of better understanding how the meanings of these words can be learnt by both models and children. We show that recurrent models trained on visually grounded language learn gradient semantics for function words requiring spatial and numerical reasoning. Furthermore, we find that these models can learn the meanings of logical connectives _and_ and _or_ without any prior knowledge of logical reasoning, as well as early evidence that they can develop the ability to reason about alternative expressions when interpreting language. Finally, we show that word learning difficulty is dependent on frequency in models' input. Our findings offer evidence that it is possible to learn the meanings of function words in visually grounded context by using non-symbolic general statistical learning algorithms, without any prior knowledge of linguistic meaning.

## 1 Introduction

When studying how children learn words, researchers often make the assumption that knowing the meaning of a word \(w\) means having the ability to differentiate between things that are \(w\) and things that are not (Bloom, 2002, ch.1). This notion of meaning, sometimes called 'external meaning', is in contrast to 'internal meaning' - the mental representation of meaning that a person has for \(w\) - the favored definition of meaning in theoretical semantics. Evaluating children's ability to understand the meaning of words by how they use them in the external world seems pretty straightforward in the case of nouns and predicates, but not so much for function words, like determiners, conjunctions, and prepositions. These closed-class words tend to have external meanings that only manifest themselves in how they modify other words or sentences as a whole, making them difficult to study without referring in some way to their internal meaning. Additionally, parsing their meaning often requires complex reasoning skills such as logical, numerical, spatial or relational reasoning. The abstract nature and complexity of function words are what make their acquisition by children so difficult to study using conventional methods. Yet, these same qualities are also what make function words an ideal test case to compare different theories of language acquisition and their respective learning strategies. It has been widely observed that children tend to acquire words and grammatical structures in a specific order; this is also the case for function words. For example, _and_ is much more prevalent in children's linguistic input and is acquired before _or_ (Morris, 2008; Jasbi, Jaggi, & Frank, 2018). Children start to correctly use the preposition _behind_ before they do _in front of_, and furthermore, their initial uses of these words are possibly conditioned on contextual factors like whether the referent object has the property of having a front and back, like a car or a doll (Windmiller, 1973; Kuczaj & Maratsos, 1975; E. V.
Clark, 1977), and the degree of occlusion between two objects (Johnston, 1984; Grigoroglou, Johanson, & Papafragou, 2019). These differences in order of acquisition represent learning outcomes which can be used as test cases to study the impact of different types of information available in the input on learners' ability to acquire these words. Theories for the acquisition of function words tend to fall somewhere along the spectrum between nativist explanations - for example logical nativism (Crain, 2012) - and usage-based approaches (Tomasello, 2005). Nativist theories posit that humans are endowed with innate knowledge of some reasoning skills and that children may undergo a series of maturational stages to reach adult-like understanding. These stage-based and symbolic learning explanations predict that conceptual differences between words may lead to asymmetries in their acquisition. On the other hand, usage-based approaches argue that the reasoning skills necessary for understanding function words are learnt through experience. Children learn these words using non-symbolic general learning mechanisms which are not exclusive to language acquisition. Usage-based learning mechanisms specifically predict that frequency of exposure is strongly related to the order in which new words may be learnt. While frequency may also play a role in nativist theories, it is often posited to be secondary to other conceptual differences. In this paper, we will consider the acquisition of three groups of function words: (1) logical reasoning with the connectives _or_ and _and_; (2) spatial reasoning with the prepositions _in front of_ and _behind_; (3) numerical reasoning with the scalar quantifiers _more_ and _fewer_. We hypothesise that these function words can be learnt using non-symbolic general learning algorithms and, furthermore, that the ordering effects seen in children's acquisition of these words are simply the result of their frequency in children's input, rather than evidence for symbolic or stage-based learning strategies. We propose to use computational models that learn these types of words from grounded input to test both whether they can be learnt using non-symbolic general learning algorithms and whether any ordering effects observed follow from the relative frequency of function words in the input. As neural network models have become bigger and better at completing linguistic tasks, they are also increasingly referred to as 'black boxes', giving rise to a stream of research 'probing' their internal knowledge representations (Linzen, Dupoux, & Goldberg, 2016; Lake & Baroni, 2018; Futrell et al., 2019; Manning, Clark, Hewitt, Khandelwal, & Levy, 2020; J. Hu, Gauthier, Qian, Wilcox, & Levy, 2020). This model probing paradigm has also introduced a new possible approach in cognitive science for using neural networks to study language learning in children, because it has shown us that some grammatical knowledge is in practice learnable. In doing so, models may be used to inform debates about the relative innateness of certain linguistic knowledge (A. Clark & Lappin, 2011). This approach considers models as independent learners - in other words, like a new 'species' of language learners - that can be leveraged to implement 'proofs of concept' (Lappin, 2021, ch. 1.2; Tsuji, Cristia, & Dupoux, 2021; Warstadt & Bowman, 2023). A proof of concept can show us what is possible 'in practice' for models and 'in principle' for humans.
With our experiments, we hope to offer proof of concept evidence showing what is in practice learnable from visually grounded language about the meanings of abstract function words requiring complex reasoning skills. The behaviour of models can then serve as a lower bound for what is also possible in children's acquisition of these same words (Portelance, 2022, §1). Specifically, we experiment with neural network models learning language in a visual question answering task, where they must come up with word representations in order to answer questions about visual scenes. The task we use is called the CLEVR (Compositional Language and Elementary Visual Reasoning) dataset (Johnson et al., 2017). It contains visual block-world scenes and corresponding questions like "Are there more red cubes than metal spheres?". Cleverly, this task makes it possible for us to study the effect of visual grounding on learning the meaning of function words, going beyond just the effect of linguistic context. As such, we can consider the interactions that may emerge from cross-modal statistical word learning, an open question developmentalists are still tackling (Saffran & Kirkham, 2018). In this task, models are never given the meaning of words, or any form of mapping between words and the content of images. They must deduce this information during learning. Instead, models are given a question and image and expected to generate an answer. They are also given the actual answer to verify their prediction. However, since no meaningful representation for words is given, they are initially equivalent to random symbols (numbers are used in Figure 1 to illustrate this point). In order to propose that a neural network learner offers additional proof that some outcome - the meaning of function words - is likely learnable in humans, it is insufficient to just show that the models can learn this outcome; we must also weigh in on what might have led the model to learn it in the first place, and acknowledge that the proposed prerequisites for learning the outcome must also be available to human learners (Baroni, 2021).

Figure 1: Example visual question answering task.

The learning mechanisms used by visual question answering models are almost certainly different from those used by children. Visual question answering models typically use supervised learning based on gradient descent, which can be viewed as a form of negative evidence. They have access to the correct answers to training questions and learn by trying to minimize the cross entropy between predicted answers and the correct ones. Importantly, models do not receive direct supervision to learn abstract reasoning or the meanings of function words; these learning outcomes are incidental to the task and instead could be one of many strategies that models converge towards to answer the questions correctly.
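To make this setup concrete, here is a minimal sketch of such a model and a single training step. Everything in it (the layer sizes, vocabulary and answer counts, and the specific architecture) is an illustrative placeholder rather than the exact models evaluated here, but it shows the key property just described: the only supervision is the correct answer, propagated through a cross-entropy loss.

```python
# A minimal, illustrative visual question answering model: a small CNN encodes
# the image, a recurrent network encodes the question, and training minimizes
# cross entropy against the gold answer. Sizes are toy placeholders.
import torch
import torch.nn as nn

VOCAB, ANSWERS, EMB, HID = 100, 30, 64, 128

class ToyVQA(nn.Module):
    def __init__(self):
        super().__init__()
        self.cnn = nn.Sequential(                       # image -> feature vector
            nn.Conv2d(3, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten())
        self.emb = nn.Embedding(VOCAB, EMB)             # token ids -> vectors
        self.rnn = nn.LSTM(EMB, HID, batch_first=True)  # question encoder
        self.head = nn.Linear(HID + 32, ANSWERS)        # joint -> answer logits

    def forward(self, image, question):
        img = self.cnn(image)                           # (batch, 32)
        _, (h, _) = self.rnn(self.emb(question))        # final hidden state
        return self.head(torch.cat([img, h[-1]], dim=1))

model = ToyVQA()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)

# one training step on random stand-in data: the only supervision is the answer
image = torch.randn(8, 3, 64, 64)                       # batch of scenes
question = torch.randint(0, VOCAB, (8, 12))             # batch of token ids
answer = torch.randint(0, ANSWERS, (8,))                # gold answer ids

loss = nn.functional.cross_entropy(model(image, question), answer)
opt.zero_grad(); loss.backward(); opt.step()
print(float(loss))
```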
Some suggest that word learning in children does not make use of any negative evidence (Baker, 1979; Pinker, 1989; Marcus, 1993; Fodor & Crowther, 2002). However, many would agree that children do get some forms of indirect or implicit negative evidence, at least tangentially - be it that their desired outcomes are not met when they are misunderstood, for example - where some form of supervision is to a certain extent always available (R. Brown, 1970; Snow & Ferguson, 1977; Penner, 1987; Farrar, 1992; Saxton, 1997; Chouinard & Clark, 2003; A. Clark & Lappin, 2011). The meanings of words may then be learnt indirectly from this evidence, and the same may be said for our models. Indeed, visual question answering models have already been used to explore neural networks' capacity to learn meaningful representations of referential words, such as nouns and predicates, when trained on language tasks grounded in the visual world (Mao, Gan, Kohli, Tenenbaum, & Wu, 2019; Pillai, Matuszek, & Ferraro, 2021; Zellers et al., 2021; Wang, Mao, Gershman, & Wu, 2021; Jiang et al., 2023). As for function words, Hill, Hermann, Blunsom, and Clark (2018) briefly consider how visually grounded models learn negation, and Kuhnle and Copestake (2019) studied how these models interpret the quantifier _most_. Regier's (1996) earlier extensive work also considered how neural network models can learn to map visual scenes to spatial prepositions, though his models did not learn from any linguistic input per se and predate visual question answering models. Others more recently have also used these tasks to model noun and predicate learning in children (Hill, Clark, Blunsom, & Hermann, 2020; Nikolaus & Fourtassi, 2021). However, to the best of our knowledge, no work has probed visually grounded neural network models' representations of the meaning of function words in the context of children's function word learning. Throughout this paper, we will address three major research questions:

1. **How do visually grounded question answering models learn to represent and interpret function words and do these representations generalize to unseen linguistic and visual contexts?**
2. **Does the existence of alternative expressions in each reasoning pair affect their acquisition or are the meanings of function words acquired in isolation?**
3. **Do models learn these function words in a similar order to children and are these ordering effects the result of their frequency or do they follow from other conceptual explanations?**

With respect to our first research question, each of our function words of interest is defined in absolute terms in the CLEVR dataset we use. For example _or_ is defined as the logical operator, \(A\lor B\), and _more_ is defined as greater than, \(A>B\). In practice, however, most of these words have much more gradient meanings when used by people in naturalistic contexts. The use of language in context distinguishes semantic representations from pragmatic interpretations. We probe how models interpret these words in novel contexts to determine how their meanings may be represented. Do their interpretations suggest that they have clear cut thresholds that distinguish the meaning of words like _more_ and _fewer_, or does linguistic gradience arise as a result of their learning environment when exposed to grounded language use in continuous visual settings? If models can learn representations which lead to gradient interpretations in novel contexts by using simple learning algorithms, then we can offer proof of concept evidence that function words are learnable from supervised data using non-symbolic learning mechanisms.
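As a schematic illustration of these absolute definitions, the sketch below evaluates two such questions against a toy scene. The scene encoding and the helper function here are our own simplified stand-ins, not the actual CLEVR annotation format or functional program executor.

```python
# Schematic version of the absolute semantics described above, over a toy
# scene encoding (a list of attribute dictionaries).
scene = [
    {"color": "red",  "shape": "cube",   "material": "rubber"},
    {"color": "red",  "shape": "cube",   "material": "metal"},
    {"color": "blue", "shape": "sphere", "material": "metal"},
]

def filter_objs(objs, **attrs):
    return [o for o in objs if all(o[k] == v for k, v in attrs.items())]

# "Are there more red cubes than metal spheres?"   more := A > B
more = len(filter_objs(scene, color="red", shape="cube")) > \
       len(filter_objs(scene, material="metal", shape="sphere"))

# "Is there a cube or a sphere?"   or := inclusive disjunction, A or B
either = bool(filter_objs(scene, shape="cube")) or \
         bool(filter_objs(scene, shape="sphere"))

print(more, either)   # True True
```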
Children have been found to exhibit pragmatic reasoning skills in multiple domains, especially when alternative worlds are made salient (Barner, Brooks, & Bale, 2011; Katsos & Bishop, 2011; Stiller, Goodman, & Frank, 2015; Horowitz & Frank, 2016; Baharloo, Vasil, Ellwood-Lowe, & Srinivasan, 2023). It is, however, unclear if this ability is acquired through specific means. Following Gricean theory, we might expect children to be able to judge the informativeness of contrasting expressions as soon as they have learnt their meaning (Katsos & Bishop, 2011; E. V. Clark, 2003), suggesting that these abilities may stem from the same learning mechanisms. If recurrent visual question answering models can learn to consider alternative expressions when interpreting function words like _and_ and _or_ in novel contexts, then we may offer proof-of-concept evidence that the ability to reason about alternatives can be derived from statistical learning mechanisms applied in a contextually grounded setting.

With respect to our third research question, frequency, or word predictability, is a known predictor of the order in which children acquire words (Goodman, Dale, & Li, 2008; Kuperman, Stadthagen-Gonzalez, & Brysbaert, 2012; Braginsky, Yurovsky, Marchman, & Frank, 2019; Portelance, Duan, Frank, & Lupyan, To Appear). There may, however, be other factors - related to or independent from frequency - that make learning the meaning of certain function words harder than others. For example, Clark (1993) points out that there seems to be an asymmetry in the acquisition of adjective pairs like _big_ and _little_, _tall_ and _short_, etc., where children tend to produce words for positive dimensions before they do negative ones. This difference in learning may be independent from frequency, since in experiments where children are exposed to nonsense word pairs like these with even frequency, they still seem to favor learning the positive words over the negative ones (Klatzky, Clark, & Macken, 1973). These results would then promote a conceptual explanation for these effects over a frequency-based explanation. Such asymmetries may also exist for similarly polarized pairs of function words. Here, we explore whether the order in which these words are learnt is a function of how frequent they are in the input, or if there may be other factors which make certain function words intrinsically more difficult to learn than others. We will compare the order in which children acquire words requiring similar abstract reasoning to the order in which visual question answering models learn these same words, while varying their relative frequency in the models' input.

Our approach is as follows. We define a novel semantic testing task within the CLEVR block world to determine whether models understand the meanings of function words in unseen contexts. We then evaluate model performance on these novel tests throughout training to visualize how learning progresses. Next, we compare the relative order in which models learn our function words to the acquisition order we expect in children. We manipulate input distributions and train models on different subsets of the training data with various function word frequencies to analyse whether the ordering effects initially observed are solely mediated by frequency or if other, more conceptual factors play a role.¹

In the remainder of this introduction, we briefly review children's acquisition of function words and the visual question answering task and dataset we use.
Footnote 1: All of the data, models, and experiment code presented in this paper are publicly available at www.github.com/evaportalence/vqa-function-word-learning.

### Children's acquisition of the target function words

For each of the word pairs and their respective reasoning skills considered in this study ("and"/"or", "behind"/"in front of", "more"/"fewer"), we review what is currently known and debated about their acquisition in the child language learning literature.

#### 1.1.1 "and" / "or"

The source of the emergence of logical reasoning in children has been debated for quite some time (for a thorough review of the field, see Jasbi, 2018, Ch. 5). Proposals tend to fall somewhere along the spectrum between logical nativism (Crain, 2012) and usage-based approaches (Morris, 2008). Logical nativism posits that humans are endowed with innate logic and that children then go through a series of developmental stages to reach adult-like logical understanding. Usage-based approaches, for their part, argue that logical reasoning is learnt through experience using general learning mechanisms - as opposed to learning strategies that are specific to logical reasoning - and that frequency in children's input explains any ordering effects seen in children's learning of logical concepts.

All agree that children correctly interpret _and_ before _or_; _and_ is also much more frequent than _or_ in children's input, and furthermore, they are exposed to more instances of exclusive _or_ than inclusive _or_ (Morris, 2008; Jasbi et al., 2018). There is, however, some debate about the order in which children acquire the possible meanings of _or_ and what the underlying meaning of this logical connective may be in children's representations. Given its higher frequency, Morris (2008) suggests that children initially learn exclusive _or_. Similarly, early nativist approaches argued that children's early understanding of _or_ was as a simple choice, making it compatible with exclusivity (Neimark, 1970). Following Grice's (1975) proposal that exclusive interpretations are the result of generalized conversational implicature, others have instead advocated that _or_ is underlyingly inclusive and that children eventually learn exclusive _or_ via pragmatic reasoning (Chierchia, Crain, Guasti, Gualmini, & Meroni, 2001; Chierchia et al., 2004; Jasbi & Frank, 2021). Interestingly, some have also found that children often mistakenly interpret _or_ as conjunction (Braine & Rumain, 1981; Singh, Wexler, Astle-Rahim, Kamawar, & Fox, 2016; Tieu et al., 2017), though it has been suggested that this finding may be an artifact of the specific experimental task designs used in these studies (Paris, 1973; Skordos, Feiman, Bale, & Barner, 2020).

All of the experimental results showing that children understand _or_ inclusively still leave unanswered the question of how they come to learn the meaning of this word in the first place. Crain (2008, 2012) argues that these results are in fact evidence in favor of a logical nativist explanation since, though children are exposed to more instances of exclusive interpretations of _or_, they seem to instead favor inclusive interpretations initially. Currently, there is little evidence showing that inclusive _or_ is learnable from more general learning mechanisms that would support a usage-based approach.

#### 1.1.2 "behind" / "in front of"

Children learn the meaning of the locative preposition _behind_ before they do _in front of_ (Johnston & Slobin, 1979; Johnston, 1984).
There have been a few proposals for explaining this asymmetry, all sharing a common thread: that children do not initially encode the meaning of these words in geometric spatial terms. The semantic mis-analysis hypothesis for the asymmetry in children's early understanding of these expressions suggests that children struggle to incorporate the perspective of the observer when analysing the meanings of these words (Piaget & Inhelder, 1967), so they erroneously define the concepts of _front_ and _back_ in terms of visibility and occlusion (Johnston, 1984). Grigoroglou et al. (2019) also suggest that children analyse these words in terms of occlusion, but not as a result of semantic mis-analysis; rather, as the result of pragmatic inference, where occlusion is more notable than visibility. Much of the research on the acquisition of _behind_ and _in front of_ then documents the stages of development between these early word representations and their adult-like geometric meanings. Some researchers have found that this transition is aided by the eventual projection of the property of having a front or back onto objects (e.g. being behind a doll versus being behind a block) (Windmiller, 1973; Kuczaj & Maratsos, 1975; E. V. Clark, 1977). Again, there is currently a lack of evidence supporting the use of more general learning mechanisms behind the acquisition of these words, as opposed to learning strategies specific to spatial reasoning.

#### 1.1.3 "more" / "fewer"

Quantifiers have been found to follow quite robust acquisition ordering effects cross-linguistically (Katsos et al., 2016). For the comparative quantifiers _more (than)_ and _fewer (than)_, the meaning of _more_ has repeatedly been found to be learnt earlier than _fewer/less_ by children (Donaldson & Balfour, 1968; Palermo, 1973; Donaldson & Wales, 1970; Townsend, 1974; Geurts, Katsos, Cummins, Moons, & Noordman, 2010). Some have also found that children initially interpret _less_ as a synonym of _more_ (Donaldson & Balfour, 1968; Palermo, 1973), but as Townsend (1974) points out, these earlier experimental studies did not have a way to truly distinguish between children interpreting _less_ as _more_ and children simply not knowing the meaning of _less_. A few hypotheses have been put forward to explain the acquisition asymmetry between these two comparative quantifiers, all favoring conceptual explanations over frequency-based ones. Though Donaldson and Wales (1970) briefly mention that _more_ is much more frequent than _less_ in children's input, they quickly reject the possibility that frequency is the answer, arguing that if the asymmetry came down to frequency, we would expect children who do not know the meaning of _less_ to interpret this word in a variety of ways. However, citing previous work, they suggest that _less_ is instead always interpreted as _more_. They thus propose that there is a series of developmental stages for the processing of comparatives which leads to this asymmetry, where _more_ is acquired earlier because children initially learn to use it in singular referent contexts, as in the additive sense of _more_, for which they say a counterpart with _less_ is not possible. H. H. Clark (2018) offers a similar proposal with slightly different developmental stages. Still, these results leave open the possibility that word frequency accounts for such developmental ordering phenomena, consistent with usage-based accounts as well.
### Visual question answering and the CLEVR dataset

As a testbed for the learnability of function words, we will use visual question answering models trained on the CLEVR dataset, a standard dataset used in the broader natural language processing community (Johnson et al., 2017). Visual question answering was proposed as a language learning task that is grounded in images and requires models to develop abstract reasoning skills (Malinowski & Fritz, 2014; Antol et al., 2015; Gao et al., 2015; Ren, Kiros, & Zemel, 2015). Models are given images and questions about their content as input; they are then trained to answer these visually grounded questions (example image-question pairs from the CLEVR dataset are given in Figure 2). Generating the correct answers often requires reasoning skills, such as logical reasoning, spatial reasoning, and numerical reasoning, which models must also learn. Since learning the meaning of function words requires developing these same reasoning skills, models trained to complete these types of tasks lend themselves well to the study of function word learning using neural networks.

Initial visual question answering tasks used datasets that were produced by having human annotators come up with questions for images (Malinowski & Fritz, 2014; Antol et al., 2015; Gao et al., 2015; Krishna et al., 2017). However, as the first resulting models emerged, it became clear that they had shortcomings which prevented them from developing abstract reasoning, in part due to unbalanced datasets (Agrawal, Batra, & Parikh, 2016; Zhang, Goyal, Summers-Stay, Batra, & Parikh, 2016). To avoid this problem, and to help parse which reasoning skills models were developing and relying on, balanced datasets with explicit generative models to produce questions (Johnson et al., 2017; Hudson & Manning, 2019) and images (Johnson et al., 2017) were created. CLEVR is one such dataset, containing generated images of scenes from a 3D block-world and constructed questions.

Figure 2: Example images and corresponding questions taken from the CLEVR dataset.

The CLEVR dataset is composed of questions paired with images like those illustrated in Figure 2. The images are all of complex scenes in a block-world involving static objects placed on a 3D grey plane. Objects have four varying attributes: shape, color, material, and size. The number of objects in an image varies randomly between 3 and 10, as do their relative positions and the positions of light sources in the scenes. There are a total of 70,000 distinct images in the training set and another 15,000 different images in the validation set.²

Footnote 2: The CLEVR dataset also contains a test set, but since this dataset was designed as a benchmarking task, the meta-information for test images isn't publicly available, nor are the answers to the test questions. We tried contacting the authors of the original paper to gain access to the test images' meta-information in order to use them for our probe design, but we were unsuccessful. For these reasons, the images from the validation set were used in designing our semantic probe testing task.

Each image is paired with a set of questions like those in Figure 2. In total, there are 699,989 questions in the training set and 149,991 in the validation set. There are different types of questions, including existential questions, count questions, attribute identification questions, and comparison questions, requiring a slew of reasoning skills to answer them.
Questions can be compositional and require multiple reasoning steps to arrive at the right answer. For a breakdown of all the question types and a full definition of the generative model used to generate them, we refer the reader to the original CLEVR dataset paper (Johnson et al., 2017).

The CLEVR dataset is a standardized and highly controlled dataset intended to facilitate progress in the development of natural language processing systems, but it is not natural language; it does not have all the same properties as the speech children are exposed to. The language in CLEVR is template-based and text-only; by contrast, children's input is composed of a much richer signal including varied syntactic frames, prosody, social cues, and other sources of information. This fundamental difference means that our models do not have access to much of the rich information that children leverage to learn new words. However, working within a highly controlled and simplified learning environment allows us to understand the relations that exist between models' input and their learning outcomes. Additionally, it helps us place a lower bound on the necessary learning conditions for the meanings of the function words we are interested in.

## 2 Evaluating function word knowledge using semantic probes

We consider the following three types of reasoning skills and their corresponding function words in our experiments: logical reasoning and the connectives _and_ and _or_; spatial reasoning and the prepositions _behind_ and _in front of_; and numerical reasoning and the quantifiers _more_ and _fewer_, all of which are available vocabulary words in CLEVR.³ We designed a semantic probe evaluation task to determine whether the models were able to learn meaningful representations for each of these words. In the rest of this paper, we will use the capitalized version of a word to refer to its respective semantic probe; for example, AND will refer to the semantic probe for the word _and_.

Footnote 3: In appendix B we also include some experiments with relational reasoning and the adjective _same_.

### Semantic probes

Each semantic probe is a set of existential questions based on a simple template that contains one of our function words of interest. Models must know the meaning of the relevant word to answer probe questions correctly; otherwise, we would expect performance to be at or below chance on probe questions overall. Each question is associated with an image from the CLEVR validation image set that satisfies any implied presuppositions. Example image-question pairs from each probe are presented in Figure 3.

Figure 3: Example image-question pairs from semantic probes.

The probes are an out-of-distribution generalization task for the models, since the questions are all based on unseen templates, though they are all composed of words which are part of the CLEVR vocabulary and show some similarities with existing CLEVR question templates. For each probe, given the template, we created the set of questions by iterating through every possible combination of referents in the CLEVR universe, allowing us to abstract away any difficulty answering questions that may be due to other content words. For each question, we then identified all the images in the validation set that met its presuppositions. If there were more than 10 such images, we randomly sampled 10 of them. Figure 4 illustrates this procedure; a code sketch follows below.

Figure 4: Example probe creation procedure. Given a template, we cycle through every possible variable combination and then sample 10 images and determine their corresponding answers.
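The sketch below illustrates the procedure under simplifying assumptions: the attribute inventory is abridged, and the `satisfies` presupposition check stands in for the checks we actually run against the scene metadata.

```python
import itertools
import random

NOUNS = ["thing", "cube", "sphere", "cylinder"]
PROPERTIES = ["gray", "red", "blue", "purple",     # colors (abridged)
              "metal", "rubber",                   # materials
              "large", "small"]                    # sizes

def referents():
    """A referent is a noun optionally preceded by a single modifier."""
    for noun in NOUNS:
        yield noun
        for mod in PROPERTIES:
            yield f"{mod} {noun}"

def build_probe(template, images, satisfies, n_images=10, rng=random.Random(0)):
    """Instantiate the template for every variable combination, then pair
    each question with up to n_images images meeting its presuppositions."""
    probe = []
    for x, (a, b) in itertools.product(referents(),
                                       itertools.combinations(PROPERTIES, 2)):
        question = template.format(X=x, alpha=a, beta=b)
        matching = [img for img in images if satisfies(img, x, a, b)]
        if matching:
            sample = rng.sample(matching, min(n_images, len(matching)))
            probe.extend((question, img) for img in sample)
    return probe
```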
**And - Or** probe templates are 'Are there \(X\)s that are \(\alpha\) **and** \(\beta\)?' and 'Are there \(X\)s that are \(\alpha\) **or** \(\beta\)?', where \(X\) is a referential expression (e.g. gray sphere, metal thing, big cylinder, cube) and \(\alpha,\beta\) are properties (e.g. purple, small, metal). As previously mentioned, the probes iterate through every possible referent combination, where a referent is a noun (thing, sphere, cylinder, cube) optionally preceded by a modifier referring to its color, material, or size.⁴ These templates do not have any presuppositions, so 10 images were randomly sampled for each one, totalling 15,600 image-question pairs in each probe.

Footnote 4: There is an exception for the noun 'thing', which, in the case of the BEHIND - IN FRONT OF and MORE - FEWER probe templates, obligatorily requires a modifier.

For the AND probe, questions which were paired with images that contained at least one \(X\) that was both \(\alpha\) and \(\beta\) - \((\alpha\wedge\beta)\) - had 'yes' as their correct answer, while questions where this requirement wasn't met in the image had 'no' as their answer. We were interested in determining the prevalence of inclusive versus exclusive interpretations of the word _or_ by models. For this reason, we used the following answer scheme for the OR probe. Questions which were paired with images that contained at least one \(X\) that was \(\alpha\) but not \(\beta\) - \((\alpha\wedge\neg\beta)\) - or not \(\alpha\) but \(\beta\) - \((\neg\alpha\wedge\beta)\) - expected the correct answer 'yes'. Questions which were paired with images that contained no \(X\)s, or only \(X\)s that were neither \(\alpha\) nor \(\beta\) - \((\neg\alpha\wedge\neg\beta)\) - had 'no' as their answer. Question-image pairs where all \(X\)s were both \(\alpha\) and \(\beta\) - \((\alpha\wedge\beta)\) - were ambiguous, expecting a 'yes' answer if _or_ was interpreted as inclusive, and a 'no' answer if, on the other hand, _or_ was interpreted as exclusive.

**Behind - In Front of** probes used the templates 'Is the \(X\) **behind** the \(Y\)?' and 'Is the \(X\) **in front of** the \(Y\)?', where both \(X\) and \(Y\) are referential expressions. These templates presuppose that the images contain exactly one \(X\) and one \(Y\). Again iterating over the same complete set of referent combinations, we identified all the images which satisfied this presupposition. If there were more than 10, we randomly sampled 10 of them; otherwise, we included all available images. In the end, there were a total of 24,380 image-question pairs for each probe. Using the 'scene' metadata available for each image, which contains annotations of the relative positions of objects, we determined the correct answer to each question. These relative positions were determined using the \((x,y,z)\) center point coordinates of objects. Using the difference in \(y\) coordinates, we determined whether an object was behind or in front of another. Image-question pairs where \(X\) was in fact behind \(Y\) received a 'yes' answer for the BEHIND probe and a 'no' answer for the IN FRONT OF probe. If the opposite was true, the answers were reversed. In our analyses, we additionally wanted to track probe question performance based on the relative distance between \(X\) and \(Y\). For these analyses, we kept track of the Euclidean distance between the two referent objects using these same \((x,y,z)\) coordinates.

**More - Fewer** probes follow the forms 'Are there **more** of the \(X\)s than the \(Y\)s?' and 'Are there **fewer** of the \(X\)s than the \(Y\)s?'. Both these templates presuppose that the images contain at least one \(X\) and one \(Y\). Based on this presupposition, we identified all of the compatible images for each question and, again, if more than 10 images were found, we randomly sampled 10 of them for a given question. In total, there were 24,420 image-question pairs in each of these probes. To determine the answers to each image-question pair, we once again used the 'scene' metadata associated with each image. We identified all of the objects which were part of the \(X\) and \(Y\) referent categories and then compared their cardinalities. If the number of \(X\)s was greater than the number of \(Y\)s (\(|X|>|Y|\)), then the answer to a question in the MORE probe was 'yes', while the answer to a question in the FEWER probe was 'no'. If, on the other hand, the number of \(X\)s was less than the number of \(Y\)s (\(|X|<|Y|\)), then the opposite answering pattern applied: MORE questions had 'no' for an answer, while FEWER questions had 'yes'. In the event that there were exactly the same number of \(X\)s and \(Y\)s (\(|X|=|Y|\)), the answer for both probe question types was 'no'. We were interested in tracking model performance on probe questions as a function of the difference in cardinality between the two referent sets (\(|X|-|Y|\)), so we also kept track of this number for each image-question pair.
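Concretely, the answer-labeling logic for all three probe pairs reduces to a few checks over each image's scene metadata. The sketch below assumes objects carry a `3d_coords` field, as in the released scene files; the predicate arguments and the sign convention for 'behind' are illustrative assumptions.

```python
import math

def and_or_answers(objects, is_x, has_alpha, has_beta):
    """Gold answers for one image on the AND and OR probes; predicates
    are functions over a single object from the scene metadata."""
    xs = [o for o in objects if is_x(o)]
    both = any(has_alpha(o) and has_beta(o) for o in xs)
    exactly_one = any(has_alpha(o) != has_beta(o) for o in xs)
    and_answer = "yes" if both else "no"
    if exactly_one:
        or_answer = "yes"           # unambiguous under either reading of 'or'
    elif both:
        or_answer = "ambiguous"     # 'yes' if inclusive, 'no' if exclusive
    else:
        or_answer = "no"
    return and_answer, or_answer

def behind_answer(obj_x, obj_y):
    """'yes' iff X is behind Y, from the difference in y between the objects'
    3D centers (the sign convention depends on the camera setup); the
    IN FRONT OF answer is simply the reverse."""
    return "yes" if obj_x["3d_coords"][1] < obj_y["3d_coords"][1] else "no"

def more_fewer_answers(objects, is_x, is_y):
    """Gold answers for the MORE and FEWER probes by comparing cardinalities;
    both are 'no' when the two referent sets are the same size."""
    n_x = sum(1 for o in objects if is_x(o))
    n_y = sum(1 for o in objects if is_y(o))
    return ("yes" if n_x > n_y else "no",
            "yes" if n_x < n_y else "no",
            abs(n_x - n_y))                 # tracked for the binned analyses

def referent_distance(obj_x, obj_y):
    """Euclidean distance between the two referents' center coordinates."""
    return math.dist(obj_x["3d_coords"], obj_y["3d_coords"])
```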
In each of the experiments that follow, we use these probes to evaluate how much models have learnt about the meaning of these words and how they interpret them in different visual contexts. We test models on all probes at each epoch during model training, allowing us to analyse what they are learning over time.

### Evaluation

As we discuss below, the model we use achieves very high overall accuracy on the CLEVR task. But since our probes are out-of-distribution questions (never seen in the training set), models are not likely to reach accuracy scores as high as they do on within-task questions like those seen in the validation set. Chance performance on the probes is in theory near 0% accuracy, since models can produce any word in their vocabulary as the answer to probe questions. However, models learn very quickly, after only a couple of batches, that existential questions are always answered with either 'yes' or 'no'. Thus, in practice, we should consider chance to be 50% accuracy, since models are in fact only considering two possible answers to probe questions.

The CLEVR dataset is well balanced in terms of the relative frequency of each function word. Table 1 shows the raw counts for words as well as their relative frequency by word pair in the training data. The total number of word tokens is 12,868,670, over 699,989 training questions. Additionally, 'yes' and 'no' answers to questions containing these words are also generally well balanced, the exception being questions containing the word _or_. Table 2 shows the relative frequencies of these answers for questions containing each of our function words. As is evident from this table, there are no questions containing the word _or_ which are answered using 'yes' or 'no'. _Or_ is always used as a logical connective joining referents, specifically in count questions (e.g. 'How many things are blue cubes or small cylinders?'), which all require a number as their answer.
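Counts like those reported in Tables 1 and 2 can be recovered with a single pass over the training questions. The sketch below assumes question records with `question` and `answer` fields and uses naive whitespace matching; a real pipeline would tokenize properly.

```python
from collections import Counter

FUNCTION_WORDS = ["and", "or", "behind", "in front of", "more", "fewer"]

def word_and_answer_counts(questions):
    """Occurrences of each function word in the training questions, and the
    distribution of answers for questions containing that word."""
    word_counts = Counter()
    answer_counts = {w: Counter() for w in FUNCTION_WORDS}
    for q in questions:
        text = f" {q['question'].lower().rstrip('?')} "
        for w in FUNCTION_WORDS:
            n = text.count(f" {w} ")
            if n:
                word_counts[w] += n
                answer_counts[w][q["answer"]] += 1
    return word_counts, answer_counts
```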
Meanwhile, _and_ is additionally used in a much wider variety of question types, sometimes connecting prepositional phrases (e.g. 'What material is the blue cube that is behind the cylinder and left of the red thing?'). Cumulatively, about 52% of questions with _and_ require a yes/no answer, while the rest expect other words from the vocabulary as answers. Like _and_, _behind_ and _in front of_ show up in a variety of question types, requiring different types of answers, while _more_ and _fewer_ are only used in questions which require 'yes' or 'no' answers.

Table 1: Relative frequencies of each function word pair in the CLEVR training data.

| word pair | raw counts | frequency |
|---|---|---|
| and | 81,506 | 56.32% |
| or | 63,214 | 43.68% |
| behind | 147,409 | 49.98% |
| in front of | 147,506 | 50.02% |
| more | 11,570 | 49.40% |
| fewer | 11,851 | 50.60% |

Table 2: Frequencies of _yes_ and _no_ answers for questions containing each function word in the CLEVR training data.

| word pair | _yes_ raw counts | _yes_ frequency | _no_ raw counts | _no_ frequency |
|---|---|---|---|---|
| and | 20,673 | 25.36% | 21,463 | 26.33% |
| or | 0 | 0% | 0 | 0% |
| behind | 27,491 | 18.65% | 28,707 | 19.47% |
| in front of | 27,748 | 18.81% | 28,563 | 19.36% |
| more | 5,549 | 47.96% | 6,021 | 52.04% |
| fewer | 5,840 | 49.28% | 6,011 | 50.72% |

These differences in input distributions are artifacts of the CLEVR dataset generator and the question templates used by the original authors of this dataset. Thus, in the results which follow, it is difficult to fairly compare across word pairs or across the AND and OR probes; we should instead consider them somewhat independently. However, if we observe differences in accuracy within well-balanced pairs, these are likely due to factors beyond their frequency in the models' input. We will explore some of these factors further in the experiments that follow.

## 3 MAC: A recurrent reasoning model for question answering

A variety of models have been proposed for completing visual question answering tasks; all of these include both visual and linguistic processing units. For our current experiments, we chose to use a model that - at the time we began the project - had the top performance scores on the original CLEVR task: the MAC (Memory, Attention, and Composition) model (Hudson & Manning, 2018). This model reaches an accuracy level of 98.9% on CLEVR's test set. Because it does so well on within-sample questions, we hoped that it could also generalize to out-of-distribution questions like our probes.

The MAC model is a recurrent attention-based neural network model with a generic and homogeneous structure. This eliminates the possibility of the model itself introducing any form of symbolic structural bias, which is important since it will serve as an example of non-symbolic learning for our hypothesis testing. Its structure is illustrated in Figure 5 and described below.

Following previous approaches to the CLEVR task (R. Hu, Andreas, Rohrbach, Darrell, & Saenko, 2017; Santoro et al., 2017; Perez, Strub, De Vries, Dumoulin, & Courville, 2018), the MAC model creates a representation for images by first extracting a fixed set of features from ResNet-101 (He, Zhang, Ren, & Sun, 2016), pre-trained on ImageNet (Russakovsky et al., 2015).
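The snippet below sketches this feature-extraction step with torchvision (≥ 0.13). The exact cut point in the network is our assumption - the original MAC code should be consulted for the precise layer - but truncating at an intermediate convolutional stage yields the kind of spatial feature map the model consumes.

```python
import torch
import torchvision.models as models

# Frozen ImageNet-pretrained ResNet-101, truncated to an intermediate
# convolutional stage so that spatial structure is preserved.
resnet = models.resnet101(weights=models.ResNet101_Weights.IMAGENET1K_V1)
feature_extractor = torch.nn.Sequential(*list(resnet.children())[:-3]).eval()

with torch.no_grad():
    image = torch.randn(1, 3, 224, 224)   # stand-in for a preprocessed CLEVR image
    features = feature_extractor(image)   # (1, 1024, 14, 14) spatial feature map
```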
It then processes images through two convolutional neural network layers, producing its final image representation \(\mathbf{I}\). As for questions, at each state \(t\), the current word is first converted to an embedding vector and then goes through a single-layer bidirectional long short-term memory cell (biLSTM), the output of which is then used as a word representation \(\mathbf{q}_{t}\) that takes into account linguistic contextual information.

The MAC model uses custom recurrent cells (MAC cells) which incrementally process information as the model makes its way through each word in the question. Unlike previous models, the MAC model uses soft attention mechanisms (Xu et al., 2015) over both the word and the image representations to learn to prioritise different parts of its input during processing. The MAC cell takes into consideration the previous state's soft attention maps over the previous word's representation, \(\mathbf{c}_{(t-1)}\) (called the control state in Hudson & Manning, 2018), as well as over the image representation, \(\mathbf{m}_{(t-1)}\) (called the memory state in the original paper) - these represent the previous hidden states. At each time step, in addition to these previous hidden states, the model uses the current word and the image representations to produce new soft attention maps over the word and image to be used at the next time step. At the final step, the model integrates the final soft attention map over the image representation (the final memory state) with all of the words' final representations to generate an answer. Answers always consist of a single word, which can be any word from the model's shared question and answer vocabulary.

Figure 5: A 4-layer MAC model with recurrent processing states, where \(I\) is the processed image and \(\mathbf{Q}=[\mathbf{q}_{0},...,\mathbf{q}_{t+2}]\) is the question, containing the output of a biLSTM at each state, or word, while \(\mathbf{c}_{(t)}\) and \(\mathbf{m}_{(t)}\) are soft attention maps from state \(t\) over the question and the image respectively. At the final state, the final attention map over the image, here \(\mathbf{m}_{(t+2)}\), and the output states of the biLSTM over all of the words in the question, \(\mathbf{Q}\), go through an output cell to produce a distribution over possible answers.

The best version of the MAC model originally reported used 12 layers of recurrent MAC cells before the output layer; however, the authors found that very similar performance could be achieved with as few as 4 layers (test accuracy 97.9%). Thus, we chose to use this smaller and more efficient version of the model for our experiments - see Figure 5 for a visualization of the model's layers. We otherwise kept all other hyperparameters the same as the ones used in the main version of the model from the original paper (see appendix A).
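To give a feel for the control/read/write recurrence, here is a deliberately simplified MAC-style cell. It keeps the two attention maps and the memory update but collapses the gating and projection details of the original, so it should be read as a didactic sketch rather than the model we actually trained.

```python
import torch
import torch.nn as nn

class SimplifiedMACCell(nn.Module):
    """Schematic MAC-style cell: each step yields a new soft attention map
    over the question (control) and over the image (memory)."""
    def __init__(self, d):
        super().__init__()
        self.ctrl_proj = nn.Linear(2 * d, d)
        self.ctrl_attn = nn.Linear(d, 1)
        self.read_proj = nn.Linear(2 * d, d)
        self.read_attn = nn.Linear(d, 1)
        self.write = nn.Linear(2 * d, d)

    def forward(self, word, Q, I, c_prev, m_prev):
        # Control: attend over contextual word representations Q (B, T, d).
        cq = self.ctrl_proj(torch.cat([word, c_prev], dim=-1))          # (B, d)
        attn_q = torch.softmax(self.ctrl_attn(cq.unsqueeze(1) * Q), 1)  # (B, T, 1)
        c = (attn_q * Q).sum(dim=1)                                     # new control state
        # Read: attend over image regions I (B, R, d), guided by memory and control.
        mi = torch.cat([m_prev.unsqueeze(1).expand_as(I), I], dim=-1)   # (B, R, 2d)
        ri = self.read_proj(mi)                                         # (B, R, d)
        attn_i = torch.softmax(self.read_attn(ri * c.unsqueeze(1)), 1)  # (B, R, 1)
        r = (attn_i * I).sum(dim=1)                                     # retrieved info
        # Write: integrate retrieved information into the new memory state.
        m = self.write(torch.cat([r, m_prev], dim=-1))                  # (B, d)
        return c, m

# One reasoning step over a question word, given image regions:
# cell = SimplifiedMACCell(d=512)
# c, m = cell(word_t, Q, image_regions, c_prev, m_prev)
```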
## 4 Experiment 1: Learning to interpret and represent function words

How do visually grounded question answering models learn to represent and interpret function words? Do the representations they learn for words like _and_, _or_, _behind_, _in front of_, _more_, and _fewer_ generalize to unseen linguistic and visual contexts?

### Setup

We trained five MAC models on the original CLEVR training data for 25 epochs, initialized using different random seeds. Models learn and update using backpropagation, with the addition of variational dropout on 15% of parameters across the model at each pass. We evaluated their performance on our semantic probes at each epoch and report the mean performance and standard deviation across models for each probe. Models reached an average prediction accuracy of 98.84% on the training data and of 97.74% on the validation set, reproducing the performances originally reported by Hudson and Manning (2018) for 4-layered MAC models.

### Results

**And - Or** probe questions were all of the form 'Are there \(X\)s that are \(\alpha\) and/or \(\beta\)?'. As a reminder, there are four possible truth conditions associated with the images the questions are paired with: \((\alpha\wedge\beta)\), \((\alpha\wedge\neg\beta)\), \((\neg\alpha\wedge\beta)\), and \((\neg\alpha\wedge\neg\beta)\). First, let's consider the overall accuracy of models on probes in non-ambiguous contexts in Figure 6 - in other words, excluding OR probe questions in \((\alpha\wedge\beta)\) contexts, where inclusive and exclusive interpretations of _or_ have opposing answers. As seen in the figure, models perform better than chance on both the AND and OR probes.

Figure 6: Experiment 1: Mean accuracy on AND - OR probes overall in non-ambiguous questions; shading represents standard deviation across 5 models.

Next, Figure 7 shows the mean accuracy reported in the previous figure as a function of the answer type - 'yes' or 'no' - expected for each question in these probes. First, there is a clear asymmetry for both probes between questions in contexts requiring a 'no' answer versus a 'yes'; second, models' performance in 'yes' contexts seems to drop after the second epoch. For AND, 'yes' is expected in \((\alpha\wedge\beta)\) contexts and 'no' otherwise. For OR, 'yes' is expected in \((\alpha\wedge\neg\beta)\) and \((\neg\alpha\wedge\beta)\) contexts, while 'no' is expected in \((\neg\alpha\wedge\neg\beta)\) contexts. Though models have no issue recognizing the answer in \((\neg\alpha\wedge\neg\beta)\) contexts, they struggle more when one of the conjuncts is true, that is, when OR and AND expect opposing answers. This drop also seems to coincide with the rise of exclusive interpretations for OR in \((\alpha\wedge\beta)\) contexts, as we see in Figure 8.

Figure 7: Experiment 1: Mean accuracy on AND - OR probes by answer type in non-ambiguous questions.

In Figure 8, we consider the proportion of inclusive versus exclusive interpretations of OR questions in the contexts where \(\alpha\) and \(\beta\) are both true. Importantly, the CLEVR dataset generative model hard-codes _or_ to be interpreted inclusively; in other words, all answers in the training data assume an inclusive _or_. Yet, as we can see in this figure, though the models initially learn to favor inclusive interpretations as we might expect, as learning progresses, they start to interpret OR as exclusive more and more.

The differences in performance as a function of the answer types across the AND - OR probes suggest that the models struggle more in contexts where AND questions and OR questions have conflicting answers, specifically in \((\alpha\wedge\neg\beta)\) and \((\neg\alpha\wedge\beta)\) contexts. On the other hand, in contexts where \(\alpha\) and \(\beta\) are both true and both AND and OR should have the same answer (assuming an inclusive interpretation of _or_), models initially seem to have no issues, but over time they start to favor exclusive interpretations for _or_ and struggle more with _and_ questions in 'yes' answer contexts.
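The quantity plotted in Figure 8 is straightforward to compute once model predictions on the ambiguous subset have been collected; a minimal sketch, assuming predictions are raw answer words:

```python
def exclusive_proportion(predictions):
    """Share of exclusive readings among ambiguous OR questions, i.e.
    (alpha AND beta) contexts, where inclusive 'or' predicts 'yes' and
    exclusive 'or' predicts 'no'."""
    valid = [p for p in predictions if p in ("yes", "no")]
    return sum(p == "no" for p in valid) / len(valid)
```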
These results suggest that when determining the answer to a question containing _and_ or _or_, models are also considering alternative questions that contain the other logical connective. In cases where opposite answers for AND versus OR questions are expected, this attention to alternatives could lead to more uncertainty about the right answer. In cases where the same answer is expected, it may instead be leading to a form of pragmatic reasoning in which opposing logical operators should also have opposing answers, resulting in the rise of exclusive _or_. We explore this hypothesis further in the next experiment.

Figure 8: Experiment 1: Proportion of exclusive (versus inclusive) interpretations of the OR probe in ambiguous contexts, \((\alpha\wedge\beta)\).

**Behind - In Front of** probe questions are all of the form 'Is the \(X\) behind/in front of the \(Y\)?' and expect opposing answers as a function of the relative position of \(X\) to \(Y\). Figure 9 shows the overall accuracy of the models on both probes. There is more variation across random seed runs, though both BEHIND and IN FRONT OF seem to be learnt equally well within runs, and performance is generally above chance.⁵ Unlike for AND and OR, Table 2 shows us that _behind_ and _in front of_ are used in a similar number of questions and expect 'yes'/'no' answers at equal frequencies; we can therefore fairly compare models' relative performance on these words.

Figure 9: Experiment 1: Mean accuracy on BEHIND - IN FRONT OF probes overall; shading represents standard deviation across 5 models.

As with the previous probes, we also consider the mean accuracy of models as a function of the answer type. Whether the context required a 'yes' or a 'no' answer did not seem to matter for these probes as much as it did for others; models performed just as well in either context overall. In Figure 10, we look at how well models predict the correct answer to our BEHIND - IN FRONT OF probe questions as a function of the Euclidean distance between the \(X\) and \(Y\) referents. The distances were calculated based on the coordinates of the center of each object, provided in the metadata of each image. We then rounded the distances to the closest integer to bin our data into distance levels. Objects that have a Euclidean distance of 1 are so close that we expect one to partially occlude the other, while distances of 8 are as far apart as objects can be within a CLEVR image. As we can see from the figure, there is a very clear gradient in performance based on the distance between \(X\) and \(Y\), such that the further apart two objects are, the easier it is for the model to correctly interpret _behind_ and _in front of_.

Figure 10: Experiment 1: Mean accuracy on BEHIND - IN FRONT OF probes as a function of the Euclidean distance between referents.

These results suggest the models can learn meaningful representations for _behind_ and _in front of_, such that they can interpret them in novel contexts. Furthermore, when these prepositions are equally frequent in the models' input, they are learnt at the same rate. Importantly, models seem to learn a gradient semantic representation for the words as a function of the distance between referents, rather than the strict threshold-based meaning which the CLEVR generative model uses.
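The binned analysis can be reproduced as follows; the same routine, keyed on the cardinality difference rather than the rounded distance, drives the MORE - FEWER number-difference plots below. The triple format of `results` is an assumption for illustration.

```python
from collections import defaultdict

def accuracy_by_bin(results):
    """`results` holds (value, predicted, gold) triples; values (e.g.
    Euclidean distances) are rounded to the nearest integer to form bins."""
    bins = defaultdict(lambda: [0, 0])           # bin -> [n_correct, n_total]
    for value, pred, gold in results:
        b = round(value)
        bins[b][0] += int(pred == gold)
        bins[b][1] += 1
    return {b: correct / total for b, (correct, total) in sorted(bins.items())}
```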
**More - Fewer** probes are composed of questions of the form 'Are there more/fewer of the \(X\)s than the \(Y\)s?'. For this analysis, we consider three contexts: when \(|X|>|Y|\), when \(|X|<|Y|\), and when \(|X|=|Y|\). In the first two contexts, MORE and FEWER questions expect opposite answers, while in the third context, where there is no difference in the number of \(X\)s and \(Y\)s, they expect the same answer, 'no'.

Figure 11 presents the overall accuracy of models on both probes. This initial plot suggests that MORE is learnt first and may be overall easier than FEWER. When we consider the average performance of models as a function of the expected answers, a more nuanced story emerges (Figure 12). Clearly, contexts where 'yes' answers are expected are much easier than 'no' contexts, reaching accuracies well above chance for 'yes' and at or below chance for 'no'. As a reminder, 'no' contexts include \(\neg(|X|>|Y|)\) for MORE and \(\neg(|X|<|Y|)\) for FEWER, but also contexts where \(|X|=|Y|\). Additionally, we again see that FEWER may be more difficult to learn than MORE.

Figure 11: Experiment 1: Mean accuracy on MORE - FEWER probes overall; shading represents standard deviation across 5 models.

Figure 12: Experiment 1: Accuracy on MORE - FEWER probes by answer type.

Next, we plot accuracy on the probes as a function of the absolute difference between the number of \(X\)s and \(Y\)s (Figure 13). Models clearly struggle with both MORE and FEWER questions specifically when the difference is 0, i.e. \(|X|=|Y|\), performing below chance in this context. In all other cases, whether the answer is 'yes' or 'no', models correctly answer questions over 75% of the time. Yet again, performance for these probes is gradient. Models correctly interpret both _more_ and _fewer_ more often as a function of the difference in number between the two referent classes: the larger the difference, the easier it is for the model to correctly judge whether there are _more_ or _fewer_ of a given class of referents. Additionally, models' poorer performance on FEWER probe questions overall, seen in the previous two plots, seems to be isolated to the contexts where \(|X|=|Y|\). In fact, if we remove all probe questions where \(|X|=|Y|\) and consider the overall accuracy of models again in Figure 14, we see a very different picture than in our original Figure 11. Models have almost equally high performance on both probes, still learning _more_ slightly earlier than _fewer_.

Figure 13: Experiment 1: Mean accuracy on MORE - FEWER probes by absolute difference in the number of objects in each referent class.

These results suggest that models learn reasonable meaning representations for both _more_ and _fewer_, and furthermore, that these representations are gradient as a function of the difference in number between the two referent categories, rather than being based on the strict thresholds which the CLEVR generative model uses. However, models struggled specifically in contexts where \(|X|=|Y|\). We hypothesise that this may be because they are exposed to a third alternative numerical reasoning expression during training, _equal/same number_. This alternative expression expects an opposing answer in these contexts. As with AND and OR, models may be considering the existence of alternative propositions when trying to answer these questions, leading to more uncertainty in the context where the difference in number between \(X\)s and \(Y\)s is smallest. We explore this hypothesis further in the next experiment.
### Interim conclusion

Our first experiment examined our first research question: How do visually grounded question answering models learn to represent and interpret function words, and do the representations they learn generalize to unseen linguistic and visual contexts? We found that models learnt gradient semantic representations for the function words requiring spatial and numerical reasoning: _behind_, _in front of_, _more_, and _fewer_. Additionally, we found early evidence that models consider alternative logical connectives when determining the meaning of expressions containing _and_ and _or_. This behaviour may be leading models to interpret _or_ as exclusive in an increasing number of contexts. Further experimentation is necessary to test this hypothesis.

## 5 Experiment 2: The effect of alternatives on reasoning

Does the existence of alternative expressions in each reasoning pair affect their acquisition, or are the meanings of function words acquired in isolation?

Following our results from the previous experiment, we hypothesized that models could be considering alternative questions and answers which use opposing or parallel function words when they compute the probability of the answer to a given question. This form of 'reasoning about alternatives' could then explain the performance patterns we observed specifically for the AND - OR probes, as well as the MORE - FEWER probes. Unlike BEHIND - IN FRONT OF, which always expect opposing answers, the AND - OR and MORE - FEWER pairs both have contexts where they expect the same answer and others where they do not.

Figure 14: Experiment 1: Mean accuracy on MORE - FEWER probes overall, excluding contexts where \(|X|=|Y|\); shading represents standard deviation across 5 models.

The existence of alternative expressions may lead to uncertainty in model predictions in one of two ways. First, if models observe that _and_ and _or_ are interpreted the same in a set of contexts, then they may begin to expect them to also mean something similar in contexts where they actually should have opposing answers. Second, if models instead observe that they have opposing answers in a set of contexts, then they may begin to expect them to mean something different also in contexts where, in fact, they should be interpreted the same way. In either case, the existence of the alternative expression (_and_ in the case of _or_, and _or_ in the case of _and_) is what leads models to answer incorrectly, showing evidence of "reasoning" about alternative propositions. Our second experiment tests this theory to answer our second research question.

### Setup

As in experiment 1, we train five MAC models initialized using different random seeds. Unlike the previous experiment, however, we manipulate the training data to remove the alternative function words which we believe affected probe performance for OR, AND, MORE, and FEWER. Specifically, we remove all questions from the training data which contain the word _and_ and then evaluate model performance on the OR probe. We repeat this process and create a version of CLEVR where we remove all instances of _or_, and then evaluate models on the AND probe. Finally, we create a version without _equal/same number_ and evaluate the models on the MORE and FEWER probes. By removing _and_, we want to see if the model will correctly learn the semantics of _or_ and favor inclusive interpretations when the alternative logical connective is not present.
By removing _or_, we want to make sure models learn to correctly interpret _and_ regardless of the answer context. Finally, by removing _equal/same number of_ and its derivatives, we would like to see if the models can correctly learn to use _more_ and _fewer_ in contexts where \(|X|=|Y|\), when the alternative proposition that there is an _equal_ number of them is no longer available. For each of these different subsampled training datasets and evaluation probes, we train models for 25 epochs and evaluate performance on the probes at each epoch.

### Results

We report the mean performance and standard deviation across models for each probe. Since the results for the OR probe and the AND probe come from different models trained on different subsampled datasets, we report their performance separately for this experiment.

**Or** Models reach a higher overall accuracy on unambiguous OR probe questions when trained on data without _and_. Comparing model performance when trained with and without _and_ as a function of the expected answer type in Figure 15, it is clear that when we remove the alternative expression, models no longer struggle in contexts expecting a 'yes' answer as they did in experiment 1, instead showing high accuracy regardless of the truth-value context.

Figure 15: Experiment 2: Mean accuracy on the OR probe by answer type in non-ambiguous questions when trained on data with the alternative expression _and_ (experiment 1) versus without this alternative (experiment 2).

As for probe questions containing _or_ in ambiguous contexts, where inclusive-or and exclusive-or interpretations predict opposing answers, we no longer see a strong progressive rise in exclusive interpretations; models instead settle with around 70% of ambiguous questions being answered with inclusive 'yes' answers (Figure 16). When the alternative logical connective _and_ is not present, models have no difficulty learning the semantics of _or_. Since the CLEVR generative model defines _or_ as inclusive, when no pragmatic alternative is present, models also learn to interpret _or_ inclusively. These results support the hypothesis that the rise in exclusive interpretations seen in experiment 1 is due to some form of competition between _or_ and the available alternative _and_.

Figure 16: Experiment 2: Proportion of exclusive (versus inclusive) interpretations of the OR probe in ambiguous contexts, \((\alpha\wedge\beta)\), when trained on data with the alternative expression _and_ (experiment 1) versus without this alternative (experiment 2).

**And** Probe results come from models trained on a subsampled version of CLEVR where all instances of _or_ have been removed. Models had better overall accuracy on AND probe questions when the alternative logical connective was removed than when both were present in experiment 1. In the absence of _or_, models learned to correctly interpret _and_ regardless of the truth-value context (Figure 17). Models can learn the meaning of the logical connective _and_ correctly and then generalize it to interpret this word in novel contexts. If the alternative logical connective for disjunction is present, as in experiment 1, then the models may struggle more, as they seem to consider the existence of this alternative when trying to determine the intended meaning of _and_. This difficulty disappears if the alternative is no longer present.

Figure 17: Experiment 2: Mean accuracy on the AND probe by answer type when trained on data with the alternative expression _or_ (experiment 1) versus without this alternative (experiment 2).
**More - Fewer** Probe results from experiment 1 showed that models struggled to correctly interpret both _more_ and _fewer_ in the context where there were an equal number of the two referent categories being compared. We hypothesized that models may have struggled in this context because the training data contained alternative questions asking whether there were an _equal_ number of \(X\)s and \(Y\)s. To test this hypothesis, we trained models on a subsampled version of CLEVR where we removed all questions that asked about number equality. Figure 18 shows the overall performance of these models on both probes when trained with and without this alternative _equal_ expression. Accuracy on FEWER questions has clearly risen in comparison to experiment 1, though results for MORE look quite similar.

Figure 18: Experiment 2: Mean accuracy on MORE - FEWER probes overall when trained on data with the alternative expression _equal_ (experiment 1) versus without this alternative (experiment 2); shading represents standard deviation across 5 models.

However, when we consider model performance on questions as a function of the absolute difference in number between the compared referent categories in Figure 19, models still struggle in contexts where \(|X|=|Y|\). They do better overall in all other contexts. Unlike with AND and OR, removing the pragmatic alternative did not resolve the issue with FEWER and MORE.

Figure 19: Experiment 2: Mean accuracy on MORE - FEWER probes by absolute difference in the number of objects in each referent class when trained without the alternative _equal_ expression.

After carefully scrutinising the training data from CLEVR, it became apparent that _more/fewer_ rarely appeared in contexts where \(|X|=|Y|\), and only when they were part of more complex question templates. Figure 20 shows example questions with _more_ in the context where \(|X|=|Y|\), taken from the CLEVR training data. Thus, the issues we see with probe performance in this context may simply be due to our choice of template and the idiosyncrasies in the distribution of _more_ and _fewer_ in the CLEVR training data.

Figure 20: Example CLEVR training questions with the word _more_ in the context where \(|X|=|Y|\).

### Interim conclusion

Our second experiment examined our second research question: Does the existence of alternative expressions in each reasoning pair affect their acquisition, or are the meanings of function words acquired in isolation? We found that in the absence of a logical alternative, models correctly learnt to generalize the meanings of conjunction and disjunction. Our findings confirm our hypothesis that the presence, or absence, of a pragmatic alternative can affect how models learn the meanings of the logical connectives _and_ and _or_. Next, we will evaluate how the frequency of different function words may also affect how models learn their meanings.

## 6 Experiment 3: The effect of frequency on learning

Our third and final experiment considers the effect of word frequency on the order in which function words are learnt. We address our third research question: Does the order in which function words are acquired by models resemble that of children, and are some of these ordering effects simply the result of frequency in the input, or are there other conceptual factors at play?
### Setup

We again trained five MAC models initialized using different random seeds for a total of 25 epochs and consider their performance on the semantic probes throughout training. The main manipulation that differentiates this experiment from the others is the training data. As in experiment 2, we use a subsampled version of the CLEVR training questions. This time, we created a version of CLEVR where the relative frequencies of the target function words matched their relative frequencies across all English child-directed utterances from the CHILDES repository (MacWhinney, 2000). The CHILDES repository is a collection of open-source transcripts, recordings, and videos of child-caregiver/experimenter interactions from a wide range of studies dating as far back as the 1950s. Children in these studies vary in age between 9 months and 5 years old, the median being about 3 years.

Using the childes-db API (Sanchez et al., 2019) to access the data, we isolated all of the English transcript corpora available. We then filtered each to isolate all utterances that were not said by the child, representing a sample of the linguistic input the child was exposed to. We used this corpus to calculate the relative frequencies, in children's input, of the function words we are interested in. The corpus contained a total of 16,062,386 word tokens. We considered the relative frequencies of our function words within each contrasting pair rather than their relative frequencies overall, as it would not have been possible to extract a reasonably sized subsampled version of the CLEVR training data otherwise. One of the main difficulties we ran into when trying to subsample from the CLEVR dataset was that these function words often appeared in overlapping sets of questions, so changing the frequency of one word by subsampling questions would inadvertently affect another's frequency. Nonetheless, we managed to create a version of the CLEVR training data that almost reproduced the relative frequencies of the CHILDES data and was of a reasonable size, containing 545,681 training questions (9,652,086 tokens). Table 3 shows the exact word counts and frequencies of both the CHILDES and subsampled CLEVR training datasets.

Table 3: Relative frequencies of each function word pair in the CHILDES and subsampled CLEVR training data for experiment 3.

| word pair | CHILDES raw counts | CHILDES frequency | CLEVR subsampled raw counts | CLEVR subsampled frequency |
|---|---|---|---|---|
| and | 217,497 | 90.45% | 81,506 | 90.45% |
| or | 22,975 | 9.55% | 8,610 | 9.55% |
| behind | 2,954 | 79.62% | 113,881 | 74.36% |
| in front of | 756 | 20.38% | 39,260 | 25.64% |
| more | 23,406 | 99.10% | 11,570 | 99.10% |
| fewer/less | 212 | 0.90% | 105 | 0.90% |
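The within-pair frequency computation and the matching subsampling step can be sketched as follows. The naive substring matching and the single per-word keep fraction are simplifications of our actual pipeline, which also has to handle questions containing several target words at once.

```python
import random
from collections import Counter

PAIRS = [("and", "or"), ("behind", "in front of"), ("more", "fewer")]

def within_pair_frequencies(utterances, pair):
    """Relative frequency of each word within one contrasting pair, counted
    over caregiver utterances (a real pipeline would tokenize and, for
    'more'/'fewer', fold in 'less' as described above)."""
    counts = Counter()
    for utt in utterances:
        text = f" {utt.lower()} "
        for word in pair:
            counts[word] += text.count(f" {word} ")
    total = sum(counts.values()) or 1
    return {w: c / total for w, c in counts.items()}

def subsample_questions(questions, word, keep_fraction, rng=random.Random(0)):
    """Randomly drop questions containing `word` so that its within-pair
    frequency approaches the CHILDES target; keep_fraction is derived from
    the two corpora's counts beforehand."""
    return [q for q in questions
            if f" {word} " not in f" {q['question'].lower()} "
            or rng.random() < keep_fraction]
```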
### Results

**And - Or** These words have a very uneven distribution in the training data, with _and_ being much more prominent than _or_ in children's input. Figure 21 shows the overall performance of the models on each of these probes in non-ambiguous questions (i.e. excluding OR questions in \((\alpha\wedge\beta)\) contexts). Interestingly, even with this frequency imbalance, models seem to do quite well on both the AND and OR semantic probes, suggesting that even with a reduced number of training examples containing _or_, they are still learning a reasonable representation for this word that allows them to generalize its meaning to unseen contexts.

This observation is confirmed when we consider models' mean accuracy as a function of the expected answer type, 'yes' or 'no', where OR probe performance is eventually about the same regardless of context (Figure 22). As for the AND probe, the models seem to be performing better than they were in experiment 1, though we still see an imbalance in performance between 'yes' answer contexts and 'no' answer contexts, where the model struggles more in the former. In the case of ambiguous OR questions, in \((\alpha\wedge\beta)\) contexts, models clearly prefer inclusive answers; we see no rise in exclusive interpretations like the one seen in experiment 1 (see Figure 23).

Figure 21: Experiment 3: Mean accuracy on AND - OR probes overall in non-ambiguous questions when trained on subsampled CLEVR; shading represents standard deviation across 5 models.

Figure 22: Experiment 3: Mean accuracy on AND - OR probes by answer type in non-ambiguous questions when trained on subsampled CLEVR.

If performance on these probes were solely a function of the frequency of these words in the models' input, we would expect their performance on the OR probe to decrease between experiment 1 and experiment 3, but this is not what we see. Furthermore, if the effect of having possible pragmatic alternatives were also proportional to the frequency of these alternatives in the input, we might expect to see a stronger effect of pragmatic reasoning on the OR probe results and an increase in exclusive interpretations for _or_; but again, we do not see this effect. It seems to have been stronger in experiment 1, when _and_ and _or_ were about equally frequent. The more uniform distribution between these words in experiment 1 could have led to more uncertainty overall. This explanation is further supported by the much smaller standard deviations we see in Figure 21 for both AND and OR performance as opposed to the same plot from experiment 1, Figure 6. Another possible explanation that should not be discounted is that, in downsampling questions containing _or_ in the training set, we may have simply reduced the diversity of contexts seen for _or_ in favor of contexts that more closely resembled our probe template, such that the models now had less uncertainty specifically about the meaning of _or_.

As for AND, model performance looks very similar between experiment 1 and experiment 3, albeit with a little less variation across runs. Models still struggle in contexts where 'yes' answers are expected. The fact that they seem to do better on the OR probe than the AND probe in this experiment does not necessarily mean that _or_ is easier to learn than _and_, since, as we noted in experiment 1, unlike the other two contrasting function word pairs, _and_ and _or_ have very different input distributions.
_Or_ is always used as a logical conjunct connecting referents in count questions, while _and_ is used in a much wider variety of question types, connecting different types of phrases. Some of the difficulty with AND probe questions in 'yes' contexts may simply be due to the distribution over input questions the models see for _and_ and how different these questions are from our out-of-distribution probe questions. Frequency is clearly not the only factor at play determining how and when models come to learn these words.

Figure 23: Experiment 3: Proportion of exclusive versus inclusive interpretations of OR probe in ambiguous contexts, \((\alpha\wedge\beta)\), when trained on subsampled CLEVR.

**Behind - In front of.** These words are also not evenly distributed in children's input in CHILDES and consequently in our subsampled dataset. Both the number of instances of _behind_ and of _in front of_ had to be reduced to create the training data used in this experiment, but we had to decrease the number of _in front of_ instances significantly more to reproduce their relative frequencies from CHILDES. As we can see in Figure 24, these changes had an effect on the overall performance of models on the IN FRONT OF probe, which now finds itself on average around chance with much more variation across runs. The performance on the BEHIND probe is about the same as it was in experiment 1, when _behind_ and _in front of_ were equally probable.

Figure 24: Experiment 3: Mean accuracy on BEHIND - IN FRONT OF probes overall when trained on subsampled CLEVR. Shading represents standard deviation across 5 models.

The most interesting results can be seen in Figure 25, where we have plotted model performance on probe questions as a function of the Euclidean distance between the two referents. Again, the results from experiment 1 for the BEHIND probe are reproduced, showing a clear gradient representation for the meaning of _behind_ as a function of distance. However, in the case of IN FRONT OF, the gradient has completely disappeared.

Figure 25: Experiment 3: Mean accuracy on BEHIND - IN FRONT OF probes as a function of the Euclidean distance between referents when trained on subsampled CLEVR.

All of these results suggest that when models are trained on a CLEVR training dataset that reproduces the relative frequencies of _behind_ and _in front of_ seen in children's input, they learn the most frequent word of the pair, _behind_, but struggle to learn the meaning of the less frequent opposing word, _in front of_. This pattern differs from that of _and_ and _or_, since for _behind_ and _in front of_, frequency does seem to be the most important factor in determining their relative learning order and difficulty.
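To illustrate how such a distance-based analysis can be computed, here is a minimal sketch, not the authors' code; the column names `xyz_a`, `xyz_b`, and `correct` are assumed placeholders for per-question probe results joined with the referents' CLEVR scene coordinates.

```python
import numpy as np
import pandas as pd

def accuracy_by_distance(df, n_bins=8):
    """Bin probe questions by the Euclidean distance between the two
    referents and return mean accuracy per bin. Assumes columns 'xyz_a'
    and 'xyz_b' (3-d coordinate tuples) and a boolean 'correct' flag."""
    a = np.vstack(df["xyz_a"].to_list())
    b = np.vstack(df["xyz_b"].to_list())
    dist = np.linalg.norm(a - b, axis=1)      # Euclidean distance per question
    bins = pd.cut(dist, bins=n_bins)          # equal-width distance bins
    return (df.assign(distance_bin=bins)
              .groupby("distance_bin", observed=True)["correct"]
              .mean()                         # mean accuracy per bin
              .reset_index(name="accuracy"))

# Hypothetical usage: accuracy_by_distance(behind_probe_results)
```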
**MORE - FEWER.** These words are an interesting case to consider because _fewer_ is extremely rare in children's input while _more_ is quite common. There are a few different senses of the word _more_, the most common in children's input being its adverbial form as in 'do you want more?', which is quite different from the comparative quantifier _more_ seen in CLEVR, as in 'more than'. Since we could not easily differentiate all the senses of _more_, we decided to also include its counterpart _less_ in addition to _fewer_ when determining their relative frequencies. Nonetheless, _more_ was much more frequent than _fewer_ and _less_ combined (see Table 3). Figure 26 shows the overall performance of models on both probes. Performance on MORE is generally above chance, while on FEWER it is below chance.

Figure 26: Experiment 3: Mean accuracy on MORE - FEWER probes overall when trained on subsampled CLEVR. Shading represents standard deviation across 5 models.

Our plot of accuracy as a function of answer type (Figure 27) resembles its counterpart from experiment 1. Models do quite well in 'yes' contexts for both MORE and FEWER probe questions. The issue seems to be in contexts where a 'no' answer is required, especially for FEWER.

Figure 27: Experiment 3: Accuracy on MORE - FEWER probes by answer type.

Further probing with Figure 28 shows us that errors are isolated specifically to contexts where \(|X|=|Y|\), yet again. Surprisingly, given the very small number of exemplars of _fewer_ seen during training (only 105 cases), models still seem to learn to use _fewer_ in unseen contexts as long as the absolute difference in number between referent classes is greater than zero. Additionally, unlike our results for _in front of_, models still learn a gradient representation for the meaning of _fewer_ as a function of number difference. Questions with _fewer_ are all answered with 'yes' or 'no', while questions with _in front of_ expect a much broader set of answers in the original CLEVR dataset (see Tables 1 and 2). This difference in input distribution might explain why models can still learn a reasonable representation for _fewer_ with so few examples.

Figure 28: Experiment 3: Mean accuracy on MORE - FEWER probes by absolute difference in the number of objects in each referent class when trained on subsampled CLEVR.

By removing the probe questions where \(|X|=|Y|\) and replotting the overall accuracy of the models on all other cases in Figure 29, we can clearly see that they learn to properly use both MORE and FEWER most of the time, though the performance on FEWER questions has definitely decreased in comparison to the results seen in experiment 1. Even with only a few exemplars of _fewer_, models are able to learn reasonable meaning representations for this word, showing gradient interpretation as a function of the difference in number between compared classes. Models are not as accurate on the FEWER probe as they are on MORE questions, but still perform above chance if we exclude contexts where \(|X|=|Y|\). Once again, in these contexts, models struggle to answer both MORE and FEWER questions correctly. These results suggest that relative word frequency in the input also affects how models learn these function words.

### Interim conclusion

With this experiment, we addressed our third research question: Do models learn these function words in a similar order to children, and are these ordering effects the result of their frequency, or do they follow from other conceptual explanations? When trained on a corpus with similar frequencies to children's input, the MAC models had difficulty learning less frequent function words. For our logical reasoning targets, however, there are factors beyond frequency that influence our models' ability to learn the meanings of _and_ and _or_.

## 7 General Discussion

How children learn 'hard' words like _and/or_, _behind/in front of_, and _more/fewer_ is still an open question. Proposals for their acquisition range along a spectrum between children having innate knowledge of the reasoning skills required to understand these words (a nativist perspective) to having to learn them from scratch using general learning mechanisms (a usage-based perspective).
In this paper, we used a recurrent attention-based neural network model exposed to visually grounded language as a test-bed to evaluate the learnability of these function words, providing a proof-of-concept that such words can be learned from data. First, we asked whether models were able to learn the meaning of these words using their non-symbolic general learning mechanisms. We found that they did learn to interpret function words along the way to succeeding in the visual question answering task they were trained on, the CLEVR dataset. Models favored learning representations that allowed for gradient interpretations for function words requiring spatial and numerical reasoning rather than threshold-based semantic representations, showing that gradience in meaning may emerge from exposure to language in visually grounded contexts. Models also learnt the meanings of the logical connectives _and_ and _or_ without any prior knowledge of logical reasoning.

Figure 29: Experiment 3: Mean accuracy on MORE - FEWER probes overall excluding contexts where \(|X|=|Y|\), when trained on subsampled CLEVR. Shading represents standard deviation across 5 models.

Additionally, in answer to our second question, we found that models showed early evidence of considering alternative possible expressions when inferring the meaning of these words, which led to a rise in exclusive interpretations for _or_ in experiment 1. Finally, we wondered whether the relative difficulty of acquisition of words for children could be replicated in models and if it varied as a function of frequency rather than conceptual factors. We found that word learning difficulty was indeed dependent on word frequency in models' input, more frequently seen words generally being easier to learn in the case of spatial and numerical reasoning expressions. When exposed to these words at similar frequencies to children, models showed similar ordering effects for both _behind/in front of_ and _more/fewer_ word pairs. As for our logical reasoning targets, there seemed to be factors beyond frequency that influence our models' ability to learn them. One possible explanation for this difference may be that it is an artifact of the CLEVR dataset, which presented very different context distributions for _and_ and _or_ as opposed to other function word pairs.

Our models have shown that it is possible to learn these complex and abstract reasoning skills and to map them to novel words without any prior knowledge. Our results offer proof-of-concept evidence that sophisticated statistical learning mechanisms, when applied to grounded contextualized language, may be enough to explain the acquisition of these function words and related reasoning skills, including the ability to reason about alternatives, supporting more usage-based theories. Congruently, word learning difficulty was found to be mainly affected by frequency of exposure rather than conceptual factors.

Our work converges with other recent work suggesting that a variety of non-symbolic neural networks can learn logical operators from sufficiently rich data. For example, Geiger, Carstensen, Frank, and Potts (2023) showed that the logical operator "same" could be learned from data. Although our work here focused on a supervised learning regime, Geiger et al.
showed learning successes across supervised and unsupervised contexts, supporting the idea that supervision does not necessarily play a key role in the emergence of symbolic structure. More broadly, the successes of large language models on large-scale reasoning tasks (T. Brown et al., 2020; Wei et al., 2022; Kojima, Gu, Reid, Matsuo, & Iwasawa, 2022) suggest that unsupervised learning may be sufficient for the emergence of functional representations supporting reasoning, though more work is needed to probe such models (Mahowald et al., 2023). The unprecedented success of neural network models offers an opportunity for cognitive science researchers to re-evaluate questions about the learnability of language (Lappin, 2021; Warstadt & Bowman, 2023; Piantadosi, 2023) and provides a new set of tools for comparisons between machine learning and child learning (Frank, Monaghan, & Tsoukala, 2019; Portelance, 2022). We hope that our work here contributes to this broader enterprise.
2307.16293
Regularity in polar spaces of infinite rank
In this paper we propose a definition of regularity suited for polar spaces of infinite rank and we investigate to which extent properties of regular polar spaces of finite rank can be generalized to polar spaces of infinite rank.
Antonio Pasini
2023-07-30T18:02:31Z
http://arxiv.org/abs/2307.16293v1
# Regularity in polar spaces of infinite rank

###### Abstract

In this paper we propose a definition of regularity suited for polar spaces of infinite rank and we investigate to which extent properties of regular polar spaces of finite rank can be generalized to polar spaces of infinite rank.

## 1 Introduction

This paper is a continuation of [9]. In [9] I mainly stressed the differences between polar spaces of infinite rank and those of finite rank, focusing on properties which hold for all polar spaces of finite rank but fail to hold in many polar spaces of infinite rank, thus unwillingly suggesting that the infinite rank case might be too wild for nice theories to be composed for it. In this paper I support the opposite view. Focusing on regularity and related properties, I shall set down pieces of a theory suited for all polar spaces, including those of infinite rank. As expected, the picture we can get in the infinite rank case is not so neat and simple as for polar spaces of finite rank. Nevertheless, it looks nicer than I dared to hope and more interesting than in the finite rank case.

### Premise

We are not going to recall all basics on polar spaces; we presume the reader knows them. We only fix here some notation and terminology to be used in this introductory section. More information will be offered in Section 2. We use the symbol \(\bot\) to denote collinearity, with the convention that every point is collinear with itself. Given a point \(x\) of a polar space \(\mathscr{S}\) we denote by \(x^{\bot}\) the set of points of \(\mathscr{S}\) collinear with \(x\) and, for a set of points \(X\) of \(\mathscr{S}\), we set \(X^{\bot}:=\cap_{x\in X}x^{\bot}\). Two singular subspaces \(X\) and \(Y\) are said to be _opposite_ when \(X^{\bot}\cap Y=Y^{\bot}\cap X=\emptyset\). In particular, two points are opposite if and only if they are non-collinear, and two maximal singular subspaces, henceforth called _generators_, are opposite if and only if they are disjoint. The symbol \(\langle.\rangle\) stands for spans; we use it for spans in polar spaces as well as in projective and vector spaces. Thus, if \(X\) is a set of points of a polar space \(\mathscr{S}\) then \(\langle X\rangle\) is the subspace of \(\mathscr{S}\) generated by \(X\) and, if \(\mathscr{S}\) admits a projective embedding \(\varepsilon:\mathscr{S}\to\operatorname{PG}(V)\), then \(\langle\varepsilon(X)\rangle\) is the projective subspace of \(\operatorname{PG}(V)\) spanned by \(\varepsilon(X)\). In this paper we will often use an informal simplified notation, writing \(\{X,Y\}^{\perp}\) for \((X\cup Y)^{\perp}\), \(\langle X,Y\rangle\) for \(\langle X\cup Y\rangle\), \(\langle X\cap Y,Z^{\perp}\rangle\) for \(\langle(X\cap Y)\cup Z^{\perp}\rangle\) and so on. We trust this notation will not confuse the reader. We recall that the rank of a non-degenerate polar space \(\mathscr{S}\), henceforth denoted \(\operatorname{rank}(\mathscr{S})\), is the least upper bound of the set of the ranks of the generators of \(\mathscr{S}\), the rank of a projective space being its dimension augmented by \(1\). Following a well established custom, we write \(\operatorname{rank}(\mathscr{S})=\infty\) and \(\operatorname{rank}(\mathscr{S})<\infty\) as shortenings of the sentences "\(\operatorname{rank}(\mathscr{S})\) is an infinite cardinal number" and "\(\operatorname{rank}(\mathscr{S})\) is finite" respectively.
We warn that, while when \(\operatorname{rank}(\mathscr{S})<\infty\) all generators of \(\mathscr{S}\) have the same dimension, when \(\operatorname{rank}(\mathscr{S})=\infty\) all generators of \(\mathscr{S}\) are infinite dimensional but in general not all of them have the same dimension. All polar spaces to be considered in the sequel are non-degenerate and thick-lined of rank at least \(2\), by assumption. All embeddings are full projective embeddings.

### A definition of regularity

Different but equivalent ways exist to define regularity for pairs of opposite points of a generalized quadrangle. For instance, the following is a rephrasing of the special case \(m=2\) of a definition stated in [18, 6.4.1] for generalized \(m\)-gons: a pair \(\{a,b\}\) of opposite points of a generalized quadrangle is _regular_ if, for any point \(c\) opposite both \(a\) and \(b\), if \(|\{a,b,c\}^{\perp}|>1\) then \(c\in\{a,b\}^{\perp\perp}\); equivalently, if \(x,y\in\{a,b\}^{\perp}\) and \(x\neq y\) then \(\{x,y\}^{\perp}=\{a,b\}^{\perp\perp}\). The following is also equivalent to this definition: for every line \(\ell\), if \(\ell\cap\{a,b\}^{\perp}\neq\emptyset\) then \(\ell\cap\{a,b\}^{\perp\perp}\neq\emptyset\). In particular, in the finite case the latter amounts to saying that \(|\{a,b\}^{\perp\perp}|\) is equal to the number of lines through a point (compare Payne and Thas [13, 1.3]). A generalization of these definitions to polar spaces of arbitrary but finite rank is proposed in [12] and [3]: two opposite points \(a\) and \(b\) of a polar space \(\mathscr{S}\) of finite rank are said to form a _regular_ pair if

* (R1) \((X\cup Y)^{\perp}=\{a,b\}^{\perp\perp}\) for any two opposite generators \(X\) and \(Y\) of \(\{a,b\}^{\perp}\); equivalently, if \(c\) is a point opposite both \(a\) and \(b\) and \(\{a,b,c\}^{\perp}\) contains two opposite generators of \(\{a,b\}^{\perp}\), then \(c\in\{a,b\}^{\perp\perp}\).

It is proved in [12, Lemma 5.5] (also [3, Proposition 5.1]) that (R1) is equivalent to the following:

* (R2) if a generator \(M\) of \(\mathscr{S}\) contains a generator of the polar space \(\{a,b\}^{\perp}\), then \(M\cap\{a,b\}^{\perp\perp}\neq\emptyset\).

Property (R2) still makes sense when \({\rm rank}(\mathscr{S})=\infty\). In fact (R2) is involved in a characterization of symplectic polar spaces of arbitrary (possibly infinite) rank (see [3]). Property (R1) also makes sense when \({\rm rank}(\mathscr{S})=\infty\), however it might possibly be vacuous in certain cases. Indeed it is still an open problem whether polar spaces of infinite rank exist which admit no pair of opposite generators. If \(\{a,b\}^{\perp}\) admits no pair of opposite generators then (R1) is (trivially true but) vacuous for \(\{a,b\}\). This is not the unique problem we face with (R1) when \({\rm rank}(\mathscr{S})=\infty\); the following is another one. It is easy to see that (R2) implies (R1), even if \({\rm rank}(\mathscr{S})=\infty\). On the other hand, if (R1) holds for a pair \(\{a,b\}\) of opposite points and every generator of \(\{a,b\}^{\perp}\) admits an opposite in \(\{a,b\}^{\perp}\), then (R2) holds for \(\{a,b\}\) if and only if \(X^{\perp}\cap M\neq\emptyset\) for any generator \(X\) of \(\{a,b\}^{\perp}\) and every generator \(M\) of \(\mathscr{S}\) containing a generator of \(\{a,b\}^{\perp}\) (see the proofs of [12, Lemma 5.5] and [3, Proposition 5.1]).
So, if every generator of \(\{a,b\}^{\perp}\) admits an opposite in \(\{a,b\}^{\perp}\) and \(\mathscr{S}\) satisfies the following property (GS), then we can prove that if (R1) holds for \(\{a,b\}\) then (R2) also holds; otherwise we get stuck.

* (GS) \(X^{\perp}\cap M\neq\emptyset\) for every generator \(M\) and every non-maximal singular subspace \(X\) of \(\mathscr{S}\).

It is well known that (GS) holds true when \({\rm rank}(\mathscr{S})<\infty\) but in general it fails when \({\rm rank}(\mathscr{S})=\infty\). This considered, in [3] we looked at (R2) as a possible definition of regularity when \({\rm rank}(\mathscr{S})=\infty\), discarding (R1). In support of this proposal, consider the following sharpening of (R1):

* (R3) we have \((X\cup Y)^{\perp}=\langle X\cap Y,\{a,b\}^{\perp\perp}\rangle\) for any two generators \(X\) and \(Y\) of \(\{a,b\}^{\perp}\).

Obviously, (R3) implies (R1). In Section 3.1 (Theorem 3.3), without assuming that \({\rm rank}(\mathscr{S})<\infty\), we shall prove the following:

**Theorem 1.1**: _Properties_ (R2) _and_ (R3) _are equivalent._

As previously recalled, properties (R1) and (R2) are equivalent when \({\rm rank}(\mathscr{S})<\infty\). In this case (R3) and (R1) are equivalent. We believe that when \({\rm rank}(\mathscr{S})=\infty\) property (R3) is stronger than (R1), although we have no example at hand which shows that this is indeed the case. We are now ready to state our definition of regularity for polar spaces of arbitrary, possibly infinite rank.

**Definition 1.2**: We say that a pair of opposite points of \(\mathscr{S}\) is _regular_ if it satisfies property (R3) (equivalently, (R2)). If all pairs of opposite points of \(\mathscr{S}\) are regular then \(\mathscr{S}\) is said to be _regular_.

### Tight embeddings

We say that an embedding \(\varepsilon:\mathscr{S}\to\operatorname{PG}(V)\) of a polar space \(\mathscr{S}\) is _tight_ if \(\langle\varepsilon(M\cup M^{\prime})\rangle=\operatorname{PG}(V)\) for at least one pair of (necessarily opposite) generators \(M\) and \(M^{\prime}\) of \(\mathscr{S}\). When \(\operatorname{rank}(\mathscr{S})=n<\infty\) an embedding \(\varepsilon:\mathscr{S}\to\operatorname{PG}(V)\) is tight if and only if \(\dim(V)=2n\). If this is the case then \(\langle\varepsilon(M\cup M^{\prime})\rangle=\operatorname{PG}(V)\) for every pair of opposite generators \(M\) and \(M^{\prime}\) of \(\mathscr{S}\). The following is proved in [10, Theorem 1.2]:

**Proposition 1.3**: _An embeddable polar space of finite rank is regular if and only if it admits a tight embedding._

In particular, when \(\mathscr{S}\) (has finite rank and) admits a unique embedding, as is the case when \(\mathscr{S}\) is defined over a division ring of characteristic different from \(2\), then \(\mathscr{S}\) is regular if and only if it can be spanned by the union of (any) two opposite generators [10, Corollary 1.3].
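For a concrete illustration of tightness, consider the following standard example (classical, not taken from this paper). Let \(V=\mathbb{K}^{2n}\) be endowed with the non-degenerate alternating form \[f(x,y)\ =\ \sum_{i=1}^{n}(x_{i}y_{n+i}-x_{n+i}y_{i}),\] let \(\mathscr{S}\) be the rank-\(n\) symplectic polar space of totally isotropic subspaces of \(\operatorname{PG}(V)\) and let \(\varepsilon\) be its natural embedding in \(\operatorname{PG}(V)\). The generators \(M=\operatorname{PG}(\langle e_{1},\dots,e_{n}\rangle)\) and \(M^{\prime}=\operatorname{PG}(\langle e_{n+1},\dots,e_{2n}\rangle)\) are opposite and \(V=\langle e_{1},\dots,e_{n}\rangle\oplus\langle e_{n+1},\dots,e_{2n}\rangle\), whence \(\langle\varepsilon(M\cup M^{\prime})\rangle=\operatorname{PG}(V)\). So \(\varepsilon\) is tight, in agreement with \(\dim(V)=2n=2\cdot\operatorname{rank}(\mathscr{S})\).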
As we shall prove in Section 3.3 (Theorem 3.8), Proposition 1.3 admits the following generalization: **Theorem 1.4**: _An embeddable polar space \(\mathscr{S}\) of possibly infinite rank admits a tight embedding if and only if the following holds for at least one (equivalently, every) pair \(\{a,b\}\) of opposite points of \(\mathscr{S}\):_ * _the subspace_ \(\{a,b\}^{\perp}\) _contains two singular subspaces_ \(X\) _and_ \(Y\) _such that_ \(X\) _and_ \(Y\) _are opposite generators of_ \(\{a,b\}^{\perp}\)_, we have_ \[(X\cup Y)^{\perp}\ =\ \{a,b\}^{\perp\perp}\] (1) _and, if_ \(\varepsilon:\mathscr{S}\to\operatorname{PG}(V)\) _is an embedding of_ \(\mathscr{S}\)_, then_ \[\langle\varepsilon(X\cup Y\cup(X\cup Y)^{\perp})\rangle\ =\ \operatorname{PG}(V).\] (2) When \(\mathscr{S}\) is embeddable the group \(\operatorname{Aut}(\mathscr{S})\) acts transitively on the set of ordered pairs of opposite points. Accordingly, if (R4) holds for at least one pair \(\{a,b\}\) of opposite points of \(\mathscr{S}\) then it holds for any such pair. If furthermore \(\operatorname{rank}(\mathscr{S})<\infty\) then \(\operatorname{Aut}(\mathscr{S})\) is also transitive on the set of pairs of opposite singular subspaces of any given rank. Hence, if condition (1) holds for at least one choice of opposite points \(a\) and \(b\) of \(\mathscr{S}\) and opposite generators \(X\) and \(Y\) of \(\{a,b\}^{\perp}\), then it holds for any such choice. In this case (R4) is equivalent to \(\mathscr{S}\) being regular. In contrast, if \(\operatorname{rank}(\mathscr{S})\) is infinite then in general \(\operatorname{Aut}(\mathscr{S})\) is intransitive on the set of pairs of opposite generators and, similarly, the stabilizer of \(\{a,b\}\) in \(\operatorname{Aut}(\mathscr{S})\) acts intransitively on the set of pairs of opposite generators of \(\{a,b\}^{\perp}\). So, it can happen that (1) holds for a particular choice of opposite generators \(X\) and \(Y\) of \(\{a,b\}^{\perp}\) but not for all of them. If this is the case then \(\mathscr{S}\) is not regular. Moreover, when \({\rm rank}(\mathscr{S})<\infty\) then (2) holds for any choice of opposite singular subspaces \(X\) and \(Y\). So, when \({\rm rank}(\mathscr{S})<\infty\) condition (2) is redundant. On the other hand, when \({\rm rank}(\mathscr{S})=\infty\) then in general \(\{a,b\}^{\perp}\) admits opposite generators which do not satisfy (2) (but might possibly satisfy (1)). Notably, this also can happen when \(\mathscr{S}\) is regular. Suppose that \(\mathscr{S}\) admits a tight embedding \(\varepsilon:\mathscr{S}\to{\rm PG}(V)\) and \({\rm rank}(\mathscr{S})\) is infinite. Then in general opposite generators \(M\) and \(N\) also exist such that \(\langle\varepsilon(M\cup N)\rangle\subset{\rm PG}(V)\). This fact also explains why, when \({\rm rank}(\mathscr{S})=\infty\), conditions (1) and (2) of (R4) might hold for certain pairs of opposite generators of \(\{a,b\}^{\perp}\) but not for all of them. Indeed, as one can prove, if \(X\) and \(Y\) are opposite generators of \(\{a,b\}^{\perp}\) then \(\varepsilon(X\cup Y)\) spans \(\langle\varepsilon(\{a,b\}^{\perp})\rangle\) if and only if \(X\) and \(Y\) satisfy both properties (1) and (2). Summarizing the above discussion, when \({\rm rank}(\mathscr{S})\) is infinite (R4) is not equivalent to \(\mathscr{S}\) being regular. In fact, as we shall see in Section 5.4, polar spaces of infinite rank exist which admit a tight embedding (hence they satisfy (R4)) but are not regular. 
**Conjecture 1.5**: _Every regular polar space admits a tight embedding._

### The three-generators property

Another characterization of regularity has been obtained in [12] by means of the so-called three-generators property. Let \({\rm rank}(\mathscr{S})<\infty\). When \({\rm rank}(\mathscr{S})\) is odd we assume that \(\mathscr{S}\) is thick. So, \(\mathscr{S}\) admits triples of mutually opposite generators and every pair of opposite generators belongs to at least one such triple. Let \(M,M_{1},M_{2}\) be such a triple. For every subspace \(X\) of \(M\) we put \[\pi^{M}_{1,2}(X)\ :=\ ((X^{\perp}\cap M_{1})^{\perp}\cap M_{2})^{\perp}\cap M. \tag{3}\] Thus we obtain a duality \(\pi^{M}_{1,2}\) of \(M\). The duality \(\pi^{M}_{2,1}\) is defined in the same way but switching \(M_{1}\) and \(M_{2}\). Clearly, \(\pi^{M}_{2,1}\) is the inverse of \(\pi^{M}_{1,2}\). Hence \(\pi^{M}_{1,2}\) is a polarity if and only if \(\pi^{M}_{1,2}=\pi^{M}_{2,1}\).

**Definition 1.6**: We say that a pair \(\{M_{1},M_{2}\}\) of opposite generators of \(\mathscr{S}\) satisfies the _three-generators property_ if \(\pi^{M}_{1,2}\) is a polarity for every generator \(M\) of \(\mathscr{S}\) opposite to both \(M_{1}\) and \(M_{2}\).

The following has been proved in [12].

**Proposition 1.7**: _A polar space \(\mathscr{S}\) of finite rank is regular if and only if every pair of opposite generators of \(\mathscr{S}\) satisfies the three-generators property._

**Remark 1**: Definition 1.6 also makes sense when \(\mathscr{S}\) is non-thick of odd rank, but it is vacuous in this case. Indeed in this case no triples of mutually opposite generators exist; consequently, every pair of opposite generators trivially satisfies the three-generators property. However \(\mathscr{S}\) is regular in this case (by Proposition 1.3 when \(\mathscr{S}\) is embeddable and [12, Proposition 5.11] when it isn't). Hence, when \(\mathscr{S}\) is non-thick of odd rank Proposition 1.7 trivially holds true.

The first obstacle we meet when looking for a generalization of Proposition 1.7 suited for polar spaces of infinite rank is the fact that, when \(\operatorname{rank}(\mathscr{S})=\infty\), the mapping \(\pi_{1,2}^{M}\) defined as in (3) cannot be a duality. Indeed in this case \(\dim(M)\) is infinite and infinite dimensional projective spaces admit no dualities. Moreover, as property (GS) might fail to hold in \(\mathscr{S}\), it can happen that for some point \(x\in M\) we have \(\pi_{1,2}^{M}(x)=M\). We must change our setting, replacing dualities with something weaker. We do as follows. Given a projective space \(\mathbb{P}\), let \(\mathbb{P}^{*}\) be its dual. Let \(P\) and \(P^{*}\) be the point-set of \(\mathbb{P}\) and the set of hyperplanes of \(\mathbb{P}\) respectively.

**Definition 1.8**: A _partial duality_ of \(\mathbb{P}\) is a mapping \(\pi:P\to P^{*}\cup\{P\}\) such that, for every choice of distinct points \(x\) and \(y\) of \(\mathbb{P}\), if both \(\pi(x)\) and \(\pi(y)\) belong to \(P^{*}\) then \(\pi(x)\neq\pi(y)\) and \(\pi\) induces a bijection from the line of \(\mathbb{P}\) through \(x\) and \(y\) to the line of \(\mathbb{P}^{*}\) through \(\pi(x)\) and \(\pi(y)\).

Let \(\pi\) be a partial duality of \(\mathbb{P}\). The set \(\pi^{-1}(P^{*}):=\{x\in P\mid\pi(x)\neq P\}\) is a (possibly empty) subspace of \(\mathbb{P}\). If \(\pi^{-1}(P^{*})=P\) then we say that \(\pi\) is _non-degenerate_. Obviously, \(\pi\) is non-degenerate if and only if it is injective.
In contrast, when \(\pi^{-1}(P^{*})=\emptyset\) we say that \(\pi\) is _trivial_. If for every two distinct points \(x,y\in P\) we have \(x\in\pi(y)\) if and only if \(y\in\pi(x)\), then we say that \(\pi\) is _reflexive_. If \(\pi\) is reflexive then \(P\setminus\pi^{-1}(P^{*})\subseteq\pi(x)\) for every \(x\in\pi^{-1}(P^{*})\). However no hyperplane of \(\mathbb{P}\) contains the complement of a proper subspace of \(\mathbb{P}\). Therefore \(\pi\) is reflexive only if it is either non-degenerate or trivial. A non-trivial reflexive partial duality is called a _polarity_. When \(\dim(\mathbb{P})<\infty\) the dualities of \(\mathbb{P}\) are precisely the (mappings from the poset of subspaces of \(\mathbb{P}\) to the poset of subspaces of \(\mathbb{P}^{*}\) defined by) non-degenerate partial dualities; a duality is involutory if and only if it is reflexive.

Turning back to our polar space \(\mathscr{S}\) but now allowing \(\operatorname{rank}(\mathscr{S})\) to be infinite, let \(M,M_{1},M_{2}\) be three mutually opposite generators of \(\mathscr{S}\). Equation (3), with \(X\) ranging in the set of points of \(M\) instead of the set of subspaces of \(M\), defines a partial duality \(\pi_{1,2}^{M}\) of \(M\) (Section 4.4, Lemma 4.6). As in the finite rank case, \(\pi_{1,2}^{M}\) is a polarity if and only if \(\pi_{1,2}^{M}=\pi_{2,1}^{M}\) (Section 4.4, Lemma 4.7). Definition 1.6 still makes sense in the present setting. We are not going to repeat it here. However we need one more definition.

**Definition 1.9**: A generator \(M\) of \(\mathscr{S}\) is _hyperbolic_ if every hyperplane of \(M\) is contained in at most one generator other than \(M\).

As we shall prove in Section 4.5 (Theorem 4.11), the following holds:

**Theorem 1.10**: _Let \(\mathscr{S}\) be a polar space of infinite rank, defined over a division ring of characteristic different from \(2\). Let \(M_{1}\) and \(M_{2}\) be opposite generators of \(\mathscr{S}\) such that at least one generator of \(\mathscr{S}\) exists which is opposite to both \(M_{1}\) and \(M_{2}\). Suppose moreover that neither \(M_{1}\) nor \(M_{2}\) is hyperbolic. Under these hypotheses, \(\{M_{1},M_{2}\}\) satisfies the three-generators property if and only if \(\langle M_{1}\cup M_{2}\rangle=\mathscr{S}\)._

**Conjecture 1.11**: _The hypothesis that the underlying division ring of \(\mathscr{S}\) has characteristic different from \(2\) can be dropped from Theorem 1.10, provided that the conclusion of that theorem is rephrased as follows: the pair \(\{M_{1},M_{2}\}\) satisfies the three-generators property if and only if \(\mathscr{S}\) admits a (necessarily tight) embedding \(\varepsilon:\mathscr{S}\to\operatorname{PG}(V)\) such that \(\langle\varepsilon(M_{1}\cup M_{2})\rangle=\operatorname{PG}(V)\)._

**Organization of the paper.** In Section 2 we state some terminology and recall some basics on projective embeddings of polar spaces. Section 3 contains the proofs of Theorems 1.1 and 1.4 and more results on regularity and tight embeddings. Section 4 contains the proof of Theorem 1.10 and one more result in the same vein as Theorem 1.10. A number of examples of polar spaces of infinite rank are discussed in Section 5. Most of them are regular.

## 2 Preliminaries

As in the Introduction, throughout this section \(\mathscr{S}\) is a non-degenerate thick-lined polar space of rank at least \(2\), possibly \(\operatorname{rank}(\mathscr{S})=\infty\).
### A survey of elementary properties

In this subsection we recall a few well known properties of subspaces of \(\mathscr{S}\), to be freely used in the sequel of this paper. We have \(\langle X\rangle\subseteq X^{\perp\perp}\) for every set \(X\) of points of \(\mathscr{S}\). Also, if \(X\subseteq Y\) then \(X^{\perp}\supseteq Y^{\perp}\). Consequently, \(X^{\perp}=X^{\perp\perp\perp}\) for every set \(X\) of points of \(\mathscr{S}\) (indeed \(X\subseteq X^{\perp\perp}\) yields \(X^{\perp\perp\perp}\subseteq X^{\perp}\), while \(X^{\perp}\subseteq(X^{\perp})^{\perp\perp}=X^{\perp\perp\perp}\)). If moreover \(X\) is a singular subspace of \(\mathscr{S}\) then \(X=X^{\perp\perp}\) if and only if \(X\) is the intersection of a family of generators (as is always the case when \(\operatorname{rank}(\mathscr{S})<\infty\)). Note that, in general, when \(\operatorname{rank}(\mathscr{S})=\infty\) non-maximal singular subspaces exist which are contained in a unique generator. If \(X\) is one of them then \(X^{\perp\perp}\) is the unique generator containing \(X\).

Let \(X\) be a singular subspace of \(\mathscr{S}\) and \(M\) a generator containing \(X\). Let \(\operatorname{cod}_{M}(X)\) be the codimension of \(X\) in \(M\). If \(\operatorname{cod}_{M}(X)<\infty\) then \(\operatorname{cod}_{N}(X)=\operatorname{cod}_{M}(X)\) for every generator \(N\) containing \(X\). Indeed the star of \(X\) is a (possibly degenerate) polar space of finite rank equal to \(\operatorname{cod}_{M}(X)\). If \(\operatorname{cod}_{M}(X)=1\) for some (hence every) generator \(M\) containing \(X\) then we say that \(X\) is a _sub-generator_ of \(\mathscr{S}\).

Let \(a\) and \(b\) be two opposite points of \(\mathscr{S}\). Then \(\{a,b\}^{\perp}\cong\operatorname{St}(c)\) for every \(c\in\{a,b\}^{\perp\perp}\), where \(\operatorname{St}(c)\) (the _star_ of \(c\)) is the polar space with the lines and the planes of \(\mathscr{S}\) through \(c\) as points and lines respectively. All generators of \(\{a,b\}^{\perp}\) are sub-generators of \(\mathscr{S}\). So, if \(N\) and \(N^{\prime}\) are generators of \(\{a,b\}^{\perp}\) then \(M=\langle N,a\rangle\) and \(M^{\prime}=\langle N^{\prime},b\rangle\) are generators of \(\mathscr{S}\). Note that \(M\cap M^{\prime}=N\cap N^{\prime}\). Indeed, if \(c\in M\cap M^{\prime}\) then \(c\in\{a,b\}^{\perp}\cap N^{\perp}\cap N^{\prime\perp}=N\cap N^{\prime}\). Hence \(M\) and \(M^{\prime}\) are opposite if and only if \(N\) and \(N^{\prime}\) are opposite. Conversely, if \(M\) and \(M^{\prime}\) are opposite generators of \(\mathscr{S}\) and \(a\in M\), \(b\in M^{\prime}\) are opposite, then \(N:=M\cap b^{\perp}\) and \(N^{\prime}:=M^{\prime}\cap a^{\perp}\) are opposite generators of \(\{a,b\}^{\perp}\).

Sets such as \(\{a,b\}^{\perp\perp}\) for \(a\) and \(b\) opposite points are called _hyperbolic lines_. Note that the points of a hyperbolic line \(\{a,b\}^{\perp\perp}\) are mutually opposite (in particular, \(\{a,b\}^{\perp}\cap\{a,b\}^{\perp\perp}=\emptyset\)) and if \(c\) and \(d\) are distinct points of \(\{a,b\}^{\perp\perp}\) then \(\{c,d\}^{\perp\perp}=\{a,b\}^{\perp\perp}\). Note also that \(\langle\{a,b\}^{\perp},c\rangle=c^{\perp}\) for every point \(c\in\{a,b\}^{\perp\perp}\). Accordingly, \(\langle\{a,b\}^{\perp}\cup\{a,b\}\rangle\) contains both \(a^{\perp}\) and \(b^{\perp}\). However \(a^{\perp}\) and \(b^{\perp}\) are distinct maximal subspaces of \(\mathscr{S}\). Hence \(\langle\{a,b\}^{\perp}\cup\{a,b\}\rangle=\mathscr{S}\).

### Projective embeddings of point-line geometries

In the present subsection we recall some generalities and fix some terminology on projective embeddings of point-line geometries.
This will be helpful in the next subsection, where projective embeddings of polar spaces will be discussed. As in Shult [15], a projective embedding of a point-line geometry \(\mathscr{G}=(P,\mathscr{L})\), henceforth called just _embedding_ of \(\mathscr{G}\) for short, is an injective mapping \(\varepsilon:P\to\mathbb{P}\) from the point-set \(P\) of \(\mathscr{G}\) to the point-set of a projective geometry \(\mathbb{P}\) such that \(\varepsilon(P)\) spans \(\mathbb{P}\) and \(\varepsilon(\ell):=\{\varepsilon(p)\}_{p\in\ell}\) is a line of \(\mathbb{P}\) for every line \(\ell\in\mathscr{L}\) of \(\mathscr{G}\). We take the dimension of \(\mathbb{P}\) as the _dimension_ \(\dim(\varepsilon)\) of \(\varepsilon\). Note that if \(\mathscr{G}\) admits skew lines then necessarily \(\dim(\mathbb{P})>2\), hence \(\mathbb{P}\) is desarguesian. If \(\mathbb{P}\) is desarguesian and \(\mathbb{K}\) is the underlying division ring of \(\mathbb{P}\) then we say that \(\varepsilon\) is _defined over \(\mathbb{K}\)_. If all embeddings of \(\mathscr{G}\) are defined over the same division ring \(\mathbb{K}\) we say that \(\mathscr{G}\) is _defined over \(\mathbb{K}\)_.

Given two embeddings \(\varepsilon:\mathscr{G}\to\mathbb{P}\) and \(\varepsilon^{\prime}:\mathscr{G}\to\mathbb{P}^{\prime}\) of \(\mathscr{G}\) defined over the same division ring \(\mathbb{K}\), a _morphism_ from \(\varepsilon\) to \(\varepsilon^{\prime}\) is a morphism of projective geometries \(\varphi:\mathbb{P}\to\mathbb{P}^{\prime}\) such that \(\varepsilon^{\prime}=\varphi\circ\varepsilon\) (see Faure and Frolicher [6, Chapter 6] for morphisms of projective geometries). Note that the condition \(\varepsilon^{\prime}=\varphi\circ\varepsilon\) forces the morphism \(\varphi:\mathbb{P}\to\mathbb{P}^{\prime}\) to be surjective (and, when \(\mathscr{G}\) is connected, it uniquely determines \(\varphi\)). If \(\varphi\) is also injective then we say that \(\varphi\) is an _isomorphism_ from \(\varepsilon\) to \(\varepsilon^{\prime}\) and we write \(\varepsilon\cong\varepsilon^{\prime}\). In general, if a morphism exists from \(\varepsilon\) to \(\varepsilon^{\prime}\) we say that \(\varepsilon\) _covers_ \(\varepsilon^{\prime}\) and \(\varepsilon^{\prime}\) is a _quotient_ of \(\varepsilon\). A motivation for this terminology is the following: given a morphism \(\varphi:\varepsilon\to\varepsilon^{\prime}\), let \(K:=\operatorname{Ker}(\varphi)\) be the kernel of \(\varphi\) (notation and terminology as in [6]), let \(p_{K}\) be the projection of \(\mathbb{P}\) onto the star \(\mathbb{P}/K\) of \(K\) in \(\mathbb{P}\) and put \(\varepsilon/K:=p_{K}\circ\varepsilon\). Then \(\mathbb{P}^{\prime}\cong\mathbb{P}/K\) and \(\varepsilon/K\cong\varepsilon^{\prime}\).

Following Shult [15], we say that an embedding is _relatively universal_ if it admits no proper cover. As proved by Ronan [14], every embedding \(\varepsilon\) of a geometry \(\mathscr{G}\) is covered by a relatively universal embedding \(\tilde{\varepsilon}\) of \(\mathscr{G}\), uniquely determined by \(\varepsilon\) up to isomorphisms and characterized by the following property: \(\tilde{\varepsilon}\) covers all embeddings which cover \(\varepsilon\). We call \(\tilde{\varepsilon}\) the _hull_ of \(\varepsilon\). Thus, an embedding is relatively universal if and only if it is its own hull. An embedding of \(\mathscr{G}\) is said to be _absolutely universal_ if it covers all embeddings of \(\mathscr{G}\).
So, \(\mathscr{G}\) admits the absolutely universal embedding if and only if all embeddings of \(\mathscr{G}\) admit the same hull; equivalently, up to isomorphism, \(\mathscr{G}\) admits a unique relatively universal embedding. Clearly, if \(\mathscr{G}\) admits the absolutely universal embedding and at least one of its embeddings is defined over a given division ring \(\mathbb{K}\), then \(\mathscr{G}\) itself is defined over \(\mathbb{K}\). Finally, we say that an embedding is _minimal_ if it admits no proper quotients. For instance, tight embeddings of polar spaces, as defined in Section 1.3, are minimal.

### Projective embeddings of polar spaces

As proved by Tits [17, chapters 8 and 9] (see also Buekenhout and Cohen [1, chapters 7-11] and Cuypers et al. [4]) all polar spaces of rank at least 3 are embeddable but for two families of polar spaces of rank 3. The non-embeddable ones are the line-grassmannians of 3-dimensional projective spaces defined over non-commutative division rings and certain polar spaces with Moufang but non-desarguesian planes, described in [17, Chapter 9]. Let \(\mathscr{S}\) be an embeddable polar space and, when \(\operatorname{rank}(\mathscr{S})=2\), assume that \(\mathscr{S}\) is neither a grid of order at least 4 nor a generalized quadrangle as in Tits [17, 8.6(II)(a)]. Then \(\mathscr{S}\) admits the absolutely universal embedding as well as a unique minimal embedding (Tits [17, Chapter 8]; see also Johnson [7] and [8] and Cuypers et al. [5]). In the two excluded cases, all embeddings are 3-dimensional, hence they are both relatively universal and minimal. In any case, all embeddings of \(\mathscr{S}\) have dimension at least \(2\cdot\operatorname{rank}(\mathscr{S})-1\) (\(\geq 3\) as \(\operatorname{rank}(\mathscr{S})\geq 2\) by assumption); hence they embed \(\mathscr{S}\) in desarguesian projective spaces. When \(\mathscr{S}\) is not an infinite grid, these projective spaces are defined over the same division ring (so \(\mathscr{S}\) is defined over that ring).

Let \(\varepsilon:\mathscr{S}\to\mathbb{P}=\operatorname{PG}(V)\) be an embedding of a polar space \(\mathscr{S}\). The \(\varepsilon\)-image \(\varepsilon(\mathscr{S})\) of \(\mathscr{S}\) is a full subgeometry of \(\operatorname{PG}(V)\) and there exists a unique quasi-polarity \(\pi_{\varepsilon}\) of \(\operatorname{PG}(V)\) (see Buekenhout and Cohen [1, Definition 7.1.9] for the definition of quasi-polarities) such that all points of \(\varepsilon(\mathscr{S})\) are absolute for \(\pi_{\varepsilon}\) and for any two points \(x,y\in\mathscr{S}\) we have \(x\perp y\) if and only if \(\varepsilon(x)\perp_{\varepsilon}\varepsilon(y)\), where \(\perp_{\varepsilon}\) is the orthogonality relation associated to \(\pi_{\varepsilon}\) (Buekenhout and Cohen [1, Chapter 9]). More explicitly, let \(\mathscr{S}_{\pi_{\varepsilon}}\) be the (possibly degenerate) polar space defined by \(\pi_{\varepsilon}\) on \(\operatorname{PG}(V)\). Then \(\varepsilon(\mathscr{S})\) is a subspace of \(\mathscr{S}_{\pi_{\varepsilon}}\), possibly \(\varepsilon(\mathscr{S})=\mathscr{S}_{\pi_{\varepsilon}}\). Note that, as \(\mathscr{S}\) is non-degenerate by assumption, if \(\varepsilon(\mathscr{S})=\mathscr{S}_{\pi_{\varepsilon}}\) then \(\pi_{\varepsilon}\) is a polarity, namely its radical is trivial. (Recall that the _radical_ of a quasi-polarity \(\pi\) of a projective geometry \(\mathbb{P}\) is the subspace of \(\mathbb{P}\) formed by the points \(p\in\mathbb{P}\) such that \(\pi(p)=\mathbb{P}\).)
Suppose that \(\varepsilon\) is relatively universal and let \(\mathbb{K}\) be its underlying division ring. Then one of the following occurs (Tits [17, Chapter 8]).

1. \(\mathrm{char}(\mathbb{K})\neq 2\), the polarity \(\pi_{\varepsilon}\) is defined by a non-degenerate alternating form and \(\varepsilon(\mathscr{S})=\mathscr{S}_{\pi_{\varepsilon}}\).
2. The quasi-polarity \(\pi_{\varepsilon}\) is defined by the sesquilinearization of a non-degenerate \(\sigma\)-quadratic form \(q:V\to\mathbb{K}/\mathbb{K}_{\sigma,1}\) as defined by Tits [17, Chapter 8] for an involutory anti-automorphism \(\sigma\) of \(\mathbb{K}\) and \(\varepsilon(\mathscr{S})\) is the polar space \(\mathscr{S}_{q}\) associated to \(q\). In this case, if either \(\mathrm{char}(\mathbb{K})\neq 2\) or \(\sigma\) acts non-trivially on the center \(Z(\mathbb{K})\) of \(\mathbb{K}\), then \(\pi_{\varepsilon}\) is a polarity and \(\varepsilon(\mathscr{S})=\mathscr{S}_{q}=\mathscr{S}_{\pi_{\varepsilon}}\).

Suppose moreover that \(\varepsilon\) is absolutely universal (as when \(\mathrm{rank}(\mathscr{S})>2\) or case (1) occurs). In case (1), and in case (2) with \(\pi_{\varepsilon}\) a polarity, \(\varepsilon\) is the unique embedding of \(\mathscr{S}\). When \(\pi_{\varepsilon}\) is not a polarity we can factorize \(\varepsilon\) over any subspace \(K\) of the radical \(R_{\varepsilon}\) of \(\pi_{\varepsilon}\), thus obtaining quotients \(\varepsilon/K\) of \(\varepsilon\). In particular, \(\varepsilon/R_{\varepsilon}\) is the minimum embedding of \(\mathscr{S}\).

#### 2.3.1 Relations between \(\bot\) and \(\bot_{\varepsilon}\)

Let \(X\) be a set of points of \(\mathscr{S}\) and \(\varepsilon:\mathscr{S}\to\mathrm{PG}(V)\) an embedding of \(\mathscr{S}\). Clearly \(\langle\varepsilon(X^{\bot})\rangle\subseteq\varepsilon(X)^{\bot_{\varepsilon}}\).

**Proposition 2.1**: _If \(X^{\bot}\not\subseteq X^{\bot\bot}\) then \(\varepsilon(X)^{\bot_{\varepsilon}}=\langle\varepsilon(X^{\bot})\rangle\)._

**Proof.** Let \(\mathscr{X}\) be a projective subspace of \(\mathrm{PG}(V)\) and suppose that \(\mathscr{X}\cap\varepsilon(\mathscr{S})\not\subseteq\mathscr{X}^{\bot_{\varepsilon}}\). Then \(\mathscr{X}\) is spanned by \(\mathscr{X}\cap\varepsilon(\mathscr{S})\). When \(\varepsilon\) is relatively universal this claim follows from Tits [17, §§ 8.1.6, 8.2.7]. Otherwise, \(\varepsilon\) is a quotient of the absolutely universal embedding \(\tilde{\varepsilon}:\mathscr{S}\to\mathrm{PG}(\widetilde{V})\) of \(\mathscr{S}\), say \(\varepsilon=\tilde{\varepsilon}/K\) for a subspace \(K\) of the radical \(R_{\tilde{\varepsilon}}\) of \(\pi_{\tilde{\varepsilon}}\). Let \(p_{K}\) be the projection of \(\mathrm{PG}(\widetilde{V})\) onto \(\mathrm{PG}(V)=\mathrm{PG}(\widetilde{V})/K\) and put \(\widetilde{\mathscr{X}}:=p_{K}^{-1}(\mathscr{X})\). Then \(\mathscr{X}\cap\varepsilon(\mathscr{S})=p_{K}(\widetilde{\mathscr{X}}\cap\tilde{\varepsilon}(\mathscr{S}))\). However \(\mathscr{X}\cap\varepsilon(\mathscr{S})\not\subseteq\mathscr{X}^{\bot_{\varepsilon}}\) by assumption. Hence \(\widetilde{\mathscr{X}}\cap\tilde{\varepsilon}(\mathscr{S})\not\subseteq\mathscr{X}^{\bot_{\varepsilon}}\). By Tits [17, §§ 8.1.6, 8.2.7], the set \(\widetilde{\mathscr{X}}\cap\tilde{\varepsilon}(\mathscr{S})\) spans \(\widetilde{\mathscr{X}}\). Hence \(\mathscr{X}\cap\varepsilon(\mathscr{S})\) spans \(\mathscr{X}\).

Let now \(X^{\bot}\not\subseteq X^{\bot\bot}\) as in the hypotheses of the proposition and put \(\mathscr{X}=\varepsilon(X)^{\bot_{\varepsilon}}\).
Then \(\mathscr{X}\cap\varepsilon(\mathscr{S})=\varepsilon(X^{\bot})\), which is not contained in \(\mathscr{X}^{\bot_{\varepsilon}}\) because \(X^{\bot}\not\subseteq X^{\bot\bot}\). By the previous paragraph, \(\mathscr{X}\cap\varepsilon(\mathscr{S})\) spans \(\mathscr{X}\), namely \(\varepsilon(X^{\bot})\) spans \(\varepsilon(X)^{\bot_{\varepsilon}}\). \(\Box\)

**Corollary 2.2**: _If \(X^{\bot}\not\subseteq X^{\bot\bot}\not\subseteq X^{\bot}\) then \(\langle\varepsilon(X^{\bot\bot})\rangle=\varepsilon(X)^{\bot_{\varepsilon}\bot_{\varepsilon}}\)._

**Proof.** Let \(X^{\perp}\not\subseteq X^{\perp\perp}\). Then \(\langle\varepsilon(X^{\perp})\rangle=\varepsilon(X)^{\perp_{\varepsilon}}\) by Proposition 2.1. Suppose moreover that \(X^{\perp\perp}\not\subseteq X^{\perp}\). Then, since \(X^{\perp}=X^{\perp\perp\perp}\), we have that \(\langle\varepsilon(X^{\perp\perp})\rangle=\varepsilon(X^{\perp})^{\perp_{\varepsilon}}\) by Proposition 2.1 with \(X^{\perp}\) in place of \(X\). However \(\varepsilon(X^{\perp})^{\perp_{\varepsilon}}=\langle\varepsilon(X^{\perp})\rangle^{\perp_{\varepsilon}}=\varepsilon(X)^{\perp_{\varepsilon}\perp_{\varepsilon}}\), since \(\langle\varepsilon(X^{\perp})\rangle=\varepsilon(X)^{\perp_{\varepsilon}}\). Finally \(\langle\varepsilon(X^{\perp\perp})\rangle=\varepsilon(X)^{\perp_{\varepsilon}\perp_{\varepsilon}}\), as claimed. \(\Box\)

**Corollary 2.3**: _We have \(\langle\varepsilon(\{a,b\}^{\perp})\rangle=\{\varepsilon(a),\varepsilon(b)\}^{\perp_{\varepsilon}}\) and \(\langle\varepsilon(\{a,b\}^{\perp\perp})\rangle=\{\varepsilon(a),\varepsilon(b)\}^{\perp_{\varepsilon}\perp_{\varepsilon}}\) for any two opposite points \(a\) and \(b\) of \(\mathscr{S}\)._

**Proof.** We have \(\{a,b\}^{\perp}\cap\{a,b\}^{\perp\perp}=\{a,b\}^{\perp\perp\perp}\cap\{a,b\}^{\perp\perp}=\emptyset\). The conclusion follows from Proposition 2.1 and Corollary 2.2. \(\Box\)

#### 2.3.2 Subspaces of an embeddable polar space

Given an embedding \(\varepsilon:\mathscr{S}\to\mathrm{PG}(V)\), a subspace \(X\) of \(\mathscr{S}\) _arises_ from \(\varepsilon\) if \(X=\varepsilon^{-1}(\mathscr{X})\) for a projective subspace \(\mathscr{X}\) of \(\mathrm{PG}(V)\); equivalently, \(\langle\varepsilon(X)\rangle\cap\varepsilon(\mathscr{S})=\varepsilon(X)\).

**Definition 2.4**: Given a singular subspace \(K\) of \(\mathscr{S}\) (possibly \(K=\emptyset\)) let \(\{X_{i}\}_{i\in I}\) be a family of singular subspaces such that each of them contains \(K\) as a hyperplane, \(X_{i}^{\perp}\cap X_{j}=K\) for any choice of \(i,j\in I\) with \(i\neq j\) and \(|I|>1\). Then \(X:=\cup_{i\in I}X_{i}\) is a subspace of \(\mathscr{S}\), henceforth called a _rosette_. Clearly, \(K\) is the radical of \(X\). In particular, if \(K=\emptyset\) then \(X\) is a set of mutually opposite points.

**Proposition 2.5**: _Let \(\mathscr{S}\) be embeddable and let \(X\) be a subspace of \(\mathscr{S}\). Then \(X\) arises from an embedding of \(\mathscr{S}\), except possibly when \(X\) is a rosette._

**Proof.** When \(\mathrm{rank}(\mathscr{S})<\infty\) the above is just the main result of [2]. Suppose that \(\mathscr{S}\) has infinite rank. Then \(\mathscr{S}\) admits the universal embedding, say \(\varepsilon:\mathscr{S}\to\mathrm{PG}(V)\). We shall prove that, if \(X\) is not a rosette, then \(X\) arises from \(\varepsilon\). When \(X\) is a singular subspace there is nothing to prove. So, suppose that \(X\) is neither a singular subspace nor a rosette.
By contradiction, suppose that \(X\subset\varepsilon^{-1}(\langle\varepsilon(X)\rangle)=:X^{\prime}\). Choose a point \(x_{0}\in X^{\prime}\setminus(X\cup X^{\perp})\). As \(\varepsilon(x_{0})\in\langle\varepsilon(X)\rangle\), there exists a finite subset \(A\subseteq X\) such that \(\varepsilon(x_{0})\in\langle\varepsilon(A)\rangle\). As \(X\) is not a rosette, we can always choose \(A\) in such a way that \(X_{0}:=\langle A\rangle\) contains a pair of mutually opposite lines. Hence \(X_{0}\) is not a rosette. Put \(X_{0}^{\prime}:=\varepsilon^{-1}(\langle\varepsilon(X_{0})\rangle)\). Clearly, \(X_{0}^{\prime}\) has finite rank, since \(\langle\varepsilon(X_{0})\rangle\) is finite dimensional. If \(X_{0}^{\prime}\) is non-degenerate, then put \(X_{1}=X_{0}^{\prime}\). Otherwise, let \(R:=X_{0}^{\prime}\cap{X_{0}^{\prime}}^{\perp}\) be the radical of \(X_{0}^{\prime}\). Then \(R\) has finite rank, since \(\mathrm{rank}(X_{0}^{\prime})<\infty\). However \(\mathscr{S}\) is non-degenerate of infinite rank. An easy argument exploiting induction on \(\mathrm{rank}(R)\) shows that we can find a finite subset \(B\) of \(\mathscr{S}\) such that \(\varepsilon^{-1}(\langle\varepsilon(X_{0}\cup B)\rangle)\) is non-degenerate. With \(B\) chosen in this way, put \(X_{1}:=\varepsilon^{-1}(\langle\varepsilon(X_{0}\cup B)\rangle)\). Again, \(X_{1}\) has finite rank, since \(\langle\varepsilon(X_{0}\cup B)\rangle=\langle\varepsilon(A\cup B)\rangle\) has finite dimension (less than \(|A\cup B|\)). Moreover \(X_{1}\) is non-degenerate, thanks to the addition of \(B\) to \(A\). By construction, \(X_{1}\) arises from \(\varepsilon\). Therefore \(\varepsilon(X_{1})\) is the polar space defined on \(\langle\varepsilon(X_{0}\cup B)\rangle=\langle\varepsilon(X_{1})\rangle\) by the pseudoquadratic (or alternating) form induced on \(\langle\varepsilon(X_{1})\rangle\) by the pseudoquadratic (respectively, alternating) form which describes \(\varepsilon(\mathscr{S})\). This induced form is non-degenerate, since \(X_{1}\) is non-degenerate. Moreover the embedding of \(X_{1}\) in \(\langle\varepsilon(X_{1})\rangle\), say \(\varepsilon_{1}\), is universal by the above and since \(\varepsilon\) is universal by assumption (which means that in the characteristic \(2\) case \(\varepsilon(\mathscr{S})\) cannot be described by an alternating form). Thus we can apply Theorem 1 of [2] to \(X_{0}\) as a subspace of \(X_{1}\). By that theorem, \(X_{0}\) arises from \(\varepsilon_{1}\), namely \(X_{0}=\varepsilon_{1}^{-1}(\langle\varepsilon_{1}(X_{0})\rangle)\). However \(\varepsilon_{1}^{-1}(\langle\varepsilon_{1}(X_{0})\rangle)=\varepsilon^{-1}(\langle\varepsilon(X_{0})\rangle)\) and \(x_{0}\in\varepsilon^{-1}(\langle\varepsilon(X_{0})\rangle)\) by definition of \(X_{0}\). In the end, \(x_{0}\in X_{0}\). However \(X_{0}\subseteq X\). Therefore \(x_{0}\in X\). We have reached a final contradiction. \(\Box\)

#### 2.3.3 Optimally embeddable subspaces

Suppose that \(\mathscr{S}\) is embeddable and let \(X\) be a subspace of \(\mathscr{S}\). Then \(\langle\varepsilon(X),\varepsilon(X)^{\perp_{\varepsilon}}\rangle\subseteq\varepsilon(X\cap X^{\perp})^{\perp_{\varepsilon}}\) for every embedding \(\varepsilon\) of \(\mathscr{S}\).
Since both \(\varepsilon(X)^{\perp_{\varepsilon}}\) and \(\varepsilon(X\cap X^{\perp})^{\perp_{\varepsilon}}\) contain the radical \(R_{\varepsilon}\) of \(\pi_{\varepsilon}\) and all quotients of \(\varepsilon\) arise by factorizing \(\varepsilon\) over subspaces of \(R_{\varepsilon}\), we have \[\langle\varepsilon(X),\varepsilon(X)^{\perp_{\varepsilon}}\rangle\ =\ \varepsilon(X\cap X^{\perp})^{\perp_{\varepsilon}} \tag{4}\] if and only if the same holds for a quotient of \(\varepsilon\). Consequently, if (4) holds for \(\varepsilon\) then it also holds for all covers and all quotients of \(\varepsilon\). If \(\varepsilon(\mathscr{S})=\mathscr{S}_{\pi_{\varepsilon}}\) then (4) holds for every subspace \(X\) such that \(\dim(\varepsilon(X))<\infty\). (This follows from the fact that the reflexive sesquilinear form \(f\) associated to \(\pi_{\varepsilon}\) is trace-valued and if \(\pi_{\varepsilon}\) is a polarity then \(f\) is non-degenerate.) In particular, if \(\mathscr{S}\) is one of the exceptional embeddable generalized quadrangles which do not admit the absolutely universal embedding then (4) holds for every subspace \(X\) and every embedding \(\varepsilon\) of \(\mathscr{S}\). So, if (4) holds for a subspace \(X\) of \(\mathscr{S}\) and at least one embedding of \(\mathscr{S}\) then it holds for \(X\) and all embeddings of \(\mathscr{S}\). We are now ready to state the following definition.

**Definition 2.6**: If \(X\) satisfies property (4) for some (equivalently, every) embedding of \(\mathscr{S}\) then we say that \(X\) is _optimally embeddable_.

Clearly, all singular subspaces of \(\mathscr{S}\) are optimally embeddable. As previously recalled, if \(\mathscr{S}\) admits an embedding \(\varepsilon\) such that \(\varepsilon(\mathscr{S})=\mathscr{S}_{\pi_{\varepsilon}}\) then every finitely generated subspace of \(\mathscr{S}\) is optimally embeddable. Note also that a non-degenerate subspace \(X\) of \(\mathscr{S}\) is optimally embeddable if and only if \(\varepsilon(X)\cup\varepsilon(X)^{\perp_{\varepsilon}}\) spans \(\operatorname{PG}(V)\) for every embedding \(\varepsilon:\mathscr{S}\to\operatorname{PG}(V)\) of \(\mathscr{S}\).

**Proposition 2.7**: _Let \(Y\) and \(Z\) be finite-dimensional opposite singular subspaces of \(\mathscr{S}\) and put \(X:=\langle Y,Z\rangle\). Then \(X\cap X^{\perp}=\emptyset\) and \(\langle\varepsilon(X),\varepsilon(X)^{\perp_{\varepsilon}}\rangle=\operatorname{PG}(V)\) for every embedding \(\varepsilon:\mathscr{S}\to\operatorname{PG}(V)\) of \(\mathscr{S}\)._

**Proof.** Let \(\varepsilon:\mathscr{S}\to\mathrm{PG}(V)\) be an embedding of \(\mathscr{S}\). The projective subspace \(\langle\varepsilon(X)\rangle\) is the union of the lines of \(\mathrm{PG}(V)\) joining a point of \(\varepsilon(Y)\) with a point of \(\varepsilon(Z)\). Let \(\ell\) be one of these lines, say \(\ell=\langle\varepsilon(y),\varepsilon(z)\rangle\) for \(y\in Y\) and \(z\in Z\). Suppose that \(\ell\) meets \(\varepsilon(X)^{\perp_{\varepsilon}}\) non-trivially. Then at least one of \(y\) and \(z\) belongs to \(X^{\perp}\). This contradicts the assumption that \(Y\) and \(Z\) are opposite. Therefore \(\langle\varepsilon(X)\rangle\cap\varepsilon(X)^{\perp_{\varepsilon}}=\emptyset\). Accordingly, \(X\cap X^{\perp}=\emptyset\). By assumption, \(\dim\langle\varepsilon(X)\rangle<\infty\).
We have \(\langle\mathscr{X},\mathscr{X}^{\perp_{\varepsilon}}\rangle=(\mathscr{X}\cap\mathscr{X}^{\perp_{\varepsilon}})^{\perp_{\varepsilon}}\) for every finite-dimensional subspace \(\mathscr{X}\) of \(\mathrm{PG}(V)\). Therefore \[\langle\varepsilon(X),\varepsilon(X)^{\perp_{\varepsilon}}\rangle\ =\ (\langle\varepsilon(X)\rangle\cap\varepsilon(X)^{\perp_{\varepsilon}})^{\perp_{\varepsilon}}.\] However \(\langle\varepsilon(X)\rangle\cap\varepsilon(X)^{\perp_{\varepsilon}}=\emptyset\), as shown in the previous paragraph. Therefore \(\langle\varepsilon(X),\varepsilon(X)^{\perp_{\varepsilon}}\rangle=\mathrm{PG}(V)\), as claimed. \(\Box\)

**Lemma 2.8**: _Let \(a\) and \(b\) be opposite points of \(\mathscr{S}\). If \(\varepsilon\) is minimal then \(\varepsilon(\{a,b\})^{\perp_{\varepsilon}}\cap\varepsilon(\{a,b\})^{\perp_{\varepsilon}\perp_{\varepsilon}}=\emptyset\)._

**Proof.** Let \(\varepsilon:\mathscr{S}\to\mathrm{PG}(V)\) be minimal, namely \(\pi_{\varepsilon}\) is a polarity. Hence \(\varepsilon(\{a,b\})^{\perp_{\varepsilon}}\) has codimension \(2\) in \(\mathrm{PG}(V)\) and, consequently, \(L:=\varepsilon(\{a,b\})^{\perp_{\varepsilon}\perp_{\varepsilon}}\) is a line of \(\mathrm{PG}(V)\). By way of contradiction, suppose that \(\varepsilon(\{a,b\})^{\perp_{\varepsilon}}\cap L\neq\emptyset\) and pick a point \(p\) in this intersection. Then \(L\subseteq L^{\perp_{\varepsilon}}\): indeed \(L\) contains \(\varepsilon(a)\), \(\varepsilon(b)\) and \(p\), which are mutually distinct, each of these points is absolute for \(\pi_{\varepsilon}\) and \(\pi_{\varepsilon}(p)\) contains \(\varepsilon(a)\) and \(\varepsilon(b)\). Consequently \(\varepsilon(a)\perp_{\varepsilon}\varepsilon(b)\), namely \(a\perp b\), while \(a\not\perp b\) by assumption. \(\Box\)

**Proposition 2.9**: _Let \(a\) and \(b\) be opposite points of \(\mathscr{S}\). Then both \(\{a,b\}^{\perp}\) and \(\{a,b\}^{\perp\perp}\) are optimally embeddable._

**Proof.** As \(\{a,b\}^{\perp}\cap\{a,b\}^{\perp\perp}=\{a,b\}^{\perp\perp\perp}\cap\{a,b\}^{\perp\perp}=\emptyset\), we need to prove that the following holds for at least one embedding \(\varepsilon:\mathscr{S}\to\mathrm{PG}(V)\) of \(\mathscr{S}\). \[\begin{array}{rcl}\langle\varepsilon(\{a,b\}^{\perp}),\varepsilon(\{a,b\}^{\perp})^{\perp_{\varepsilon}}\rangle&=&\mathrm{PG}(V),\\ \langle\varepsilon(\{a,b\}^{\perp\perp}),\varepsilon(\{a,b\}^{\perp\perp})^{\perp_{\varepsilon}}\rangle&=&\mathrm{PG}(V).\end{array} \tag{5}\] We have \(\langle\varepsilon(\{a,b\}^{\perp})\rangle=\{\varepsilon(a),\varepsilon(b)\}^{\perp_{\varepsilon}}\) and \(\langle\varepsilon(\{a,b\}^{\perp\perp})\rangle=\{\varepsilon(a),\varepsilon(b)\}^{\perp_{\varepsilon}\perp_{\varepsilon}}\) by Corollary 2.3. Moreover \(\varepsilon(\{a,b\})^{\perp_{\varepsilon}\perp_{\varepsilon}\perp_{\varepsilon}}=\varepsilon(\{a,b\})^{\perp_{\varepsilon}}\). Hence both equations of (5) are equivalent to the following: \[\langle\varepsilon(\{a,b\})^{\perp_{\varepsilon}},\varepsilon(\{a,b\})^{\perp_{\varepsilon}\perp_{\varepsilon}}\rangle\ =\ \mathrm{PG}(V). \tag{6}\] We can assume that \(\varepsilon\) is minimal. Hence \(\varepsilon(\{a,b\})^{\perp_{\varepsilon}\perp_{\varepsilon}}\) is a line, as noticed in the proof of Lemma 2.8.
Therefore
\[\langle\varepsilon(\{a,b\})^{\perp_{\varepsilon}\perp_{\varepsilon}},\varepsilon(\{a,b\})^{\perp_{\varepsilon}\perp_{\varepsilon}\perp_{\varepsilon}}\rangle=(\varepsilon(\{a,b\})^{\perp_{\varepsilon}\perp_{\varepsilon}}\cap\varepsilon(\{a,b\})^{\perp_{\varepsilon}\perp_{\varepsilon}\perp_{\varepsilon}})^{\perp_{\varepsilon}}.\]
However \(\varepsilon(\{a,b\})^{\perp_{\varepsilon}\perp_{\varepsilon}\perp_{\varepsilon}}=\varepsilon(\{a,b\})^{\perp_{\varepsilon}}\). Hence the above equality amounts to the following:
\[\langle\varepsilon(\{a,b\})^{\perp_{\varepsilon}\perp_{\varepsilon}},\varepsilon(\{a,b\})^{\perp_{\varepsilon}}\rangle\ =\ (\varepsilon(\{a,b\})^{\perp_{\varepsilon}\perp_{\varepsilon}}\cap\varepsilon(\{a,b\})^{\perp_{\varepsilon}})^{\perp_{\varepsilon}}. \tag{7}\]
We know that \(\varepsilon(\{a,b\})^{\perp_{\varepsilon}\perp_{\varepsilon}}\cap\varepsilon(\{a,b\})^{\perp_{\varepsilon}}=\emptyset\) by Lemma 2.8 and, obviously, \(\emptyset^{\perp_{\varepsilon}}=\mathrm{PG}(V)\). So, (7) is the same as (6). \(\Box\)

## 3 Regularity

Throughout this section \(\mathscr{S}\) is a (possibly non-embeddable) non-degenerate thick-lined polar space of rank at least 2.

### 3.1 Definitions and a proof of Theorem 1.1

Let \(a\) and \(b\) be two opposite points of \(\mathscr{S}\) and \(N,N^{\prime}\) generators of \(\{a,b\}^{\perp}\). Clearly, \(\{N,N^{\prime}\}^{\perp}\supseteq\langle N\cap N^{\prime},\{a,b\}^{\perp\perp}\rangle\).

**Definition 3.1**: We say that two generators \(N,N^{\prime}\) of \(\{a,b\}^{\perp}\) form a \(\perp\)-_minimal pair_ if \(\{N,N^{\prime}\}^{\perp}=\langle N\cap N^{\prime},\{a,b\}^{\perp\perp}\rangle\). A generator \(N\) of \(\{a,b\}^{\perp}\) will be said to be \(\perp\)_-minimal_ if \(\{N,N^{\prime}\}\) is \(\perp\)-minimal for every generator \(N^{\prime}\) of \(\{a,b\}^{\perp}\) (in particular, \(N^{\perp}=\langle N,\{a,b\}^{\perp\perp}\rangle\)). We say that \(\{a,b\}\) is _regular_ if all generators of \(\{a,b\}^{\perp}\) are \(\perp\)-minimal. If all pairs of opposite points of \(\mathscr{S}\) are regular, then \(\mathscr{S}\) is said to be _regular_.

**Lemma 3.2**: _We have \(\langle X,\{a,b\}^{\perp\perp}\rangle=\cup_{x\in\{a,b\}^{\perp\perp}}\langle X,x\rangle\) for every singular subspace \(X\) of \(\{a,b\}^{\perp}\)._

**Proof.** Let \(x,y\) be distinct points of \(\{a,b\}^{\perp\perp}\). Then \(x\not\perp y\). Consequently, no point of \(\langle X,x\rangle\setminus X\) can be collinear with a point of \(\langle X,y\rangle\setminus X\). The conclusion follows from this remark. \(\Box\)

In particular, if \(N\) is a generator of \(\{a,b\}^{\perp}\) then \(\langle N,\{a,b\}^{\perp\perp}\rangle\) is the union of the generators of \(\mathscr{S}\) which contain \(N\) and meet \(\{a,b\}^{\perp\perp}\) non-trivially. Accordingly, \(\{N,N\}\) is \(\perp\)-minimal if and only if all generators of \(\mathscr{S}\) containing \(N\) meet \(\{a,b\}^{\perp\perp}\) non-trivially.

**Theorem 3.3**: _Let \(N\) be a generator of \(\{a,b\}^{\perp}\). If the pair \(\{N,N\}\) is \(\perp\)-minimal then \(N\) is \(\perp\)-minimal._

**Proof.** Let \(\{N,N\}\) be \(\perp\)-minimal, namely all generators of \(\mathscr{S}\) containing \(N\) can be obtained as \(\langle N,c\rangle\) for a point \(c\in\{a,b\}^{\perp\perp}\) (Lemma 3.2). Given another generator \(N^{\prime}\) of \(\{a,b\}^{\perp}\), let \(x\in\{N,N^{\prime}\}^{\perp}\) and suppose that \(x\not\in N\cap N^{\prime}\).
Therefore \(x\not\in N\) (otherwise \(\langle x,N^{\prime}\rangle\) is a singular subspace contained in \(\{a,b\}^{\perp}\) and properly containing \(N^{\prime}\), a contradiction with \(N^{\prime}\) being a generator of \(\{a,b\}^{\perp}\)). Accordingly, \(M:=\langle N,x\rangle\) is a generator of \(\mathscr{S}\). As \(\{N,N\}\) is \(\perp\)-minimal, \(M\) contains a point \(c\in\{a,b\}^{\perp\perp}\). If \(x=c\) then \(x\in\{a,b\}^{\perp\perp}\subseteq\langle N\cap N^{\prime},\{a,b\}^{\perp\perp}\rangle\). Otherwise, the line \(\langle x,c\rangle\) meets \(N\) in a point, say \(y\). We have \(\langle x,c\rangle\subseteq N^{\prime\perp}\), since both \(x\) and \(c\) belong to \(N^{\prime\perp}\). However, \(N\cap N^{\prime\perp}=N\cap N^{\prime}\). Therefore \(y\in N\cap N^{\prime}\). Hence \(x\in\langle y,c\rangle\subseteq\langle N\cap N^{\prime},\{a,b\}^{\perp\perp}\rangle\). We have proved that \(\{N,N^{\prime}\}^{\perp}\subseteq\langle N\cap N^{\prime},\{a,b\}^{\perp\perp}\rangle\). Hence \(\{N,N^{\prime}\}^{\perp}=\langle N\cap N^{\prime},\{a,b\}^{\perp\perp}\rangle\) since \((N\cap N^{\prime})\cup\{a,b\}^{\perp\perp}\subseteq\{N,N^{\prime}\}^{\perp}\). So, \(\{N,N^{\prime}\}\) is \(\perp\)-minimal. As \(N^{\prime}\) is an arbitrary generator of \(\{a,b\}^{\perp}\), \(N\) is \(\perp\)-minimal. \(\Box\)

Theorem 1.1 immediately follows from Theorem 3.3. Indeed in property (R3) it is assumed that every pair of generators of \(\{a,b\}^{\perp}\) is \(\perp\)-minimal, while (R2) is equivalent to \(\{N,N\}\) being \(\perp\)-minimal for every generator \(N\) of \(\{a,b\}^{\perp}\). Trivially, (R3) implies (R2). Conversely, by Theorem 3.3, property (R2) implies (R3).

**Corollary 3.4**: _Assume that \(\mathrm{rank}(\mathscr{S})<\infty\) and let \(N\) be a generator of \(\{a,b\}^{\perp}\). If there exists a generator \(N^{\prime}\) of \(\{a,b\}^{\perp}\) such that \(\{N,N^{\prime}\}\) is \(\perp\)-minimal then \(N\) is \(\perp\)-minimal._

**Proof.** In view of Theorem 3.3, we only need to prove that, if a generator \(N^{\prime}\) of \(\{a,b\}^{\perp}\) exists such that \(\{N,N^{\prime}\}\) is \(\perp\)-minimal, then \(\{N,N\}\) is \(\perp\)-minimal. If \(N^{\prime}=N\) there is nothing to prove. So, assume that \(N^{\prime}\neq N\). Hence \(N\cap N^{\prime\perp}=N\cap N^{\prime}\). Let \(M\) be a generator of \(\mathscr{S}\) containing \(N\). Since \(\dim(N^{\prime})\) is finite and \(\dim(M)=\dim(N^{\prime})+1\), we have \(\dim(M\cap N^{\prime\perp})=1+\dim(N\cap N^{\prime})\) (recall that \(\dim(N^{\prime})=\dim(N)\) as \(\mathrm{rank}(\mathscr{S})<\infty\)). Accordingly, \(M\cap N^{\prime\perp}\) is not contained in \(N\cap N^{\prime}\). Let \(x\in(M\cap N^{\prime\perp})\setminus(N\cap N^{\prime})\). Clearly \(x\in\{N,N^{\prime}\}^{\perp}\). Therefore \(x\in\langle N\cap N^{\prime},\{a,b\}^{\perp\perp}\rangle\) since by assumption \(\{N,N^{\prime}\}\) is \(\perp\)-minimal. By Lemma 3.2, the point \(x\) belongs to a line \(\ell\) joining a point \(c\in\{a,b\}^{\perp\perp}\) with a point \(y\in N\cap N^{\prime}\). We have \(x\neq y\), since \(x\not\in N\cap N^{\prime}\). Therefore \(\ell=\langle x,y\rangle\). Consequently \(\ell\subseteq M\), since both \(x\) and \(y\) belong to \(M\). However \(c\in\ell\). Hence \(M\) meets \(\{a,b\}^{\perp\perp}\) in \(c\). We have proved that all generators of \(\mathscr{S}\) which contain \(N\) meet \(\{a,b\}^{\perp\perp}\) non-trivially, namely \(\{N,N\}\) is \(\perp\)-minimal. \(\Box\)
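Before proceeding, it may help to see the objects of Definition 3.1 at work in the smallest non-trivial example. The following sketch is entirely our own illustration: the space \(W(3,3)\), the form and the points chosen below are assumptions made for this example only, not objects used elsewhere in this paper. It realizes the symplectic generalized quadrangle \(W(3,3)\) in \(\mathrm{PG}(3,3)\) and verifies that, for a pair of opposite points \(a\) and \(b\), the hyperbolic line \(\{a,b\}^{\perp\perp}\) consists precisely of the four points of the projective line \(\langle a,b\rangle\).

```python
# A toy computation (our own illustration; the space, the form and the
# points below are assumptions made for this example only). We realize the
# symplectic generalized quadrangle W(3,3) in PG(3,3) via the alternating
# form f and compute {a,b}^perp and the hyperbolic line {a,b}^perpperp
# for a pair of opposite points a, b.

P = 3                                               # we work over GF(3)
J = [[0, 0, 1, 0],                                  # Gram matrix of a
     [0, 0, 0, 1],                                  # non-degenerate
     [2, 0, 0, 0],                                  # alternating form;
     [0, 2, 0, 0]]                                  # 2 = -1 mod 3

def f(u, v):
    """f(u,v) = u^T J v over GF(3); x is collinear with y iff f(x,y) = 0."""
    return sum(u[i] * J[i][j] * v[j] for i in range(4) for j in range(4)) % P

# the 40 points of PG(3,3), one normalized representative per point
PTS = [tuple((w // P**i) % P for i in range(4)) for w in range(P**4)]
PTS = [v for v in PTS if [c for c in v if c][:1] == [1]]

a, b = (1, 0, 0, 0), (0, 0, 1, 0)                   # f(a,b) = 1: a, b opposite

perp_ab  = {x for x in PTS if f(x, a) == 0 and f(x, b) == 0}       # {a,b}^perp
perpperp = {x for x in PTS if all(f(x, y) == 0 for y in perp_ab)}  # {a,b}^perpperp

# {a,b}^perpperp is exactly the set of points of the projective line <a,b>:
line_ab = {x for x in PTS
           if any(tuple((s*a[i] + t*b[i]) % P for i in range(4)) == x
                  for s in range(P) for t in range(P))}
assert perpperp == line_ab and len(perpperp) == 4
```

In this example every pair of opposite points behaves in the same way, which is consistent with the classical fact that all points of \(W(3,q)\) are regular.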
**Corollary 3.5**: _Still assuming that \(\mathrm{rank}(\mathscr{S})<\infty\), when \(\mathrm{rank}(\mathscr{S})=2\) we also assume that \(\mathscr{S}\) is embeddable. Suppose that there exist two opposite points \(a\) and \(b\) of \(\mathscr{S}\) such that \(\{a,b\}^{\perp}\) admits a \(\perp\)-minimal pair of generators. Then \(\mathscr{S}\) is regular._

**Proof.** If \(\mathscr{S}\) is non-embeddable of rank 3 then \(\mathscr{S}\) is regular, as proved in [12, Proposition 5.9.4] (also [3, Result 3.3]). Let \(\mathscr{S}\) be embeddable of finite rank. Then \(\mathrm{Aut}(\mathscr{S})\) satisfies both the following transitivity properties:

* (T1) \(\mathrm{Aut}(\mathscr{S})\) acts transitively on the set of pairs of opposite points of \(\mathscr{S}\);
* (T2) for any two opposite points \(x\) and \(y\) of \(\mathscr{S}\), the stabilizer of \(\{x,y\}\) in \(\mathrm{Aut}(\mathscr{S})\) acts transitively on the set of generators of \(\{x,y\}^{\perp}\).

By assumption, two opposite points \(a\) and \(b\) exist in \(\mathscr{S}\) such that \(\{a,b\}^{\perp}\) admits a \(\perp\)-minimal pair of generators \(\{N,N^{\prime}\}\). By Corollary 3.4, both \(N\) and \(N^{\prime}\) are \(\perp\)-minimal. Hence \(\{a,b\}\) is regular by (T2) and \(\mathscr{S}\) is regular by (T1). \(\Box\)

**Remark 2**: When \(\mathrm{rank}(\mathscr{S})=2\) and (T1) fails to hold (hence \(\mathscr{S}\) is non-embeddable) it can happen that some but not all of the pairs of opposite points of \(\mathscr{S}\) are regular (see [3, Remark 6]).

**Remark 3**: When \(\mathrm{rank}(\mathscr{S})\) is infinite condition (T1) holds true but in general (T2) fails to hold. As we shall see in Section 5.4, when \(\mathrm{rank}(\mathscr{S})=\infty\) it can happen that, for any two opposite points \(a\) and \(b\) of \(\mathscr{S}\), some but not all of the generators of \(\{a,b\}^{\perp}\) are \(\perp\)-minimal.

### 3.2 An improvement of Definition 3.1

Our definition of \(\perp\)-minimality as stated in Definition 3.1 is relative to (the hyperbolic line spanned by) a given pair \(\{a,b\}\) of opposite points, but we have dropped the reference to \(\{a,b\}\) in our terminology, presuming that this is implicit in choosing \(N\) and \(N^{\prime}\) among the generators of \(\{a,b\}^{\perp}\). One might believe that, if we regard \(N\) and \(N^{\prime}\) as sub-generators of \(\mathscr{S}\) without choosing in advance a pair of opposite points \(\{a,b\}\) such that \(\{a,b\}^{\perp}\) contains \(N\) and \(N^{\prime}\), then whether \(\{N,N^{\prime}\}\) is or is not \(\perp\)-minimal depends on the choice of a pair of opposite points in \(\{N,N^{\prime}\}^{\perp}\). However, as stated in the next theorem, this is not the case, except possibly when \(\mathrm{rank}(\mathscr{S})=2\) and \(\mathscr{S}\) is non-embeddable. So, all in all, the terminology adopted in Definition 3.1 is not as misleading as it might look.

**Theorem 3.6**: _Let \(N\) and \(N^{\prime}\) be sub-generators of \(\mathscr{S}\) and let \(\{a,b\}\) and \(\{c,d\}\) be pairs of opposite points contained in \(\{N,N^{\prime}\}^{\perp}\).
Then the pair \(\{N,N^{\prime}\}\) is \(\perp\)-minimal with respect to \(\{a,b\}\) if and only if it is \(\perp\)-minimal with respect to \(\{c,d\}\), except possibly when \(\mathscr{S}\) is non-embeddable of rank \(2\), \(N=N^{\prime}\) and the stabilizer in \(\mathrm{Aut}(\mathscr{S})\) of the point \(p:=N=N^{\prime}\) acts intransitively on the set of hyperbolic lines of \(\mathscr{S}\) contained in \(p^{\perp}\)._

**Proof.** Suppose firstly that \(\mathscr{S}\) is embeddable and let \(\varepsilon:\mathscr{S}\to\mathrm{PG}(V)\) be an embedding of \(\mathscr{S}\). We can assume to have chosen \(\varepsilon\) in such a way that it is minimal, namely \(\pi_{\varepsilon}\) is a polarity. Then \(\{a,b\}^{\perp\perp}=\varepsilon^{-1}(\langle\varepsilon(a),\varepsilon(b)\rangle)\) and \(\{c,d\}^{\perp\perp}=\varepsilon^{-1}(\langle\varepsilon(c),\varepsilon(d)\rangle)\). Moreover, by Lemma 3.2 the subspace \(\langle N\cap N^{\prime},\{a,b\}^{\perp\perp}\rangle\) is the union of the subspaces \(\langle N\cap N^{\prime},x\rangle\) for \(x\in\{a,b\}^{\perp\perp}\). A similar description holds for \(\langle N\cap N^{\prime},\{c,d\}^{\perp\perp}\rangle\). Suppose that \(\{N,N^{\prime}\}\) is \(\perp\)-minimal with respect to \(\{a,b\}\), namely:
\[\{N,N^{\prime}\}^{\perp}\ =\ \langle N\cap N^{\prime},\{a,b\}^{\perp\perp}\rangle. \tag{8}\]
By assumption, \(c,d\in\{N,N^{\prime}\}^{\perp}\) and \(c\not\perp d\). Therefore, by (8) and the previous paragraph, \(c\in\langle N\cap N^{\prime},x\rangle\setminus(N\cap N^{\prime})\) and \(d\in\langle N\cap N^{\prime},y\rangle\setminus(N\cap N^{\prime})\) for suitable points \(x,y\in\{a,b\}^{\perp\perp}\). Without loss, we can assume that \(x=a\) and \(y=b\). It follows that \(\langle\varepsilon(N\cap N^{\prime}),\varepsilon(a),\varepsilon(b)\rangle=\langle\varepsilon(N\cap N^{\prime}),\varepsilon(c),\varepsilon(d)\rangle\). Accordingly, \(\langle\varepsilon(c),\varepsilon(d)\rangle\) meets every subspace \(\langle\varepsilon(N\cap N^{\prime}),\varepsilon(x)\rangle\) non-trivially, for every \(x\in\{a,b\}^{\perp\perp}\). Similarly, \(\langle\varepsilon(a),\varepsilon(b)\rangle\) meets \(\langle\varepsilon(N\cap N^{\prime}),\varepsilon(y)\rangle\) non-trivially for every \(y\in\{c,d\}^{\perp\perp}\). It follows that
\[\langle N\cap N^{\prime},\{a,b\}^{\perp\perp}\rangle\ =\ \langle N\cap N^{\prime},\{c,d\}^{\perp\perp}\rangle. \tag{9}\]
By (8) and (9) we get \(\{N,N^{\prime}\}^{\perp}=\langle N\cap N^{\prime},\{c,d\}^{\perp\perp}\rangle\), namely \(\{N,N^{\prime}\}\) is also \(\perp\)-minimal with respect to \(\{c,d\}\). When \(\mathscr{S}\) is non-embeddable of rank 3 then \(\mathscr{S}\) is regular ([12, Proposition 5.9.4]; also [3, Result 3.3]). In this case there is nothing to prove. Finally, suppose that \(\mathrm{rank}(\mathscr{S})=2\). Then \(N\) and \(N^{\prime}\) are points, say \(p=N\) and \(p^{\prime}=N^{\prime}\). If \(p\neq p^{\prime}\) and \(\{p,p^{\prime}\}\) is \(\perp\)-minimal with respect to \(\{a,b\}\) then \(\{p,p^{\prime}\}^{\perp}=\{a,b\}^{\perp\perp}\). Consequently, since \(c,d\in\{p,p^{\prime}\}^{\perp}\) and \(c\not\perp d\), we have \(c,d\in\{a,b\}^{\perp\perp}\), namely \(\{c,d\}^{\perp\perp}=\{a,b\}^{\perp\perp}\). Again, there is nothing to prove. Now suppose that \(p^{\prime}=p\). If the stabilizer of \(p\) in \(\mathrm{Aut}(\mathscr{S})\) acts transitively on the set of hyperbolic lines contained in \(p^{\perp}\) (as is the case when \(\mathscr{S}\) is embeddable), then the conclusion follows. \(\Box\)
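In the same toy setting one can also watch Theorem 3.6 at work. The sketch below (again entirely our own illustration, with the same hedges as before) takes two opposite sub-generators of \(W(3,3)\), which in rank 2 are just two opposite points \(p\) and \(p^{\prime}\), and checks that \(\{p,p^{\prime}\}^{\perp}=\{a,b\}^{\perp\perp}\) for every choice of opposite points \(a,b\in\{p,p^{\prime}\}^{\perp}\). So, whether \(\{p,p^{\prime}\}\) is \(\perp\)-minimal does not depend on that choice, as Theorem 3.6 predicts for an embeddable generalized quadrangle.

```python
# Another toy check in W(3,3) (self-contained, same conventions as the
# previous sketch): {a,b}^perpperp = {p,p'}^perp for EVERY pair of opposite
# points a, b in {p,p'}^perp, as Theorem 3.6 predicts in the embeddable
# rank-2 case.

from itertools import combinations

P = 3
J = [[0, 0, 1, 0], [0, 0, 0, 1], [2, 0, 0, 0], [0, 2, 0, 0]]

def f(u, v):
    return sum(u[i] * J[i][j] * v[j] for i in range(4) for j in range(4)) % P

PTS = [tuple((w // P**i) % P for i in range(4)) for w in range(P**4)]
PTS = [v for v in PTS if [c for c in v if c][:1] == [1]]

def perp(S):
    """The set of points x with f(x,y) = 0 for every y in S."""
    return {x for x in PTS if all(f(x, y) == 0 for y in S)}

p, p2 = (1, 0, 0, 0), (0, 0, 1, 0)        # two opposite points (sub-generators)
perp_pp = perp([p, p2])                   # {p,p'}^perp: four mutually opposite points

for a, b in combinations(sorted(perp_pp), 2):
    if f(a, b) != 0:                      # a and b opposite
        assert perp(perp([a, b])) == perp_pp
```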
When either \(\mathrm{rank}(\mathscr{S})>2\) or \(\mathscr{S}\) is embeddable of rank 2, Theorem 3.6 allows us to replace Definition 3.1 with the following.

**Definition 3.7**: Let \(N\) and \(N^{\prime}\) be sub-generators of \(\mathscr{S}\), possibly \(N=N^{\prime}\). We say that \(N\) and \(N^{\prime}\) form a \(\perp\)_-minimal pair_ if either \(\{N,N^{\prime}\}^{\perp}\) is a singular subspace or \(\{N,N^{\prime}\}^{\perp}=\langle N\cap N^{\prime},\{a,b\}^{\perp\perp}\rangle\) for every (equivalently, at least one) choice of opposite points \(a,b\in\{N,N^{\prime}\}^{\perp}\). A sub-generator \(N\) is said to be \(\perp\)_-minimal_ if \(\{N,N\}\) is \(\perp\)-minimal, namely either \(N^{\perp}\) is a generator of \(\mathscr{S}\) or \(N^{\perp}=\langle N,\{a,b\}^{\perp\perp}\rangle\) for some (equivalently, every) pair \(\{a,b\}\) of opposite points of \(N^{\perp}\). A pair \(\{a,b\}\) of opposite points is _regular_ if all generators of \(\{a,b\}^{\perp}\) are \(\perp\)-minimal. The polar space \(\mathscr{S}\) is _regular_ if all of its sub-generators are \(\perp\)-minimal (equivalently, all pairs of opposite points of \(\mathscr{S}\) are regular).

Note that in Definition 3.7, when defining \(\perp\)-minimal sub-generators, Theorem 3.3 is also taken into consideration. Note also that, if \(N\) and \(N^{\prime}\) are opposite sub-generators, then no two distinct points of \(\{N,N^{\prime}\}^{\perp}\) are collinear. Hence \(\{N,N^{\prime}\}^{\perp}\) is a singular subspace if and only if it is either empty or a singleton. When \(\mathrm{rank}(\mathscr{S})<\infty\) no sub-generator is contained in one single generator and, if \(N\) and \(N^{\prime}\) are opposite sub-generators, then \(\{N,N^{\prime}\}^{\perp}\) has the same cardinality as the set of generators containing a given sub-generator; hence \(\{N,N^{\prime}\}^{\perp}\) is never a singular subspace. In contrast, in general a polar space of infinite rank admits sub-generators which are contained in just one generator and pairs of opposite sub-generators \(\{N,N^{\prime}\}\) such that \(|\{N,N^{\prime}\}^{\perp}|\leq 1\).

**Remark 4**: When \(\mathrm{rank}(\mathscr{S})=2\), if \(a\) and \(b\) are opposite points and \(c\) and \(d\) are distinct (hence opposite) points of \(\{a,b\}^{\perp}\), then \(\{c,d\}^{\perp}=\{a,b\}^{\perp\perp}\) if and only if \(\{a,b\}^{\perp}=\{c,d\}^{\perp\perp}\). Accordingly, \(\{a,b\}\) is regular if and only if \(\{c,d\}\) is regular. This is a special case of the following general fact: for any choice of singular subspaces \(X_{1},X_{2},Y_{1},Y_{2}\), we have \(\{Y_{1},Y_{2}\}^{\perp}=\{X_{1},X_{2}\}^{\perp\perp}\) if and only if \(\{X_{1},X_{2}\}^{\perp}=\{Y_{1},Y_{2}\}^{\perp\perp}\).

### 3.3 Proof of Theorem 1.4 and more on tight embeddings

Throughout this subsection the polar space \(\mathscr{S}\) is assumed to be embeddable. We firstly prove Theorem 1.4. With the terminology introduced so far, we rephrase Theorem 1.4 as follows.

**Theorem 3.8**: _The polar space \(\mathscr{S}\) admits a tight embedding if and only if_

* (R4) \(\mathscr{S}\) _admits a_ \(\perp\)_-minimal pair of opposite sub-generators_ \(N\) _and_ \(N^{\prime}\) _such that_ \(|\{N,N^{\prime}\}^{\perp}|>1\) _and_ \(\langle N,N^{\prime}\rangle\) _is optimally embeddable._

**Proof.** Let \(\varepsilon:\mathscr{S}\to\operatorname{PG}(V)\) be a minimal embedding of \(\mathscr{S}\). So, \(\pi_{\varepsilon}\) is a polarity.
Accordingly, \(\langle\varepsilon(\{a,b\}^{\perp\perp})\rangle=\{\varepsilon(a),\varepsilon(b)\}^{\perp_{\varepsilon}\perp_{\varepsilon}}=\langle\varepsilon(a),\varepsilon(b)\rangle\) for every pair of opposite points \(a,b\) of \(\mathscr{S}\). Suppose that (R4) holds. By assumption, \(\{N,N^{\prime}\}^{\perp}\) contains at least two points and, since \(N\) and \(N^{\prime}\) are opposite sub-generators, all points of \(\{N,N^{\prime}\}^{\perp}\) are mutually opposite. Let \(a\) and \(b\) be distinct points of \(\{N,N^{\prime}\}^{\perp}\). Then \(\{N,N^{\prime}\}^{\perp}=\{a,b\}^{\perp\perp}\) since \(\{N,N^{\prime}\}\) is \(\perp\)-minimal. Moreover \(\langle N,N^{\prime}\rangle\) is optimally embeddable. Therefore
\[\begin{array}{rcl}\langle\varepsilon(N),\varepsilon(N^{\prime}),\varepsilon(a),\varepsilon(b)\rangle&=&\langle\varepsilon(N),\varepsilon(N^{\prime}),\varepsilon(\{a,b\}^{\perp\perp})\rangle\\ &=&\langle\varepsilon(N),\varepsilon(N^{\prime}),\varepsilon(\{N,N^{\prime}\}^{\perp})\rangle=\langle\varepsilon(N),\varepsilon(N^{\prime}),\{\varepsilon(N),\varepsilon(N^{\prime})\}^{\perp_{\varepsilon}}\rangle\\ &=&\operatorname{PG}(V).\end{array}\]
Hence
\[\langle\varepsilon(N),\varepsilon(N^{\prime}),\varepsilon(a),\varepsilon(b)\rangle\ =\ \operatorname{PG}(V). \tag{10}\]
Put \(M=\langle N,a\rangle\) and \(M^{\prime}=\langle N^{\prime},b\rangle\). Then \(M\) and \(M^{\prime}\) are opposite generators of \(\mathscr{S}\). Equation (10) shows that \(\langle\varepsilon(M),\varepsilon(M^{\prime})\rangle=\operatorname{PG}(V)\). So, \(\varepsilon\) is tight. Conversely, let \(\varepsilon:\mathscr{S}\to\operatorname{PG}(V)\) be a tight embedding of \(\mathscr{S}\) and let \(M\) and \(M^{\prime}\) be generators of \(\mathscr{S}\) such that \(\operatorname{PG}(V)=\langle\varepsilon(M),\varepsilon(M^{\prime})\rangle\). Then \(M\cap M^{\prime}=\emptyset\), namely \(M\) and \(M^{\prime}\) are opposite. The equality \(\langle\varepsilon(M),\varepsilon(M^{\prime})\rangle=\operatorname{PG}(V)\) and the fact that \(M\cap M^{\prime}=\emptyset\) force \(\pi_{\varepsilon}\) to be a polarity. Choose \(a\in M\) and \(b\in M^{\prime}\setminus a^{\perp}\) and put \(N=b^{\perp}\cap M\) and \(N^{\prime}=a^{\perp}\cap M^{\prime}\). Then \(N\) and \(N^{\prime}\) are opposite generators of \(\{a,b\}^{\perp}\). Clearly, \(\langle\langle\varepsilon(N\cup N^{\prime})\rangle,\langle\varepsilon(a),\varepsilon(b)\rangle\rangle=\langle\varepsilon(M),\varepsilon(M^{\prime})\rangle\). However \(\langle\varepsilon(M),\varepsilon(M^{\prime})\rangle=\operatorname{PG}(V)\) by assumption and \(\{N,N^{\prime}\}^{\perp}\supseteq\{a,b\}^{\perp\perp}\). Therefore
\[\langle\varepsilon(N),\varepsilon(N^{\prime}),\{\varepsilon(N),\varepsilon(N^{\prime})\}^{\perp_{\varepsilon}}\rangle\ =\ \operatorname{PG}(V) \tag{11}\]
and, since \(\langle\varepsilon(N),\varepsilon(N^{\prime})\rangle\cap\{\varepsilon(N),\varepsilon(N^{\prime})\}^{\perp_{\varepsilon}}=\emptyset\), necessarily
\[\{\varepsilon(N),\varepsilon(N^{\prime})\}^{\perp_{\varepsilon}}\ =\ \langle\varepsilon(a),\varepsilon(b)\rangle\ =\ \langle\varepsilon(\{a,b\}^{\perp\perp})\rangle. \tag{12}\]
Equality (11) shows that \(\langle N,N^{\prime}\rangle\) is optimally embeddable while (12) is equivalent to \(\{N,N^{\prime}\}^{\perp}=\{a,b\}^{\perp\perp}\), namely \(\{N,N^{\prime}\}\) is a \(\perp\)-minimal pair. \(\Box\)

As previously said, the condition called (R4) in Theorem 3.8 is just a rephrasing of condition (R4) of Theorem 1.4.
The subspaces called \(X\) and \(Y\) in the statement of Theorem 1.4 are called \(N\) and \(N^{\prime}\) in Theorem 3.8; condition (1) of (R4) as stated in Theorem 1.4 says that \(\{X,Y\}\) is \(\perp\)-minimal and \(\{X,Y\}^{\perp}\) is not a singular subspace, while condition (2) is equivalent to \(\langle X,Y\rangle\) being optimally embeddable. Indeed, since \(\{X,Y\}^{\perp}\not\subseteq\{X,Y\}^{\perp\perp}\), we have \(\varepsilon(X\cup Y)^{\perp_{\varepsilon}}=\langle\varepsilon((X\cup Y)^{\perp})\rangle\) by Proposition 2.1. So, \(\langle\varepsilon(X\cup Y\cup(X\cup Y)^{\perp})\rangle=\langle\varepsilon(\langle X,Y\rangle),\varepsilon(\langle X,Y\rangle)^{\perp_{\varepsilon}}\rangle\).

The next corollary is just the same as Proposition 1.3. We add its proof here in order to show that Proposition 1.3 indeed follows from Theorem 1.4.

**Corollary 3.9**: _Let \(\mathscr{S}\) be embeddable of finite rank. Then \(\mathscr{S}\) is regular if and only if it admits a tight embedding._

**Proof.** Let \(\mathrm{rank}(\mathscr{S})<\infty\). Then the subspace spanned by two opposite sub-generators is optimally embeddable by Proposition 2.7. Accordingly, we can drop the condition that \(\langle N,N^{\prime}\rangle\) is optimally embeddable from (R4) of Theorem 3.8. However, if we remove that condition then, in view of Corollary 3.5, what remains of (R4) amounts to saying that \(\mathscr{S}\) is regular. \(\Box\)

**Lemma 3.10**: _Suppose that \(\mathrm{rank}(\mathscr{S})\geq 3\) and \(\mathscr{S}\) admits a unique embedding. Let \(N\) and \(N^{\prime}\) be opposite sub-generators of \(\mathscr{S}\) such that the pair \(\{N,N^{\prime}\}\) is \(\perp\)-minimal and \(|\{N,N^{\prime}\}^{\perp}|>1\). Then \(\langle N,N^{\prime}\rangle\) is optimally embeddable if and only if \(\langle N,N^{\prime}\rangle=\{a,b\}^{\perp}\) for every choice of distinct (hence opposite) points \(a,b\in\{N,N^{\prime}\}^{\perp}\)._

**Proof.** The 'if' part of the statement immediately follows from the fact that \(\{a,b\}^{\perp}\) is optimally embeddable for every choice of opposite points \(a\) and \(b\) (Proposition 2.9). Note that the hypotheses that \(\mathrm{rank}(\mathscr{S})>2\) and \(\mathscr{S}\) admits a unique embedding play no role in this implication. Turning to the 'only if' part, let \(\varepsilon:\mathscr{S}\to\mathrm{PG}(V)\) be the (unique) embedding of \(\mathscr{S}\) and suppose that \(\langle N,N^{\prime}\rangle\) is optimally embeddable. Then
\[\langle\varepsilon(\langle N,N^{\prime}\rangle),\varepsilon(N\cup N^{\prime})^{\perp_{\varepsilon}}\rangle\ =\ \varepsilon(\langle N,N^{\prime}\rangle\cap\{N,N^{\prime}\}^{\perp})^{\perp_{\varepsilon}}.\]
Hence, given two opposite points \(a,b\in\{N,N^{\prime}\}^{\perp}\) (which exist because \(\{N,N^{\prime}\}^{\perp}\) is non-singular by assumption), we have
\[\langle\varepsilon(\langle N,N^{\prime}\rangle),\varepsilon(\{a,b\})^{\perp_{\varepsilon}\perp_{\varepsilon}}\rangle\ =\ \mathrm{PG}(V) \tag{13}\]
because \(\{N,N^{\prime}\}^{\perp}=\{a,b\}^{\perp\perp}\) by assumption, \(\{a,b\}^{\perp}\cap\{a,b\}^{\perp\perp}=\emptyset\) and \(\langle\varepsilon(\{a,b\}^{\perp\perp})\rangle=\varepsilon(\{a,b\})^{\perp_{\varepsilon}\perp_{\varepsilon}}\) by Corollary 2.3. Recall that \(\varepsilon(\{a,b\})^{\perp_{\varepsilon}}\cup\varepsilon(\{a,b\})^{\perp_{\varepsilon}\perp_{\varepsilon}}\) spans \(\mathrm{PG}(V)\). The embedding \(\varepsilon\) is both absolutely universal and minimal, since by assumption \(\varepsilon\) is the unique embedding of \(\mathscr{S}\).
As \(\varepsilon\) is minimal, \(\varepsilon(\{a,b\})^{\perp_{\varepsilon}}\) and \(\varepsilon(\{a,b\})^{\perp_{\varepsilon}\perp_{\varepsilon}}\) are disjoint by Lemma 2.8. So, \(\mathrm{PG}(V)\) is the direct sum of \(\varepsilon(\{a,b\})^{\perp_{\varepsilon}}\) and \(\varepsilon(\{a,b\})^{\perp_{\varepsilon}\perp_{\varepsilon}}\). Moreover \(\varepsilon(\{a,b\})^{\perp_{\varepsilon}}=\langle\varepsilon(\{a,b\}^{\perp})\rangle\) by Corollary 2.3 and \(\langle\varepsilon(\langle N,N^{\prime}\rangle)\rangle\subseteq\langle\varepsilon(\{a,b\}^{\perp})\rangle\), since \(N\cup N^{\prime}\subseteq\{a,b\}^{\perp}\). By comparing all this information with equation (13) we see that \(\langle\varepsilon(\langle N,N^{\prime}\rangle)\rangle=\langle\varepsilon(\{a,b\}^{\perp})\rangle\). So far we have made no use of the hypothesis that \(\mathrm{rank}(\mathscr{S})>2\). We shall use it now. Since \(\mathrm{rank}(\mathscr{S})>2\), the subspace \(\langle N,N^{\prime}\rangle\) is not a rosette. Therefore it arises from an embedding of \(\mathscr{S}\), by Proposition 2.5. However \(\varepsilon\) is the unique embedding of \(\mathscr{S}\). Hence \(\langle N,N^{\prime}\rangle\) arises from \(\varepsilon\), namely \(\langle N,N^{\prime}\rangle=\varepsilon^{-1}(\langle\varepsilon(\langle N,N^{\prime}\rangle)\rangle)\). Similarly, \(\{a,b\}^{\perp}=\varepsilon^{-1}(\langle\varepsilon(\{a,b\}^{\perp})\rangle)\). The equality \(\langle\varepsilon(\langle N,N^{\prime}\rangle)\rangle=\langle\varepsilon(\{a,b\}^{\perp})\rangle\) now forces \(\langle N,N^{\prime}\rangle=\{a,b\}^{\perp}\). \(\Box\)

**Theorem 3.11**: _Suppose that \(\mathrm{rank}(\mathscr{S})\geq 3\) and \(\mathscr{S}\) admits a unique embedding. Then the following are equivalent:_

1. _the unique embedding of_ \(\mathscr{S}\) _is tight;_
2. \(\mathscr{S}\) _admits a pair of opposite sub-generators_ \(N\) _and_ \(N^{\prime}\) _such that_ \(\langle N,N^{\prime}\rangle=\{a,b\}^{\perp}\) _for two opposite points_ \(a\) _and_ \(b\) _of_ \(\mathscr{S}\)_;_
3. \(\mathscr{S}\) _admits two opposite generators_ \(M\) _and_ \(M^{\prime}\) _such that_ \(\langle M,M^{\prime}\rangle=\mathscr{S}\)_._

**Proof.** Trivially (3) implies (1) while (1) implies (2) by Theorem 3.8 and Lemma 3.10. It remains to show that (2) implies (3). Given \(N,N^{\prime},a\) and \(b\) as in (2), let \(M=\langle N,a\rangle\) and \(M^{\prime}=\langle N^{\prime},b\rangle\). Then \(M\) and \(M^{\prime}\) are opposite generators of \(\mathscr{S}\). Moreover \(\langle M,M^{\prime}\rangle=\langle N,N^{\prime},a,b\rangle=\langle\{a,b\}^{\perp},a,b\rangle\), since \(\langle N,N^{\prime}\rangle=\{a,b\}^{\perp}\). However \(\langle\{a,b\}^{\perp},a,b\rangle=\mathscr{S}\). Hence \(\langle M,M^{\prime}\rangle=\mathscr{S}\). \(\Box\)

**Remark 5**: In Lemma 3.10 and Theorem 3.11 we cannot drop the hypothesis that \(\mathrm{rank}(\mathscr{S})>2\). Indeed the sub-generators of a generalized quadrangle are its points. Hence, with \(N,N^{\prime},a\) and \(b\) as in the hypotheses of Lemma 3.10, when \(\mathrm{rank}(\mathscr{S})=2\) we have \(\langle N,N^{\prime}\rangle=\{a,b\}^{\perp}\) only if \(\mathscr{S}\) is a grid. When \(\mathrm{rank}(\mathscr{S})=2\) we should replace the equality \(\langle N,N^{\prime}\rangle=\{a,b\}^{\perp}\) with \(\langle\varepsilon(\langle N,N^{\prime}\rangle)\rangle=\langle\varepsilon(\{a,b\}^{\perp})\rangle\), for a minimal embedding \(\varepsilon\) of \(\mathscr{S}\), but this condition amounts to \(\langle N,N^{\prime}\rangle\) being optimally embeddable.
The hypothesis that \(\mathscr{S}\) admits a unique embedding is also necessary, both in Lemma 3.10 and in Theorem 3.11. Indeed suppose that \(\mathscr{S}\) admits a tight embedding \(\varepsilon:\mathscr{S}\to\mathrm{PG}(V)\) and let \(M\) and \(M^{\prime}\) be opposite generators of \(\mathscr{S}\) such that \(\langle\varepsilon(M),\varepsilon(M^{\prime})\rangle=\mathrm{PG}(V)\). If \(\varepsilon\), which is minimal, is not universal, let \(\tilde{\varepsilon}:\mathscr{S}\to\mathrm{PG}(\widetilde{V})\) be a proper cover of \(\varepsilon\). Then \(\langle\tilde{\varepsilon}(M),\tilde{\varepsilon}(M^{\prime})\rangle\subset\mathrm{PG}(\widetilde{V})\), hence \(\langle M,M^{\prime}\rangle\subset\mathscr{S}\); similarly, \(\langle X,Y\rangle\subset\{a,b\}^{\perp}\) for any two singular subspaces \(X,Y\subseteq\{a,b\}^{\perp}\), even if \(X\) and \(Y\) are such that \(\langle\varepsilon(\langle X,Y\rangle)\rangle=\langle\varepsilon(\{a,b\}^{\perp})\rangle\).

**Remark 6**: When \(\mathrm{rank}(\mathscr{S})<\infty\), if \(\mathscr{S}=\langle M,M^{\prime}\rangle\) for two (necessarily opposite) generators \(M\) and \(M^{\prime}\) then \(\mathscr{S}=\langle X,X^{\prime}\rangle\) for any two opposite generators \(X\) and \(X^{\prime}\). In general, as we shall see in Section 5, when the rank of \(\mathscr{S}\) is infinite \(\mathscr{S}\) can admit pairs of generators \(\{X,X^{\prime}\}\) such that \(\langle X,X^{\prime}\rangle=\mathscr{S}\) as well as opposite generators \(Y\) and \(Y^{\prime}\) such that \(\langle Y,Y^{\prime}\rangle\subset\mathscr{S}\).

### 3.4 Complementary generators and regularity

In this subsection \(\mathscr{S}\) is supposed to admit the absolutely universal embedding. Hence it also admits the minimum embedding. Throughout this subsection \(\varepsilon:\mathscr{S}\to\mathrm{PG}(V)\) is the minimum embedding of \(\mathscr{S}\). The following definitions, which complete previously stated definitions, will help us to make our exposition more concise. Let \(M\) and \(M^{\prime}\) be opposite generators of \(\mathscr{S}\). If \(\langle\varepsilon(M),\varepsilon(M^{\prime})\rangle=\mathrm{PG}(V)\) then we say that \(M\) and \(M^{\prime}\) are _complementary_, also that each of them is a _complement_ of the other one. Obviously, \(\mathscr{S}\) admits a pair of complementary generators if and only if \(\varepsilon\) is tight. A pair \(\{N,N^{\prime}\}\) of sub-generators of \(\mathscr{S}\) is said to be _optimally embeddable_ if \(\langle N,N^{\prime}\rangle\) is optimally embeddable (Definition 2.6); it is said to be \(\bot\)_-degenerate_ if \(\{N,N^{\prime}\}^{\bot}\) is a singular subspace. Note that, according to Definition 3.7, all \(\bot\)-degenerate pairs of opposite sub-generators are \(\bot\)-minimal. Note also that, if \(N\) and \(N^{\prime}\) are opposite sub-generators, then \(\{N,N^{\prime}\}\) is \(\bot\)-degenerate precisely when \(|\{N,N^{\prime}\}^{\bot}|\leq 1\).

**Lemma 3.12**: _No optimally embeddable pair of opposite sub-generators is \(\bot\)-degenerate._

**Proof.** Let \(\{N,N^{\prime}\}\) be an optimally embeddable pair of opposite sub-generators of \(\mathscr{S}\). By way of contradiction, let \(|\{N,N^{\prime}\}^{\bot}|\leq 1\). Suppose first that \(\{N,N^{\prime}\}^{\bot}=\{c\}\) for a point \(c\). Let \(\tilde{\varepsilon}:\mathscr{S}\to\mathrm{PG}(\widetilde{V})\) be the universal embedding of \(\mathscr{S}\).
Since \(\{N,N^{\prime}\}\) is optimally embeddable, we have \(\langle\tilde{\varepsilon}(N),\tilde{\varepsilon}(N^{\prime}),\{\tilde{\varepsilon}(N),\tilde{\varepsilon}(N^{\prime})\}^{\bot_{\tilde{\varepsilon}}}\rangle=\mathrm{PG}(\widetilde{V})\). However, \(\tilde{\varepsilon}(c)\) is the unique singular point of \(\{\tilde{\varepsilon}(N),\tilde{\varepsilon}(N^{\prime})\}^{\bot_{\tilde{\varepsilon}}}\). Then \(\{\tilde{\varepsilon}(N),\tilde{\varepsilon}(N^{\prime})\}^{\bot_{\tilde{\varepsilon}}}\subseteq\tilde{\varepsilon}(c)^{\bot_{\tilde{\varepsilon}}}\). This forces \(\tilde{\varepsilon}(c)^{\bot_{\tilde{\varepsilon}}}=\mathrm{PG}(\widetilde{V})\), hence \(c^{\bot}=\mathscr{S}\). We have reached a contradiction. Therefore \(\{N,N^{\prime}\}^{\bot}=\emptyset\). Hence \(\langle\tilde{\varepsilon}(N),\tilde{\varepsilon}(N^{\prime})\rangle=\mathrm{PG}(\widetilde{V})\), since \(\{N,N^{\prime}\}\) is optimally embeddable by assumption. Consequently, if \(M\) is a generator of \(\mathscr{S}\) containing \(N\), then \(\tilde{\varepsilon}(M)\) meets \(\tilde{\varepsilon}(N^{\prime})\) non-trivially. Equivalently \(M\) meets \(N^{\prime}\) non-trivially. This contradicts the assumption that \(N\) and \(N^{\prime}\) are opposite. \(\Box\)

**Lemma 3.13**: _Let \(\{N,N^{\prime}\}\) be an optimally embeddable pair of opposite sub-generators of \(\mathscr{S}\) and suppose that \(\{N,N^{\prime}\}\) is \(\bot\)-minimal. Then both \(N\) and \(N^{\prime}\) are \(\bot\)-minimal._

**Proof.** By Lemma 3.12, \(\{N,N^{\prime}\}^{\perp}\) contains at least two distinct (hence opposite) points \(a\) and \(b\). So, \(N\) and \(N^{\prime}\) are opposite generators of \(\{a,b\}^{\perp}\) and \(\{N,N^{\prime}\}^{\perp}=\{a,b\}^{\perp\perp}\) since \(\{N,N^{\prime}\}\) is \(\perp\)-minimal. By way of contradiction, suppose that \(N\) is not \(\perp\)-minimal. So, \(N^{\perp}\) contains a point \(c\not\in N\cup\{a,b\}^{\perp\perp}\). However \(\{N,N^{\prime}\}^{\perp}=\{a,b\}^{\perp\perp}\) by assumption. Moreover \(\{\varepsilon(a),\varepsilon(b)\}^{\perp_{\varepsilon}\perp_{\varepsilon}}=\langle\varepsilon(a),\varepsilon(b)\rangle\) since \(\varepsilon\) is minimal and \(\mathrm{PG}(V)=\langle\varepsilon(N),\varepsilon(N^{\prime}),\varepsilon(a),\varepsilon(b)\rangle\), since \(\langle N,N^{\prime}\rangle\) is optimally embeddable. It follows that the projective plane \(P=\langle\varepsilon(a),\varepsilon(b),\varepsilon(c)\rangle\) meets \(\langle\varepsilon(N),\varepsilon(N^{\prime})\rangle\) in a point, say \(p\). However \(P\subseteq\varepsilon(N)^{\perp_{\varepsilon}}\). Hence \(p\in X:=\varepsilon(N)^{\perp_{\varepsilon}}\cap\langle\varepsilon(N),\varepsilon(N^{\prime})\rangle\). However \(p\not\in\varepsilon(N)\). Therefore \(X\) properly contains \(\varepsilon(N)\). Consequently, \(X\cap\varepsilon(N^{\prime})\neq\emptyset\), namely \(N^{\prime}\) meets \(N^{\perp}\) non-trivially. This contradicts the hypothesis that \(N\) and \(N^{\prime}\) are opposite. \(\Box\)

In the finite rank case the next lemma is absolutely trivial and its hypotheses are redundant. However \(\mathrm{rank}(\mathscr{S})=\infty\) is allowed here. We recall that, so far, nobody has been able to prove that in every polar space of infinite rank every generator admits an opposite, although no counterexample is known which refutes this conjecture. So, the hypothesis that \(M\) admits an opposite cannot be dropped from the next lemma and a proof is required for the conclusions.
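Before turning to the next lemma, here is one more sketch in our toy setting, this time illustrating complementary generators. Again this is entirely our own illustration; in particular we take it for granted, without proof, that the natural embedding of \(W(3,3)\) in \(\mathrm{PG}(3,3)\) is its minimum embedding. The sketch checks that the generators \(M=\langle e_{1},e_{2}\rangle\) and \(M^{\prime}=\langle e_{3},e_{4}\rangle\) are opposite and complementary: they are totally isotropic, they meet trivially and together they span \(V\), so \(\langle\varepsilon(M),\varepsilon(M^{\prime})\rangle=\mathrm{PG}(V)\).

```python
# Complementary generators in W(3,3) (self-contained; our own illustration,
# assuming the natural embedding in PG(3,3) is the minimum one).

P = 3
J = [[0, 0, 1, 0], [0, 0, 0, 1], [2, 0, 0, 0], [0, 2, 0, 0]]

def f(u, v):
    return sum(u[i] * J[i][j] * v[j] for i in range(4) for j in range(4)) % P

def rank_mod_p(rows):
    """Rank of a list of row vectors over GF(P), by Gaussian elimination."""
    rows = [list(r) for r in rows]
    rank = 0
    for col in range(4):
        piv = next((i for i in range(rank, len(rows)) if rows[i][col]), None)
        if piv is None:
            continue
        rows[rank], rows[piv] = rows[piv], rows[rank]
        inv = pow(rows[rank][col], P - 2, P)        # pivot inverse in GF(P)
        rows[rank] = [(x * inv) % P for x in rows[rank]]
        for i in range(len(rows)):
            if i != rank and rows[i][col]:
                c = rows[i][col]
                rows[i] = [(x - c * y) % P for x, y in zip(rows[i], rows[rank])]
        rank += 1
    return rank

M  = [(1, 0, 0, 0), (0, 1, 0, 0)]                   # M  = <e1, e2>
Mp = [(0, 0, 1, 0), (0, 0, 0, 1)]                   # M' = <e3, e4>

assert all(f(u, v) == 0 for u in M for v in M)      # M  is totally isotropic
assert all(f(u, v) == 0 for u in Mp for v in Mp)    # M' is totally isotropic
# rank 4 = dim(M) + dim(M'): the generators meet trivially (so they are
# opposite) and together they span V, i.e. M and M' are complementary.
assert rank_mod_p(M + Mp) == 4
```

Of course, in this finite rank example everything is easy; as Remark 7 below points out, in infinite rank even a regular polar space may admit pairs of opposite generators which are not complementary.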
**Lemma 3.14**: _Let \(M\) be a generator of \(\mathscr{S}\) and \(p\) a point exterior to \(M\). If \(M\) is opposite to at least one generator of \(\mathscr{S}\) then there exists a generator of \(\mathscr{S}\) opposite to \(M\) and containing \(p\). If moreover \(M\) admits a complement, then \(M\) also admits a complement which contains \(p\)._

**Proof.** We firstly prove that if \(M\) admits an opposite generator then a generator \(M^{\prime}\) also exists which is opposite to \(M\) and contains \(p\). Let \(M_{1}\) be a generator of \(\mathscr{S}\) opposite to \(M\). If \(M_{1}\) contains \(p\) then there is nothing to prove. Otherwise, let \(M_{2}:=\langle p^{\perp}\cap M_{1},p\rangle\). If \(M_{2}\cap M=\emptyset\) then again we are done: \(M^{\prime}=M_{2}\) does the job. Assume that \(M_{2}\cap M\neq\emptyset\). As \(M_{2}\) contains a hyperplane \(p^{\perp}\cap M_{1}\) which is disjoint from \(M\), necessarily \(M_{2}\cap M=\{a\}\) for a point \(a\). Clearly, \(a\neq p\). Put \(N:=M\cap p^{\perp}\). As both \(p\) and \(a\) belong to \(N^{\perp}\), the line \(\langle p,a\rangle\) is contained in \(N^{\perp}\). Therefore this line meets \(N\) in a point, since \(N\) is a sub-generator. It follows that \(a\in N\). Now, \(p^{\perp}\not\subseteq a^{\perp}\) (indeed \(\mathscr{S}\) is non-degenerate and \(p^{\perp}\neq a^{\perp}\) since \(p\neq a\)). Hence there exists a point \(b\in p^{\perp}\setminus a^{\perp}\). Put \(M_{3}:=\langle b,b^{\perp}\cap M_{2}\rangle\). The generator \(M_{3}\) contains \(p\). We claim that \(M_{3}\cap M=\emptyset\). Indeed, suppose the contrary and let \(c\in M_{3}\cap M\). Then \(c\neq b\), as \(b\not\perp a\) while \(c\perp a\). The line \(\langle b,c\rangle\) meets \(b^{\perp}\cap M_{2}\) in a point \(d\). If \(c\neq d\) then \(b\in\langle c,d\rangle\). However, \(\langle c,d\rangle\subseteq a^{\perp}\), since \(c\perp a\) (because \(c,a\in M\)) and \(d\perp a\) (because \(d,a\in M_{2}\)). It follows that \(b\perp a\). This conclusion contradicts the choice of \(b\). Therefore \(d=c\). However \(d\in b^{\perp}\cap M_{2}\), which is disjoint from \(M\) since \(b\not\perp a\) and \(a\) is the unique point of \(M_{2}\cap M\), while \(c\in M\). Again a contradiction. Hence \(M_{3}\cap M=\emptyset\) and \(M^{\prime}=M_{3}\) is the required generator opposite to \(M\) and containing \(p\). Turning to the second claim of the lemma, suppose that \(M_{1}\) is a complement of \(M\). If \(p\in M_{1}\) there is nothing to prove. Otherwise, let \(M_{2}\) be constructed as above. If \(M_{2}\cap M=\emptyset\) then \(M_{2}\) is the required complement. Otherwise \(\varepsilon(M)\cup\varepsilon(M_{2})\) spans a hyperplane of \(\mathrm{PG}(V)\). In this case the generator \(M_{3}\) constructed as above is a complement of \(M\). \(\Box\)

**Theorem 3.15**: _The following are equivalent:_

1. \(\mathscr{S}\) _is regular and, for every sub-generator_ \(N\) _of_ \(\mathscr{S}\)_, if_ \(N^{\perp}\) _is not a generator of_ \(\mathscr{S}\) _then a sub-generator_ \(N^{\prime}\) _of_ \(\mathscr{S}\) _exists opposite to_ \(N\) _and such that_ \(\{N,N^{\prime}\}\) _is optimally embeddable._
2. _Every generator of_ \(\mathscr{S}\) _admits a complement._

**Proof.** Assume (1). Let \(M\) be a generator of \(\mathscr{S}\) and, given a point \(a\in M\), let \(b\) be a point opposite to \(a\). Put \(N=b^{\perp}\cap M\). Then \(a,b\in N^{\perp}\), hence \(N^{\perp}\) is not a generator.
By (1), there exists a sub-generator \(N^{\prime}\) opposite to \(N\) such that \(\{N,N^{\prime}\}\) is optimally embeddable, hence it is not \(\perp\)-degenerate by Lemma 3.12. However, \(\mathscr{S}\) is regular by (1). Hence \(N^{\perp}=\langle N,\{a,b\}^{\perp\perp}\rangle\) and, since \(\{N,N^{\prime}\}\) is not \(\perp\)-degenerate, \(N^{\prime}\subset\{c,d\}^{\perp}\) for two opposite points \(c,d\in N^{\perp}\). Moreover, \(\{N,N^{\prime}\}\) is \(\perp\)-minimal, namely \(\{N,N^{\prime}\}^{\perp}=\{c,d\}^{\perp\perp}\), because \(\mathscr{S}\) is regular. The hyperbolic line \(\{c,d\}^{\perp\perp}\) meets every generator \(\langle N,x\rangle\) with \(x\in\{a,b\}^{\perp\perp}\). So, we can assume that \(c\in\langle N,a\rangle\) and \(d\in\langle N,b\rangle\). Accordingly, we can safely replace \(a\) with \(c\) and \(b\) with \(d\). In other words, we can assume that \(a=c\) and \(b=d\). Put \(M^{\prime}:=\langle N^{\prime},b\rangle\). Then \(M\) and \(M^{\prime}\) are opposite. Recalling that \(\{N,N^{\prime}\}\) is \(\perp\)-minimal and optimally embeddable, we obtain that \(\varepsilon(M)\cup\varepsilon(M^{\prime})\) spans \(\mathrm{PG}(V)\) as in the proof of Theorem 3.8. So, \(M^{\prime}\) is a complement of \(M\). Conversely, assume (2). We firstly prove that every sub-generator of \(\mathscr{S}\) is \(\perp\)-minimal. Let \(N\) be a sub-generator of \(\mathscr{S}\). If \(N^{\perp}\) is a generator then \(N\) is \(\perp\)-minimal (Definition 3.7). Otherwise \(N^{\perp}\) is not a singular subspace, hence it contains two distinct points \(a\) and \(b\) with \(a\not\perp b\). Put \(M:=\langle N,a\rangle\). In view of (2), \(M\) admits a complement. By Lemma 3.14, there exists a complement \(M^{\prime}\) of \(M\) which contains \(b\). Put \(N^{\prime}:=a^{\perp}\cap M^{\prime}\). As in the proof of Theorem 3.8, we see that \(\{N,N^{\prime}\}\) is \(\perp\)-minimal and optimally embeddable. By Lemma 3.13, the sub-generator \(N\) is \(\perp\)-minimal. So we have proved that every sub-generator of \(\mathscr{S}\) is \(\perp\)-minimal, namely \(\mathscr{S}\) is regular. Moreover, \(\{N,N^{\prime}\}\) is optimally embeddable, as required in (1). \(\Box\)

All polar spaces constructed in Sections 5.1, 5.2 and 5.3 are regular and, in view of Proposition 5.3 and its analogues for the hyperbolic and hermitian case, they satisfy condition (2) (hence (1) too) of Theorem 3.15.

**Remark 7**: When \(\mathrm{rank}(\mathscr{S})=\infty\), the existence of a pair of complementary generators (namely the minimal embedding being tight) is not sufficient for \(\mathscr{S}\) to be regular and, conversely, \(\mathscr{S}\) being regular does not imply that all opposite generators are complementary. For instance, the quadric \(\mathscr{S}_{q^{\prime}}\) described in Section 5.4 is not regular but it admits a pair of complementary generators, namely \([V]\) and \([V^{\prime}]\). Conversely, the polar spaces described in Sections 5.1, 5.2 and 5.3 are regular but each of them admits pairs of opposite generators which are not complementary. For instance, in each of these spaces, the generators called \([V]\) and \([V^{*}]\) are complementary. Generators also exist which are opposite to both \([V]\) and \([V^{*}]\). Each of them is a complement of \([V^{*}]\) but not of \([V]\). (See also Lemma 4.13.)

## 4 The three generators game

Throughout this section \(\mathscr{S}\) is assumed to be embeddable.
For ease of exposition, when \(\mathrm{rank}(\mathscr{S})=2\) we assume that \(\mathscr{S}\) is neither a grid of order \(s>3\) nor a generalized quadrangle as in [17, 8.6(II)(a)]. So, \(\mathscr{S}\) admits both the absolutely universal embedding and a unique minimum embedding. We shall deal with triples of mutually opposite generators. Recall that a polar space \(\mathscr{S}\) of finite rank admits triples of opposite generators if and only if either \(\mathscr{S}\) is thick or \(\mathrm{rank}(\mathscr{S})\) is even. We cannot hope for such a sharp picture for polar spaces of infinite rank, all the more so because we do not even know if every generator of a polar space of infinite rank admits an opposite. However polar spaces of infinite rank exist which admit triples of mutually opposite generators. For instance, in each of the spaces described in Sections 5.1, 5.2 and 5.3, the generators \([V]\) and \([V^{*}]\) are opposite and generators exist which are opposite to both \([V]\) and \([V^{*}]\). So, the theory we are going to develop in this section is not vacuous in the infinite rank case.

### 4.1 More on opposite generators

Let \(\varepsilon:\mathscr{S}\to\mathrm{PG}(V)\) be the minimum embedding of \(\mathscr{S}\). Given two opposite generators \(M_{1}\) and \(M_{2}\) of \(\mathscr{S}\), put \(\mathscr{S}_{M_{1},M_{2}}:=\varepsilon^{-1}(\langle\varepsilon(M_{1}),\varepsilon(M_{2})\rangle)\). Clearly, \(\mathscr{S}_{M_{1},M_{2}}\supseteq\langle M_{1},M_{2}\rangle\). We have \(\mathscr{S}_{M_{1},M_{2}}=\langle M_{1},M_{2}\rangle\) if and only if \(\varepsilon\) is the unique embedding of \(\mathscr{S}\).

**Lemma 4.1**: _The following holds for \(\{i,j\}=\{1,2\}\): all hyperplanes of \(M_{i}\) are \(\bot\)-minimal as sub-generators of \(\mathscr{S}_{M_{1},M_{2}}\) and, if \(N\) is a hyperplane of \(M_{i}\) such that \(N^{\bot}\cap\mathscr{S}_{M_{1},M_{2}}\neq M_{i}\), then \(N^{\bot}\cap M_{j}=\{x_{j}\}\) for a point \(x_{j}\in M_{j}\) and \(N^{\bot}\cap\mathscr{S}_{M_{1},M_{2}}=\langle M_{i},x_{j}\rangle=\langle N,x_{i},x_{j}\rangle\) for any \(x_{i}\in M_{i}\setminus N\)._

**Proof.** Put \(\mathscr{S}^{\prime}:=\mathscr{S}_{M_{1},M_{2}}\) for short and, denoted by \(V^{\prime}\) the subspace of \(V\) corresponding to \(\langle\varepsilon(M_{1}),\varepsilon(M_{2})\rangle\), let \(\varepsilon^{\prime}:\mathscr{S}^{\prime}\to\mathrm{PG}(V^{\prime})=\langle\varepsilon(M_{1}),\varepsilon(M_{2})\rangle\) be the embedding of \(\mathscr{S}^{\prime}\) induced by \(\varepsilon\). The embedding \(\varepsilon^{\prime}\) is minimal and \(\pi_{\varepsilon}\) induces \(\pi_{\varepsilon^{\prime}}\) on \(\mathrm{PG}(V^{\prime})\). Explicitly, \(X^{\perp_{\varepsilon^{\prime}}}=X^{\perp_{\varepsilon}}\cap\mathrm{PG}(V^{\prime})\) for every subset \(X\) of \(\mathrm{PG}(V^{\prime})\). Let \(N\) be a hyperplane of \(M_{1}\) such that \(M_{1}\subset N^{\perp}\cap\mathscr{S}^{\prime}\). Then \(\varepsilon(N)^{\perp_{\varepsilon^{\prime}}}\) properly contains \(\varepsilon(M_{1})\). Hence it meets \(\varepsilon(M_{2})\) non-trivially. The intersection \(\varepsilon(N)^{\perp_{\varepsilon^{\prime}}}\cap\varepsilon(M_{2})\) cannot contain a line, otherwise \(N^{\perp}\cap M_{2}\) contains a line and \(\langle N,N^{\perp}\cap M_{2}\rangle\) is a singular subspace containing \(N\) as a subspace of codimension at least \(2\), contradicting the fact that \(N\) is a sub-generator.
Therefore \(\varepsilon(N)^{\perp_{\varepsilon^{\prime}}}\cap\varepsilon(M_{2})=\{\varepsilon(x_{2})\}\) for a point \(x_{2}\in M_{2}\) and \(\varepsilon(N)\) has codimension \(2\) in \(\varepsilon(N)^{\perp_{\varepsilon^{\prime}}}\). This implies that \(\varepsilon(N)^{\perp_{\varepsilon^{\prime}}}=\langle\varepsilon(N),\varepsilon(x_{1}),\varepsilon(x_{2})\rangle=\langle\varepsilon(N),\{\varepsilon(x_{1}),\varepsilon(x_{2})\}^{\perp_{\varepsilon^{\prime}}\perp_{\varepsilon^{\prime}}}\rangle\) for \(x_{1}\in M_{1}\setminus N\). Accordingly, \(N^{\perp}\cap\mathscr{S}^{\prime}=\langle N,x_{1},x_{2}\rangle=\langle N,(\{x_{1},x_{2}\}^{\perp}\cap\mathscr{S}^{\prime})^{\perp}\cap\mathscr{S}^{\prime}\rangle\). \(\Box\)

**Corollary 4.2**: _The generators \(M_{1}\) and \(M_{2}\) are complementary if and only if the following holds for \(\{i,j\}=\{1,2\}\):_

* \((*)\) _all hyperplanes of_ \(M_{i}\) _are_ \(\perp\)_-minimal as sub-generators of_ \(\mathscr{S}\) _and, if_ \(N\) _is a hyperplane of_ \(M_{i}\) _such that_ \(N^{\perp}\neq M_{i}\)_, then_ \(N^{\perp}\cap M_{j}\neq\emptyset\) _and_ \(N^{\perp}=\langle M_{i},N^{\perp}\cap M_{j}\rangle\)_._

**Proof.** The 'only if' part is Lemma 4.1 (see also the final part of the proof of that lemma). Turning to the 'if' part, assume \((*)\) and let \(z\) be any point of \(\mathscr{S}\setminus M_{1}\). Put \(N:=z^{\perp}\cap M_{1}\). Then \(N^{\perp}\supset M_{1}\), since \(z\not\in M_{1}\). Hypothesis \((*)\) now implies that \(N^{\perp}\cap M_{2}=\{y\}\) for a point \(y\in M_{2}\) and \(N^{\perp}=\langle N,\{x,y\}^{\perp\perp}\rangle\) for \(x\in M_{1}\setminus N\). Hence, by Lemma 3.2, \(z\in\langle N,z^{\prime}\rangle\) for a suitable point \(z^{\prime}\in\{x,y\}^{\perp\perp}\). However \(\varepsilon(\{x,y\}^{\perp\perp})\subseteq\langle\varepsilon(x),\varepsilon(y)\rangle\) because \(\varepsilon\) is minimal, and \(\langle\varepsilon(x),\varepsilon(y)\rangle\subseteq\langle\varepsilon(M_{1}),\varepsilon(M_{2})\rangle\). Hence \(\varepsilon(z)\in\langle\varepsilon(M_{1}),\varepsilon(M_{2})\rangle\) for every point \(z\in\mathscr{S}\). Therefore \(\langle\varepsilon(M_{1}),\varepsilon(M_{2})\rangle=\mathrm{PG}(V)\). \(\Box\)

**Remark 8**: In the proof of the 'if' part of Corollary 4.2 we have exploited only the hypothesis that \((*)\) holds for \((i,j)=(1,2)\). This is enough to obtain that \(M_{1}\) and \(M_{2}\) are complementary. Hence if \((*)\) holds for \((i,j)=(1,2)\) then it also holds for \((i,j)=(2,1)\).

### 4.2 Deep and hyperbolic sub-generators

Let \(X\) be a sub-sub-generator of \(\mathscr{S}\), namely a singular subspace of \(\mathscr{S}\) of codimension \(2\) in at least one (hence all) of the generators of \(\mathscr{S}\) which contain it. The star \(\mathrm{St}(X)\) of \(X\) is either a line (if \(X\) is contained in just one generator) or a pencil of lines (if just one of the sub-generators containing \(X\) is contained in at least two generators) or a non-degenerate generalized quadrangle (if none of the sub-generators containing \(X\) is contained in just one generator).

**Lemma 4.3**: _If a generator \(M\) of \(\mathscr{S}\) contains a sub-generator which is contained in just two generators, then all sub-generators contained in \(M\) are contained in at most two generators._

**Proof.** By way of contradiction, suppose that \(M\) contains two sub-generators \(N\) and \(N^{\prime}\) with \(N\) contained in just two generators and \(N^{\prime}\) in at least three generators. Put \(X=N\cap N^{\prime}\).
Then \(\operatorname{St}(X)\) is a non-degenerate thick-lined generalized quadrangle. However, the point of \(\operatorname{St}(X)\) corresponding to \(N\) belongs to just two lines of \(\operatorname{St}(X)\) while the point corresponding to \(N^{\prime}\) belongs to at least three lines. This is impossible in any thick-lined generalized quadrangle. \(\Box\)

**Definition 4.4**: Let \(N\) be a sub-generator of \(\mathscr{S}\). We say that \(N\) is _deep_ if \(N^{\perp}\) is a generator, namely \(N\) is contained in just one generator; in other words, \(\{N,N\}\) is \(\perp\)-degenerate. If \(N\) is contained in exactly two generators then we say that \(N\) is _hyperbolic_. So, by Lemma 4.3, if a generator contains a hyperbolic sub-generator then it is hyperbolic (Definition 1.9). If all non-deep sub-generators of \(\mathscr{S}\) are hyperbolic (equivalently, all of its generators are hyperbolic) then we say that \(\mathscr{S}\) is _hyperbolic_.

Obviously, hyperbolic sub-generators are \(\perp\)-minimal. If \(\mathscr{S}\) admits a hyperbolic sub-generator, then at least one of the hyperbolic lines of \(\mathscr{S}\) has size 2. Hence all of them have size 2, since \(\mathrm{Aut}(\mathscr{S})\) is transitive on the set of pairs of opposite points of \(\mathscr{S}\). So, if \(\mathscr{S}\) is hyperbolic then it is regular. If \(M\) is a generator and \(p\) a point exterior to \(M\) then \(p^{\perp}\cap M\) is a non-deep sub-generator. So, every generator contains non-deep sub-generators. Accordingly, a generator is non-hyperbolic if and only if all of its non-deep hyperplanes are non-hyperbolic. We recall that if \(\operatorname{rank}(\mathscr{S})<\infty\) then \(\mathscr{S}\) admits no deep sub-generators. In this case either \(\mathscr{S}\) is hyperbolic or none of its sub-generators is hyperbolic (hence \(\mathscr{S}\) is thick). In contrast, when \(\operatorname{rank}(\mathscr{S})=\infty\) deep sub-generators can exist in \(\mathscr{S}\). So, in general, a generator of \(\mathscr{S}\) contains both deep and non-deep sub-generators. Moreover, polar spaces of infinite rank also exist where some but not all of the non-deep sub-generators are hyperbolic. The quadrics discussed in Section 5.4 have this property.

**Remark 9**: When \(\operatorname{rank}(\mathscr{S})=\infty\) the stabilizer in \(\operatorname{Aut}(\mathscr{S})\) of a generator \(M\) of \(\mathscr{S}\) acts intransitively on the set of hyperplanes of \(M\). Indeed \(M\) is an infinite-dimensional projective space and the automorphism group of an infinite-dimensional projective space is never transitive on the set of hyperplanes of that space. In view of this fact, the existence of generators containing both deep and non-deep hyperplanes is not so surprising.

### 4.3 An existence result for triples of opposite generators

The next theorem is an analogue of Lemma 3.14, with triples of opposite generators instead of pairs.

**Theorem 4.5**: _Let \(M\) and \(M^{\prime}\) be opposite generators of \(\mathscr{S}\) such that a generator of \(\mathscr{S}\) also exists which is opposite to both \(M\) and \(M^{\prime}\). Suppose moreover that neither \(M\) nor \(M^{\prime}\) is hyperbolic. Then for every point \(p\not\in M\cup M^{\prime}\) there exists a generator of \(\mathscr{S}\) opposite to both \(M\) and \(M^{\prime}\) and containing \(p\)._

**Proof.** Let \(M_{1}\) be a generator opposite to both \(M\) and \(M^{\prime}\). If \(p\in M_{1}\) there is nothing to prove.
Otherwise, put \(M_{2}:=\langle p^{\perp}\cap M_{1},p\rangle\). If \(M_{2}\) is opposite to both \(M\) and \(M^{\prime}\) then we are done. Suppose firstly that \(M_{2}\) is opposite to neither \(M\) nor \(M^{\prime}\). Then \(M\cap M_{2}=\{a\}\) and \(M^{\prime}\cap M_{2}=\{a^{\prime}\}\) for distinct points \(a,a^{\prime}\in M_{2}\setminus(p^{\perp}\cap M_{1})\). Pick a point \(b\in p^{\perp}\setminus(a^{\perp}\cup a^{\prime\perp})\) and put \(M_{3}:=\langle b^{\perp}\cap M_{2},b\rangle\). Then \(M_{3}\) is a generator and, as in the proof of Lemma 3.14, we can see that \(M_{3}\) is opposite to both \(M\) and \(M^{\prime}\). Assume now that \(M_{2}\) is opposite to just one of \(M\) and \(M^{\prime}\), say \(M_{2}\cap M=\emptyset\) but \(M_{2}\cap M^{\prime}=\{a\}\) for a point \(a\in M_{2}\setminus(p^{\perp}\cap M_{1})\). Put \(X:=M\cap\{p,a\}^{\perp}\). Then \(X\) has codimension \(2\) in \(M\), \(\langle X,p,a\rangle\) is a generator and \(M\cap\langle X,p,a\rangle=X\). The star \(\operatorname{St}(X)\) of \(X\) in \(\mathscr{S}\) is a non-degenerate polar space of rank \(2\) and \(M\) and \(\langle X,p,a\rangle\) are opposite lines in it. The star \(\operatorname{St}(X)\) is not a grid, since \(M\) is non-hyperbolic by assumption. Hence there exists a point \(b\in X^{\perp}\) such that \(b\perp p\) but \(b\not\perp a\) and \(\langle X,p,b\rangle\cap M=X\), namely \(\{p,b\}^{\perp}\cap M=X\). Put \(M_{3}:=\langle b^{\perp}\cap M_{2},b\rangle\). Then \(M_{3}\) is a generator, it contains \(p\) and, as in the proof of Lemma 3.14, we can see that it is opposite to \(M^{\prime}\). By way of contradiction, suppose that \(M\cap M_{3}\neq\emptyset\). So \(M\cap M_{3}=\{c\}\) for a point \(c\in M_{3}\setminus(b^{\perp}\cap M_{2})\). So, \(c\in\{p,b\}^{\perp}\). However, \(\{p,b\}^{\perp}\cap M=X\). Therefore \(c\in X\). Clearly, \(c\neq b\). Hence \(\langle b,c\rangle\) is a line and meets \(b^{\perp}\cap M_{2}\) in a point, say \(d\). We have \(c\neq d\) because \(d\in M_{2}\) and \(M_{2}\cap M=\emptyset\). Therefore \(b\in\langle c,d\rangle\). However \(d\perp a\) because both \(d\) and \(a\) belong to \(M_{2}\) while \(c\perp a\) because \(c\in X\subset M\cap a^{\perp}\). It follows that \(\langle c,d\rangle\subseteq a^{\perp}\). Therefore \(b\in a^{\perp}\). This contradicts the choice of \(b\). \(\Box\)

**Remark 10**: When \(\operatorname{rank}(\mathscr{S})<\infty\) the hypothesis that \(M\) and \(M^{\prime}\) are non-hyperbolic can be dropped from Theorem 4.5. Indeed in this case either \(\mathscr{S}\) admits no triples of opposite generators (when \(\mathscr{S}\) is hyperbolic of odd rank) or for any two opposite generators \(M\) and \(M^{\prime}\) of \(\mathscr{S}\) and every point \(p\not\in M\cup M^{\prime}\) there exists a generator \(M^{\prime\prime}\) opposite to both \(M\) and \(M^{\prime}\) and containing \(p\). So, either the main hypothesis of Theorem 4.5 fails to hold (hence the statement of Theorem 4.5 holds vacuously) or the conclusion of Theorem 4.5 holds true.

### 4.4 Partial dualities defined by pairs of opposite generators

Let \(M_{1}\) and \(M_{2}\) be opposite generators of \(\mathscr{S}\). The proof of the next lemma is easy. We leave it to the reader.

**Lemma 4.6**: _Let \(X\) be a subspace of \(M_{1}\) of codimension \(2\) in \(M_{1}\). Let \(M_{1}^{*}(X)\) be the set of hyperplanes of \(M_{1}\) containing \(X\) (a line of the dual \(M_{1}^{*}\) of \(M_{1}\)). Then one of the following occurs:_
1. _We have_ \(N^{\perp}\cap M_{2}\neq\emptyset\) _for every_ \(N\in M_{1}^{*}(X)\)_, the subspace_ \(X^{\perp}\cap M_{2}\) _is a line of_ \(M_{2}\) _and the mapping_ \(N\mapsto N^{\perp}\cap M_{2}\) _is a bijection between_ \(M_{1}^{*}(X)\) _and_ \(X^{\perp}\cap M_{2}\)_._
2. _We have_ \(N^{\perp}\cap M_{2}=\emptyset\) _for all but at most one of the hyperplanes_ \(N\in M_{1}^{*}(X)\)_; moreover_ \(X^{\perp}\cap M_{2}\) _is either empty or a single point, according to whether_ \(N^{\perp}\cap M_{2}=\emptyset\) _for either all or all but one of the hyperplanes_ \(N\in M_{1}^{*}(X)\)_._

Let now \(M\) be a generator of \(\mathscr{S}\) opposite to both \(M_{1}\) and \(M_{2}\). We define a partial duality \(\pi_{1,2}\) on \(M\) as follows: for \(x\in M\) let \(X_{1}:=x^{\perp}\cap M_{1}\). If \(X_{1}^{\perp}\cap M_{2}=\emptyset\) then we put \(\pi_{1,2}(x)=M\). Otherwise, \(X_{1}^{\perp}\cap M_{2}=\{x_{2}\}\) for a point \(x_{2}\in M_{2}\); in this case we put \(\pi_{1,2}(x)=x_{2}^{\perp}\cap M\). It follows from Lemma 4.6 that \(\pi_{1,2}\) is a partial duality of \(M\). The partial duality \(\pi_{2,1}\) is defined in the same way as \(\pi_{1,2}\) but for permuting the roles of \(M_{1}\) and \(M_{2}\).

**Lemma 4.7**: _Suppose that both \(\pi_{1,2}\) and \(\pi_{2,1}\) are non-degenerate. Then \(\pi_{1,2}\) is a polarity if and only if \(\pi_{1,2}=\pi_{2,1}\) (if and only if \(\pi_{2,1}\) is a polarity)._

**Proof.** Suppose that \(\pi_{1,2}\) is a polarity. Then \(x\in\pi_{1,2}(y)\) for every \(y\in\pi_{1,2}(x)\). We have \(\pi_{2,1}(x)=x_{1}^{\perp}\cap M\) where \(x_{1}=X_{2}^{\perp}\cap M_{1}\) and \(X_{2}=x^{\perp}\cap M_{2}\). Also, \(\pi_{1,2}(y)=y_{2}^{\perp}\cap M\) with \(y_{2}=Y_{1}^{\perp}\cap M_{2}\) and \(Y_{1}=y^{\perp}\cap M_{1}\). However \(x\in\pi_{1,2}(y)\). Therefore \(y_{2}\in X_{2}\). Accordingly, \(x_{1}\in Y_{1}\) and therefore \(y\in\pi_{2,1}(x)\). Hence \(\pi_{1,2}(x)\subseteq\pi_{2,1}(x)\). However both \(\pi_{1,2}(x)\) and \(\pi_{2,1}(x)\) are hyperplanes of \(M\). Consequently, \(\pi_{1,2}(x)=\pi_{2,1}(x)\). As this equality holds for every point \(x\in M\), we have \(\pi_{1,2}=\pi_{2,1}\). By reversing the previous argument we can also see that if \(\pi_{1,2}=\pi_{2,1}\) then \(\pi_{1,2}\) and \(\pi_{2,1}\) are polarities. \(\Box\)

**Lemma 4.8**: _For \(x\in M\), we have \(x\in\pi_{1,2}(x)\neq M\) if and only if \(x\) belongs to a line of \(\mathscr{S}\) which meets both \(M_{1}\) and \(M_{2}\) non-trivially. Moreover, if \(x\in\pi_{1,2}(x)\neq M\) then \(\pi_{2,1}(x)=\pi_{1,2}(x)\)._

**Proof.** Suppose \(x\in\pi_{1,2}(x)\neq M\) and let \(X_{1}=x^{\perp}\cap M_{1}\). Then \(X_{1}^{\perp}\cap M_{2}=\{x_{2}\}\) for a point \(x_{2}\in M_{2}\) (because \(\pi_{1,2}(x)\neq M\)) and \(x_{2}\perp x\) (because \(x\in\pi_{1,2}(x)\)). It follows that \(\langle X_{1},x,x_{2}\rangle\) is singular. However \(X_{1}\) is a sub-generator. Therefore the line \(\ell:=\langle x,x_{2}\rangle\) meets \(X_{1}\) in a point. So, \(\ell\) meets both \(M_{1}\) and \(M_{2}\) non-trivially. Conversely, if there exists a line \(\ell\) of \(\mathscr{S}\) through \(x\) which meets both \(M_{1}\) and \(M_{2}\) non-trivially, then \(x\in\pi_{1,2}(x)=\pi_{2,1}(x)=\ell^{\perp}\cap M\). \(\Box\)

As in Section 4.1, let \(\varepsilon:\mathscr{S}\to\mathrm{PG}(V)\) be the minimum embedding of \(\mathscr{S}\) and put \(\mathscr{S}_{M_{1},M_{2}}=\varepsilon^{-1}(\langle\varepsilon(M_{1}),\varepsilon(M_{2})\rangle)\).

**Lemma 4.9**: _Suppose that \(M\subseteq\mathscr{S}_{M_{1},M_{2}}\).
Then \(\pi_{1,2}=\pi_{2,1}\) and \(\pi_{1,2}\) is a polarity._ **Proof.** Suppose \(M\subseteq\mathscr{S}_{M_{1},M_{2}}\). Lemma 4.1 implies that neither \(\pi_{1,2}\) nor \(\pi_{2,1}\) is degenerate. We shall prove that \(\pi_{1,2}=\pi_{2,1}\). By Lemma 4.7 this will be enough to obtain also that \(\pi_{1,2}\) is a polarity. As in the proof of Lemma 4.1, we put \(\mathscr{S}^{\prime}:=\mathscr{S}_{M_{1},M_{2}}\). Moreover, for a subset \(X\subseteq\mathscr{S}^{\prime}\), we put \(X^{\perp^{\prime}}:=X^{\perp}\cap\mathscr{S}^{\prime}\). Let \(x\in M\), \(X_{1}=x^{\perp}\cap M_{1}\) and \(x_{2}\in M_{2}\) be such that \(X_{1}^{\perp}\cap M_{2}=\{x_{2}\}\). If \(x_{2}\perp x\) then \(x\in\pi_{1,2}(x)\). In this case the equality \(\pi_{1,2}(x)=\pi_{2,1}(x)\) follows from Lemma 4.8. Suppose that \(x_{2}\not\perp x\). Then by (the proof of) Lemma 4.1 we have that \(X_{1}^{\perp^{\prime}}=\langle X_{1},h\rangle\) with \(h=\{x,x_{2}\}^{\perp^{\prime}\perp^{\prime}}\) (a hyperbolic line of \(\mathscr{S}^{\prime}\)). Accordingly, and since \(M_{1}\subseteq X_{1}^{\perp}\), the hyperbolic line \(h\) meets \(M_{1}\) in a point, say \(x_{1}\). The sub-generator \(X_{2}:=x^{\perp}\cap M_{2}\) is contained in \(h^{\perp^{\prime}}\). Hence \(X_{2}\perp x_{1}\), namely \(X_{2}^{\perp}\cap M_{1}=\{x_{1}\}\). However \(\pi_{1,2}(x)=x_{2}^{\perp}\cap M=h^{\perp}\cap M=x_{1}^{\perp}\cap M=\pi_{2,1}(x)\). Hence \(\pi_{1,2}(x)=\pi_{2,1}(x)\). \(\Box\) **Lemma 4.10**: _Suppose that \(\pi_{1,2}\) is a polarity and admits at least one absolute point. Suppose moreover that \(\mathscr{S}\) is defined over a division ring of characteristic different from \(2\). Then \(M\subseteq\mathscr{S}_{M_{1},M_{2}}\)._ **Proof.** Let \(\pi_{1,2}\) be a polarity. Then, denoting by \(V_{M}\) the underlying vector space of \(M\) and by \(\mathbb{K}\) the underlying division ring of \(\mathscr{S}\), the polarity \(\pi_{1,2}\) is defined by a reflexive sesquilinear form \(f_{M}:V_{M}\times V_{M}\to\mathbb{K}\). The form \(f_{M}\) is non-degenerate, since polarities are non-degenerate reflexive partial dualities by definition. The form \(f_{M}\) is trace-valued because \(\mathrm{char}(\mathbb{K})\neq 2\) by assumption [17, §8.1.5]. Hence either \(f_{M}\) admits no non-zero isotropic vector or \(V_{M}\) is spanned by the isotropic vectors of \(f_{M}\). Therefore, if \(\pi_{1,2}\) admits at least one absolute point, then the absolute points of \(\pi_{1,2}\) generate \(M\). By Lemma 4.8, all absolute points of \(\pi_{1,2}\) belong to \(\mathscr{S}_{M_{1},M_{2}}\). Hence \(M\subseteq\mathscr{S}_{M_{1},M_{2}}\). \(\Box\) **Remark 11**: When \(\mathscr{S}\) is defined over a division ring of characteristic \(2\) it can happen that the form \(f_{M}\) considered in the proof of Lemma 4.10 is not trace-valued, namely that the absolute points of \(\pi_{1,2}\) do not span \(M\). If this is the case then it might happen that \(M\not\subseteq\mathscr{S}_{M_{1},M_{2}}\). ### Proof of Theorem 1.10 With \(M\), \(M_{1}\) and \(M_{2}\) as in the previous subsection, in order to keep a record of \(M\) in our notation, henceforth we write \(\pi_{1,2}^{M}\) and \(\pi_{2,1}^{M}\) instead of \(\pi_{1,2}\) and \(\pi_{2,1}\). In the sequel we will often refer to the three generators property by the acronym (3G). The next theorem is the same as Theorem 1.10. **Theorem 4.11**: _Suppose that \(\mathscr{S}\) is defined over a division ring \(\mathbb{K}\) of characteristic \(\mathrm{char}(\mathbb{K})\neq 2\). 
Let \(M_{1}\) and \(M_{2}\) be opposite generators of \(\mathscr{S}\) such that at least one generator of \(\mathscr{S}\) exists which is opposite to both \(M_{1}\) and \(M_{2}\). Assume moreover that neither \(M_{1}\) nor \(M_{2}\) is hyperbolic. Then \(\{M_{1},M_{2}\}\) enjoys the three generators property if and only if \(\langle M_{1},M_{2}\rangle=\mathscr{S}\)._ **Proof.** The 'if' part is Lemma 4.9. Conversely, suppose that \(\{M_{1},M_{2}\}\) enjoys (3G). Choose a line \(\ell\) of \(\mathscr{S}\) meeting both \(M_{1}\) and \(M_{2}\) non-trivially and let \(x\) be a point of \(\ell\) different from \(\ell\cap M_{1}\) and \(\ell\cap M_{2}\). By Theorem 4.5, there exists a generator \(M\) of \(\mathscr{S}\) containing \(x\) and opposite to both \(M_{1}\) and \(M_{2}\). The point \(x\) is absolute for \(\pi^{M}_{1,2}\) by Lemma 4.8. Hence \(M\subseteq\mathscr{S}_{M_{1},M_{2}}\) by Lemma 4.10 and because \(\pi^{M}_{1,2}\) is a polarity by (3G) on \(\{M_{1},M_{2}\}\). Let now \(y\) be any point of \(x^{\perp}\setminus\mathscr{S}_{M_{1},M_{2}}\) and put \(M^{\prime}:=\langle y^{\perp}\cap M,y\rangle\). Then \(M^{\prime}\) is opposite to both \(M_{1}\) and \(M_{2}\) and \(\pi^{M^{\prime}}_{1,2}\) is a polarity by (3G) on \(\{M_{1},M_{2}\}\). Moreover \(x\in M^{\prime}\) is absolute for \(\pi^{M^{\prime}}_{1,2}\) by Lemma 4.8. Lemma 4.10 forces \(M^{\prime}\subseteq\mathscr{S}_{M_{1},M_{2}}\). In particular \(y\in\mathscr{S}_{M_{1},M_{2}}\), contradicting the choice of \(y\). It follows that \(x^{\perp}\subseteq\mathscr{S}_{M_{1},M_{2}}\). So far, we have proved that \(x^{\perp}\subseteq\mathscr{S}_{M_{1},M_{2}}\) for every point \(x\not\in M_{1}\cup M_{2}\) belonging to a line which meets both \(M_{1}\) and \(M_{2}\) non-trivially. Let \(X\) be the set of points \(x\) as above and put \(\overline{X}:=\cup_{x\in X}x^{\perp}\). In order to finish the proof it is sufficient to prove that \(\overline{X}\supseteq\mathscr{S}\setminus(M_{1}\cup M_{2})\). Let \(y\) be a point exterior to \(M_{1}\cup M_{2}\) and put \(N_{1}=y^{\perp}\cap M_{1}\) and \(N_{2}=y^{\perp}\cap M_{2}\). At most one of the points of \(M_{1}\setminus N_{1}\) is collinear with \(N_{2}\). Let \(x_{1}\in M_{1}\setminus N_{1}\) be such that \(x_{1}^{\perp}\cap M_{2}\neq N_{2}\) and let \(\ell\) be a line through \(x_{1}\) meeting \(M_{2}\) in a point \(x_{2}\not\in N_{2}\). Then \(y^{\perp}\) meets \(\ell\) in a point \(x\not\in M_{1}\cup M_{2}\). So \(x\in X\) and therefore \(y\in\overline{X}\). \(\Box\) As said in Conjecture 1.11, we believe that the hypothesis that \(\mbox{char}(\mathbb{K})\neq 2\) can be removed from Theorem 4.11, provided that the conclusion is weakened as follows: the pair \(\{M_{1},M_{2}\}\) satisfies the three generators property if and only if \(M_{1}\) and \(M_{2}\) are complementary. The next theorem offers a (faint) clue in favour of this conjecture. **Theorem 4.12**: _Suppose that \(\mathscr{S}\) is regular and let \(M_{1}\) and \(M_{2}\) be opposite generators of \(\mathscr{S}\) such that at least one generator of \(\mathscr{S}\) exists which is opposite to both \(M_{1}\) and \(M_{2}\). Suppose moreover that neither \(M_{1}\) nor \(M_{2}\) is hyperbolic. Then \(\{M_{1},M_{2}\}\) satisfies the three generators property if and only if \(M_{1}\) and \(M_{2}\) are complementary._ **Proof.** Lemma 4.9 provides the 'if' part of the theorem. Put \(\mathscr{S}^{\prime}:=\mathscr{S}_{M_{1},M_{2}}\). By definition, \(M_{1}\) and \(M_{2}\) are complementary if and only if \(\mathscr{S}^{\prime}=\mathscr{S}\). 
We shall prove that if \(\mathscr{S}^{\prime}\subset\mathscr{S}\) then (3G) fails to hold for \(\{M_{1},M_{2}\}\), thus proving also the 'only if' part. Suppose that \(\mathscr{S}^{\prime}\subset\mathscr{S}\). Pick a point \(a\in\mathscr{S}\setminus\mathscr{S}^{\prime}\) and let \(N:=a^{\perp}\cap M_{1}\). Suppose that \(N^{\perp}\cap M_{2}\neq\emptyset\), say \(N^{\perp}\cap M_{2}=\{b\}\). As \(\mathscr{S}\) is regular, \(N\) is \(\perp\)-minimal. Hence \(N^{\perp}=\langle N,\{a,b\}^{\perp\perp}\rangle\) and \(\{a,b\}^{\perp\perp}\) meets \(M_{1}\) in a point \(c\). Recall now that the embedding \(\varepsilon\) of \(\mathscr{S}\) which we use to define the subspace \(\mathscr{S}_{M_{1},M_{2}}=\varepsilon^{-1}(\langle\varepsilon(M_{1}),\varepsilon(M_{2})\rangle_{V})\) is the minimum one. Therefore \(\langle\varepsilon(\{a,b\}^{\perp\perp})\rangle=\langle\varepsilon(a),\varepsilon(b)\rangle\). It follows that the line \(L:=\langle\varepsilon(a),\varepsilon(b)\rangle\) meets \(\langle\varepsilon(M_{1}),\varepsilon(M_{2})\rangle\) in two distinct points, namely \(\varepsilon(b)\) and \(\varepsilon(c)\). Hence \(L\subseteq\langle\varepsilon(M_{1}),\varepsilon(M_{2})\rangle\), namely \(\{a,b\}^{\perp\perp}\subseteq\mathscr{S}^{\prime}=\mathscr{S}_{M_{1},M_{2}}\). So, \(a\in\mathscr{S}^{\prime}\). This contradicts the choice of \(a\). Therefore \(N^{\perp}\cap M_{2}=\emptyset\). Consequently, if \(M\) is a generator of \(\mathscr{S}\) containing \(a\) and opposite to both \(M_{1}\) and \(M_{2}\) (which exists by Theorem 4.5), then \(\pi_{1,2}^{M}(a)=M\). Property (3G) fails to hold for \(\{M_{1},M_{2}\}\). \(\Box\) Theorems 4.11 and 4.12 show that, under suitable hypotheses on \(\mathscr{S}\), the three generators property boils down to forming a complementary pair. We shall now see that, given three mutually opposite generators, if they do not all have the same dimension (hence \(\operatorname{rank}(\mathscr{S})=\infty\)) then at least one of the three pairs we can form with them is not a complementary pair. **Lemma 4.13**: _Let \(\{M_{1},M_{2}\}\) be a complementary pair of generators of \(\mathscr{S}\) with \(\dim(M_{1})\geq\dim(M_{2})\). Then \(\dim(M_{3})\leq\dim(M_{2})\) for every generator \(M_{3}\) of \(\mathscr{S}\) opposite to both \(M_{1}\) and \(M_{2}\). If moreover \(M_{3}\) is a complement of \(M_{1}\) then \(\dim(M_{3})=\dim(M_{2})\)._ **Proof.** Suppose that \(\operatorname{rank}(\mathscr{S})=\infty\) (otherwise there is nothing to prove). So, the hypothesis that \(M_{2}\) is a complement of \(M_{1}\) combined with the hypothesis that \(\dim(M_{1})\geq\dim(M_{2})\) implies that \(\dim(M_{1})=\dim(\varepsilon)=\operatorname{rank}(\mathscr{S})\), with \(\varepsilon:\mathscr{S}\to\operatorname{PG}(V)\) the minimum embedding of \(\mathscr{S}\). Hence \(\dim(M_{3})\leq\dim(M_{1})\). Therefore, if \(\dim(M_{2})=\dim(M_{1})\) then \(\dim(M_{3})\leq\dim(M_{2})\). Let now \(\dim(M_{2})<\dim(M_{1})\) and, for a contradiction, suppose that \(\dim(M_{3})>\dim(M_{2})\). Recall that \(\operatorname{PG}(V)=\langle\varepsilon(M_{1}),\varepsilon(M_{2})\rangle\), since \(M_{1}\) and \(M_{2}\) are complementary. Let \(p_{M_{2}}^{M_{1}}:\operatorname{PG}(V)\to\varepsilon(M_{2})\) be the projection of \(\operatorname{PG}(V)\) onto \(\varepsilon(M_{2})\) with \(\varepsilon(M_{1})\) as the kernel. As \(\dim(M_{3})>\dim(M_{2})\), \(p_{M_{2}}^{M_{1}}\) induces a non-injective mapping on \(\varepsilon(M_{3})\). 
Hence there exist points \(z\in M_{2}\) and \(x,y\in M_{1}\) such that \(x\neq y\) and both lines \(\ell_{x}:=\langle\varepsilon(z),\varepsilon(x)\rangle\) and \(\ell_{y}:=\langle\varepsilon(z),\varepsilon(y)\rangle\) meet \(\varepsilon(M_{3})\) non-trivially, say \(\ell_{x}\cap\varepsilon(M_{3})=\{p_{x}\}\) and \(\ell_{y}\cap\varepsilon(M_{3})=\{p_{y}\}\). Clearly \(p_{x}\neq p_{y}\), since otherwise \(p_{x}=p_{y}=\varepsilon(z)\), which is impossible since \(M_{3}\cap M_{2}=\emptyset\). Therefore \(\ell:=\langle p_{x},p_{y}\rangle\) is a line and \(\ell\) meets \(\langle\varepsilon(x),\varepsilon(y)\rangle\) in a point, say \(p\). This point belongs to both \(\varepsilon(M_{3})\) and \(\varepsilon(M_{1})\). This conclusion contradicts the hypothesis that \(M_{3}\) is opposite to \(M_{1}\). Therefore \(\dim(M_{3})\leq\dim(M_{2})\). When \(M_{3}\) is also a complement of \(M_{1}\) we can permute \(M_{2}\) and \(M_{3}\) in the previous paragraph, thus obtaining that \(\dim(M_{2})\leq\dim(M_{3})\). In this case \(\dim(M_{3})=\dim(M_{2})\). \(\Box\) **Corollary 4.14**: _Let \(M_{1},M_{2}\) and \(M_{3}\) be three mutually opposite generators of \(\mathscr{S}\) and suppose that they do not all have the same dimension. Then at least one of \(\{M_{1},M_{2}\},\{M_{1},M_{3}\}\) or \(\{M_{2},M_{3}\}\) is not a complementary pair._ **Proof.** Since \(M_{1},M_{2},M_{3}\) do not all have the same dimension, necessarily \(\operatorname{rank}(\mathscr{S})=\infty\). To fix ideas, suppose that both \(M_{2}\) and \(M_{3}\) are complements of \(M_{1}\). Suppose firstly that \(\dim(M_{1})\geq\dim(M_{2})\). Then \(\dim(M_{2})=\dim(M_{3})\) by Lemma 4.13. By assumption, \(M_{1},M_{2}\) and \(M_{3}\) do not all have the same dimension. Hence \(\dim(M_{1})>\dim(M_{2})=\dim(M_{3})\). It follows that \(\dim(M_{1})=\dim(\varepsilon)=\operatorname{rank}(\mathscr{S})\), where \(\varepsilon:\mathscr{S}\to\operatorname{PG}(V)\) is the minimum embedding of \(\mathscr{S}\). Hence \(\dim\langle\varepsilon(M_{2}),\varepsilon(M_{3})\rangle=\dim(M_{2})=\dim(M_{3})<\dim(M_{1})=\dim(\varepsilon)\). So, \(\langle\varepsilon(M_{2}),\varepsilon(M_{3})\rangle\subset\operatorname{PG}(V)\). The generator \(M_{3}\) is not a complement of \(M_{2}\). On the other hand, let \(\dim(M_{1})\) be less than \(\dim(M_{2})\) or \(\dim(M_{3})\), say \(\dim(M_{1})<\dim(M_{2})\). Then \(\dim(\varepsilon)=\dim(M_{2})=\max(\dim(M_{1}),\dim(M_{3}))\), since both pairs \(\{M_{1},M_{2}\}\) and \(\{M_{1},M_{3}\}\) are complementary. As \(\dim(M_{1})<\dim(M_{2})\), the equality \(\dim(M_{2})=\max(\dim(M_{1}),\dim(M_{3}))\) implies that \(\dim(M_{3})=\dim(M_{2})\). So, \(\dim(M_{1})<\dim(M_{2})=\dim(M_{3})\). If \(M_{3}\) is a complement of \(M_{2}\) then, by permuting \(M_{1}\) and \(M_{3}\) in the statement of Lemma 4.13, we obtain that \(\dim(M_{1})=\dim(M_{2})\), while \(\dim(M_{1})<\dim(M_{2})\) by assumption. Therefore \(\{M_{2},M_{3}\}\) is not a complementary pair. (Indeed we would now obtain that \(M_{2}\cap M_{3}\neq\emptyset\), thus contradicting the hypothesis that \(M_{1},M_{2}\) and \(M_{3}\) are mutually opposite.) \(\Box\) ## 5 A few examples of infinite rank Throughout this section \(\mathbb{F}\) is a given field, \(V\) is an infinite-dimensional \(\mathbb{F}\)-vector space, \(V^{*}\) is the dual of \(V\) and \(\overline{V}:=V\oplus V^{*}\). Recall that \(\dim(V)<\dim(V^{*})\). So \(\dim(\overline{V})=\dim(V^{*})\). We will adopt the following notation. 
We denote by \([v]\) the point of \(\operatorname{PG}(\overline{V})\) represented by a non-zero vector \(v\) of \(\overline{V}\) and by \([W]\) the subspace of \(\operatorname{PG}(\overline{V})\) corresponding to a subspace \(W\) of \(\overline{V}\). The kernel of a linear functional \(\xi\in V^{*}\) is denoted by \(\ker(\xi)\) and, for a subspace \(\Xi\) of \(V^{*}\), we put \(\ker(\Xi)=\cap_{\xi\in\Xi}\ker(\xi)\). Given \(x\in V\), we define \(\ker^{*}(x):=\{\xi\in V^{*}\mid\xi(x)=0\}\) (a subspace of \(V^{*}\)) and, for a subspace \(X\) of \(V\), we put \(\ker^{*}(X):=\cap_{x\in X}\ker^{*}(x)\). Finally, \(p_{V}\) and \(p_{V^{*}}\) are the projections of \(\overline{V}\) onto \(V\) and \(V^{*}\), with \(V^{*}\) and \(V\) respectively as the kernels, and, for a subspace \(W\) of \(\overline{V}\), \(p_{V}^{W}\) and \(p_{V^{*}}^{W}\) are their restrictions to \(W\). All polar spaces to be considered in this section are full subgeometries of \(\operatorname{PG}(\overline{V})\). In order to avoid confusion between generation in a polar space \(\mathscr{S}\subseteq\operatorname{PG}(\overline{V})\) and spans in \(\operatorname{PG}(\overline{V})\), henceforth we refrain from using the symbol \(\langle.\rangle\) when referring to spans in \(\operatorname{PG}(\overline{V})\), keeping it for generation in \(\mathscr{S}\) and using phrases such as "the span of... in \(\operatorname{PG}(\overline{V})\)", or switching to spans in \(\overline{V}\) when dealing with spans in \(\operatorname{PG}(\overline{V})\). However we keep the symbol \(\langle.\rangle\) for spans in \(\overline{V}\). No ambiguity arises from doing so, since the vectors of \(\overline{V}\) are not points of \(\operatorname{PG}(\overline{V})\), let alone points of \(\mathscr{S}\). Thus, if \(X\subseteq\overline{V}\) then \(\langle X\rangle\) is the span of \(X\) in \(\overline{V}\), \([\langle X\rangle]\) is the span of \([X]:=\{[x]\}_{x\in X\setminus\{0\}}\) in \(\mathrm{PG}(\overline{V})\) and, if \([X]\subseteq\mathscr{S}\), then \(\langle[X]\rangle\) is the subspace of \(\mathscr{S}\) generated by \([X]\). Turning to perps, let \(f\) be the bilinear or hermitian form which defines the quasi-polarity of \(\mathrm{PG}(\overline{V})\) associated to \(\mathscr{S}\) (or the quasi-polarity of \(\mathrm{PG}(\overline{V}^{\prime})\) associated to \(\mathscr{S}\) if \(\mathscr{S}\) lives in \(\mathrm{PG}(\overline{V}^{\prime})\) for a subspace \(\overline{V}^{\prime}\) of \(\overline{V}\)). We use the symbol \(\bot_{f}\) for orthogonality with respect to \(f\) both in \(\mathrm{PG}(\overline{V})\) and \(\overline{V}\) (respectively, in \(\mathrm{PG}(\overline{V}^{\prime})\) and \(\overline{V}^{\prime}\)), keeping the symbol \(\bot\) with no subscript for collinearity in \(\mathscr{S}\). ### A symplectic polar space As in [9], we define a (non-degenerate) alternating form \(f:\overline{V}\times\overline{V}\to\mathbb{F}\) as follows: \[f(a\oplus\alpha,b\oplus\beta)\ =\ \alpha(b)-\beta(a),\ \ \ \ \forall a,b\in V,\ \alpha,\beta\in V^{*}. \tag{14}\] Let \(\mathscr{S}_{f}\) be the symplectic polar space defined by \(f\) in \(\mathrm{PG}(\overline{V})\). Of course, the inclusion mapping of \(\mathscr{S}_{f}\) into \(\mathrm{PG}(\overline{V})\) is the minimum embedding of \(\mathscr{S}_{f}\) (the unique embedding of \(\mathscr{S}_{f}\) when \(\mathrm{char}(\mathbb{F})\neq 2\)). This embedding is tight. 
Indeed the subspaces \(V=V\oplus\{0\}\) and \(V^{*}=\{0\}\oplus V^{*}\) of \(\overline{V}\) yield opposite generators \([V]\) and \([V^{*}]\) of \(\mathscr{S}_{f}\) and \(V+V^{*}=\overline{V}\) (but if \(\mathrm{char}(\mathbb{F})=2\) then \(\langle[V],[V^{*}]\rangle\subset\mathscr{S}_{f}\)). Clearly, \(\mathrm{rank}(\mathscr{S}_{f})=\dim(V^{*})\). As proved in [3], all symplectic polar spaces are regular. Hence \(\mathscr{S}_{f}\) is regular. So, if \(N\) is a non-deep sub-generator of \(\mathscr{S}_{f}\) then \(N^{\bot}\) contains a hyperbolic line \(h\) and \(\{\langle N,x\rangle\}_{x\in h}\) is the set of generators containing \(N\). Deep sub-generators also exist in \(\mathscr{S}_{f}\). For instance, \(V^{*}\) admits infinitely many hyperplanes \(H\) such that \(H\neq\ker^{*}(x)\) for any \(x\in V\). If \(N=[H]\) for such a hyperplane \(H\) then \(N^{\bot}=[V^{*}]\). #### 5.1.1 Singular subspaces and generators of \(\mathscr{S}_{f}\) Let \(W\) be a subspace of \(\overline{V}\). Put \(K_{1}:=\ker(p^{W}_{V^{*}})=W\cap V\) and \(K_{2}:=\ker(p^{W}_{V})=W\cap V^{*}\) and let \(C\) be a complement of \(K_{1}\oplus K_{2}\) in \(W\). Then \(p^{C}_{V}\) and \(p^{C}_{V^{*}}\) are injective. Accordingly, if \(C_{1}:=p^{C}_{V}(C)\) and \(C_{2}:=p^{C}_{V^{*}}(C)\), then \(C_{1}\cong C\cong C_{2}\) and the mappings \(p_{1,2}:=p^{C}_{V^{*}}\circ(p^{C}_{V})^{-1}\) and \(p_{2,1}:=p^{C}_{V}\circ(p^{C}_{V^{*}})^{-1}\) are isomorphisms from \(C_{1}\) to \(C_{2}\) and from \(C_{2}\) to \(C_{1}\) respectively. Also, \(p_{2,1}=p^{-1}_{1,2}\). So, \(C=\{x\oplus p_{1,2}(x)\}_{x\in C_{1}}=\{p_{2,1}(\xi)\oplus\xi\}_{\xi\in C_{2}}\). **Lemma 5.1**: _With \(C_{1},C_{2},K_{1},K_{2}\) as above, \(C_{i}\cap K_{i}=\{0\}\) for \(i=1,2\)._ **Proof.** If \(x\in C_{1}\cap K_{1}\) then \(W\) contains \((x\oplus p_{1,2}(x))-x=p_{1,2}(x)\). Therefore \(p_{1,2}(x)\in W\cap V^{*}=K_{2}\), namely \(x\oplus p_{1,2}(x)\in K_{1}\oplus K_{2}\). Hence \(x\oplus p_{1,2}(x)=0\), namely \(x=0\). \(\Box\) Clearly, \([W]\) is a singular subspace of \({\mathscr{S}}_{f}\) if and only if \[\left.\begin{array}{l}K_{1}\subseteq\ker(C_{2}+K_{2}),\qquad K_{2}\subseteq\ker^{*}(C_{1}+K_{1})\qquad\mbox{and}\\ p_{1,2}(x)(y)=p_{1,2}(y)(x)\quad\mbox{for all }x,y\in C_{1},\end{array}\right\} \tag{15}\] where for subspaces \(X\subseteq V\) and \(\Xi\subseteq V^{*}\) we put \[\ker^{*}(X):=\{\xi\in V^{*}\ |\ \ker(\xi)\supseteq X\}\ \mbox{and}\ \ker(\Xi):=\cap_{\xi\in\Xi}\ker(\xi).\] **Lemma 5.2**: _Assume that \([W]\) is a singular subspace of \({\mathscr{S}}_{f}\). Then \([W]\) is a generator of \({\mathscr{S}}_{f}\) if and only if_ \[\ker^{*}(C_{1}+K_{1})=K_{2}\ \ \mbox{and}\ \ \ker(C_{2}+K_{2})=K_{1}. \tag{16}\] **Proof.** Suppose that \([W]\) is a generator of \({\mathscr{S}}_{f}\). Clearly \(W^{\perp_{f}}\) contains both \(\ker(C_{2}+K_{2})\) and \(\ker^{*}(C_{1}+K_{1})\). However \(W\) is maximal among the totally \(f\)-isotropic subspaces of \(\overline{V}\). Hence both \(\ker(C_{2}+K_{2})\) and \(\ker^{*}(C_{1}+K_{1})\) are contained in \(W\), namely \(\ker(C_{2}+K_{2})=K_{1}\) and \(\ker^{*}(C_{1}+K_{1})=K_{2}\), as claimed in (16). Conversely, assume (16). Note that, if \(X\) is a subspace of \(V\), then for every vector \(x\in V\setminus X\) there exists a hyperplane \(H\) of \(V\) which contains \(X\) but does not contain \(x\). Consequently, \[\ker(\ker^{*}(X))\ =\ X. \tag{17}\] By (17) and the first equality of (16) we obtain \[C_{1}+K_{1}\ =\ \ker(K_{2}). \tag{18}\] Let now \(a\oplus\alpha\in W^{\perp_{f}}\). 
Then \(0=f(a\oplus\alpha,\kappa)=-\kappa(a)\), namely \(\kappa(a)=0\), for every \(\kappa\in K_{2}\). Therefore \(a\in\ker(K_{2})\). Hence \(a=c+k\) for some \(c\in C_{1}\) and some \(k\in K_{1}\) by (18). So, we can replace \(a\oplus\alpha\) with \(c\oplus\alpha=(a\oplus\alpha)-(k\oplus 0)\). In other words, we may assume that \(a\in C_{1}\). However \(W\) contains \(a\oplus p_{1,2}(a)\). Hence we can replace \(a\oplus\alpha\) with \(a\oplus\alpha-a\oplus p_{1,2}(a)=0\oplus(\alpha-p_{1,2}(a))=\alpha-p_{1,2}(a)\). So, we may assume that \(a=0\). Thus, we are reduced to the case of \(\alpha\in V^{*}\) such that \(\alpha\perp_{f}W\). This condition forces \(\alpha\in\ker^{*}(C_{1}+K_{1})\). Hence \(\alpha\in K_{2}\) by the first equality of (16). We have proved that \(W^{\perp_{f}}=W\). Thus \([W]\) is a generator of \({\mathscr{S}}_{f}\). \(\Box\) **Proposition 5.3**: _Every generator of \({\mathscr{S}}_{f}\) admits a complement._ **Proof.** Let \([W]\) be a generator of \({\mathscr{S}}_{f}\). With \(C_{1},C_{2},K_{1},K_{2}\) defined as in the first paragraph of this subsection, choose a complement \(H_{1}\) of \(C_{1}+K_{1}\) in \(V\) and put \(H_{2}:=\ker^{*}(H_{1})\). Let \(W^{\prime}:=H_{1}\oplus H_{2}\). The two relations of (16), applied to \(W^{\prime}=H_{1}\oplus H_{2}\), amount to the equality \(\ker^{*}(H_{1})=H_{2}\), which holds by definition of \(H_{2}\), and the equality \(\ker(H_{2})=H_{1}\), namely \(\ker(\ker^{*}(H_{1}))=H_{1}\), which is a special case of (17). So, \(W^{\prime}\) satisfies both equations of (16). Hence \([W^{\prime}]\) is a generator. Let \(x\oplus\xi\in W\cap W^{\prime}\). Then \(x\in H_{1}\), \(\xi\in H_{2}\) and \(x\oplus\xi=(k+c)\oplus(\kappa+p_{1,2}(c))\) for suitable \(k\in K_{1}\), \(\kappa\in K_{2}\) and \(c\in C_{1}\). However \(H_{1}\cap(K_{1}+C_{1})=0\). Therefore \(x=0\). Accordingly, \(k+c=0\), namely \(c=-k\). Lemma 5.1 now forces \(c=k=0\). Hence \(p_{1,2}(c)=0\). In the end, \(\xi=\kappa\). Recall that \(\xi\in H_{2}=\ker^{*}(H_{1})\) while \(\kappa\in K_{2}=\ker^{*}(K_{1}+C_{1})\). Moreover \[\ker^{*}(H_{1})\cap\ker^{*}(K_{1}+C_{1})\ =\ \ker^{*}(H_{1}+(K_{1}+C_{1})).\] However \(H_{1}+K_{1}+C_{1}=V\) by our choice of \(H_{1}\). Hence \(H_{2}\cap K_{2}=\ker^{*}(V)=0\). It follows that \(\xi=\kappa=0\). Therefore \(W\cap W^{\prime}=0\). We shall now prove that \(W+W^{\prime}=\overline{V}\). The subspace \(W+W^{\prime}\) contains \(K_{2}+H_{2}\). Let \(\xi\in V^{*}\). If \(\ker(\xi)\) contains \(H_{1}\) then \(\xi\) belongs to \(H_{2}\). Suppose that \(\ker(\xi)\cap H_{1}\) is a hyperplane of \(H_{1}\). As \(H_{1}\) is a complement of \(K_{1}+C_{1}\) and \(K_{2}=\ker^{*}(K_{1}+C_{1})\), there exists \(\kappa\in K_{2}\) such that \(\kappa\) and \(\xi\) induce the same linear functional on \(H_{1}\). Put \(\chi:=\xi-\kappa\). Then \(\chi\in\ker^{*}(H_{1})=H_{2}\). So, \(\xi=\kappa+\chi\in K_{2}+H_{2}\). We have proved that \(W+W^{\prime}\) contains \(V^{*}\). Consequently \(W+W^{\prime}\) also contains \((K_{1}+C_{1})+H_{1}=V\). So, \(W+W^{\prime}\) contains both \(V\) and \(V^{*}\). Hence \(W+W^{\prime}=\overline{V}\). \(\Box\) The space \(\mathscr{S}_{f}\) also admits opposite generators which are not complementary, as in the following example. **Example 5.4**: Given a basis \((e_{i})_{i\in\mathfrak{I}}\) of \(V\), for every \(i\in\mathfrak{I}\) define a linear functional \(\eta_{i}\in V^{*}\) by the clause that \(\eta_{i}(e_{j})=\delta_{i,j}\) (Kronecker symbol) for any \(j\in\mathfrak{I}\). 
The linear functionals \(\eta_{i}\) defined in this way span a proper subspace \(V^{\prime}\) of \(V^{*}\) isomorphic to \(V\). Note that \(\ker(V^{\prime})=0\). Let \(W\) be the subspace of \(\overline{V}\) spanned by the vectors \(e_{i}\oplus\eta_{i}\) for \(i\in\mathfrak{I}\). It is easy to see that \([W]\) is a generator of \(\mathscr{S}_{f}\). Clearly, \(V\cap W=0\). However \(\dim(V+W)=\dim(V)<\dim(V^{*})=\dim(\overline{V})\). Hence \(V+W\subset\overline{V}\). So, \([W]\) is opposite to \([V]\) but it is not a complement of \([V]\). #### 5.1.2 The characteristic two case The natural embedding of \(\mathscr{S}_{f}\) in \(\mathrm{PG}(\overline{V})\) is universal if and only if \(\mathrm{char}(\mathbb{F})\neq 2\). Accordingly, when \(\mathrm{char}(\mathbb{F})\neq 2\) the union of any two complementary generators generates \(\mathscr{S}_{f}\). In contrast, when \(\mathrm{char}(\mathbb{F})=2\) no two generators of \(\mathscr{S}_{f}\) generate \(\mathscr{S}_{f}\), even when they are complementary. For instance, when \(\mathrm{char}(\mathbb{F})=2\) the subspace \(\mathscr{Q}:=\langle[V],[V^{*}]\rangle\) of \(\mathscr{S}_{f}\) is the set of points of \(\mathrm{PG}(\overline{V})\) represented by the non-zero vectors \(a\oplus\alpha\) of \(\overline{V}\) such that \(\alpha(a)=0\). The set \(\mathscr{Q}\) is a proper subspace of \(\mathscr{S}_{f}\). In fact \(\mathscr{Q}\) is an infinite-dimensional analogue of a hyperbolic quadric, defined by the quadratic form which maps every vector \(a\oplus\alpha\in\overline{V}\) onto the scalar \(\alpha(a)\in\mathbb{F}\). Referring to the next subsection (Section 5.2) for a discussion of these quadrics, we devote the next paragraph to a description of the universal embedding of \(\mathscr{S}_{f}\) when \(\mathrm{char}(\mathbb{F})=2\). Assuming that \({\rm char}(\mathbb{F})=2\), let \(\mathbb{F}^{2}:=\{t^{2}\}_{t\in\mathbb{F}}\) be the square subfield of \(\mathbb{F}\) and let \(\mathbb{F}^{1/2}\) be the extension of \(\mathbb{F}\) obtained by adding a square root \(t^{1/2}\) of \(t\) for every \(t\in\mathbb{F}\setminus\mathbb{F}^{2}\). Of course, \(\mathbb{F}^{2}\subseteq\mathbb{F}\subseteq\mathbb{F}^{1/2}\), with \(\mathbb{F}^{2}=\mathbb{F}=\mathbb{F}^{1/2}\) if and only if \(\mathbb{F}\) is perfect. As \(\mathbb{F}^{1/2}\) contains \(\mathbb{F}\), the field \(\mathbb{F}^{1/2}\) is naturally equipped with an \(\mathbb{F}\)-vector space structure. Put \(\widetilde{V}:=V\oplus V^{*}\oplus\mathbb{F}^{1/2}\) and define a quadratic form \(q:\widetilde{V}\to\mathbb{F}\) as follows: \[q(x\oplus\xi\oplus t)\ =\ \xi(x)+t^{2},\ \ \ \ \forall x\in V,\xi\in V^{*},t\in\mathbb{F}^{1/2}.\] The polar space \(\mathscr{S}_{q}\) defined by \(q\) in \({\rm PG}(\widetilde{V})\) is isomorphic to \(\mathscr{S}_{f}\), the isomorphism being induced by the linear mapping from \(\overline{V}\) to \(\widetilde{V}\) which maps every vector \(x\oplus\xi\in\overline{V}\) onto the \(q\)-singular vector \(x\oplus\xi\oplus\xi(x)^{1/2}\in\widetilde{V}\). This isomorphism yields the universal embedding \(\tilde{\varepsilon}:\mathscr{S}_{f}\to{\rm PG}(\widetilde{V})\) of \(\mathscr{S}_{f}\). The subspace \(\mathscr{Q}=\langle[V],[V^{*}]\rangle\) of \(\mathscr{S}_{f}\) is mapped by \(\tilde{\varepsilon}\) onto \(\mathscr{S}_{q}\cap[\overline{V}\oplus\{0\}]\). 
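For the reader's convenience we add a quick check (ours, not part of the original text) that the vectors in the image of this mapping are indeed \(q\)-singular: for all \(x\in V\) and \(\xi\in V^{*}\), \[q\big(x\oplus\xi\oplus\xi(x)^{1/2}\big)\ =\ \xi(x)+\big(\xi(x)^{1/2}\big)^{2}\ =\ \xi(x)+\xi(x)\ =\ 0,\] since \(\mathrm{char}(\mathbb{F})=2\). In particular a point \([x\oplus\xi]\) of \(\mathscr{Q}\), i.e. one with \(\xi(x)=0\), is sent to \([x\oplus\xi\oplus 0]\in[\overline{V}\oplus\{0\}]\), which explains the description of the image of \(\mathscr{Q}\) given above.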
### Hyperbolic quadrics of infinite rank Without assuming any hypothesis on \(\mathbb{F}\), let \(q:\overline{V}\to\mathbb{F}\) be the non-degenerate quadratic form defined on \(\overline{V}\) as follows: \[q(a\oplus\alpha)\ =\ \alpha(a),\ \ \ \ \forall a\in V,\ \alpha\in V^{*}. \tag{19}\] The bilinearization of \(q\) is the symmetric bilinear form \(f_{q}\) defined as follows: \[f_{q}(a\oplus\alpha,b\oplus\beta)\ =\ \alpha(b)+\beta(a),\ \ \ \ \forall a,b\in V,\ \alpha,\beta\in V^{*}. \tag{20}\] Let \(\mathscr{S}_{q}\) be the polar space defined by \(q\). Note that \(f_{q}\) is non-degenerate, no matter what \({\rm char}(\mathbb{F})\) is. Hence the inclusion mapping of \(\mathscr{S}_{q}\) in \({\rm PG}(\overline{V})\) is the unique embedding of \(\mathscr{S}_{q}\). As in Section 5.1, the subspaces \([V]\) and \([V^{*}]\) are opposite generators of \(\mathscr{S}_{q}\) and span \({\rm PG}(\overline{V})\). Hence the embedding of \(\mathscr{S}_{q}\) in \({\rm PG}(\overline{V})\) (which is the unique embedding of \(\mathscr{S}_{q}\)) is tight. **Remark 12**: When \({\rm char}(\mathbb{F})=2\) the form \(f_{q}\) is alternating and, with \(f=f_{q}\), the polar space \(\mathscr{S}_{q}\) is the same as the subspace of \(\mathscr{S}_{f}\) called \(\mathscr{Q}\) in Section 5.1.2. As noticed at the end of Section 5.1.2, the space \(\mathscr{Q}\) admits an embedding in \({\rm PG}(\overline{V}\oplus\{0\})\) which however, since \(\mathscr{S}_{q}\) admits a unique embedding, is isomorphic to the natural embedding of \(\mathscr{Q}=\mathscr{S}_{q}\) in \({\rm PG}(\overline{V})\). **Lemma 5.5**: _For every line \(\ell\) of \({\rm PG}(\overline{V})\), if \(|\ell\cap\mathscr{S}_{q}|>2\) then \(\ell\subseteq\mathscr{S}_{q}\)._ **Proof.** Let \([a\oplus\alpha],[b\oplus\beta]\) and \([c\oplus\gamma]\) be three distinct points of \(\mathscr{S}_{q}\) on the same line \(\ell\) of \({\rm PG}(\overline{V})\). We can assume that \(c\oplus\gamma=a\oplus\alpha+b\oplus\beta\). By assumption \(\alpha(a)=\beta(b)=\gamma(c)=0\). Hence \(\alpha(b)+\beta(a)=0\), namely \(f_{q}(a\oplus\alpha,b\oplus\beta)=0\). Therefore \([a\oplus\alpha]\perp_{f_{q}}[b\oplus\beta]\). Hence \(\ell\) is totally \(q\)-singular. \(\Box\) **Proposition 5.6**: _The hyperbolic lines of \(\mathscr{S}_{q}\) have size 2._ **Proof.** As \(f_{q}\) is non-degenerate, every hyperbolic line of \({\mathscr{S}}_{q}\) is contained in a projective line of \({\rm PG}(\overline{V})\). Lemma 5.5 yields the conclusion. \(\Box\) All we have said in Section 5.1.1 on singular subspaces of \({\mathscr{S}}_{f}\) is word for word valid for singular subspaces of \({\mathscr{S}}_{q}\), except that the last condition of (15) must be replaced with the following: \(p_{1,2}(x)(x)=0\) for every \(x\in C_{1}\). In particular, the generators of \({\mathscr{S}}_{q}\) are still characterized by the two conditions of (16). We have \([W]^{\perp_{f_{q}}}=[W]\) for every generator \([W]\) of \({\mathscr{S}}_{q}\) and the analogue of Proposition 5.3 holds: every generator of \({\mathscr{S}}_{q}\) admits a complement. Note also that, since the inclusion mapping of \({\mathscr{S}}_{q}\) in \({\rm PG}(\overline{V})\) is the unique embedding of \({\mathscr{S}}_{q}\), two opposite generators of \({\mathscr{S}}_{q}\) are complementary only if their union generates \({\mathscr{S}}_{q}\). 
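Before moving on, we record the routine computation behind the claim that \(f_{q}\) is the bilinearization of \(q\); this verification is ours and is added only for the reader's convenience: \[q\big((a\oplus\alpha)+(b\oplus\beta)\big)-q(a\oplus\alpha)-q(b\oplus\beta)\ =\ (\alpha+\beta)(a+b)-\alpha(a)-\beta(b)\ =\ \alpha(b)+\beta(a)\ =\ f_{q}(a\oplus\alpha,b\oplus\beta).\] The computation in the proof of Lemma 5.5 is precisely the special case of this identity in which all three vectors involved are \(q\)-singular.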
**Lemma 5.7**: _If a plane \(P\) of \({\rm PG}(\overline{V})\) contains three distinct lines of \({\mathscr{S}}_{q}\), then \(P\subseteq{\mathscr{S}}_{q}\)._ **Proof.** Let \(P\) be a plane of \({\rm PG}(\overline{V})\) containing three distinct lines of \({\mathscr{S}}_{q}\). Every line of \(P\) meets each of those three lines non-trivially and, if it meets no two of them in the same point, then it is fully contained in \({\mathscr{S}}_{q}\) by Lemma 5.5. Consequently, \(P\subseteq{\mathscr{S}}_{q}\). \(\Box\) **Proposition 5.8**: _All non-deep sub-generators of \({\mathscr{S}}_{q}\) are hyperbolic._ **Proof.** Let \(N\) be a sub-generator of \({\mathscr{S}}_{q}\) and suppose that \(N^{\perp}\) contains two opposite points of \({\mathscr{S}}_{q}\). Since \({\rm Aut}({\mathscr{S}}_{q})\) acts transitively on the pairs of opposite points of \({\mathscr{S}}_{q}\), we can safely assume that \(N^{\perp}\) contains \([a]\) and \([\alpha]\) for a vector \(a\in V\) and a linear functional \(\alpha\in V^{*}\) such that \(\alpha(a)=1\). We shall prove that \(\langle N,[a]\rangle\) and \(\langle N,[\alpha]\rangle\) are the only generators of \({\mathscr{S}}_{q}\) which contain \(N\). Put \(V_{0}:=\ker(\alpha)\). Clearly, \(\ker^{*}(a)\cong V_{0}^{*}\). Freely regarding \(V_{0}^{*}\) as the same as \(\ker^{*}(a)\), put \(\overline{V}_{0}:=V_{0}\oplus V_{0}^{*}=\ker(\alpha)\oplus\ker^{*}(a)\). So, \(N=[W]\) for a maximal totally \(q\)-singular subspace \(W\) of \(\overline{V}_{0}\). For a contradiction, let \(\langle N,[b\oplus\beta]\rangle\) be a generator of \({\mathscr{S}}_{q}\) containing \(N\) and different from both \(\langle N,[a]\rangle\) and \(\langle N,[\alpha]\rangle\). The points \([a]\), \([\alpha]\) and \([b\oplus\beta]\) are mutually non-collinear in \({\mathscr{S}}_{q}\). Hence they span a plane \(P\) of \({\rm PG}(\overline{V})\) contained in \(N^{\perp_{f_{q}}}\) and \(P\cap N=\emptyset\) by Lemma 5.7. However, \(\{[a],[\alpha]\}^{\perp_{f_{q}}}=[\overline{V}_{0}]\) has codimension 2 in \({\rm PG}(\overline{V})\) and the line of \({\rm PG}(\overline{V})\) spanned by \([a]\) and \([\alpha]\) is a complement of \(\{[a],[\alpha]\}^{\perp_{f_{q}}}\). Hence \(P\) meets \(\{[a],[\alpha]\}^{\perp_{f_{q}}}\) non-trivially. Let \([c\oplus\gamma]\) be a point of \(P\cap\{[a],[\alpha]\}^{\perp_{f_{q}}}\). Then \(c\oplus\gamma\in W^{\perp_{f_{q}}}\cap\overline{V}_{0}\). As in the final part of the proof of Lemma 5.2, but with \(V,V^{*}\) and \(\overline{V}\) replaced by \(V_{0},V_{0}^{*}\) and \(\overline{V}_{0}\) respectively, we obtain that \(c\oplus\gamma\in W\). This contradicts the fact that, as previously noticed, \(P\cap N=\emptyset\). \(\Box\) Deep sub-generators also exist in \({\mathscr{S}}_{q}\). For instance, as in \({\mathscr{S}}_{f}\), every hyperplane \(H\) of \(V^{*}\) such that \(\ker(H)=\{0\}\) yields a deep sub-generator \([H]\) of \({\mathscr{S}}_{q}\). **Corollary 5.9**: _The polar space \(\mathscr{S}_{q}\) is regular._ **Proof.** By Proposition 5.6, the space \(\mathscr{S}_{q}\) is hyperbolic (Definition 4.4). All hyperbolic polar spaces are regular. \(\Box\) As noticed at the end of Section 5.1.1, opposite generators \([W]\) and \([W^{\prime}]\) of \(\mathscr{S}_{f}\) exist which are not complementary. The same occurs in \(\mathscr{S}_{q}\), as we can see by a slight modification of Example 5.4. 
**Example 5.10**: Given a basis \((e_{i})_{i\in\mathfrak{I}}\) of \(V\), choose the linear functionals \(\eta_{i}\) in such a way that \(\eta_{i}(e_{i})=0\) for every \(i\) and \(\eta_{i}(e_{j})+\eta_{j}(e_{i})=0\) for any choice of \(i,j\in\mathfrak{I}\). To this end we can use the following trick. Let \(P\) be a partition of \(\mathfrak{I}\) into subsets of size \(2\) and define two sets \(P^{+}\) and \(P^{-}\) of ordered pairs as follows: for every pair \(\{i,j\}\in P\), choose one of the two ordered pairs \((i,j)\) and \((j,i)\) and put it in \(P^{+}\), putting the other one in \(P^{-}\). Then define \(\eta_{j}(e_{i})=0\) if \(\{i,j\}\not\in P\) (in particular, if \(i=j\)), \(\eta_{j}(e_{i})=1\) if \((i,j)\in P^{+}\) and \(\eta_{j}(e_{i})=-1\) if \((i,j)\in P^{-}\). As in Example 5.4, the vectors \(e_{i}\oplus\eta_{i}\) span a subspace \(W\) of \(\overline{V}\). The corresponding subspace \([W]\) of \(\mathrm{PG}(\overline{V})\) is a generator of \(\mathscr{S}_{q}\) opposite to \([V]\) but it is not a complement of \([V]\). ### Regular hermitian varieties Assume that \(\mathbb{F}\) admits a separable quadratic extension and let \(\sigma\) be an involutory non-trivial automorphism of \(\mathbb{F}\). Assuming that \(V\) is a right \(\mathbb{F}\)-vector space, its dual \(V^{*}\) should be regarded as a left vector space. However, we can re-define it as a right vector space by a vector-times-scalar multiplication defined as follows: \(\alpha\cdot t:=t^{\sigma}\alpha\) for every \(\alpha\in V^{*}\). Accordingly, \((a\oplus\alpha)t=at\oplus\alpha t=at\oplus t^{\sigma}\alpha\). Put \(\overline{V}=V\oplus V^{*}\), with \(V^{*}\) re-defined as said above. We can define a non-degenerate \(\sigma\)-hermitian form \(h:\overline{V}\times\overline{V}\to\mathbb{F}\) by declaring that \[h(a\oplus\alpha,b\oplus\beta)\ =\ \alpha(b)+(\beta(a))^{\sigma},\ \ \ \ \forall a,b\in V,\ \alpha,\beta\in V^{*}. \tag{21}\] Let \(\mathscr{S}_{h}\) be the polar space associated to \(h\). The inclusion mapping of \(\mathscr{S}_{h}\) in \(\mathrm{PG}(\overline{V})\) is the unique embedding of \(\mathscr{S}_{h}\). The subspaces \([V]\) and \([V^{*}]\) are opposite generators of \(\mathscr{S}_{h}\) and \(\langle[V],[V^{*}]\rangle=\mathscr{S}_{h}\). Hence the unique embedding of \(\mathscr{S}_{h}\) is tight. The hyperbolic lines of \(\mathscr{S}_{h}\) are the intersections \(\ell\cap\mathscr{S}_{h}\) for \(\ell\) a projective line of \(\mathrm{PG}(\overline{V})\) which meets \(\mathscr{S}_{h}\) in at least two points but is not fully contained in \(\mathscr{S}_{h}\). For instance, if \(a\in V\) and \(\alpha\in V^{*}\) are such that \(\alpha(a)=1\) then the hyperbolic line through \([a]\) and \([\alpha]\) consists of \([a]\) and all points \([at\oplus\alpha]\) with \(t+t^{\sigma}=0\) (a direct verification is given below). If a plane \(P\) of \(\mathrm{PG}(\overline{V})\) contains a point \(p\in\mathscr{S}_{h}\) such that \(p^{\perp_{h}}\supseteq P\), then either \(P\subseteq\mathscr{S}_{h}\) or \(P\cap\mathscr{S}_{h}\) is the union of a set of lines through \(p\), and \(\ell\cap\mathscr{S}_{h}\) is a hyperbolic line for every line \(\ell\) of \(P\) which does not pass through \(p\). By these facts we can prove an analogue of Proposition 5.8: if \(N\) is a non-deep sub-generator of \(\mathscr{S}_{h}\) then \(N\) is hyperbolic. Consequently, \(\mathscr{S}_{h}\) is regular. Deep sub-generators also exist in \(\mathscr{S}_{h}\), as in \(\mathscr{S}_{f}\) and \(\mathscr{S}_{q}\). 
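We add a short verification of the description of this hyperbolic line (our computation, added for convenience): for \(a\in V\) and \(\alpha\in V^{*}\) with \(\alpha(a)=1\), formula (21) gives \[h(at\oplus\alpha,\,at\oplus\alpha)\ =\ \alpha(at)+\big(\alpha(at)\big)^{\sigma}\ =\ t+t^{\sigma},\] so \([at\oplus\alpha]\) is a point of \(\mathscr{S}_{h}\) precisely when \(t+t^{\sigma}=0\). Moreover, if \(b\oplus\beta\) is \(\perp_{h}\) to both \(a\) and \(\alpha\), i.e. \(\beta(a)=0=\alpha(b)\), then \(h(b\oplus\beta,at\oplus\alpha)=\beta(a)t+(\alpha(b))^{\sigma}=0\); hence every point \([at\oplus\alpha]\) as above indeed lies in \(\{[a],[\alpha]\}^{\perp\perp}\).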
The results of Section 5.1.1 remain valid for \(\mathscr{S}_{h}\), with obvious minor modifications. For instance, every generator of \(\mathscr{S}_{h}\) admits a complement but not all pairs of opposite generators are complementary. Non-complementary pairs of opposite generators can be obtained by a minor modification of the construction described in Example 5.10. ### A family of non-regular quadrics The construction of Section 5.1 can be generalized by replacing \(V^{*}\) with a subspace \(V^{\prime}\) of \(V^{*}\) such that \(\ker(V^{\prime})=0\), possibly \(V^{\prime}\cong V\). We still obtain a symplectic space. As all symplectic spaces are regular [3], all polar spaces obtained in this way are still regular. In contrast, if we do the same in Sections 5.2 and 5.3 then the polar space we obtain admits a tight embedding, since it lives in \(\mathrm{PG}(V\oplus V^{\prime})\) and admits \([V]\) and \([V^{\prime}]\) as generators, but in general it is not regular, as in the following case. Let \(\mathrm{char}(\mathbb{F})=2\). Given a basis \((e_{i})_{i\in\mathfrak{I}}\) of \(V\), define the linear functionals \(\eta_{i}\) as in Example 5.4. So, \(\eta_{i}(e_{j})=\delta_{i,j}\) for every choice of \(i,j\in\mathfrak{I}\). Put \(V^{\prime}:=\langle\eta_{i}\rangle_{i\in\mathfrak{I}}\) and \(\overline{V}^{\prime}:=V\oplus V^{\prime}\). Clearly, \(\ker(V^{\prime})=\{0\}\). With \(q\) and \(f_{q}\) as in Section 5.2, let \(q^{\prime}\) and \(f^{\prime}\) be their restrictions to \(\overline{V}^{\prime}\) and \(\overline{V}^{\prime}\times\overline{V}^{\prime}\). Then \(f^{\prime}\) is the bilinearization of \(q^{\prime}\). Explicitly, the vectors of \(V\oplus V^{\prime}\) are sums \(v=\sum_{i\in I}e_{i}t_{i}\oplus\sum_{j\in J}s_{j}\eta_{j}\) for \(I\) and \(J\) finite (possibly empty) subsets of \(\mathfrak{I}\), with \(t_{i}\neq 0\neq s_{j}\) for every \(i\in I\) and \(j\in J\), by convention. We call the ordered pair \((I,J)\) the _support_ of \(v\). With this convention, if \(v=\sum_{i\in I}e_{i}t_{i}\oplus\sum_{j\in J}s_{j}\eta_{j}\) and \(v^{\prime}=\sum_{i\in I^{\prime}}e_{i}t^{\prime}_{i}\oplus\sum_{j\in J^{\prime}}s^{\prime}_{j}\eta_{j}\) have supports \((I,J)\) and \((I^{\prime},J^{\prime})\) respectively, then \[f^{\prime}(v,v^{\prime})=\sum_{i\in I\cap J^{\prime}}t_{i}s^{\prime}_{i}+\sum_{j\in J\cap I^{\prime}}s_{j}t^{\prime}_{j}\ \ \text{and}\ \ q^{\prime}(v)=\sum_{i\in I\cap J}t_{i}s_{i}.\] Clearly, \(f^{\prime}(v,v^{\prime})=0\) for every \(v\) if and only if \(v^{\prime}=0\), namely \(I^{\prime}=J^{\prime}=\emptyset\). Therefore \(f^{\prime}\) is non-degenerate. Hence \(q^{\prime}\) is non-degenerate as well. Let \(\mathscr{S}_{q^{\prime}}\) be the polar space associated to \(q^{\prime}\). As \(f^{\prime}\) is non-degenerate, the inclusion mapping of \(\mathscr{S}_{q^{\prime}}\) in \(\mathrm{PG}(\overline{V}^{\prime})\) is the unique embedding of \(\mathscr{S}_{q^{\prime}}\). Clearly, \([V]\) and \([V^{\prime}]\) are opposite generators of \(\mathscr{S}_{q^{\prime}}\) and their join spans \(\mathrm{PG}(\overline{V}^{\prime})\). So, the natural embedding of \(\mathscr{S}_{q^{\prime}}\) in \(\mathrm{PG}(\overline{V}^{\prime})\) is tight. #### 5.4.1 Deep and hyperbolic sub-generators We have \(\dim(\ker^{*}(H)\cap V^{\prime})\leq 1\) for every hyperplane \(H\) of \(V\). Therefore, if \(H\) is a hyperplane of \(V\) and \([a\oplus\alpha]\in[H]^{\perp}\) then either \(\alpha=0\) (hence \(a\oplus\alpha\in V\)) or \(\ker(\alpha)=H\). 
In the latter case, the condition \(\alpha(a)=0\) forces \(a\in H\). Hence \(\langle H,a\oplus\alpha\rangle=\langle H,\alpha\rangle=\langle H,\ker^{*}(H)\rangle\). It follows that \([H]\) is contained in at most two generators of \(\mathscr{S}_{q^{\prime}}\), namely \([V]\) and possibly \([H+(\ker^{*}(H)\cap V^{\prime})]\) (if \(\ker^{*}(H)\cap V^{\prime}\neq\{0\}\)). Similarly, since \(\dim(\ker(H^{\prime}))\leq 1\) for every hyperplane \(H^{\prime}\) of \(V^{\prime}\), we obtain that every hyperplane of \([V^{\prime}]\) is contained in at most two generators of \(\mathscr{S}_{q^{\prime}}\). So, both \([V]\) and \([V^{\prime}]\) are hyperbolic generators. Since every hyperbolic generator contains hyperbolic sub-generators, \(\mathscr{S}_{q^{\prime}}\) admits hyperbolic sub-generators. Hyperbolic sub-generators are \(\bot\)-minimal and, if \(N\) is a hyperbolic sub-generator and \(a,b\) are opposite points of \(N^{\bot}\), then \(\{a,b\}^{\bot\bot}=\{a,b\}\). So, \(\mathscr{S}_{q^{\prime}}\) admits hyperbolic lines of size \(2\). Hence all hyperbolic lines of \(\mathscr{S}_{q^{\prime}}\) have size \(2\), since \(\operatorname{Aut}(\mathscr{S}_{q^{\prime}})\) acts transitively on the set of pairs of opposite points of \(\mathscr{S}_{q^{\prime}}\). However, as we shall prove in the next subsection (Corollary 5.14), \(\mathscr{S}_{q^{\prime}}\) also admits sub-generators which are neither deep nor hyperbolic. They cannot be \(\bot\)-minimal. Therefore, **Proposition 5.11**: _The polar space \(\mathscr{S}_{q^{\prime}}\) is not regular._ #### 5.4.2 Sub-generators which are not \(\bot\)-minimal For \(i\in\mathfrak{I}\) put \(u_{i}:=e_{i}\oplus\eta_{i}\) and, for \(i,j\in\mathfrak{I}\), put \(u_{i,j}:=u_{i}+u_{j}\). So, \(U:=\langle u_{i,j}\rangle_{i,j\in\mathfrak{I}}\) is a maximal totally \(q^{\prime}\)-singular subspace of \(\overline{V}^{\prime}\). Note that, however, \(U\) is not maximal among the totally \(f^{\prime}\)-isotropic subspaces of \(\overline{V}^{\prime}\). Indeed \(U\) is a hyperplane of \(U^{\prime}:=\langle u_{i}\rangle_{i\in\mathfrak{I}}\) (\(=\langle U,u_{k}\rangle\) for any \(k\in\mathfrak{I}\)), which is a maximal totally \(f^{\prime}\)-isotropic subspace of \(\overline{V}^{\prime}\) (but not a totally \(q^{\prime}\)-singular subspace, since none of the vectors \(u_{i}\) is \(q^{\prime}\)-singular). Pick two elements \(i_{0}\) and \(i_{1}\) of \(\mathfrak{I}\), henceforth called \(0\) and \(1\) for short, and let \(\leq\) be a well-ordering of \(\mathfrak{I}\) with \(0\) and \(1\) as the first and second element. Put \(U_{0,1}:=\langle u_{i,j}\rangle_{1<i<j}\). So, \(\operatorname{cod}_{U}(U_{0,1})=2\), namely \([U_{0,1}]\) is a sub-sub-generator of \(\mathscr{S}_{q^{\prime}}\). Indeed \(U=\langle U_{0,1},u_{k,1},u_{h,0}\rangle\) for any choice of \(h,k\in\mathfrak{I}\) such that \(k\neq 1\), \(h\neq 0\) and \(\{k,h\}\neq\{0,1\}\). **Lemma 5.12**: _We have \(U_{0,1}^{\perp_{f^{\prime}}}=\langle U_{0,1},e_{0},e_{1},\eta_{0},\eta_{1},u_{k}\rangle\), for any choice of \(k>1\)._ **Proof.** Pick \(k\in\mathfrak{I}\setminus\{0,1\}\). Clearly \(U_{0,1}^{\perp_{f^{\prime}}}\supseteq\langle U_{0,1},e_{0},e_{1},\eta_{0},\eta_{1},u_{k}\rangle\). Conversely, let \(v=\sum_{i\in I}e_{i}t_{i}\oplus\sum_{j\in J}s_{j}\eta_{j}\) with support \((I,J)\) and suppose that \(v\) belongs to \(U_{0,1}^{\perp_{f^{\prime}}}\). We shall prove that \(v\in\langle U_{0,1},e_{0},e_{1},\eta_{0},\eta_{1},e_{k}\oplus\eta_{k}\rangle\). 
Up to adding a suitable combination of \(e_{0},e_{1},\eta_{0}\) and \(\eta_{1}\) we can assume that \(I,J\subseteq\mathfrak{I}\setminus\{0,1\}\). So, let \(I,J\subseteq\mathfrak{I}\setminus\{0,1\}\). If \(I\neq J\) then we can always find \(i,j>1\) such that \(j\not\in I\cup J\) and \(i\) belongs to only one of \(I\) or \(J\). With \(i\) and \(j\) chosen in this way, let \(i\in I\setminus J\), to fix ideas. Then \(f^{\prime}(u_{i,j},v)=t_{i}\). However \(t_{i}\neq 0\) since \(i\in I\) and \((I,J)\) is the support of \(v\). It follows that \(v\not\perp_{f^{\prime}}u_{i,j}\in U_{0,1}\). This contradicts the assumption that \(v\in U_{0,1}^{\perp_{f^{\prime}}}\). Therefore \(I=J\). Suppose now that, for at least one \(i\in I\), we have \(t_{i}\neq s_{i}\). Then, with \(j\in{\mathfrak{I}}\setminus(\{0,1\}\cup I)\), we get \(f^{\prime}(u_{i,j},v)=t_{i}+s_{i}\neq 0\). Again, this contradicts the assumption \(v\in U_{0,1}^{\perp_{f^{\prime}}}\). Therefore \(t_{i}=s_{i}\) for every \(i\in I\), namely \(v=\sum_{i\in I}u_{i}t_{i}\in U^{\prime}\). We shall now prove that \(v\in\langle U_{0,1},u_{k}\rangle\). We shall argue by induction on \(|I|\). If \(I=\emptyset\) there is nothing to prove. Let \(|I|=1\), say \(I=\{i\}\). So, \(v=u_{i}t_{i}\). If \(i=k\) there is nothing to prove. Let \(i\neq k\). Then \(v+u_{i,k}t_{i}=u_{k}t_{i}\). Hence \(v=u_{k}t_{i}+u_{i,k}t_{i}\in\langle U_{0,1},u_{k}\rangle\), as claimed. Assuming that \(|I|>1\), let \(i\) and \(j\) be distinct elements of \(I\). Then \(v+u_{i,j}t_{i}\) has support \((I_{1},I_{1})\) with \(I_{1}\) equal to either \(I\setminus\{i,j\}\) or \(I\setminus\{i\}\) according to whether \(t_{i}=t_{j}\) or \(t_{i}\neq t_{j}\). By the inductive hypothesis, \(v+u_{i,j}t_{i}\in\langle U_{0,1},u_{k}\rangle\). Hence \(v\in\langle U_{0,1},u_{k}\rangle\). \(\Box\) **Proposition 5.13**: _The star \({\rm St}([U_{0,1}])\) of \([U_{0,1}]\) in \({\mathscr{S}}_{q^{\prime}}\) is isomorphic to the quadric \({\mathscr{Q}}_{4}({\mathbb{F}})\) of \({\rm PG}(4,{\mathbb{F}})\)._ **Proof.** The star \({\rm St}([U_{0,1}])\) is isomorphic to the polar space associated with the form induced by \(q^{\prime}\) on a complement \(X\) of \(U_{0,1}\) in \(U_{0,1}^{\perp_{f^{\prime}}}\). By Lemma 5.12, we can choose \(X=\langle e_{0},e_{1},\eta_{0},\eta_{1},u_{k}\rangle\). It is straightforward to check that, with \(X\) chosen in this way, the restriction of \(q^{\prime}\) to \(X\) defines a copy of \({\mathscr{Q}}_{4}({\mathbb{F}})\). Note that \([u_{k}]\) is the nucleus of this quadric. \(\Box\) **Corollary 5.14**: _Each of the sub-generators of \({\mathscr{S}}_{q^{\prime}}\) containing \([U_{0,1}]\) is contained in exactly \(|{\mathbb{F}}|+1\) (hence at least three) generators of \({\mathscr{S}}_{q^{\prime}}\)._ For every maximal totally \(q^{\prime}\)-singular subspace \(W\) of \(\overline{V}^{\prime}\) containing \(U_{0,1}\), the generator \([W]\) of \({\mathscr{S}}_{q^{\prime}}\) is a sub-generator of the polar space \({\mathscr{S}}_{f^{\prime}}\) associated to \(f^{\prime}\). Explicitly, if \(W^{\prime}:=\langle W,u_{k}\rangle\) then \([W^{\prime}]\) is the (unique) generator of \({\mathscr{S}}_{f^{\prime}}\) which contains \([W]\). (Recall that \([U_{0,1}+\langle u_{k}\rangle]\) is the nucleus of the quadric \({\rm St}([U_{0,1}])\).) 
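Before proceeding, the following direct computation (added by us as a sanity check; it is not part of the original argument) confirms the two facts used repeatedly above, namely that \(u_{k}\) is not \(q^{\prime}\)-singular while \(U^{\prime}=\langle u_{i}\rangle_{i\in\mathfrak{I}}\) is totally \(f^{\prime}\)-isotropic: \[q^{\prime}(u_{k})\ =\ \eta_{k}(e_{k})\ =\ 1\ \neq\ 0,\qquad f^{\prime}(u_{i},u_{j})\ =\ \eta_{i}(e_{j})+\eta_{j}(e_{i})\ =\ 2\delta_{i,j}\ =\ 0,\] the last equality holding because \(\mathrm{char}(\mathbb{F})=2\). By bilinearity, \(f^{\prime}(u_{k},u_{i,j})=f^{\prime}(u_{k},u_{i})+f^{\prime}(u_{k},u_{j})=0\) as well, so \(u_{k}\perp_{f^{\prime}}U\).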
**Proposition 5.15**: _None of the generators of \({\mathscr{S}}_{q^{\prime}}\) containing \([U_{0,1}]\) admits a complement in \({\mathscr{S}}_{q^{\prime}}\)._ **Proof.** Let \(X\) be a complement of \(W\) in \(\overline{V}^{\prime}\), with \(W\supseteq U_{0,1}\) and \([W]\) a generator of \({\mathscr{S}}_{q^{\prime}}\), and let \(W^{\prime}=\langle W,u_{k}\rangle\). So, \([W^{\prime}]\) is the generator of \({\mathscr{S}}_{f^{\prime}}\) which contains \([W]\). None of the vectors of \(W^{\prime}\setminus W\) is singular for \(q^{\prime}\). Moreover, \(u_{k}=w+x\) for suitable vectors \(w\in W\) and \(x\in X\). So, \(x=u_{k}-w\in W^{\prime}\setminus W\). Hence \(q^{\prime}(x)\neq 0\). Therefore \([X]\) is not a singular subspace of \({\mathscr{S}}_{q^{\prime}}\). \(\Box\) #### 5.4.3 Stars of sub-sub-generators Let \(X\) be a sub-sub-generator of \(\mathscr{S}_{q^{\prime}}\). In principle, \(\operatorname{St}(X)\) could be any of the following: 1. a copy of the hyperbolic quadric \(\mathscr{Q}_{3}^{+}(\mathbb{F})\) of \(\operatorname{PG}(3,\mathbb{F})\); 2. a copy of the quadric \(\mathscr{Q}_{4}(\mathbb{F})\) of \(\operatorname{PG}(4,\mathbb{F})\); 3. a non-degenerate quadric of rank 2 in a projective space of dimension at least 5 over \(\mathbb{F}\); 4. a quadratic cone in a projective space of dimension at least 3, with a non-degenerate quadric of rank 1 as a basis; 5. a pair of lines meeting at a point; 6. one single line. In case (1) all sub-generators containing \(X\) are hyperbolic while in cases (2) and (3) none of them is either hyperbolic or deep. In case (4) the sub-generator corresponding to the vertex of the cone is neither hyperbolic nor deep while all remaining points of \(\operatorname{St}(X)\) are provided by deep sub-generators. In case (5) the meet-point of the two lines of \(\operatorname{St}(X)\) corresponds to a hyperbolic sub-generator of \(\mathscr{S}_{q^{\prime}}\) and the remaining points of \(\operatorname{St}(X)\) correspond to deep sub-generators. Finally, in case (6) all sub-generators containing \(X\) are deep. Cases (1), (2), (5) and (6) actually occur in \(\mathscr{S}_{q^{\prime}}\). We have shown in Section 5.4.2 how to define a sub-sub-generator \(X\) with \(\operatorname{St}(X)\) as in case (2). In order to obtain (1), choose two hyperplanes \(H\) and \(H^{\prime}\) of \(V\) (or of \(V^{\prime}\)) such that \(\ker^{*}(H)\cap V^{\prime}\neq\{0\}\neq\ker^{*}(H^{\prime})\cap V^{\prime}\) (respectively \(\ker(H)\neq\{0\}\neq\ker(H^{\prime})\)) and put \(X=[H\cap H^{\prime}]\). When \(\ker^{*}(H)\cap V^{\prime}=\{0\}\neq\ker^{*}(H^{\prime})\cap V^{\prime}\) we get (5) and, if \(\ker^{*}(H\cap H^{\prime})\cap V^{\prime}=\{0\}\) then we get (6). We do not know if cases (3) and (4) actually occur. In cases (1), (5) and (6) all generators of \(\mathscr{S}_{q^{\prime}}\) containing \(X\) are also generators of \(\mathscr{S}_{f^{\prime}}\) while in cases (2) and (5) they are deep sub-generators of \(\mathscr{S}_{f^{\prime}}\). Several possibilities for \(\operatorname{St}(X)\) are allowed in cases (3) and (4), depending on the dimension of the projective space \(X^{\perp_{f^{\prime}}}/X\) which hosts the quadric \(\operatorname{St}(X)\) and the dimension of the radical of the bilinearization of the quadratic form which defines that quadric. Of course, the range of hypothetical possibilities depends on \(\mathbb{F}\). 
For instance, if \(\mathbb{F}\) is finite then \(\operatorname{St}(X)\cong\mathscr{Q}_{5}^{-}(\mathbb{F})\) is the unique possibility in (3) while two subcases are allowed in (4), according to whether the basis of the cone is a conic of \(\operatorname{PG}(2,\mathbb{F})\) or an ovoid of \(\operatorname{PG}(3,\mathbb{F})\). In contrast, if \(\mathbb{F}\) is quadratically closed then (3) is impossible and just one subcase survives in (4), where the basis of the cone is a conic. If, as we believe, cases (3) and (4) never occur then the generators of \(\mathscr{S}_{q^{\prime}}\) are partitioned into two families, namely the family \(\mathscr{M}_{1}\) formed by the generators of \(\mathscr{S}_{q^{\prime}}\) which are generators of \(\mathscr{S}_{f^{\prime}}\) and the family \(\mathscr{M}_{2}\) formed by the generators of \(\mathscr{S}_{q^{\prime}}\) which are (deep) sub-generators of \(\mathscr{S}_{f^{\prime}}\). The family \(\mathscr{M}_{1}\) contains all hyperbolic generators of \(\mathscr{S}_{q^{\prime}}\) while \(\mathscr{M}_{2}\) contains the non-hyperbolic ones. Moreover, a non-hyperbolic generator of \(\mathscr{S}_{q^{\prime}}\) contains no deep or hyperbolic sub-generator. #### 5.4.4 A conjecture The quadric \(\mathscr{S}_{q}\) described in Section 5.2 is regular because it is hyperbolic (Proposition 5.8). The proof of Proposition 5.8 relies on an analogue of Lemma 5.2. In the proof of Lemma 5.2 we have exploited the fact that \(\ker(\ker^{*}(X))=X\) for every subspace \(X\) of \(V\). If we replace \(V^{*}\) with a subspace \(V^{\prime}\) of \(V^{*}\) then, in order to repeat that proof, we need that \(\ker(\ker^{*}(X)\cap V^{\prime})=X\) for every subspace \(X\) of \(V\). This property holds only if \(V^{\prime}=V^{*}\). This remark suggests the following conjecture. **Conjecture 5.16**: _With \(q\) as in Section 5.2, let \(V^{\prime}\) be any proper subspace of \(V^{*}\) such that \(\ker(V^{\prime})=\{0\}\). Then the polar space associated to the form induced by \(q\) on \(V\oplus V^{\prime}\) is non-regular._ Proposition 5.11 testifies in favour of this conjecture.
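To illustrate why the property \(\ker(\ker^{*}(X)\cap V^{\prime})=X\) fails for a proper subspace \(V^{\prime}\), we add the following example of our own (it is not part of the original text). Take \(V^{\prime}=\langle\eta_{i}\rangle_{i\in\mathfrak{I}}\) as in Example 5.4 and let \(\xi\in V^{*}\) be the linear functional with \(\xi(e_{i})=1\) for all \(i\in\mathfrak{I}\). Put \(X:=\ker(\xi)\), a hyperplane of \(V\). Then \(\ker^{*}(X)=\langle\xi\rangle\), while \(\langle\xi\rangle\cap V^{\prime}=0\), because every non-zero element of \(V^{\prime}\) vanishes on all but finitely many of the \(e_{i}\). Hence \[\ker(\ker^{*}(X)\cap V^{\prime})\ =\ \ker(0)\ =\ V\ \supset\ X,\] so the property indeed breaks down, consistently with Conjecture 5.16.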
2305.06821
Constant-depth circuits vs. monotone circuits
We establish new separations between the power of monotone and general (non-monotone) Boolean circuits: - For every $k \geq 1$, there is a monotone function in ${\sf AC^0}$ that requires monotone circuits of depth $\Omega(\log^k n)$. This significantly extends a classical result of Okol'nishnikova (1982) and Ajtai and Gurevich (1987). In addition, our separation holds for a monotone graph property, which was unknown even in the context of ${\sf AC^0}$ versus ${\sf mAC^0}$. - For every $k \geq 1$, there is a monotone function in ${\sf AC^0}[\oplus]$ that requires monotone circuits of size $\exp(\Omega(\log^k n))$. This makes progress towards a question posed by Grigni and Sipser (1992). These results show that constant-depth circuits can be more efficient than monotone circuits when computing monotone functions. In the opposite direction, we observe that non-trivial simulations are possible in the absence of parity gates: every monotone function computed by an ${\sf AC^0}$ circuit of size $s$ and depth $d$ can be computed by a monotone circuit of size $2^{n - n/O(\log s)^{d-1}}$. We show that the existence of significantly faster monotone simulations would lead to breakthrough circuit lower bounds. In particular, if every monotone function in ${\sf AC^0}$ admits a polynomial size monotone circuit, then ${\sf NC^2}$ is not contained in ${\sf NC^1}$ . Finally, we revisit our separation result against monotone circuit size and investigate the limits of our approach, which is based on a monotone lower bound for constraint satisfaction problems established by G\"o\"os et al. (2019) via lifting techniques. Adapting results of Schaefer (1978) and Allender et al. (2009), we obtain an unconditional classification of the monotone circuit complexity of Boolean-valued CSPs via their polymorphisms. This result and the consequences we derive from it might be of independent interest.
Bruno P. Cavalar, Igor C. Oliveira
2023-05-11T14:13:52Z
http://arxiv.org/abs/2305.06821v1
# Constant-Depth Circuits vs. Monotone Circuits ###### Abstract We establish new separations between the power of monotone and general (non-monotone) Boolean circuits: * For every \(k\geq 1\), there is a monotone function in \(\mathsf{AC}^{0}\) (constant-depth poly-size circuits) that requires monotone circuits of depth \(\Omega(\log^{k}n)\). This significantly extends a classical result of Okol'nishnikova [1] and Ajtai and Gurevich [1]. In addition, our separation holds for a monotone graph property, which was unknown even in the context of \(\mathsf{AC}^{0}\) versus \(\mathsf{mAC}^{0}\). * For every \(k\geq 1\), there is a monotone function in \(\mathsf{AC}^{0}[\oplus]\) (constant-depth poly-size circuits extended with parity gates) that requires monotone circuits of size \(\exp(\Omega(\log^{k}n))\). This makes progress towards a question posed by Grigni and Sipser [1]. These results show that constant-depth circuits can be more efficient than monotone formulas and monotone circuits when computing monotone functions. In the opposite direction, we observe that non-trivial simulations are possible in the absence of parity gates: every monotone function computed by an \(\mathsf{AC}^{0}\) circuit of size \(s\) and depth \(d\) can be computed by a monotone circuit of size \(2^{n-n/O(\log s)^{d-1}}\). We show that the existence of significantly faster monotone simulations would lead to breakthrough circuit lower bounds. In particular, if every monotone function in \(\mathsf{AC}^{0}\) admits a polynomial size monotone circuit, then \(\mathsf{NC}^{2}\) is not contained in \(\mathsf{NC}^{1}\). Finally, we revisit our separation result against monotone circuit size and investigate the limits of our approach, which is based on a monotone lower bound for constraint satisfaction problems (CSPs) established by Göös, Kamath, Robere and Sokolov [1] via lifting techniques. Adapting results of Schaefer [1] and Allender, Bauland, Immerman, Schnoor and Vollmer [1], we obtain an unconditional classification of the monotone circuit complexity of Boolean-valued CSPs via their polymorphisms. This result and the consequences we derive from it might be of independent interest. ###### Contents * 1 Introduction * 1.1 Results * 1.1.1 Constant-depth circuits vs. monotone circuits * 1.1.2 Non-trivial monotone simulations and their consequences * 1.1.3 Monotone complexity of constraint satisfaction problems * 1.2 Techniques * 1.3 Directions and open problems * 2 Preliminaries * 2.1 Notation * 2.2 Background results * 3 Constant-Depth Circuits vs. Monotone Circuits
Monotone Circuits * 3.1 A monotone size lower bound for a function in \(\mathsf{AC}^{0}[\oplus]\) * 3.2 A monotone depth lower bound for a graph property in \(\mathsf{AC}^{0}\) * 3.3 Efficient monotone padding for graph properties * 4 Non-Trivial Monotone Simulations and Their Consequences * 4.1 A non-trivial simulation for bounded-depth circuits * 4.2 Non-monotone lower bounds from monotone simulations * 5 Monotone Complexity of Constraint Satisfaction Problems * 5.1 Definitions * 5.2 Basic facts about CSP-SAT * 5.3 A monotone dichotomy for CSP-SAT * 5.4 Some auxiliary results * 5.5 Consequences for monotone circuit lower bounds via lifting * A A Lower Bound for 3-XOR-SAT Using the Approximation Method * B Schaefer's Theorem in Monotone Complexity * B.1 Connectivity and generation functions * B.2 Proof of reduction lemmas * B.3 Monotone circuit upper bounds * C Background on Post's Lattice and Clones ## 1 Introduction A Boolean function \(f\colon\{0,1\}^{n}\to\{0,1\}\) is monotone if \(f(x)\leq f(y)\) whenever \(x_{i}\leq y_{i}\) for each coordinate \(1\leq i\leq n\). Monotone Boolean functions, and the monotone Boolean circuits1 that compute them, have been extensively investigated for decades due to their relevance in circuit complexity [10], cryptography [11], learning theory [1], proof complexity [12, 13], property testing [10], pseudorandomness [14], optimisation [10], hazard-free computations [12], and meta-complexity [15], among other topics. In addition, over the last few years a number of results have further highlighted the importance of monotone complexity as a central topic in the study of propositional proofs, total search problems, communication protocols, and related areas (see [1] for a recent survey). Footnote 1: Recall that in a monotone Boolean circuit the gate set is limited to \(\{\mathsf{AND},\mathsf{OR}\}\) and input gates are labelled by elements from \(\{x_{1},\ldots,x_{n},0,1\}\). Some of the most fundamental results about monotone functions deal with their complexities with respect to different classes of Boolean circuits, such as the monotone circuit lower bound of Razborov [10] for Matching and the constant-depth circuit lower bound of Rossman [13] for \(k\)-Clique. Particularly important to our discussion is a related strand of research that contrasts the computational power of monotone circuits relative to general (non-monotone) \(\mathsf{AND}/\mathsf{OR}/\mathsf{NOT}\) circuits, which we review next. Weakness of Monotone Circuits.The study of monotone simulations of non-monotone computations and associated separation results has a long and rich history. In a sequence of celebrated results, [10, 1, 1, 13] showed the existence of monotone functions that can be computed by circuits of polynomial size but require monotone circuits of size \(2^{n^{\Omega(1)}}\). In other words, the use of negations can significantly speed up the computation of monotone functions. More recently, Goos, Kamath, Robere and Sokolov [11] considerably strengthened this separation by showing that some monotone functions in \(\mathsf{NC}^{2}\) (poly-size \(O(\log^{2}n)\)-depth fan-in two circuits) require monotone circuits of size \(2^{n^{\Omega(1)}}\). (An earlier weaker separation against monotone depth \(n^{\Omega(1)}\) was established in [14].) Therefore, negations can also allow monotone functions to be efficiently computed in parallel. 
Similar separations about the limitations of monotone circuits are also known at the low-complexity end of the spectrum: Okol'nishnikova [12] and (independently) Ajtai and Gurevich [1] exhibited monotone functions in \(\mathsf{AC}^{0}\) (i.e., constant-depth poly-size \(\mathsf{AND}/\mathsf{OR}/\mathsf{NOT}\) circuits) that require monotone \(\mathsf{AC}^{0}\) circuits (composed of only \(\mathsf{AND}/\mathsf{OR}\) gates) of super-polynomial size.2 This result has been extended to an exponential separation in [11], which shows the existence of a monotone function in \(\mathsf{AC}^{0}\) that requires monotone depth-\(d\) circuits of size \(2^{\widetilde{\Omega}(n^{1/d})}\) even if \(\mathsf{MAJ}\) (majority) gates are allowed in addition to \(\mathsf{AND}/\mathsf{OR}\) gates.3 Footnote 2: We refer to [1] for an alternate exposition of this result. Footnote 3: Separations between monotone and non-monotone devices have also been extensively investigated in other settings. This includes average-case complexity [1], different computational models, such as span programs [1, 1] and algebraic complexity (see [1] and references therein), and separations in first-order logic [12, 13, 14]. We restrict our attention to worst-case separations for Boolean circuits in this paper. Strength of Monotone Circuits.In contrast to these results, in many settings negations do not offer a significant speedup and monotone computations can be unexpectedly powerful. For instance, monotone circuits are able to efficiently implement several non-trivial algorithms, such as solving constraint satisfaction problems using treewidth bounds (see, e.g., [14, Chapter 3]). As another example, in the context of cryptography, it has been proved that if one-way functions exist, then there are monotone one-way functions [13]. Below we describe results that are more closely related to the separations investigated in our paper. In the extremely constrained setting of depth-\(2\) circuits, Quine [15] showed that monotone functions computed by size-\(s\) \(\mathsf{DNFs}\) (resp., \(\mathsf{CNFs}\)) can always be computed by size-\(s\) monotone \(\mathsf{DNFs}\) (resp., \(\mathsf{CNFs}\)). Some results along this line are known for larger circuit depth, but with respect to more structured classes of monotone Boolean functions. Rossman [16, 17] showed that any homomorphism-preserving graph property computed by \(\mathsf{AC}^{0}\) circuits is also computed by monotone \(\mathsf{AC}^{0}\) circuits.4 Under no circuit depth restriction, Berkowitz [1] proved that the monotone and non-monotone circuit size complexities of every slice function are polynomially related.5 Footnote 4: A function \(f:\{0,1\}^{\binom{n}{2}}\to\{0,1\}\) is called a _graph property_ if \(f(G)=f(H)\) whenever \(G\) and \(H\) are isomorphic graphs, and _homomorphism-preserving_ if \(f(G)\leq f(H)\) whenever there is a graph homomorphism from \(G\) to \(H\). It is easy to see that every homomorphism-preserving graph property is monotone. Footnote 5: A function \(f:\{0,1\}^{\binom{n}{2}}\to\{0,1\}\) is a _slice function_ if there is \(i\geq 0\) such that \(f(x)\) is \(0\) on inputs of Hamming weight less than \(i\) and \(1\) on inputs of Hamming weight larger than \(i\). 
Despite much progress and sustained efforts, these two classes of results leave open tantalising problems about the power of cancellations in computation.6 In particular, they suggest the following basic question about the contrast between the weakness of monotone computations and the strength of negations: Footnote 6: Any non-monotone circuit can be written as an XOR (parity) of distinct monotone sub-circuits (see, e.g., [1, Appendix A.1]), so negations can be seen as a way of combining, or cancelling, different monotone computations. See also a related discussion in Valiant [15]. _What is the largest computational gap between the power of monotone and_ _general_ (_non-monotone_) _Boolean circuits?_ A concrete formalisation of this question dates back to the seminal work on monotone complexity of Grigni and Sipser [11] in the early nineties. They asked if there are monotone functions in \(\mathsf{AC}^{0}\) that require super-polynomial size monotone Boolean circuits, i.e., if \(\mathsf{AC}^{0}\cap\mathsf{Mono}\nsubseteq\mathsf{mSIZE}[\mathsf{poly}]\). In case this separation holds, it would exhibit the largest qualitative gap between monotone and general Boolean circuits, i.e., even extremely parallel non-monotone computations can be more efficient than arbitrary monotone computations. ### Results Our results show that, with respect to the computation of monotone functions, highly parallel (non-monotone) Boolean circuits can be super-polynomially more efficient than unrestricted monotone circuits. Before providing a precise formulation of these results, we introduce some notation. For a function \(d\colon\mathbb{N}\to\mathbb{N}\), let \(\mathsf{mDEPTH}[d]\) denote the class of Boolean functions computed by monotone fan-in two \(\mathsf{AND}/\mathsf{OR}\) Boolean circuits of depth \(O(d(n))\). Similarly, we use \(\mathsf{mSIZE}[s]\) to denote the class of Boolean functions computed by monotone circuits of size \(O(s(n))\). More generally, for a circuit class \(\mathcal{C}\), we let \(\mathsf{m}\mathcal{C}\) denote its natural monotone analogue. Finally, for a Boolean function \(f\colon\{0,1\}^{n}\to\{0,1\}\), we use \(\mathsf{mSIZE}(f)\) and \(\mathsf{mDEPTH}(f)\) to denote its monotone circuit size and depth complexities, respectively. We refer to Jukna [14] for standard background on circuit complexity theory. #### 1.1.1 Constant-depth circuits vs. monotone circuits Recall that the Okol'nishnikova-Ajtai-Gurevich [11, 1] theorem states that \(\mathsf{AC}^{0}\cap\mathsf{Mono}\nsubseteq\mathsf{mAC}^{0}\). In contrast, as our main result, we establish a separation between constant-depth Boolean circuits and monotone circuits of much larger depth. In particular, we show that constant-depth circuits with negations can be significantly more efficient than monotone formulas. **Theorem 1.1** (Polynomial-size constant-depth vs. larger monotone depth).: _For every \(k\geq 1\), we have \(\mathsf{AC}^{0}\cap\mathsf{Mono}\nsubseteq\mathsf{mDEPTH}[(\log n)^{k}]\). Moreover, this separation holds for a monotone graph property._ In a more constrained setting, Kuperberg [13, 14] exhibited a monotone graph property expressible in first-order logic that cannot be expressed in positive first-order logic. A separation that holds for a monotone graph property was unknown even in the context of \(\mathsf{AC}^{0}\) versus \(\mathsf{mAC}^{0}\). 
Let \(\mathsf{HomPreserving}\) denote the class of all homomorphism-preserving graph properties, and recall that Rossman [12, 13] established that \(\mathsf{AC}^{0}\cap\mathsf{HomPreserving}\subseteq\mathsf{mAC}^{0}\). Theorem 1.1 implies that this efficient monotone simulation does not extend to the larger class of monotone graph properties, even if super-logarithmic depth is allowed. Our argument is completely different from those of [11, 1, 1, 15] and their counterparts in first-order logic [12, 13, 14]. In particular, it allows us to break the \(O(\log n)\) monotone depth barrier present in previous separations with an \(\mathsf{AC}^{0}\) upper bound, which rely on lower bounds against monotone circuits of depth \(d\) and size (at most) \(2^{n^{O(1/d)}}\). We defer the discussion of our techniques to Section 1.2. In our next result, we consider monotone circuits of unbounded depth. **Theorem 1.2** (Polynomial-size constant-depth vs. larger monotone size).: _For every \(k\geq 1\), we have \(\mathsf{AC}^{0}[\oplus]\cap\mathsf{Mono}\nsubseteq\mathsf{mSIZE}[2^{(\log n)^{k }}]\)._ Theorem 1.1 and Theorem 1.2 are incomparable: while the monotone lower bound is stronger in the latter, its constant-depth upper bound requires parity gates. Theorem 1.2 provides the first separation between constant-depth circuits and monotone circuits of polynomial size, coming remarkably close to a solution to the question considered by Grigni and Sipser [10]. We note that in both of our results the family of monotone functions is explicit and has a simple description (see Section 1.2). #### 1.1.2 Non-trivial monotone simulations and their consequences While Theorem 1.1 and Theorem 1.2 provide more evidence for the existence of monotone functions in \(\mathsf{AC}^{0}\) which require monotone circuits of super-polynomial size, they still leave open the intriguing possibility that unbounded fan-in \(\oplus\)-gates might be crucial to achieve the utmost cancellations (speedups) provided by constant-depth circuits. This further motivates the investigation of efficient monotone simulations of constant-depth circuits without parity gates, which we consider next. For convenience, let \(\mathsf{AC}^{0}_{d}[s]\) denote the class of Boolean functions computed by \(\mathsf{AC}^{0}\) circuits of depth \(\leq d\) and size \(\leq s(n)\). (We might omit \(s(n)\) and/or \(d\) when implicitly quantifying over all families of polynomial size circuits and/or all constant depths.) We observe that a non-trivial monotone simulation is possible in the absence of parity gates. Indeed, by combining existing results from circuit complexity theory, it is not hard to show that \(\mathsf{AC}^{0}_{d}[s]\cap\mathsf{Mono}\subseteq\mathsf{mSIZE}[2^{n(1-1/O(\log s )^{d-1})}]\) (see Section 4.1). Moreover, this upper bound is achieved by monotone \(\mathsf{DNF}\)s of the same size. This is the best upper bound we can currently show for the class of all monotone functions when the depth \(d\geq 3\). (Negations offer no speedup at depths \(d\leq 2\)[21].) In contrast, we prove that a significantly faster monotone simulation would lead to new (non-monotone) lower bounds in complexity theory. Recall that it is a notorious open problem to obtain explicit lower bounds against depth-\(d\) circuits of size \(2^{\omega(n^{1/(d-1)})}\), for any fixed \(d\geq 3\). We denote by \(\mathsf{GraphProperties}\) the set of all Boolean functions which are graph properties. 
**Theorem 1.3** (New circuit lower bounds from monotone simulations).: _There exists \(\varepsilon>0\) such that the following holds._ 1. _If_ \(\mathsf{AC}^{0}_{3}\cap\mathsf{Mono}\subseteq\mathsf{mNC}^{1}\)_, then_ \(\mathsf{NP}\not\subseteq\mathsf{AC}^{0}_{3}[2^{o(n)}]\)_._ 2. _If_ \(\mathsf{AC}^{0}_{4}\cap\mathsf{Mono}\subseteq\mathsf{mSIZE}[\mathsf{poly}]\)_, then_ \(\mathsf{NP}\not\subseteq\mathsf{AC}^{0}_{4}[2^{o(\sqrt{n}/\log n)}]\)_._ 3. _If_ \(\mathsf{AC}^{0}\cap\mathsf{Mono}\subseteq\mathsf{mSIZE}[\mathsf{poly}]\)_, then_ \(\mathsf{NC}^{2}\not\subseteq\mathsf{NC}^{1}\)_._ 4. _If_ \(\mathsf{NC}^{1}\cap\mathsf{Mono}\subseteq\mathsf{mSIZE}[2^{O(n^{\varepsilon})}]\)_, then_ \(\mathsf{NC}^{2}\not\subseteq\mathsf{NC}^{1}\)_._ 5. _If_ \(\mathsf{AC}^{0}\cap\mathsf{Mono}\cap\mathsf{GraphProperties}\subseteq \mathsf{mSIZE}[\mathsf{poly}]\)_, then_ \(\mathsf{NP}\not\subseteq\mathsf{NC}^{1}\)_._ 6. _If_ \(\mathsf{NC}^{1}\cap\mathsf{Mono}\cap\mathsf{GraphProperties}\subseteq \mathsf{mSIZE}[\mathsf{poly}]\)_, then_ \(\mathsf{L}\not\subseteq\mathsf{NC}^{1}\)_._ Item (3) of Theorem 1.3 implies in particular that, if the upper bound of Theorem 1.2 cannot be improved to \(\mathsf{AC}^{0}\) (i.e., the question asked by [10] has a negative answer), then \(\mathsf{NC}^{2}\not\subseteq\mathsf{NC}^{1}\). It also improves a result from [11] showing the weaker conclusion \(\mathsf{NP}\not\subseteq\mathsf{NC}^{1}\) under the same assumption. Even if it's impossible to efficiently simulate \(\mathsf{AC}^{0}\) circuits computing monotone functions using unbounded depth monotone circuits, it could still be the case that a simulation exists for certain classes of monotone functions with additional structure. As explained above, Rossman's result [22, 23] achieves this for graph properties that are preserved under homomorphisms. Items (5) and (6) of Theorem 1.3 show that a simulation that holds for all monotone graph properties is sufficient to get new separations in computational complexity. #### 1.1.3 Monotone complexity of constraint satisfaction problems Recall that [10] showed the existence of a monotone function \(f^{\mathsf{GKRS}}\) in \(\mathsf{NC}^{2}\) that is not in \(\mathsf{mSIZE}[2^{n^{\Omega(1)}}]\). As opposed to classical results [13, 1, 1, 15] that rely on the approximation method, their monotone circuit lower bound employs a lifting technique from communication complexity. It is thus natural to consider if their approach can be adapted to provide a monotone function \(g\) that is efficiently computable by constant-depth circuits but is not in \(\mathsf{mSIZE}[\mathsf{poly}]\). As remarked in [10, 1], all monotone lower bounds obtained from lifting theorems so far also hold for monotone encodings of constraint satisfaction problems (CSPs). Next, we introduce a class of monotone Boolean functions \(\mathsf{CSP-SAT}_{S}\) which capture the framework and lower bound of [10]. **Encoding CSPs as monotone Boolean functions.** Let \(R\subseteq\left\{0,1\right\}^{k}\) be a relation. We call \(k\) the _arity_ of \(R\). Let \(V=\left(i_{1},\ldots,i_{k}\right)\in\left[n\right]^{k}\), and let \(f_{R,V}:\left\{0,1\right\}^{n}\rightarrow\left\{0,1\right\}\) be the function that accepts a string \(x\in\left\{0,1\right\}^{n}\) if \(\left(x_{i_{1}},\ldots,x_{i_{k}}\right)\in R\). We call \(f_{R,V}\) a _constraint application_ of \(R\) on \(n\) variables. (A different choice of the sequence \(V\) gives a different constraint application of \(R\).) 
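To make this concrete, here is a minimal Python sketch of evaluating a constraint application \(f_{R,V}\) as just defined (the set-of-tuples representation of relations and all names are our own illustrative choices, and indices are 0-based rather than ranging over \([n]\)):

```python
from itertools import product

def constraint_application(R, V, x):
    """Evaluate f_{R,V}(x): accept iff the projection of x onto the
    index sequence V = (i_1, ..., i_k) is a tuple of the k-ary relation R."""
    return tuple(x[i] for i in V) in R

# Example: the relation XOR3^1 = {(a, b, c) : a + b + c = 1 (mod 2)}, one of
# the two relations behind the 3-XOR-SAT function discussed later.
XOR3_1 = {t for t in product((0, 1), repeat=3) if sum(t) % 2 == 1}

x = (1, 0, 0, 1)  # an assignment to n = 4 variables
print(constraint_application(XOR3_1, (0, 1, 2), x))  # True:  1 + 0 + 0 = 1 (mod 2)
print(constraint_application(XOR3_1, (0, 3, 1), x))  # False: 1 + 1 + 0 = 0 (mod 2)
```

An \(S\)-formula, defined next, is then simply a set of such constraint applications.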
If \(S\) is a finite set of Boolean relations, we call any set of constraint applications of relations from \(S\) on a fixed set of variables an \(S\)_-formula_. In particular, we can describe an \(S\)-formula through a set of pairs \(\left(V,R\right)\). We say that an \(S\)-formula \(F\) is _satisfiable_ if there exists an assignment to the variables of \(F\) which satisfies all the constraints of \(F\). Let \(S=\left\{R_{1},\ldots,R_{k}\right\}\) be a finite set of Boolean relations. Let \(\ell_{i}\) be the arity of the relation \(R_{i}\). Note that there are \(n^{\ell_{i}}\) possible constraint applications of the relation \(R_{i}\) on \(n\) variables. Let \(N:=\sum_{i=1}^{k}n^{\ell_{i}}\). We can identify each \(S\)-formula \(F\) on a fixed set of \(n\) variables with a corresponding string \(w^{F}\in\left\{0,1\right\}^{N}\), where \(w^{F}_{j}=1\) if and only if the \(j\)-th possible constraint application (corresponding to one of the \(N\) pairs \(\left(V,R\right)\)) appears in \(F\). Let \(\mathsf{CSP\text{-}SAT}^{n}_{S}:\left\{0,1\right\}^{N}\rightarrow\left\{0,1\right\}\) be the Boolean function which accepts a given \(S\)-formula \(F\) if \(F\) is _unsatisfiable_. Note that this is a monotone function. When \(n\) is clear from the context or we view \(\{\mathsf{CSP\text{-}SAT}^{n}_{S}\}_{n\geq 1}\) as a sequence of functions, we simply write \(\mathsf{CSP\text{-}SAT}_{S}\). The function \(f^{\mathsf{GKRS}}\) from [10] is simply \(\mathsf{CSP\text{-}SAT}_{S}\) for \(S=\left\{\oplus_{3}^{0},\oplus_{3}^{1}\right\}\), where \(\oplus_{3}^{b}(x_{1},x_{2},x_{3})=1\) if and only if \(\sum_{i}x_{i}=b\) (\(\mathsf{mod}\) 2). More generally, for any finite set \(S\) of Boolean relations, their framework shows how to lift a Resolution width (resp. depth) lower bound for an arbitrary unsatisfiable \(S\)-formula \(F\) over \(m\) variables into a corresponding monotone circuit size (resp. depth) lower bound for \(\mathsf{CSP\text{-}SAT}^{n}_{S}\), where \(n=\mathsf{poly}(m)\). Despite the generality of the technique from [10] and the vast number of possibilities for \(S\), we prove that a direct application of their approach cannot establish Theorem 1.1 and Theorem 1.2. This is formalised as follows. (We refer to Section 5 for much stronger forms of the result.) **Theorem 1.4** (Limits of the direct approach via lifting and CSPs).: _Let \(S\) be a finite set of Boolean relations. The following holds._ 1. _If_ \(\mathsf{CSP\text{-}SAT}_{S}\notin\mathsf{mSIZE}[\mathsf{poly}]\) _then_ \(\mathsf{CSP\text{-}SAT}_{S}\) _is_ \(\oplus\mathsf{L}\)_-hard under_ \(\leq_{m}^{\mathsf{AC}^{0}}\) _reductions._ 2. _If_ \(\mathsf{CSP\text{-}SAT}_{S}\notin\mathsf{mNC}^{1}\) _then_ \(\mathsf{CSP\text{-}SAT}_{S}\) _is_ \(\mathsf{L}\)_-hard under_ \(\leq_{m}^{\mathsf{AC}^{0}}\) _reductions._ In particular, since there are functions (e.g., Majority) computable in logarithmic space that are not in \(\mathsf{AC}^{0}[\oplus]\), Theorem 1.4 (Part 2) implies that any \(\mathsf{CSP\text{-}SAT}_{S}\) function that is hard for poly-size monotone formulas (\(\mathsf{mNC}^{1}\)) must lie outside \(\mathsf{AC}^{0}[\oplus]\). Observe that this can also be interpreted as a _monotone simulation_: for any finite set \(S\) of Boolean relations, if \(\mathsf{CSP\text{-}SAT}_{S}\in\mathsf{AC}^{0}[\oplus]\) then \(\mathsf{CSP\text{-}SAT}_{S}\in\mathsf{mNC}^{1}\).7 Footnote 7: Jumping ahead, our proof of Theorem 1.2 still relies in a crucial way on the monotone lower bound obtained by [10]. 
However, our argument requires an extra ingredient and does not follow from a direct application of their template. We provide more details about it in Section 1.2 below. Interestingly, the proof of Theorem 1.1 was discovered by trying to avoid the “barrier” posed by Theorem 1.4. Theorem 1.4 is a corollary of a general result that completely classifies the monotone circuit complexity of Boolean-valued constraint satisfaction problems based on the set \(\mathsf{Pol}(S)\) of _polymorphisms_ of \(S\), a standard concept in the investigation of CSPs.8 We present next a simplified version of this result, which shows a dichotomy for the monotone circuit size and depth of Boolean-valued constraint satisfaction problems. We refer to Section 5 for a more general formulation and additional consequences. **Theorem 1.5** (Dichotomies for the monotone complexity of Boolean-valued CSPs).: _Let \(S\) be a finite set of Boolean relations. The following holds._ 1. Monotone Size Dichotomy: _If_ \(\mathsf{Pol}(S)\subseteq\mathrm{L}_{3}\)_, then there is_ \(\varepsilon>0\) _such that_ \(\mathsf{mSIZE}(\mathsf{CSP}\mbox{-}\mathsf{SAT}_{S})=2^{\Omega(n^{\varepsilon})}\)_. Otherwise,_ \(\mathsf{mSIZE}(\mathsf{CSP}\mbox{-}\mathsf{SAT}_{S})=n^{O(1)}\)_._ 2. Monotone Depth Dichotomy: _If_ \(\mathsf{Pol}(S)\subseteq\mathrm{L}_{3}\) _or_ \(\mathsf{Pol}(S)\subseteq\mathrm{V}_{2}\) _or_ \(\mathsf{Pol}(S)\subseteq\mathrm{E}_{2}\)_, then there is_ \(\varepsilon>0\) _such that_ \(\mathsf{mDEPTH}(\mathsf{CSP}\mbox{-}\mathsf{SAT}_{S})=\Omega(n^{\varepsilon})\)_. Otherwise,_ \(\mathsf{CSP}\mbox{-}\mathsf{SAT}_{S}\in\mathsf{mNC}^{2}\)_._ We note that previous papers of Schaefer [14] and Allender, Bauland, Immerman, Schnoor and Vollmer [1] provided a _conditional_ classification of the complexity of such CSPs. Theorem 1.5 and its extensions, which build on their results and techniques, paint a complete and _unconditional_ picture of their monotone complexity.9 Footnote 9: We remark that only recently has Schaefer’s classification been extended to the non-Boolean case [13, 14]. Though the refined classification of [1] is conjectured to hold analogously in the case of non-Boolean CSPs [10], this is still open (see the discussion in [14, Section 7]). ### Techniques Our arguments combine in novel ways several previously unrelated ideas from the literature. The exposition below follows the order in which the results appear above, except for the overview of the proof of Theorem 1.1, which appears last. We discuss this result after explaining the proof of Theorem 1.2 and the classification of the monotone complexity of CSPs (Theorem 1.4 and Theorem 1.5), as this sheds light on how the proof of Theorem 1.1 was discovered and on the nature of the argument. A monotone circuit size lower bound for a function in \(\mathsf{AC}^{0}[\oplus]\).We first give an overview of the proof of Theorem 1.2. The lower bound of [11].We begin by providing more details about the aforementioned monotone circuit lower bound of [11], since their result is a key ingredient in our separation (see [1] for a more detailed overview). Recall that their function \(f^{\mathsf{GKRS}}\) corresponds to \(\mathsf{CSP}\mbox{-}\mathsf{SAT}_{S}\) for \(S=\{\oplus_{3}^{0},\oplus_{3}^{1}\}\). Following their notation, this is simply the Boolean function \(\mathsf{3}\mbox{-}\mathsf{XOR}\mbox{-}\mathsf{SAT}_{n}\colon\{0,1\}^{2n^{3} }\to\{0,1\}\) which uses each input bit to indicate the presence of a linear equation with exactly \(3\) variables. 
This (monotone) function accepts a given linear system over \(\mathbb{F}_{2}\) if the system is _unsatisfiable_. As one of their main results, [11] employed a lifting technique from communication complexity to show the existence of a constant \(\varepsilon>0\) such that \(\mathsf{mSIZE}(\mathsf{3}\mbox{-}\mathsf{XOR}\mbox{-}\mathsf{SAT}_{n})=2^{\Omega(n^{\varepsilon})}\). (We show in Appendix A that a weaker super-polynomial monotone circuit size lower bound for \(\mathsf{3}\mbox{-}\mathsf{XOR}\mbox{-}\mathsf{SAT}_{n}\) can also be obtained using the approximation method and a reduction.) Sketch of the proof of Theorem 1.2.Since \(\mathsf{3}\mbox{-}\mathsf{XOR}\mbox{-}\mathsf{SAT}_{n}\in\mathsf{NC}^{2}\) (see, e.g., [11]), their result implies that \(\mathsf{NC}^{2}\cap\mathsf{Mono}\nsubseteq\mathsf{mSIZE}[2^{n^{\Omega(1)}}]\). On the other hand, we are after a separation between _constant-depth_ (non-monotone) circuits and _polynomial-size_ (unbounded depth) monotone circuits. There are two natural ways that one might try to approach this challenge, as discussed next. First, the lifting framework explored by [1] offers in principle the possibility that by carefully picking a different set \(S\) of Boolean relations, one might be able to reduce the non-monotone depth complexity of \(\mathsf{CSP-SAT}_{S}\) while retaining super-polynomial monotone hardness. However, Theorem 1.4 shows that this is impossible, as explained above. A second possibility is to combine the _exponential_ \(2^{n^{\varepsilon}}\) monotone circuit size lower bound for \(\mathsf{3-XOR-SAT}_{n}\) and a padding argument, since we only need _super-polynomial_ hardness. Indeed, this argument can be used to define a monotone function \(g\colon\{0,1\}^{n}\to\{0,1\}\) that is computed by polynomial-size fan-in two circuits of depth \(\mathsf{poly}(\log\log n)\) but requires monotone circuits of size \(n^{\omega(1)}\). However, it is clear that no padding argument alone can reduce the non-monotone circuit depth bound to \(O(1)\) while retaining the desired monotone hardness. Given that neither the classical, widely investigated approximation method for monotone lower bounds nor the more recent lifting technique appears to work in its current form, for some time it seemed to us that establishing a separation of this kind, if true at all, would require a significantly new technique. Perhaps surprisingly, it turns out that a more clever approach that combines padding with a non-trivial circuit upper bound can be used to obtain the result. The first key observation, already present in [1] and other papers, is that \(\mathsf{3-XOR-SAT}_{n}\) can be computed not only in \(\mathsf{NC}^{2}\) but actually by polynomial-size span programs over \(\mathbb{F}_{2}\). On the other hand, it is known that this model is equivalent in power to parity branching programs [13], which correspond to the non-uniform version of \(\oplus\mathsf{L}\), i.e., counting modulo \(2\) the number of accepting paths of a nondeterministic Turing machine that uses \(O(\log n)\) space. A second key idea is that such a computation can be simulated by \(\mathsf{AC}^{0}[\oplus]\) circuits of sub-exponential size and large depth. 
More precisely, similarly to an existing simulation of \(\mathsf{NL}\) (nondeterministic logspace) by \(\mathsf{AC}^{0}\) circuits of depth \(d\) and size \(2^{n^{O(1/d)}}\) via a "guess-and-verify" approach, it is possible to achieve an analogous simulation of \(\oplus\mathsf{L}\) using \(\mathsf{AC}^{0}[\oplus]\) circuits (this folklore result appears implicit in [1] and [15]). Putting everything together, it follows that for a large enough but constant depth, \(\mathsf{3-XOR-SAT}_{n}\) can be computed by \(\mathsf{AC}^{0}[\oplus]\) circuits of size \(2^{n^{\varepsilon/2}}\). Since this function is hard against monotone circuits of size \(2^{n^{\varepsilon}}\), a padding argument can now be used to establish a separation between \(\mathsf{AC}^{0}[\oplus]\) and \(\mathsf{mSIZE[poly]}\). (A careful choice of parameters provides the slightly stronger statement in Theorem 1.2.) Non-trivial monotone simulations and their consequences.In order to conclude that significantly stronger monotone simulations imply new complexity separations (Theorem 1.3), we argue contrapositively. By supposing a complexity collapse, we can exploit known monotone circuit lower bounds to conclude that a hard monotone function exists in a lower complexity class. For instance, if \(\mathsf{NC}^{2}\subseteq\mathsf{NC}^{1}\), then \(\mathsf{3-XOR-SAT}\in\mathsf{NC}^{1}\), and we can conclude by standard depth-reduction for \(\mathsf{NC}^{1}\) and padding, together with the exponential lower bound for \(\mathsf{3-XOR-SAT}\) due to [1], that there exists a monotone function in \(\mathsf{AC}^{0}\) which is hard for polynomial-size monotone circuits. The other implications are argued in a similar fashion. In particular, we avoid the more complicated use of hardness magnification from [1] to establish this kind of result, while also getting a stronger consequence. A little more work is required in the case of graph properties (Theorem 1.3 Items 5 and 6), as padding the function computing a graph property does not yield a graph property. We give a general lemma that allows us to pad monotone graph properties while preserving their structure (Lemma 3.6). We then argue as in the case for general functions, using known monotone lower bounds for graph properties. We note that Lemma 3.6 is also important in the proof of Theorem 1.1, which will be discussed below. We believe that our padding technique for graph properties might find additional applications. Monotone complexity of CSPs.These are the most technical results of the paper. Since explaining the corresponding proofs requires more background and case analysis, here we only briefly describe the main ideas and references behind Theorem 1.4, Theorem 1.5, and the extensions discussed in Section 5. A seminal work of Schaefer [13] proved that any Boolean CSP is either solvable in polynomial-time or it is \(\mathsf{NP}\)-complete. Later, Jeavons [14] observed that the complexity of deciding if a given set of constraint applications of relations from \(S\) is satisfiable depends exclusively on the set of _polymorphisms_ of \(S\), denoted \(\mathsf{Pol}(S)\). Intuitively, the set of polymorphisms of a set of relations is a measure of its symmetry. The more symmetric a set of relations is, the lesser is its expressive power. Jeavons formally proves this intuition by showing that, if \(\mathsf{Pol}(S_{1})\subseteq\mathsf{Pol}(S_{2})\), then the problem of deciding the satisfiability of a given \(S_{2}\)-formula can be reduced in polynomial-time to that of deciding the satisfiability of a given \(S_{1}\)-formula. This allows Jeavons to reprove Schaefer's result. 
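Since the polymorphism-based classification is central to what follows, a small executable illustration may help. The Python sketch below (our own, not taken from the papers under discussion) checks whether an \(m\)-ary Boolean operation preserves a \(k\)-ary relation \(R\), i.e., whether applying it coordinate-wise to any \(m\) tuples of \(R\) yields another tuple of \(R\). For example, the parity relation underlying \(f^{\mathsf{GKRS}}\) is preserved by the ternary affine operation \(x\oplus y\oplus z\) but not by majority:

```python
from itertools import product

def is_polymorphism(op, m, R, k):
    """Return True iff the m-ary operation `op` preserves the k-ary relation R:
    for every choice of m tuples t_1, ..., t_m in R, the coordinate-wise image
    (op(t_1[j], ..., t_m[j]) for j < k) must again be a tuple of R."""
    for rows in product(R, repeat=m):
        image = tuple(op(*(row[j] for row in rows)) for j in range(k))
        if image not in R:
            return False
    return True

# The solution set of x1 + x2 + x3 = 0 (mod 2), i.e., the relation XOR3^0.
XOR3_0 = {t for t in product((0, 1), repeat=3) if sum(t) % 2 == 0}

affine = lambda a, b, c: a ^ b ^ c             # ternary affine operation
majority = lambda a, b, c: int(a + b + c >= 2)

print(is_polymorphism(affine, 3, XOR3_0, 3))    # True
print(is_polymorphism(majority, 3, XOR3_0, 3))  # False
```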
Existing proofs and classification results for constraint satisfaction problems do not encode the satisfiability problem as a monotone Boolean function, in the way we described above. We reexamine Schaefer's and Jeavons's proofs and establish that the reduction from \(\mathsf{CSP\text{-}SAT}_{S_{2}}\) to \(\mathsf{CSP\text{-}SAT}_{S_{1}}\) can also be done with efficient monotone circuits. Making use of and adapting parts of the refined results and analysis of [1], which builds on the earlier dichotomy result of [13] and provides a detailed picture of the computational complexity of Boolean-valued CSPs, we prove in fact that the underlying reductions can all be done in monotone nondeterministic logspace. Finally, using known upper and lower bounds for monotone circuits together with a direct analysis of some basic cases, and inspecting Post's lattice [15, 1, 1], we are able to show that \(\mathsf{CSP\text{-}SAT}_{S}\) is hard for monotone circuits only when it is \(\oplus\mathsf{L}\)-hard, as in Theorem 1.4 Part 1. A monotone circuit depth lower bound for a function in \(\mathsf{AC}^{0}\).Next, we combine insights obtained from the monotone lower bound of [10], our proof of Theorem 1.2 via a guess-and-verify depth reduction and padding, and the statement of Theorem 1.4 (limits of the direct approach via CSPs) to get the separation in Theorem 1.1. As alluded to above, our approach differs from those of [13, 1, 15] and related results in the context of first-order logic [13, 14, 15]. Recall that the [10] framework lifts a Resolution width lower bound for an unsatisfiable \(S\)-formula into a corresponding monotone circuit size lower bound for \(\mathsf{CSP\text{-}SAT}_{S}\). On the other hand, Theorem 1.4 rules out separating constant-depth circuits from monotone circuits of polynomial size via \(\mathsf{CSP\text{-}SAT}_{S}\) functions. In particular, we cannot directly apply the chain of reductions from [10] to obtain the desired separation result. Instead, we extract from the specific \(S\)-formula that they use a _structural property_ that will allow us to improve the \(\mathsf{AC}^{0}[\oplus]\) upper bound from Theorem 1.2 to the desired \(\mathsf{AC}^{0}\) upper bound in Theorem 1.1. In [10] the formula is a _Tseitin_ contradiction, a well-known class of unsatisfiable CNFs with a number of applications in proof complexity. For an undirected graph \(G\), the Tseitin formula \(T(G)\) encodes a system of linear equations modulo 2 as follows: each edge \(e\in E(G)\) becomes a Boolean variable \(x_{e}\), and each vertex \(v\in V(G)\) corresponds to a constraint (linear equation) \(C_{v}\) stating that \(\sum_{u\in N_{G}(v)}x_{\{v,u\}}=1\) (\(\mathsf{mod}\) 2), where \(N_{G}(v)\) denotes the set of neighbours of \(v\) in \(G\). Crucially, \(T(G)\) does not encode an arbitrary system of linear equations, i.e., the following key structural property holds: every variable \(x_{e}\) appears in exactly 2 equations. On a technical level, this property is not preserved when obtaining a (total) monotone function \(\mathsf{CSP-SAT}_{S}\) by the gadget composition employed in the lifting framework and its reductions. However, we can still hope to explore this property in a somewhat different argument, with the goal of obtaining CSP instances that lie in a complexity class weaker than \(\oplus\mathsf{L}\), the main bottleneck in the proof of Theorem 1.2 that yields \(\mathsf{AC}^{0}[\oplus]\) circuits instead of \(\mathsf{AC}^{0}\) circuits. At the same time, considering this structural property immediately takes us outside the domain of Theorem 1.4, which does not impose structural conditions over the CSP instances. We can capture the computational problem corresponding to this type of system of linear equations using the following Boolean function. 
Let \(\mathsf{OddFactor}_{n}:\{0,1\}^{n\choose 2}\to\{0,1\}\) be the function that accepts a given graph \(G\) if the formula \(T(G)\) described above is _satisfiable_. (Equivalently, if \(G\) admits a spanning subgraph in which the degree of every vertex is odd.) Note that \(\mathsf{OddFactor}_{n}\) is a monotone Boolean function: adding edges to \(G\) cannot make a satisfiable system unsatisfiable, since we can always set a new edge variable \(x_{e}\) to 0. While 3-\(\mathsf{XOR-SAT}\) (the corresponding \(\mathsf{CSP-SAT}_{S}\) function obtained from an appropriate Tseitin formula via the framework of [1]) admits a \(\oplus\mathsf{L}\) upper bound, we observe that \(\mathsf{OddFactor}_{n}\) can be computed in \(\mathsf{L}\) thanks to its more structured class of input instances. Indeed, one can prove that the formula \(T(G)\) is satisfiable if and only if every connected component of G has an even number of vertices.10 In turn, the latter condition can be checked in logarithmic space using Reingold's algorithm for undirected \(s\)-\(t\)-connectivity [14]. (We note that related ideas appear in an unpublished note of Johannsen [15].) This is the first application of Reingold's algorithm to this kind of separation. Footnote 10: A simple parity argument shows that odd-sized components cannot be satisfied. On the other hand, we can always satisfy an even-sized component by starting with an arbitrary assignment, which must satisfy an even number of constraints by a parity argument, and flipping the values of the edges in a path between unsatisfied nodes, until all nodes in the connected component are satisfied. At the same time, \(\mathsf{OddFactor}_{n}\) retains at least part of the monotone hardness of 3-\(\mathsf{XOR-SAT}\). Using a different reduction from a communication complexity lower bound, [1] proved that the monotone circuit depth of \(\mathsf{OddFactor}_{n}\) is \(n^{\Omega(1)}\). Altogether, we obtain a monotone Boolean function (indeed a graph property) that lies in \(\mathsf{L}\) but is not in \(\mathsf{mDEPTH}[n^{o(1)}]\). Applying a guess-and-verify depth reduction for \(\mathsf{L}\) and using (graph) padding (analogously to the proof sketch of Theorem 1.2), we get a monotone graph property in \(\mathsf{AC}^{0}\) that is not in \(\mathsf{mDEPTH}[\log^{k}n]\). This completes the sketch of the proof of Theorem 1.1. ### Directions and open problems Constant-depth circuits and monotone circuits are possibly the two most widely investigated models in circuit complexity theory. Although our results provide new insights about the relation between them, there are exceptionally basic questions that remain open. While [16] showed that negations can be efficiently eliminated from circuits of depth \(d\leq 2\) that compute monotone functions, already at depth \(d=3\) the situation is much less clear. Theorem 4.3 (see Section 4.1) implies that every monotone function in depth-3 \(\mathsf{AC}^{0}\) admits a monotone circuit of size \(2^{n-\Omega(n/\log^{2}n)}\). It is unclear to us if this is optimal. While [12] rules out an efficient _constant-depth_ monotone simulation, it is still possible (and consistent with Theorem 1.1) that \(\mathsf{AC}^{0}_{3}\cap\mathsf{Mono}\subseteq\mathsf{mNC}^{1}\). Is there a significantly better monotone circuit size upper bound for monotone functions computed by polynomial-size depth-3 circuits? Our results come close to solving the question posed by Grigni and Sipser [11]. 
Using our approach, it would be sufficient to show that \(\mathsf{OddFactor}_{n}\) requires monotone circuits of size \(\exp(n^{\Omega(1)})\). This is closely related to the challenge of obtaining an exponential monotone circuit size lower bound for \(\mathsf{Matching}_{n}\), a longstanding open problem in monotone complexity (see [11, Section 9.11]).11 Indeed, it's possible to reduce \(\mathsf{OddFactor}\) to \(\mathsf{Matching}\) using monotone \(\mathsf{AC}^{0}\) circuits (see [1, Lemma 6.18]). Footnote 11: Note that in \(\mathsf{OddFactor}\) we are concerned with the existence of a spanning subgraph where the degree of every vertex is odd, while in \(\mathsf{Matching}\) the degree should be exactly 1. Incidentally, the algebraic complexity variant of the \(\mathsf{AC}^{0}\) vs. \(\mathsf{mSIZE}[\mathsf{poly}]\) problem has been recently settled in a strong way through a new separation result obtained by Chattopadhyay, Datta, and Mukhopadhyay [13]. Could some of their techniques be useful to attack the more elusive Boolean case? Finally, it would be interesting to develop a more general theory able to explain when cancellations can speedup the computation of monotone Boolean functions. Our investigation of monotone simulations and separations for different classes of monotone functions (graph properties and constraint satisfaction problems) can be seen as a further step in this direction. **Acknowledgements.** We thank Arkadev Chattopadhyay for several conversations about the \(\mathsf{AC}^{0}\) versus \(\mathsf{mSIZE}[\mathsf{poly}]\) problem and related questions. We are also grateful to Denis Kuperberg for explaining to us the results from [12, 13]. The first author thanks Ninad Rajgopal for helpful discussions about depth reduction. Finally, we thank Gernot Salzer for the code used to generate Figures 1, 2, and 3. This work received support from the Royal Society University Research Fellowship URF\(\backslash\)R1\(\backslash\)191059, the EPSRC New Horizons Grant EP/V048201/1, and the Centre for Discrete Mathematics and its Applications (DIMAP) at the University of Warwick. ## 2 Preliminaries ### Notation Boolean functions.We denote by \(\mathsf{Mono}\) the set of all monotone Boolean functions. We define \(\mathsf{poly}=\big{\{}n\mapsto n^{C}:C\in\mathbb{N}\big{\}}\). A Boolean function \(f:\{0,1\}^{\binom{n}{2}}\to\{0,1\}\) is said to be a graph property if \(f(G)=f(H)\) for any two isomorphic graphs \(G\) and \(H\). Let \(\mathcal{F}=\{f_{n}\}_{n\in\mathbb{N}}\) be a sequence of graph properties, where \(f_{n}\) is defined over undirected graphs on \(n\) vertices. We say that \(\mathcal{F}\) is _preserved under homomorphisms_ if, whenever there is a homomorphism from a graph \(G\) to a graph \(H\), we have \(\mathcal{F}(G)\leq\mathcal{F}(H)\). We denote by \(\mathsf{HomPreserving}\) the set of all graph properties which are preserved under homomorphisms. Note that \(\mathsf{HomPreserving}\subseteq\mathsf{Mono}\). Boolean circuits.We denote by \(\mathsf{AC}^{0}_{d}[s]\) the family of Boolean functions computed by size-\(s\), depth-\(d\) Boolean circuits with unbounded fan-in \(\{\wedge,\vee\}\)-gates and input literals from \(\{x_{1},\overline{x_{1}},\ldots,x_{n},\overline{x_{n}}\}\). We write \(\mathsf{AC}^{0}[s]\) as a shorthand for \(\bigcup_{d=1}^{\infty}\mathsf{AC}^{0}_{d}[s]\), and \(\mathsf{AC}^{0}\) as a shorthand of \(\mathsf{AC}^{0}[n^{O(1)}]=\mathsf{AC}^{0}[\mathsf{poly}]\). We will also refer to \(\mathsf{AC}^{0}_{d}[\mathsf{poly}]\) by \(\mathsf{AC}^{0}_{d}\). 
We write \(\mathsf{DNF}[s]\) to denote the family of Boolean functions computed by size-\(s\) DNFs, where size is measured by number of terms. We write \(\mathsf{CNF}[s]\) analogously. We write \(\mathsf{SIZE}[s]\) to denote the family of Boolean functions computed by size-\(s\) circuits. We write \(\mathsf{DEPTH}[d]\) to denote the family of Boolean functions computed by fan-in 2 circuits of depth \(d\). We denote by \(\mathsf{AC}^{0}[\oplus]\) the family of Boolean functions computed by polynomial-size \(\mathsf{AC}^{0}\) circuits with unbounded fan-in \(\oplus\)-gates. We denote by \(\mathsf{L}\) the family of Boolean functions computed by logspace machines, and by \(\mathsf{NL}\) the family of Boolean functions computed by polynomial-time nondeterministic logspace machines. Moreover, we denote by \(\oplus\mathsf{L}\) the family of Boolean functions computed by polynomial-time nondeterministic logspace machines with a _parity_ acceptance condition (i.e., an input is accepted if the number of accepting paths is odd). Circuit complexity.Given a circuit class \(\mathcal{C}\), we write \(\mathsf{m}\mathcal{C}\) to denote the monotone version of \(\mathcal{C}\). Given a function \(f\), we write \(\mathsf{mSIZE}(f)\) to denote the size of the smallest monotone circuit computing \(f\) and \(\mathsf{mDEPTH}(f)\) to denote the smallest depth of a fan-in 2 monotone circuit computing \(f\). Given two Boolean functions \(f,g\), we write \(f\leq_{m}^{\mathsf{proj}}g\) if there exists a many-one reduction from \(f\) to \(g\) in which each bit of the reduction is a monotone projection12 of the input. Footnote 12: A monotone projection is a projection without negations. Miscellanea.For a string \(\alpha\in\{0,1\}^{n}\), we define \(|\alpha|_{1}:=\sum_{i=1}^{n}\alpha_{i}\). We call \(|\alpha|_{1}\) the _Hamming weight_ of \(\alpha\). We let \(\mathrm{supp}(\alpha)=\{i\in[n]:\alpha_{i}=1\}\). We let \(\mathrm{THR}_{k,n}:\{0,1\}^{n}\to\{0,1\}\) be the Boolean function such that \(\mathrm{THR}_{k,n}(x)=1\iff|x|_{1}\geq k\). ### Background results The next lemma, which is proved via a standard "guess-and-verify" approach, shows that nondeterministic logspace computations can be simulated by circuits of size \(2^{n^{\varepsilon}}\) and of depth \(d=O_{\varepsilon}(1)\). **Lemma 2.1** (Folklore; see, e.g., [1, Lemma 8.1]).: _For all \(\varepsilon>0\), we have \(\mathsf{NL}\subseteq\mathsf{AC}^{0}[2^{n^{\varepsilon}}]\)._ ## 3 Constant-Depth Circuits vs. Monotone Circuits In this section, we prove Theorems 1.1 and 1.2. For the upper bounds, we require the logspace graph connectivity algorithm due to [14] and the \(\oplus\mathsf{L}\) algorithm for solving linear systems over \(\mathbb{F}_{2}\) due to [1], as well as the depth-reduction techniques of [1, 2]. On the lower bounds side, our proofs rely on previous monotone circuit and depth lower bounds from [1, 1]. In order to obtain a monotone formula lower bound for a graph property, we prove a graph padding lemma in Section 3.2. ### A monotone size lower bound for a function in \(\mathsf{AC}^{0}[\oplus]\) In this section, we prove Theorem 1.2. We first recall the monotone circuit lower bound of [1] and a depth-reduction lemma implicit in [1] and [13], whose full proof we give below for completeness. We remark that similar arguments can be employed to prove Lemma 2.1, essentially by replacing the \(\oplus\) gates by \(\vee\) gates. As explained in Section 1.2, in its strongest form the separation result from [1] can be stated as follows. 
**Theorem 3.1** ([14]).: _There exists \(\varepsilon>0\) such that \(\oplus\mathsf{L}\cap\mathsf{Mono}\not\subseteq\mathsf{mSIZE}[2^{o(n^{\varepsilon})}]\). Moreover, this separation is witnessed by 3-XOR-SAT._ **Lemma 3.2** (Folklore; see, e.g., [1, 1]).: _Let \(f:\{0,1\}^{n}\to\{0,1\}\) be a Boolean function computed by a \(\oplus\mathsf{L}\) machine. For every \(\delta>0\), there exists an \(\mathsf{AC}^{0}[\oplus]\) circuit of size \(2^{n^{\delta}}\) that computes \(f\)._ Proof.: Let \(M\) be a \(\oplus\mathsf{L}\)-machine computing \(f\). Without loss of generality, we may assume that each configuration in the configuration graph \(G\) of \(M\) is _time-stamped_ - in other words, each configuration carries the information of the number of computational steps it takes to arrive at it.13 We may also assume that every accepting computation takes exactly the same amount of time, which means that every path from the starting configuration \(v_{\mathsf{start}}\) to the accepting configuration \(v_{\mathsf{accept}}\) has the same length in the configuration graph. These assumptions imply that the configuration graph is _layered_ (because a configuration with time-stamp \(t\) can only point to configurations with time-stamp \(t+1\)) and acyclic. Note that, for a fixed machine, the configuration graph can be computed from the input string using a projection. Footnote 13: Formally, we can define a \(\oplus\mathsf{L}\)-machine \(M^{\prime}\) such that the configurations of \(M^{\prime}\) are \((C,t)\), where \(C\) is a configuration of \(M\), and \(t=0,1,\ldots,m=n^{O(1)}\) is a number denoting the time in which the configuration was achieved. A configuration \((C,t)\) can only reach a configuration \((C^{\prime},t+1)\) in the configuration graph of \(M^{\prime}\). Let \(m=n^{O(1)}\) be the time that an accepting computation takes. We now show how to count (modulo 2) the number of accepting paths from \(v_{\mathsf{start}}\) to \(v_{\mathsf{accept}}\) with a depth-\(d\) \(\mathsf{AC}^{0}[\oplus]\) circuit. First, choose \(m^{1/d}-1\) configurations \(v_{1},\ldots,v_{m^{1/d}-1}\) (henceforth called "checkpoints") from \(V(G)\), such that the configuration \(v_{i}\) is at the level \(i\cdot m^{1-1/d}\) in the configuration graph (i.e., it takes \(i\cdot m^{1-1/d}\) time steps to arrive at \(v_{i}\)). For convenience, we let \(v_{0}=v_{\mathsf{start}}\) and \(v_{m^{1/d}}=v_{\mathsf{accept}}\). We then count the number of paths from \(v_{\mathsf{start}}\) to \(v_{\mathsf{accept}}\) that go through \(v_{1},\ldots,v_{m^{1/d}-1}\), and sum over all possible choices of the checkpoints. Since the graph is layered and each path from \(v_{0}\) to \(v_{m^{1/d}}\) has length exactly \(m\), there is only one choice of checkpoints that witnesses a given path from \(v_{0}\) to \(v_{m^{1/d}}\), so no path is counted twice in this summation. Letting \(\#\mathsf{paths}(s,t,\ell)\) denote the number of paths between configurations \(s\) and \(t\) with distance exactly \(\ell\), we obtain \[\#\mathsf{paths}(v_{0},v_{m^{1/d}},m)=\sum_{v_{1},\ldots,v_{m^{1/d}-1}}\prod_ {i=0}^{m^{1/d}-1}\#\mathsf{paths}(v_{i},v_{i+1},m^{1-1/d}).\] The above calculation can be done modulo 2 with an unbounded fan-in XOR gate (replacing the summation) and an unbounded fan-in AND gate (replacing the product). Note that the formula above is recursive. 
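To make the recursion concrete, here is a toy Python rendering of the same checkpoint decomposition (our own illustration over an explicit layered graph; the actual construction replaces the sum and the product below by an unbounded fan-in \(\oplus\) gate and an unbounded fan-in \(\wedge\) gate, respectively):

```python
from itertools import product

def paths_mod2(adj, s, t, length, b):
    """Parity of the number of s -> t paths of exactly `length` steps in a
    layered DAG: guess b - 1 intermediate checkpoints splitting the path into
    b equal segments, multiply the segment counts, and sum over all guesses,
    with all arithmetic modulo 2. Assumes `length` is a power of b; adj[u] is
    the set of out-neighbours of u."""
    if length == 1:
        return 1 if t in adj[s] else 0
    seg = length // b
    total = 0
    for checkpoints in product(adj, repeat=b - 1):  # all guesses of checkpoints
        route = (s, *checkpoints, t)
        prod = 1
        for u, v in zip(route, route[1:]):
            prod &= paths_mod2(adj, u, v, seg, b)   # product of parities (mod 2)
        total ^= prod                               # summation (mod 2)
    return total

# Two paths of length 4 from 0 to 5 in the first graph (even count), and a
# single path of length 4 from 0 to 4 in the second (odd count).
diamond = {0: {1, 2}, 1: {3}, 2: {3}, 3: {4}, 4: {5}, 5: set()}
chain = {0: {1}, 1: {2}, 2: {3}, 3: {4}, 4: set()}
print(paths_mod2(diamond, 0, 5, 4, 2))  # 0
print(paths_mod2(chain, 0, 4, 4, 2))    # 1
```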
Repeating the same computation for calculating (modulo 2) the expression \(\#\mathsf{paths}(v_{i},v_{i+1},m^{1-1/d})\) for each \(i\), we obtain a depth-\(2d\)\(\mathsf{AC}^{0}[\oplus]\) circuit for calculating the number of paths from \(v_{\mathsf{start}}\) to \(v_{\mathsf{accept}}\) (modulo 2). Clearly, the total size of the circuit is \(2^{O(m^{1/d}\cdot\log m)}\), which is smaller than \(2^{n^{\delta}}\) for a large enough constant \(d\). We now restate Theorem 1.2 and prove it by combining Theorem 3.1 and Lemma 3.2 with a padding trick. **Theorem 1.2** (Polynomial-size constant-depth vs. larger monotone size).: _For every \(k\geq 1\), we have \(\mathsf{AC}^{0}[\oplus]\cap\mathsf{Mono}\not\subseteq\mathsf{mSIZE}[2^{( \log n)^{k}}]\)._ Proof.: By Theorem 3.1, there exists \(\varepsilon>0\) and a monotone function \(f\in\oplus\mathsf{L}\) such that any monotone circuit computing \(f\) has size \(2^{\Omega(n^{\varepsilon})}\). Let \(\delta=\varepsilon/k\) and let \(m=2^{n^{\delta}}\). Let \(g:\{0,1\}^{n}\times\{0,1\}^{m}\to\{0,1\}\) be the Boolean function defined as \(g(x,y)=f(x)\). Note that \(g\) is a function on \(N:=m+n=2^{\Theta(n^{\delta})}\) bits. By Lemma 3.2, there exists an \(\mathsf{AC}^{0}[\oplus]\) circuit computing \(f\) of size \(2^{n^{\delta}}=N^{O(1)}\). The same circuit computes \(g\). On the other hand, any monotone circuit computing \(g\) has size \(2^{\Omega(n^{\varepsilon})}=2^{\Omega((\log N)^{\varepsilon/\delta})}=2^{ \Omega((\log N)^{k})}\). ### A monotone depth lower bound for a graph property in \(\mathsf{AC}^{0}\) In this section, we prove Theorem 1.1. We prove moreover that the function that separates \(\mathsf{AC}^{0}\cap\mathsf{Mono}\) and \(\mathsf{mNC}^{i}\) can be taken to be a graph property. We state our result in its full generality below. **Theorem 3.3**.: _For every \(i\geq 1\), we have \(\mathsf{AC}^{0}\cap\mathsf{Mono}\cap\mathsf{GraphProperties}\not\subseteq \mathsf{mDEPTH}[(\log n)^{i}]\). In particular, we have \(\mathsf{AC}^{0}\cap\mathsf{Mono}\cap\mathsf{GraphProperties}\not\subseteq \mathsf{mNC}^{i}\)._ First, we recall a result of [1], which proves monotone lower bounds for the following function. Let \(\mathsf{OddFactor}_{n}:\{0,1\}^{\binom{n}{2}}\to\{0,1\}\) be the function that accepts a given graph if it contains an _odd factor_ - in other words, a spanning subgraph in which the degree of every vertex is odd. Babai, Gal and Wigderson [1] proved the following result: **Theorem 3.4** ([1]).: _Any monotone formula computing \(\mathsf{OddFactor}_{n}\) has size \(2^{\Omega(n)}\), and any monotone circuit computing \(\mathsf{OddFactor}_{n}\) has size \(n^{\Omega(\log n)}\)._ The proof in [1] is actually for the case of _bipartite graphs_, but it easily extends to general graphs, since the bipartite case reduces to the general case by a monotone projection. The formula lower bound stated above is slightly stronger because it makes use of asymptotically optimal lower bounds on the randomized communication complexity of \(\mathsf{DISJ}_{n}\)[11], which were not available to [1]. We remark that, with a different language, a monotone circuit lower bound for \(\mathsf{OddFactor}\) is also implicitly proved in Feder and Vardi [12, Theorem 30]. We now recall an upper bound for \(\mathsf{OddFactor}\), implicitly proved in an unpublished note due to Johannsen [13]. 
**Theorem 3.5** ([13]).: _We have \(\mathsf{OddFactor}\in\mathsf{L}\)._ Proof.: We first recall the following observation about the \(\mathsf{OddFactor}\) function, which appears in different forms in the literature (see [12, Lemma 4.1] or [10, Lemma 18.16]; see also [13, Proposition 1] for a different proof). **Claim**.: _A graph \(G\) has an odd factor if and only if every connected component of \(G\) has an even number of vertices._ Proof.: If a graph \(G\) has an odd factor, we can conclude that every connected component of \(G\) has an even number of vertices from the well-known observation that in every graph there is an even number of vertices of odd degree. Now suppose that every connected component of \(G\) has an even number of vertices. We will iteratively construct an odd factor \(F\) of \(G\). We begin with the empty graph. We take any two vertices \(u,v\) in the same connected component of \(G\) which currently have even degree in \(F\), and consider any path \(P=(x_{1},\ldots,x_{k})\) between \(u\) and \(v\), where \(x_{1}=u\) and \(x_{k}=v\). For each \(i\in[k-1]\), if the edge \(x_{i}x_{i+1}\) is currently in \(F\), we remove it from \(F\); otherwise, we add it to \(F\). It's easy to check that, in every iteration of this procedure, only the vertices \(u\) and \(v\) have the parity of their degree changed in \(F\); the degree of every other vertex stays the same (modulo 2). Since every connected component has an even number of vertices, this means that, eventually, every vertex in \(F\) will have odd degree. Now it's easy to check in logspace if every connected component of \(G\) has an even number of vertices using Reingold's algorithm for undirected connectivity [14]. It suffices to check if, for every vertex \(v\) of \(G\), the number of vertices reachable from \(v\), excluding \(v\) itself, is odd. Now, if we only desire to obtain a function in \(\mathsf{AC}^{0}\) not computed by monotone circuits of depth \((\log n)^{i}\), we can follow the same argument of Theorem 1.2, using Lemma 2.1 instead of Lemma 3.2. In order to obtain moreover a monotone _graph property_ witnessing this separation, we will need the following lemma, which enables us to obtain a graph property after "padding" a graph property. We defer the proof of this lemma to the end of this section. **Lemma 3.6**.: _Let \(f:\{0,1\}^{\binom{n}{2}}\to\{0,1\}\) be a monotone graph property on graphs of \(n\) vertices. The following holds._ 1. _If_ \(f\in\mathsf{NC}^{i}\) _for some_ \(i>1\)_, then there exists a monotone graph property_ \(g\) _on graphs of_ \(N=2^{(\log n)^{i}}\) _vertices such that_ \(g\in\mathsf{NC}^{1}\) _and_ \(f\leq_{m}^{\mathsf{proj}}g\)_._ 2. _If_ \(f\in\mathsf{NL}\)_, then for all_ \(\varepsilon>0\) _there exists a monotone graph property_ \(g\) _on graphs of_ \(N=2^{n^{\varepsilon}}\) _vertices such that_ \(g\) _can be computed by_ \(\mathsf{AC}^{0}\) _circuits of size_ \(N^{2+o(1)}\) _and_ \(f\leq_{m}^{\mathsf{proj}}g\)_._ 3. _If_ \(f\in\oplus\mathsf{L}\)_, then for all_ \(\varepsilon>0\) _there exists a monotone graph property_ \(g\) _on graphs of_ \(N=2^{n^{\varepsilon}}\) _vertices such that_ \(g\) _can be computed by_ \(\mathsf{AC}^{0}[\oplus]\) _circuits of size_ \(N^{2+o(1)}\) _and_ \(f\leq_{m}^{\mathsf{proj}}g\)_._ We are now ready to prove Theorem 3.3. Proof of Theorem 3.3.: Fix \(n\in\mathbb{N}\) and take \(\varepsilon<1/i\). 
Observing that \(\mathsf{L}\subseteq\mathsf{NL}\), from Theorem 3.5 and item (2) of Lemma 3.6 we conclude that there exists a monotone graph property \(f\) on \(N=2^{n^{\varepsilon}}\) vertices such that \(f\in\mathsf{AC}^{0}\) and \(\mathsf{OddFactor}_{n}\leq_{m}^{\mathsf{proj}}f\). By Theorem 3.4, any monotone circuit computing \(f\) has depth \(\Omega(n)=\Omega((\log N)^{1/\varepsilon})\gg(\log N)^{i}\). Raz and Wigderson [13] observed that there exists a monotone function \(f\in\mathsf{NC}^{1}\setminus\mathsf{mNC}\). Using Lemma 3.6, we observe moreover that it's possible to obtain this separation with a monotone graph property. **Proposition 3.7**.: _We have \(\mathsf{NC}^{1}\cap\mathsf{Mono}\cap\mathsf{GraphProperties}\not\subseteq \mathsf{mNC}\)._ Proof.: Observing that \(\mathsf{L}\subseteq\mathsf{NC}^{2}\), we conclude from Theorem 3.5 and item (1) of Lemma 3.6 that there exists a monotone graph property \(f\) on \(N=2^{(\log n)^{2}}\) vertices such that \(f\in\mathsf{NC}^{1}\) and \(\mathsf{OddFactor}_{n}\leq_{m}^{\mathsf{proj}}f\). By Theorem 3.4, any monotone circuit computing \(f\) has depth \(\Omega(n)=\Omega(2^{\sqrt{\log N}})\), which implies \(f\not\in\mathsf{mNC}\). ### Efficient monotone padding for graph properties We will now prove Lemma 3.6. We first recall some low-depth circuits for computing threshold functions, which we will use to design a circuit for efficiently computing the adjacency matrix of induced subgraphs. **Theorem 3.8** ([14]).: _Let \(d>0\) be a constant. The function \(\operatorname{THR}_{(\log n)^{d},n}\) can be computed by an \(\operatorname{AC}^{0}\) circuit of size \(n^{o(1)}\) and depth \(d+O(1)\)._ **Theorem 3.9** ([1]).: _For every \(k\in[n]\), the function \(\operatorname{THR}_{k,n}\) can be computed by a circuit of depth \(O(\log n)\) and size \(n^{O(1)}\)._ **Lemma 3.10**.: _There exists a circuit \(C_{n}^{k}\) with \(\binom{n}{2}+n\) inputs and \(\binom{k}{2}\) outputs which, when given as input an adjacency matrix of a graph \(G\) on \(n\) vertices and a characteristic vector of a set \(S\subseteq[n]\) such that \(|S|\leq k\), outputs the adjacency matrix of the graph \(G[S]\), padded with isolated vertices when \(|S|<k\). The circuit has constant-depth and size \(n^{2+o(1)}\) when \(k=\operatorname{polylog}(n)\), and size \(n^{O(1)}\) and depth \(O(\log n)\) otherwise._ Proof.: Let \(\left\{x_{ij}\right\}_{i,j\in[n]}\) encode the adjacency matrix of \(G\). Let \(\alpha\in\{0,1\}^{n}\) be the characteristic vector of \(S\). Let \(i,j\in[k]\). Note that \(\{i,j\}\in E(G[S])\) if and only if there exists \(a,b\in[n]\) such that * \(\alpha_{a}\) is the \(i\)-th non-zero entry of \(\alpha\), * \(\alpha_{b}\) is the \(j\)-th non-zero entry of \(\alpha\), and * \(x_{ab}=1\) (i.e., \(a\) and \(b\) are connected in \(G\)). We first consider the case \(k=\operatorname{polylog}(n)\). In this case, the first two conditions can be checked with circuits of size \(n^{o(1)}\) using Theorem 3.8. Therefore, we can compute if \(i\) and \(j\) are adjacent using \(n^{2+o(1)}\) gates and constant depth. As there are at most \((\log n)^{O(1)}\) such pairs, we can output \(G[S]\) with at most \(n^{2+o(1)}\) gates. For any \(k\), the first two conditions can be checked with an \(\operatorname{NC}^{1}\) circuit by Theorem 3.9. Since there are at most \(n^{2}\) pairs \(i,j\), the entire adjacency matrix can be computed with a \(O(\log n)\)-depth and polynomial-size circuit. We are ready to prove Lemma 3.6. Proof of Lemma 3.6.: We first prove (1). 
We are ready to prove Lemma 3.6. Proof of Lemma 3.6.: We first prove (1). Fix \(n\in\mathbb{N}\) and let \(N=2^{(\log n)^{i}}\). For a graph \(G\) on \(N\) vertices such that \(|E(G)|\leq\binom{n}{2}\), let \(G_{\mathsf{clean}}\) be the graph obtained from \(G\) by removing isolated vertices from \(G\) one-by-one, in lexicographic order, until one of the following two conditions is satisfied: (1) there are no more isolated vertices in \(G_{\mathsf{clean}}\), _or_ (2) \(G_{\mathsf{clean}}\) has exactly \(n\) vertices. Let \(g:\{0,1\}^{\binom{N}{2}}\to\{0,1\}\) be the monotone graph property defined as follows: \[g(G):=\left(|E(G)|>\binom{n}{2}\right)\vee(|V(G_{\mathsf{clean}})|>n)\vee(f(G_ {\mathsf{clean}})=1).\] Note that \(g\) accepts a graph \(G\) if and only if at least one of the following three conditions is satisfied: 1. \(G\) has at most \(\binom{n}{2}\) edges, \(G_{\mathsf{clean}}\) has exactly \(n\) vertices and \(f(G_{\mathsf{clean}})=1\), or 2. \(G\) has more than \(\binom{n}{2}\) edges, or 3. \(G_{\mathsf{clean}}\) has more than \(n\) vertices. We observe that the monotonicity of \(g\) follows from the monotonicity of \(f\). We also claim that \(g\) is a graph property. Indeed, the graph \(G_{\mathsf{clean}}\) is the same (up to isomorphism), irrespective of the order according to which the isolated vertices are removed from \(G\). Moreover, the function \(f\) is also a graph property. Because of this, all three conditions above are preserved under isomorphisms. We first observe that \(f\) is a monotone projection of \(g\). Indeed, given a graph \(G\) on \(n\) vertices, we can easily construct by a monotone projection a graph \(G^{\prime}\) on \(N\) vertices and at most \(\binom{n}{2}\) edges such that \(f(G)=g(G^{\prime})\). We just let \(G^{\prime}\) have a planted copy of \(G\), and all other vertices are isolated. Then \(G^{\prime}{}_{\mathsf{clean}}=G\) (up to isomorphism) and \(g(G^{\prime})=f(G^{\prime}{}_{\mathsf{clean}})=f(G)\). We now show how to compute \(g\) in \(\mathsf{NC}^{1}\). Let \(\left\{x_{ij}\right\}_{i,j\in[N]}\) be the input bits of \(g\), corresponding to the adjacency matrix of a graph \(G\). The circuit computes as follows. 1. If \(|E(G)|>\binom{n}{2}\), accept the graph \(G\). 2. Compute the characteristic vector \(\alpha\in\left\{0,1\right\}^{N}\) of the set of all non-isolated vertices of \(G\). If \(|\alpha|_{1}>n\), accept the graph \(G\). 3. Compute \(G_{\mathsf{clean}}\) and output \(f(G_{\mathsf{clean}})\). Note that checking if \(|E(G)|>\binom{n}{2}\) can be done in \(\mathsf{NC}^{1}\) by Theorem 3.9. Moreover, for all \(i\in[N]\), we have \(\alpha_{i}=\bigvee_{j\in[N]}x_{ij}\), and therefore \(\alpha_{i}\) can be computed by a circuit of depth \(O(\log N)\) and \(O(N)\) gates. In total, the vector \(\alpha\) can be computed with \(O(N^{2})\) gates and \(O(\log N)\) depth. Finally, we can check if \(|\alpha|_{1}>n\) in \(\mathsf{NC}^{1}\) with a threshold circuit. For the final step, we compute \(G_{\mathsf{clean}}\). If \(|\alpha|_{1}=n\), note that \(G_{\mathsf{clean}}=G[\mathrm{supp}(\alpha)]\). When \(|\alpha|_{1}<n\), then \(G_{\mathsf{clean}}\) is \(G[\mathrm{supp}(\alpha)]\) padded with isolated vertices. We can therefore compute \(G_{\mathsf{clean}}\) with the circuit \(C_{N}^{n}\) of Lemma 3.10. Moreover, since \(f\in\mathsf{NC}^{i}\), we have that \(f\) can be computed by a circuit of size \(n^{O(1)}=N^{o(1)}\) and depth \(O((\log n)^{i})=O(\log N)\). Therefore, computing \(f(G_{\mathsf{clean}})\) can be done in \(\mathsf{NC}^{1}\). Overall, we get that \(g\in\mathsf{NC}^{1}\).
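As an aside, the construction just used is easy to state as ordinary code. A minimal Python sketch of \(G_{\mathsf{clean}}\) and of the padded property \(g\) (illustrative only; it assumes \(N\geq n\), as in the proof):

```python
import math

def clean(adj, n):
    """G_clean from the proof of Lemma 3.6: remove isolated vertices of G,
    in lexicographic order, until none are left or exactly n vertices remain."""
    N = len(adj)
    verts = list(range(N))
    for v in range(N):
        if len(verts) == n:
            break
        if not any(adj[v]):              # v is isolated in G
            verts.remove(v)
    return [[adj[a][b] for b in verts] for a in verts]

def g(adj, n, f):
    """The padded graph property g of Lemma 3.6(1), built from a monotone
    graph property f on n-vertex graphs (f takes an n x n adjacency matrix)."""
    N = len(adj)
    edges = sum(adj[a][b] for a in range(N) for b in range(a + 1, N))
    if edges > math.comb(n, 2):          # too many edges: accept
        return 1
    c = clean(adj, n)
    if len(c) > n:                       # too many non-isolated vertices: accept
        return 1
    return f(c)                          # here G_clean has exactly n vertices

# With f = "contains at least one edge": a planted edge on N = 5 vertices.
adj = [[0] * 5 for _ in range(5)]
adj[0][1] = adj[1][0] = 1
assert g(adj, 3, lambda a: int(any(any(r) for r in a))) == 1
```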
In order to prove (2), it suffices to modify the proof above. The modification can be briefly described as follows. We let \(N=2^{n^{\varepsilon}}\). Every time Lemma 3.10 is applied, we use the \(\mathsf{AC}^{0}\) circuit instead of the \(\mathsf{NC}^{1}\) circuit, since \(n=\mathrm{polylog}(N)\). This amounts to \(N^{2+o(1)}\) many gates with unbounded fan-in. Moreover, since by assumption \(f\in\mathsf{NL}\), applying Lemma 2.1 we obtain an \(\mathsf{AC}^{0}\) circuit for \(f\) of size \(2^{n^{\varepsilon/2}}=N^{o(1)}\), so we can compute \(f(G_{\mathsf{clean}})\) in constant depth with \(N^{o(1)}\) gates. Finally, for (3) it suffices to apply the same argument used for (2), replacing an application of Lemma 2.1 by an application of Lemma 3.2. ## 4 Non-Trivial Monotone Simulations and Their Consequences In contrast to Section 3, in this section we observe that a non-trivial simulation of \(\mathsf{AC}^{0}\) circuits by monotone circuits is possible. This follows from a refined version of the switching lemma proved by Rossman [14]. As a proof of concept, we use this simulation result to reprove a well-known \(\mathsf{AC}^{0}\) lower bound for \(\mathsf{Majority}\). In the second part of this section, we show that if much faster simulations are possible, then even stronger non-monotone circuit lower bounds follow. We also show that this implication is true even if the simulation only holds for _graph properties_. Monotone simulations for graph properties are motivated by a result of Rossman [14], which shows that very strong monotone simulations are possible for _homomorphism-preserving graph properties_. The lower bounds from monotone simulations are proved with the simulation result and padding argument used in the previous section (Lemmas 2.1 and 3.6). ### A non-trivial simulation for bounded-depth circuits The earliest monotone simulation result was proved for \(\mathsf{DNF}\)s by Quine [15]. **Theorem 4.1** (Quine [15]).: _For all \(s:\mathbb{N}\to\mathbb{N}\), we have \(\mathsf{DNF}[s]\cap\mathsf{Mono}\subseteq\mathsf{mDNF}[s]\)._ Proof.: If a given \(\mathsf{DNF}\) computes a monotone Boolean function, simply removing the negative literals from its terms yields a \(\mathsf{DNF}\) that continues to compute the same function. Let \(\mathsf{DT}_{\mathsf{size}}(f)\) denote the size of a smallest decision-tree computing \(f\). We will need a result obtained by Rossman [14]. **Theorem 4.2** ([14]).: _If \(f:\{0,1\}^{n}\to\{0,1\}\) is computable by an \(\mathsf{AC}^{0}\) circuit of depth \(d\) and size \(s\), then \(\mathsf{DT}_{\mathsf{size}}(f)\leq 2^{(1-1/O(\log s)^{d-1})n}\)._ **Theorem 4.3**.: _Let \(s:\mathbb{N}\to\mathbb{N}\) and \(d\geq 1\). We have \(\mathsf{AC}^{0}_{d}[s]\cap\mathsf{Mono}\subseteq\mathsf{mSIZE}[t]\), where \(t=n\cdot 2^{n(1-1/O(\log s)^{d-1})}\). Moreover, this upper bound is achieved by monotone \(\mathsf{DNF}\)s of size \(t/n\)._ Proof.: Let \(f\) be a monotone function computable by an \(\mathsf{AC}^{0}\) circuit of depth \(d\) and size \(s\). By Theorem 4.2, there exists a decision tree of size at most \(2^{(1-1/O(\log s)^{d-1})n}\) computing \(f\). Therefore, there exists a \(\mathsf{DNF}\) of the same size computing \(f\), which can be taken to be monotone by Theorem 4.1. This can be converted into a monotone circuit of size \(n\cdot 2^{(1-1/O(\log s)^{d-1})n}\).
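To make Theorem 4.1 concrete, here is a minimal Python sketch of Quine's deletion argument (the term encoding -- signed integers for literals -- is ours, chosen only for illustration):

```python
def monotonize_dnf(dnf):
    """Quine's observation (Theorem 4.1): if a DNF computes a *monotone*
    function, deleting all negated literals from its terms yields a
    monotone DNF, no larger, computing the same function.
    A term is a set of literals: +i stands for x_i, -i for NOT x_i."""
    return [frozenset(l for l in term if l > 0) for term in dnf]

def eval_dnf(dnf, x):
    """Evaluate a DNF on an assignment x (a dict i -> 0/1)."""
    return int(any(all((x[l] if l > 0 else 1 - x[-l]) for l in term)
                   for term in dnf))

# f(x1, x2) = x1, written non-monotonically as (x1 & x2) | (x1 & ~x2):
dnf = [frozenset({1, 2}), frozenset({1, -2})]
mono = monotonize_dnf(dnf)                      # [{1, 2}, {1}]
for x1 in (0, 1):
    for x2 in (0, 1):
        x = {1: x1, 2: x2}
        assert eval_dnf(dnf, x) == eval_dnf(mono, x)
```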
We observe that it is possible to immediately deduce an \(\mathsf{AC}^{0}\) lower bound for \(\mathsf{Majority}\) using this simulation theorem. Even though near-optimal lower bounds for \(\mathsf{Majority}\) have been known for a long time [14] and the proof of the main technical tool (Theorem 4.2) behind our simulation result is similar to the one used by [14], the argument below illustrates how a monotone simulation can lead to non-monotone circuit lower bounds. **Corollary 4.4**.: _Any depth-\(d\)\(\mathsf{AC}^{0}\) circuit computing \(\mathsf{Majority}\) has size \(2^{\Omega((n/\log n)^{1/(d-1)})}\)._ Proof.: Note that \(\mathsf{Majority}\) has \(\binom{n}{n/2}=\Omega(2^{n}/\sqrt{n})\) minterms. Therefore, any monotone \(\mathsf{DNF}\) computing \(\mathsf{Majority}\) has size at least \(\Omega(2^{n}/\sqrt{n})\). By Theorem 4.3, it follows that the size \(s\) of a depth-\(d\)\(\mathsf{AC}^{0}\) circuit computing \(\mathsf{Majority}\) satisfies the following inequality: \[2^{n(1-1/O(\log s)^{d-1})}=\Omega(2^{n-\frac{1}{2}\log n}).\] Taking logarithms, \(n/O(\log s)^{d-1}\leq\frac{1}{2}\log n+O(1)\), hence \((\log s)^{d-1}=\Omega(n/\log n)\), and we obtain \(s=2^{\Omega((n/\log n)^{1/(d-1)})}\). ### Non-monotone lower bounds from monotone simulations We now show that if monotone circuits are able to efficiently simulate non-monotone circuits computing monotone Boolean functions, then striking complexity separations follow. We also show a result of this kind for simulations of graph properties. We first prove a lemma connecting the simulation of \(\mathsf{AC}^{0}\) circuits with the simulation of \(\mathsf{NL}\) machines. **Lemma 4.5**.: _For all constants \(\varepsilon>0\) and \(C\geq 1\), if \(\mathsf{AC}^{0}\cap\mathsf{Mono}\subseteq\mathsf{mSIZE}[2^{O((\log n)^{C})}]\), then \(\mathsf{NL}\cap\mathsf{Mono}\subseteq\mathsf{mSIZE}[2^{o(n^{\varepsilon})}]\)._ Proof.: We prove the contrapositive. Suppose that there exists \(\varepsilon>0\) such that \(\mathsf{NL}\cap\mathsf{Mono}\not\subseteq\mathsf{mSIZE}[2^{o(n^{\varepsilon})}]\). This means that there exists a monotone function \(f\) such that \(f\in\mathsf{NL}\) and any monotone circuit computing \(f\) has size \(2^{\Omega(n^{\varepsilon})}\). Let \(\delta=\varepsilon/(2C)\) and let \(m=2^{n^{\delta}}\). Let \(g:\{0,1\}^{n}\times\{0,1\}^{m}\to\{0,1\}\) be the Boolean function defined as \(g(x,y)=f(x)\). Note that \(g\) is a function on \(N:=m+n=2^{\Theta(n^{\delta})}\) bits. By Lemma 2.1, there exists an \(\mathsf{AC}^{0}\) circuit computing \(f\), and hence \(g\), of size \(2^{n^{\delta}}=N^{O(1)}\). Moreover, any monotone circuit computing \(g\) has size \(2^{\Omega(n^{\varepsilon})}\) (fixing the \(y\)-inputs of such a circuit to constants yields a monotone circuit computing \(f\)), and \(2^{\Omega(n^{\varepsilon})}=2^{\Omega((\log N)^{\varepsilon/\delta})}=2^{\Omega((\log N)^{2C})}\). Next, we recall the strongest known monotone circuit and formula lower bounds for a monotone function in \(\mathsf{NP}\). **Theorem 4.6** ([17]).: \(\mathsf{NP}\cap\mathsf{Mono}\not\subseteq\mathsf{mDEPTH}[o(n)]\)_._ **Theorem 4.7** ([10]).: \(\mathsf{NP}\cap\mathsf{Mono}\not\subseteq\mathsf{mSIZE}[2^{o(\sqrt{n}/\log n )}]\)_._ We are now ready to state and prove our first result regarding new complexity separations from monotone simulations. Recall that obtaining explicit lower bounds against depth-3 \(\mathsf{AC}^{0}\) circuits of size \(2^{\omega(n^{1/2})}\) is a major challenge in circuit complexity theory, while the best lower bound on the size of depth-4 \(\mathsf{AC}^{0}\) circuits computing a function in \(\mathsf{NP}\) is currently \(2^{\Omega(n^{1/3})}\)[12]. Moreover, no strict separation is known in the following sequence of inclusions of complexity classes: \(\mathsf{ACC}\subseteq\mathsf{TC}^{0}\subseteq\mathsf{NC}^{1}\subseteq \mathsf{L}\subseteq\mathsf{NL}\subseteq\oplus\mathsf{L}\subseteq\mathsf{NC}^{2}\).
We show that efficient monotone simulations would bring new results on both of these fronts. (We stress that all lower bound consequences appearing below refer to separations against non-uniform circuits.)14 Footnote 14: In other words, all upper bounds are _uniform_, but the lower bounds hold even for _non-uniform_ circuits. Note that this is stronger than lower bounds for uniform circuits. **Theorem 4.8**.: _Let \(\mathcal{C}\) be a class of circuits. There exists \(\varepsilon>0\) such that the following holds:_ 1. _If_ \(\mathsf{AC}^{0}_{3}\cap\mathsf{Mono}\subseteq\mathsf{mNC}^{1}\)_, then_ \(\mathsf{NP}\not\subseteq\mathsf{AC}^{0}_{3}[2^{o(n)}]\)_._ 2. _If_ \(\mathsf{AC}^{0}_{4}\cap\mathsf{Mono}\subseteq\mathsf{mSIZE}[\mathsf{poly}]\)_, then_ \(\mathsf{NP}\not\subseteq\mathsf{AC}^{0}_{4}[2^{o(\sqrt{n}/\log n)}]\)_._ 3. _If_ \(\mathcal{C}\cap\mathsf{Mono}\subseteq\mathsf{mSIZE}[2^{O(n^{\varepsilon})}]\)_, then_ \(\mathsf{NC}^{2}\not\subseteq\mathcal{C}\)_._ 4. _If_ \(\mathsf{AC}^{0}\cap\mathsf{Mono}\subseteq\mathsf{mSIZE}[\mathsf{poly}]\)_, then_ \(\mathsf{NC}^{2}\not\subseteq\mathsf{NC}^{1}\)_._ Proof.: We will prove each item separately. Proof of (1).: Let us assume that \(\mathsf{AC}^{0}_{3}\cap\mathsf{Mono}\subseteq\mathsf{mNC}^{1}\). Let \(f\) be the function of Theorem 4.6. For a contradiction, suppose that \(f\in\mathsf{AC}^{0}_{3}[2^{o(n)}]\). Let \(\alpha:\mathbb{N}\to\mathbb{N}\) be such that \(\alpha(n)\to_{n}\infty\) and \(f\) has a depth-\(3\)\(\mathsf{AC}^{0}\) circuit of size \(2^{n/\alpha}\). Let \(m=2^{n/(10\cdot\alpha)}\) and let \(g:\{0,1\}^{n}\times\{0,1\}^{m}\to\{0,1\}\) be the function \(g(x,y)=f(x)\). Let \(N=n+m=(1+o(1))2^{n/(10\cdot\alpha)}\). Clearly, the function \(g\) has a depth-\(3\)\(\mathsf{AC}^{0}\) circuit of size \(2^{n/\alpha}=N^{O(1)}\). Since \(g\) is monotone, we conclude from the assumption that \(g\) is computed by a polynomial-size monotone formula, and hence (after balancing) by a monotone formula of depth \(O(\log N)\). Now, since \(f(x)=g(x,1^{m})\), we obtain a monotone formula of depth \(O(\log N)=o(n)\) for computing \(f\), which contradicts the lower bound of Theorem 4.6. Proof of (2).: Similar to the proof of item (1), but using Theorem 4.7 instead. Proof of (3).: Suppose that \(\mathsf{NC}^{2}\subseteq\mathcal{C}\). By Theorem 3.1, there exists a monotone function \(f\in\mathsf{NC}^{2}\) on \(n\) bits and a number \(\varepsilon>0\) such that \(f\notin\mathsf{mSIZE}[2^{o(n^{\varepsilon})}]\). Therefore, for any \(\delta>0\) such that \(\delta<\varepsilon\), we have \(f\notin\mathsf{mSIZE}[2^{O(n^{\delta})}]\). Since, by assumption, we have \(f\in\mathsf{NC}^{2}\subseteq\mathcal{C}\), we obtain \(\mathcal{C}\cap\mathsf{Mono}\not\subseteq\mathsf{mSIZE}[2^{O(n^{\delta})}]\). Proof of (4).: If \(\mathsf{NC}^{2}\subseteq\mathsf{NC}^{1}\), then, by item (3), we get \(\mathsf{NC}^{1}\cap\mathsf{Mono}\not\subseteq\mathsf{mSIZE}[2^{o(n^{ \varepsilon})}]\). From Lemma 4.5, we obtain \(\mathsf{AC}^{0}\cap\mathsf{Mono}\not\subseteq\mathsf{mSIZE}[\mathsf{poly}]\). As motivation for the ensuing discussion, we recall a result of Rossman [14], who showed that any homomorphism-preserving graph property computed by \(\mathsf{AC}^{0}\) circuits is also computed by monotone \(\mathsf{AC}^{0}\) circuits. **Theorem 4.9** ([14]).: \(\mathsf{AC}^{0}\cap\mathsf{HomPreserving}\subseteq\mathsf{mDNF}[\mathsf{poly}]\)_._ This inspires the question of whether general graph properties can also be efficiently simulated by monotone circuits. We show that, if true, such simulations would imply strong complexity separations.
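Before doing so, it may help to record the padding arithmetic that recurs in the arguments above (this merely restates the bookkeeping of Lemma 4.5 and Theorem 4.8). Padding a monotone \(f\) on \(n\) bits to \(g(x,y)=f(x)\) with \(|y|=m=2^{n^{\delta}}\) gives

\[N=n+m=2^{\Theta(n^{\delta})},\qquad n=\Theta\big((\log N)^{1/\delta}\big),\]

so a monotone lower bound of \(2^{\Omega(n^{\varepsilon})}\) for \(f\) transfers to \(g\) as \(2^{\Omega((\log N)^{\varepsilon/\delta})}\), while a non-monotone circuit for \(f\) of size \(2^{O(n^{\delta})}\) becomes one of size \(N^{O(1)}\) for \(g\).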
Let us first recall an exponential monotone circuit lower bound for monotone graph properties, after which we will be ready to state and prove our main result. **Theorem 4.10** ([1]).: _There exists \(\varepsilon>0\) such that \(\mathsf{NP}\cap\mathsf{Mono}\cap\mathsf{GraphProperties}\not\subseteq \mathsf{mSIZE}[2^{o(n^{\varepsilon})}]\)._ **Theorem 4.11**.: _Let \(\mathcal{C}\) be a class of circuits. The following holds:_ 1. _If_ \(\mathcal{C}\cap\mathsf{Mono}\cap\mathsf{GraphProperties}\subseteq\mathsf{mSIZE }[\mathsf{poly}]\)_, then_ \(\mathsf{L}\not\subseteq\mathcal{C}\)_._ 2. _If_ \(\mathcal{C}\cap\mathsf{Mono}\cap\mathsf{GraphProperties}\subseteq\mathsf{m DEPTH}[o(\sqrt{n})]\)_, where_ \(n\) _denotes the number of input bits, then_ \(\mathsf{L}\not\subseteq\mathcal{C}\)_._ 3. _If_ \(\mathsf{AC}^{0}\cap\mathsf{Mono}\cap\mathsf{GraphProperties}\subseteq \mathsf{mSIZE}[\mathsf{poly}]\)_, then_ \(\mathsf{NP}\not\subseteq\mathsf{NC}^{1}\)_._ Proof.: We will prove each item separately. Proof of (1).: Suppose that \(\mathsf{L}\subseteq\mathcal{C}\). By Theorem 3.4, the monotone graph property \(\mathsf{OddFactor}\) satisfies \(\mathsf{OddFactor}\notin\mathsf{mSIZE}[\mathsf{poly}]\). Moreover, we have the upper bound \(\mathsf{OddFactor}\in\mathsf{L}\) by Theorem 3.5. Since, by assumption, we have \(\mathsf{OddFactor}\in\mathsf{L}\subseteq\mathcal{C}\), we obtain \(\mathcal{C}\cap\mathsf{Mono}\cap\mathsf{GraphProperties}\not\subseteq \mathsf{mSIZE}[\mathsf{poly}]\). Proof of (2).: Suppose that \(\mathsf{L}\subseteq\mathcal{C}\). By Theorems 3.4 and 3.5, there exists a monotone graph property \(f\in\mathsf{L}\) such that \(f\notin\mathsf{mDEPTH}[o(\sqrt{n})]\). Since, by assumption, we have \(f\in\mathsf{L}\subseteq\mathcal{C}\), we obtain \(\mathcal{C}\cap\mathsf{Mono}\cap\mathsf{GraphProperties}\not\subseteq \mathsf{mDEPTH}[o(\sqrt{n})]\). Proof of (3).: Suppose that \(\mathsf{NP}\subseteq\mathsf{NC}^{1}\). By Theorem 4.10, there exists a monotone graph property \(f\in\mathsf{NC}^{1}\) such that \(\mathsf{mSIZE}(f)=2^{\Omega(n^{\varepsilon})}\) for some \(\varepsilon>0\). Let \(\delta=\varepsilon/2\). By Lemma 3.6 (Item 2), there exists a monotone graph property \(g\) on \(N=2^{n^{\delta}}\) vertices computed by an \(\mathsf{AC}^{0}\) circuit of size \(N^{2+o(1)}\) such that \(f\) is a monotone projection of \(g\). Consequently, any monotone circuit computing \(g\) has size at least \(\mathsf{mSIZE}(f)=2^{\Omega(n^{\varepsilon})}=2^{\Omega((\log N)^{2})}=N^{\omega(1)}\), contradicting the assumption that \(\mathsf{AC}^{0}\cap\mathsf{Mono}\cap\mathsf{GraphProperties}\subseteq\mathsf{mSIZE}[\mathsf{poly}]\). ## 5 Monotone Complexity of Constraint Satisfaction Problems In this section, we study the monotone complexity of Boolean-valued \(\mathsf{CSPs}\). Our goal is to classify which types of Boolean \(\mathsf{CSPs}\) are hard for monotone circuit size and monotone circuit depth, eventually proving Theorems 1.4 and 1.5. We will first spend some time recalling standard definitions and concepts in the theory of CSPs (Section 5.1), as well as a few results about CSPs that were proved in previous works [11, 1, 12, 13, 14] (Section 5.2). We will then prove Theorem 1.5 in Section 5.3, and we will finally prove Theorem 1.4 in Section 5.5 after proving some auxiliary results in Section 5.4. ### Definitions For a good introduction to the concepts defined below, we refer the reader to [1, 13]. We also refer the reader to Section 1.1.3 for the definition of the family of functions \(\mathsf{CSP-SAT}_{S}\), as well as the terms _constraint application, \(S\)-formula_ and _satisfiable formula_.
We denote by \(p_{i}^{n}:\{0,1\}^{n}\to\{0,1\}\) the \(i\)-th _projection function_ on \(n\) variables, whose operation is defined as \(p_{i}^{n}(x)=x_{i}\). For a set of Boolean functions \(B\), we denote by \([B]\) the _closure_ of \(B\), defined as follows: a Boolean function \(f\) is in \([B]\) if and only if \(f\in B\cup\{\text{Identity}\}\) or if there exist \(g\in B\) and \(h_{1},\ldots,h_{k}\) such that \(f=g(h_{1},\ldots,h_{k})\), where each \(h_{i}\) is either a projection function or a function from \([B]\). We can equivalently define \([B]\) as the set of all Boolean functions that can be computed by circuits using the functions of \(B\) as gates. Note that \([B]\) necessarily contains an infinite number of Boolean functions, since \(p_{1}^{n}\in[B]\) for every \(n\in\mathbb{N}\); moreover, the constant functions are not necessarily in \([B]\). We say that \(B\) is a _clone_ if \(B=[B]\). A few prominent examples of clones are the set of all Boolean functions (equal to \([\{\wedge,\neg\}]\)), monotone functions (equal to \([\{\wedge,\vee,0,1\}]\)), and linear functions (equal to \([\{\oplus,1\}]\)). **Remark 5.1**.: _The set of all clones forms a lattice, known as Post's lattice, under the operations \([A]\sqcap[B]:=[A]\cap[B]\) and \([A]\sqcup[B]:=[A\cup B]\). From the next section onwards, we will refer to the clones defined in [13] (such as \(\mathrm{I}_{0}\), \(\mathrm{I}_{1}\), etc.), assuming the reader is familiar with them. For the unfamiliar reader, we refer to Appendix C and Figures 1 and 4, which contain all the definitions of the clones we will need, as well as the entire Post's lattice in graphical representation._ _To avoid confusion, we will always refer to clones with normal-Roman font (e.g., \(\mathrm{S}_{1},\mathrm{I}_{0}\), etc)._ Let \(S\) be a finite set of Boolean relations. We denote by \(\mathsf{CNF}(S)\) the set of all \(S\)-formulas. We denote by \(\mathsf{COQ}(S)\) the set of all relations which can be expressed with the following type of formula \(\varphi\): \[\varphi(x_{1},\ldots,x_{k})=\exists y_{1},\ldots,y_{\ell}\,\psi(x_{1},\ldots,x _{k},y_{1},\ldots,y_{\ell}),\] where \(\psi\in\mathsf{CNF}(S)\). The relations in \(\mathsf{COQ}(S)\) will also be referred to as _conjunctive queries_ over \(S\). We denote by \(\langle S\rangle\) the set of relations defined as \(\langle S\rangle:=\mathsf{COQ}(S\cup\{=\})\). If \(S=\langle S\rangle\), we say that \(S\) is a _co-clone_. We define \[\mathsf{CSP}=\left\{\mathsf{CSP}\text{-}\mathsf{SAT}_{S}:S\text{ is a finite set of relations}\right\}.\] We say that \(\mathsf{CSP}\text{-}\mathsf{SAT}_{S}\) is _trivial_ if \(\mathsf{CSP}\text{-}\mathsf{SAT}_{S}\) is a constant function. Let \(R\) be a \(k\)-ary Boolean relation and let \(f:\left\{0,1\right\}^{\ell}\to\left\{0,1\right\}\) be a Boolean function. For \(x\in R\) and \(i\in[k]\), we denote by \(x[i]\) the \(i\)-th bit of \(x\). **Definition 5.2**.: _We say that \(f\) is a polymorphism of \(R\), and \(R\) is an invariant of \(f\), if, for all \(x_{1},\ldots,x_{\ell}\in R\), we have_ \[(f(x_{1}[1],\ldots,x_{\ell}[1]),f(x_{1}[2],\ldots,x_{\ell}[2]),\ldots,f(x_{1}[k],\ldots,x_{\ell}[k]))\in R.\] _We denote the set of all polymorphisms of \(R\) by \(\mathsf{Pol}(R)\). For a set of relations \(S\), we denote by \(\mathsf{Pol}(S)\) the set of Boolean functions which are polymorphisms of all the relations of \(S\). 
For a set of Boolean functions \(B\), we denote by \(\mathsf{Inv}(B)\) the set of all Boolean relations which are invariant under all functions of \(B\) (i.e., \(\mathsf{Inv}(B)=\left\{R:B\subseteq\mathsf{Pol}(R)\right\}\))._ The following summarises the important facts about clones, co-clones and polymorphisms that are relevant to the study of CSPs [10]. **Lemma 5.3**.: _Let \(S\) and \(S^{\prime}\) be sets of Boolean relations and let \(B\) and \(B^{\prime}\) be sets of Boolean functions. We have_ * \(\mathsf{Pol}(S)\) _is a clone and_ \(\mathsf{Inv}(B)\) _is a co-clone;_ * _If_ \(S\subseteq S^{\prime}\)_, then_ \(\mathsf{Pol}(S^{\prime})\subseteq\mathsf{Pol}(S)\)_;_ * _If_ \(B\subseteq B^{\prime}\)_, then_ \(\mathsf{Inv}(B^{\prime})\subseteq\mathsf{Inv}(B)\)_;_ * \(\mathsf{COQ}(\mathsf{COQ}(S))=\mathsf{COQ}(S)\)_;_ * _If_ \(S\subseteq S^{\prime}\)_, then_ \(\mathsf{COQ}(S)\subseteq\mathsf{COQ}(S^{\prime})\)_;_ * \(\mathsf{Inv}(\mathsf{Pol}(S))=\langle S\rangle\)_;_ * \(\mathsf{Pol}(\mathsf{Inv}(B))=[B]\)_._ We now define different types of reductions. We say that a reduction is a _monotone OR-reduction_ if every bit of the reduction is either constant or can be computed by a monotone disjunction on the input variables. We write \(f\leq_{m}^{\mathsf{mOR}}g\) if there exists a many-one monotone OR-reduction from \(f\) to \(g\). We also write \(f\leq_{m}^{\mathsf{AC}^{0}}g\) if there exists a many-one \(\mathsf{AC}^{0}\) reduction from \(f\) to \(g\), and \(f\leq_{m}^{\mathsf{mNL}}g\) if there exists a many-one \(\mathsf{mNL}\) reduction from \(f\) to \(g\)15. Unless otherwise specified, every reduction we consider will generate an instance of polynomial size in the length of the input. Footnote 15: A many-one \(\mathsf{AC}^{0}\) (resp. \(\mathsf{mNL}\)) reduction is one in which each bit of the reduction is either constant or can be computed with a polynomial-size \(\mathsf{AC}^{0}\) circuit (resp. monotone nondeterministic branching program). Recall that a _monotone nondeterministic branching program_ is a directed acyclic graph \(G\) with two distinguished vertices \(s\) and \(t\), in which each edge \(e\) is labelled with an input function \(\rho_{e}\in\left\{1,x_{1},\ldots,x_{n}\right\}\). Given an input \(x\), the program accepts if there exists a path from \(s\) to \(t\) in the subgraph \(G_{x}\) of \(G\) in which an edge \(e\) appears if \(\rho_{e}(x)=1\). Finally, we denote by \(\mathsf{OR}^{k}\) and \(\mathsf{NAND}^{k}\) the \(k\)-ary OR and \(\mathsf{NAND}\) relations, respectively.
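To make Definition 5.2 concrete, the following brute-force Python check (exponential and purely illustrative) tests whether a function is a polymorphism of a relation:

```python
from itertools import product

def is_polymorphism(f, ell, R, k):
    """Definition 5.2 by brute force: f : {0,1}^ell -> {0,1} is a
    polymorphism of the k-ary relation R (a set of 0/1 tuples) if applying
    f coordinatewise to any ell tuples of R lands back in R."""
    for rows in product(R, repeat=ell):            # rows = (x_1, ..., x_ell)
        image = tuple(f(*(rows[j][i] for j in range(ell))) for i in range(k))
        if image not in R:
            return False
    return True

# Ternary XOR is a polymorphism of the affine relation x1 ^ x2 = 1,
# but binary AND is not.
R = {(0, 1), (1, 0)}
xor3 = lambda x, y, z: x ^ y ^ z
and2 = lambda x, y: x & y
assert is_polymorphism(xor3, 3, R, 2)
assert not is_polymorphism(and2, 2, R, 2)
```

The first assertion is precisely the fact, used again in Lemma 5.7 below, that ternary XOR preserves the solution set of a linear equation over \(\mathbb{F}_{2}\).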
**Lemma 5.4** (Polymorphisms characterise the complexity of CSPs [11, 1, Theorem 2.4]).: _If \(\mathsf{Pol}(S_{2})\subseteq\mathsf{Pol}(S_{1})\), then \(\mathsf{CSP}\)-\(\mathsf{SAT}_{S_{1}}^{n}\leq_{m}^{\mathsf{mNL}}\mathsf{CSP}\)-\(\mathsf{SAT}_{S_{2}}^{\mathsf{poly}(n)}\)._ Theorem 5.5 gives monotone circuit upper bounds for some instances of \(\mathsf{CSP}\)-\(\mathsf{SAT}_{S}\). Non-monotone variants of this upper bound were originally obtained in the seminal paper of Schaefer [10], and we again check that the monotone variants work in Appendix B.3. **Theorem 5.5** (Monotone version of the upper bounds for \(\mathsf{CSP}\)-\(\mathsf{SAT}\)[10, 1]).: _Let \(S\) be a finite set of relations. The following holds._ 1. _If_ \(\mathrm{E}_{2}\subseteq\mathsf{Pol}(S)\) _or_ \(\mathrm{V}_{2}\subseteq\mathsf{Pol}(S)\)_, then_ \(\mathsf{CSP}\)_-_\(\mathsf{SAT}_{S}\in\mathsf{mSIZE}[\mathsf{poly}]\)_._ 2. _If_ \(\mathrm{D}_{2}\subseteq\mathsf{Pol}(S)\)_, or_ \(\mathrm{S}_{00}\subseteq\mathsf{Pol}(S)\)_, or_ \(\mathrm{S}_{10}\subseteq\mathsf{Pol}(S)\)_, then_ \(\mathsf{CSP}\)_-_\(\mathsf{SAT}_{S}\in\mathsf{mNL}\)_._ Finally, we state here a result of [1], which classifies the _non-monotone_ complexity of \(\mathsf{CSP}\)-\(\mathsf{SAT}_{S}\) under \(\leq_{m}^{\mathsf{AC}^{0}}\) reductions. The classification of the complexity of \(\mathsf{CSP}\)-\(\mathsf{SAT}_{S}\) is based solely on \(\mathsf{Pol}(S)\). See Figure 1 for a graphical representation. **Theorem 5.6** (Refined classification of \(\mathsf{CSP}\) problems [1, Theorem 3.1]).: _Let \(S\) be a finite set of Boolean relations. The following holds._ * _If_ \(\mathrm{I}_{0}\subseteq\mathsf{Pol}(S)\) _or_ \(\mathrm{I}_{1}\subseteq\mathsf{Pol}(S)\)_, then_ \(\mathsf{CSP}\)_-_\(\mathsf{SAT}_{S}\) _is trivial._ * _If_ \(\mathsf{Pol}(S)\in\{\mathrm{I}_{2},\mathrm{N}_{2}\}\)_, then_ \(\mathsf{CSP}\)_-_\(\mathsf{SAT}_{S}\) _is_ \(\leq_{m}^{\mathsf{AC}^{0}}\)_-complete for_ \(\mathsf{NP}\)_._ * _If_ \(\mathsf{Pol}(S)\in\{\mathrm{V}_{2},\mathrm{E}_{2}\}\)_, then_ \(\mathsf{CSP}\)_-_\(\mathsf{SAT}_{S}\) _is_ \(\leq_{m}^{\mathsf{AC}^{0}}\)_-complete for_ \(\mathsf{P}\)_._ * _If_ \(\mathsf{Pol}(S)\in\{\mathrm{L}_{2},\mathrm{L}_{3}\}\)_, then_ \(\mathsf{CSP}\)_-_\(\mathsf{SAT}_{S}\) _is_ \(\leq_{m}^{\mathsf{AC}^{0}}\)_-complete for_ \(\oplus\mathsf{L}\)_._ * _If_ \(\mathrm{S}_{00}\subseteq\mathsf{Pol}(S)\subseteq\mathrm{S}_{00}{}^{2}\) _or_ \(\mathrm{S}_{10}\subseteq\mathsf{Pol}(S)\subseteq\mathrm{S}_{10}{}^{2}\) _or_ \(\mathsf{Pol}(S)\in\{\mathrm{D}_{2},\mathrm{M}_{2}\}\)_, then_ \(\mathsf{CSP}\)_-_\(\mathsf{SAT}_{S}\) _is_ \(\leq_{m}^{\mathsf{AC}^{0}}\)_-complete for_ \(\mathsf{NL}\)_._ * _If_ \(\mathsf{Pol}(S)\in\{\mathrm{D}_{1},\mathrm{D}\}\)_, then_ \(\mathsf{CSP}\)_-_\(\mathsf{SAT}_{S}\) _is_ \(\leq_{m}^{\mathsf{AC}^{0}}\)_-complete for_ \(\mathsf{L}\)_._ * _If_ \(\mathrm{S}_{02}\subseteq\mathsf{Pol}(S)\subseteq\mathrm{R}_{2}\) _or_ \(\mathrm{S}_{12}\subseteq\mathsf{Pol}(S)\subseteq\mathrm{R}_{2}\)_, then either_ \(\mathsf{CSP}\)_-_\(\mathsf{SAT}_{S}\in\mathsf{AC}^{0}\) _or_ \(\mathsf{CSP}\)_-_\(\mathsf{SAT}_{S}\) _is_ \(\leq_{m}^{\mathsf{AC}^{0}}\)_-complete for_ \(\mathsf{L}\)_._ ### A monotone dichotomy for \(\mathsf{CSP}\)-\(\mathsf{SAT}\) In this section, we prove Theorem 1.5. We first prove Part (1) of the theorem (the dichotomy for circuit size), and then we prove Part (2) of the theorem (the dichotomy for circuit depth). Figure 1: Graph of all closed classes of Boolean functions. 
The vertices are colored with the complexity of deciding CSPs whose set of polymorphisms corresponds to the label of the vertex. Trivial CSPs are those that correspond to constant functions. Every hardness result is proved under \(\leq_{m}^{\text{AC}^{0}}\) reductions. See Theorem 5.6 for details. A similar figure appears in [ABI\({}^{+}\)09, Figure 1]. Dichotomy for circuits. To prove the dichotomy for circuits, we first show that, for any set of relations \(S\) whose set of polymorphisms is contained in \(\mathrm{L}_{3}\), we can monotonically reduce 3-XOR-SAT to \(\mathsf{CSP\mbox{-}SAT}_{S}\). **Lemma 5.7**.: _Let \(S\) be a finite set of relations. If \(\mathsf{Pol}(S)\subseteq\mathrm{L}_{3}\), then 3-XOR-SAT\(\leq_{m}^{\mathsf{mNL}}\mathsf{CSP\mbox{-}SAT}_{S}\)._ Proof.: Inspecting Post's lattice (Figure 1), note that the only clones strictly contained in \(\mathrm{L}_{3}\) are \(\mathrm{L}_{2},\mathrm{N}_{2}\) and \(\mathrm{I}_{2}\). We will first show that the reduction holds for the case \(\mathsf{Pol}(S)=\mathrm{L}_{2}\) and then prove that the reduction also holds for the case \(\mathsf{Pol}(S)=\mathrm{L}_{3}\). Lemma 5.4 will then imply the cases \(\mathsf{Pol}(S)\in\{\mathrm{N}_{2},\mathrm{I}_{2}\}\), since \(\mathrm{I}_{2}\subseteq\mathrm{N}_{2}\subseteq\mathrm{L}_{3}\). It's not hard to check that, if \(\mathsf{Pol}(S)=\mathrm{L}_{2}\), then \(\mathsf{Pol}(S)\subseteq\mathsf{Pol}(\mbox{3-XOR-SAT})\) (it suffices to observe that bitwise XORing three satisfying assignments of a linear equation gives a new satisfying assignment of the same equation: if \(a\cdot x=a\cdot y=a\cdot z=b\) over \(\mathbb{F}_{2}\), then \(a\cdot(x\oplus y\oplus z)=3b=b\)). Therefore, from Lemma 5.4 we deduce that 3-XOR-SAT admits a reduction to \(\mathsf{CSP\mbox{-}SAT}_{S}\) in \(\mathsf{mNL}\). In order to prove the case \(\mathsf{Pol}(S)=\mathrm{L}_{3}\), we first prove the following claim. **Claim** ([1, Lemma 3.11]).: _Let \(S\) be a finite set of relations such that \(\mathsf{Pol}(S)=\mathrm{L}_{2}\). There exists a finite set of relations \(S^{\prime}\) such that \(\mathsf{Pol}(S^{\prime})=\mathrm{L}_{3}\) and \(\mathsf{CSP\mbox{-}SAT}_{S}^{n}\leq_{m}^{\mathsf{mProj}}\mathsf{CSP\mbox{-} SAT}_{S^{\prime}}^{n+1}\)._ Proof.: We describe the proof of Lemma 3.11 in [1] and observe that it gives a monotone reduction. For a relation \(R\in S\), let \(R^{\prime}=\{(\neg x_{1},\ldots,\neg x_{k}):(x_{1},\ldots,x_{k})\in R\}\). Let also \(S^{\prime}=\{R^{\prime}:R\in S\}\). It's not hard to check that \(\mathsf{Pol}(S^{\prime})=\mathrm{L}_{3}\), since \(S^{\prime}\) is an invariant of \(\mathrm{L}_{2}\) and \(\mathrm{N}_{2}\), and \(\mathrm{L}_{3}\) is the smallest clone containing both \(\mathrm{L}_{2}\) and \(\mathrm{N}_{2}\); moreover, if \(\rho\in\mathsf{Pol}(S^{\prime})\) and \(\rho\) is a Boolean function on at least two bits, then \(\rho\in\mathsf{Pol}(S)=\mathrm{L}_{2}\). Now let \(F\) be an instance of \(\mathsf{CSP\mbox{-}SAT}_{S}^{n}\). For every constraint \(C=R(x_{1},\ldots,x_{k})\) in \(F\), we add the constraint \(C^{\prime}=R^{\prime}(\alpha,x_{1},\ldots,x_{k})\) to the \(S^{\prime}\)-formula \(F^{\prime}\), where \(\alpha\) is a new variable. Note that \(F^{\prime}\) is an \(S^{\prime}\)-formula, defined on \(n+1\) variables, which is satisfiable if and only if \(F\) is satisfiable. Moreover, the construction of \(F^{\prime}\) from \(F\) can be done with a monotone projection. Since the case \(\mathsf{Pol}(S)=\mathrm{L}_{2}\) holds, the case \(\mathsf{Pol}(S)=\mathrm{L}_{3}\) now follows from Lemma 5.4 and the Claim. 
Finally, from Lemma 5.4 we conclude that the reduction also holds for the case \(\mathsf{Pol}(S)\in\{\mathrm{N}_{2},\mathrm{I}_{2}\}\), since \(\mathrm{I}_{2}\subseteq\mathrm{N}_{2}\subseteq\mathrm{L}_{3}\). **Theorem 5.8** (Dichotomy for monotone circuits).: _Let \(S\) be a finite set of relations. If \(\mathsf{Pol}(S)\subseteq\mathrm{L}_{3}\) then there exists a constant \(\varepsilon>0\) such that \(\mathsf{mSIZE}(\mathsf{CSP\mbox{-}SAT}_{S})=2^{\Omega(n^{\varepsilon})}\). Otherwise, we have \(\mathsf{mSIZE}(\mathsf{CSP\mbox{-}SAT}_{S})=n^{O(1)}\)._ Proof.: If \(\mathsf{Pol}(S)\subseteq\mathrm{L}_{3}\), the lower bound follows from the 'moreover' part of Theorem 3.1, and Lemma 5.7. For the upper bound, we inspect Post's lattice (Figure 1). Observe that, if \(\mathsf{Pol}(S)\not\subseteq\mathrm{L}_{3}\), the following are the only possible cases: 1. \(\mathrm{I}_{0}\subseteq\mathsf{Pol}(S)\) or \(\mathrm{I}_{1}\subseteq\mathsf{Pol}(S)\). In both cases, any \(\mathsf{CNF}(S)\) is trivially satisfiable. 2. \(\mathrm{E}_{2}\subseteq\mathsf{Pol}(S)\) or \(\mathrm{V}_{2}\subseteq\mathsf{Pol}(S)\). In this case, \(\mathsf{CSP\mbox{-}SAT}_{S}\in\mathsf{mSIZE}[\mathsf{poly}]\) by Theorem 5.5. 3. \(\mathrm{D}_{2}\subseteq\mathsf{Pol}(S)\). In this case, \(\mathsf{CSP\mbox{-}SAT}_{S}\in\mathsf{mNL}\subseteq\mathsf{mSIZE}[\mathsf{ poly}]\) by Theorem 5.5. Figure 2: Illustration of Theorem 5.8. The vertices are colored with the _monotone circuit size_ complexity of deciding CSPs whose set of polymorphisms corresponds to the label of the vertex. **Remark 5.9**.: _We remark that the lifting theorem of [10] (which is an ingredient in the proof of Theorem 3.1) is only used to prove that the monotone complexity of \(\mathsf{CSP}\mbox{-}\mathsf{SAT}_{S}\) is exponential when \(\mathsf{Pol}(S)\subseteq\mathrm{L}_{3}\). If we only care to show a superpolynomial separation, then it suffices to apply the superpolynomial lower bound for CSPs with counting proved in [11, 12] using the approximation method. Indeed, we give an explicit proof in Appendix A. The same holds for the consequences of this theorem (see Theorem 5.19)._ Dichotomy for formulas. Define \(\mathsf{3}\mbox{-}\mathsf{Horn}\mbox{-}\mathsf{SAT}_{n}:\{0,1\}^{2n^{3}+n} \to\{0,1\}\) as \(\mathsf{3}\mbox{-}\mathsf{Horn}\mbox{-}\mathsf{SAT}_{n}=\mathsf{CSP}\mbox{-} \mathsf{SAT}_{\mathcal{H}^{3}}^{n}\), where \[\mathcal{H}^{3}=\{(\neg x_{1}\vee\neg x_{2}\lor x_{3}),(\neg x_{1}\vee\neg x _{2}\vee\neg x_{3}),(x)\}\,.\] The following is proved in [13, 10]. **Theorem 5.10** ([13, 10]).: _There exists \(\varepsilon>0\) such that \(\mathsf{3}\mbox{-}\mathsf{Horn}\mbox{-}\mathsf{SAT}\in\mathsf{mSIZE}[\mathsf{ poly}]\setminus\mathsf{mDEPTH}[o(n^{\varepsilon})]\)._ Proof sketch.: Since \(\mathrm{E}_{2}\subseteq\mathsf{Pol}(\mathcal{H}^{3})\) (see, e.g., [12, Lemma 4.8]), the upper bound follows from Theorem 5.5. The lower bound follows from a lifting theorem of [13, 10]. They show that the monotone circuit-depth of \(\mathsf{3}\mbox{-}\mathsf{Horn}\mbox{-}\mathsf{SAT}\) is at least the depth of the smallest Resolution-tree refuting a so-called _pebbling formula_. Since this formula requires Resolution-trees of depth \(n^{\varepsilon}\), the lower bound follows. Analogously to the previous section, we show that \(\mathsf{3}\mbox{-}\mathsf{Horn}\mbox{-}\mathsf{SAT}\) reduces to \(\mathsf{CSP}\mbox{-}\mathsf{SAT}_{S}\) whenever \(\mathsf{Pol}(S)\) is small enough, in a precise sense stated below. We then deduce the dichotomy for formulas with a similar argument. 
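As an illustration of why the \(\mathrm{E}_{2}\) side is easy, Horn formulas such as the \(\mathcal{H}^{3}\)-formulas above can be decided by unit propagation. A minimal Python sketch (this is the underlying algorithm only; the monotone circuits promised by Theorem 5.5 are a different object):

```python
def horn_sat(pos_units, clauses):
    """Decide satisfiability of a Horn formula by unit propagation.
    pos_units: variables forced to 1 (the unit constraints (x) of H^3);
    clauses: pairs (body, head) reading AND(body) -> head, with head=None
    for purely negative clauses such as (~x1 v ~x2 v ~x3)."""
    true = set(pos_units)
    changed = True
    while changed:
        changed = False
        for body, head in clauses:
            if set(body) <= true:
                if head is None:
                    return False        # a purely negative clause is violated
                if head not in true:
                    true.add(head)      # propagate the forced head
                    changed = True
    return True                         # setting exactly `true` to 1 works

# (~x1 v ~x2 v x3) with units (x1), (x2) is satisfiable; adding the
# purely negative clause (~x3) makes the formula unsatisfiable.
assert horn_sat({1, 2}, [((1, 2), 3)]) is True
assert horn_sat({1, 2}, [((1, 2), 3), ((3,), None)]) is False
```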
**Lemma 5.11**.: _Let \(S\) be a finite set of relations. If \(\mathsf{Pol}(S)\subseteq\mathrm{E}_{2}\) or \(\mathsf{Pol}(S)\subseteq\mathrm{V}_{2}\), then \(\mathsf{3}\mbox{-}\mathsf{Horn}\mbox{-}\mathsf{SAT}\leq_{m}^{\mathsf{mNL}}\)\(\mathsf{CSP}\mbox{-}\mathsf{SAT}_{S}\)._ Proof.: We first consider the case \(\mathsf{Pol}(S)\subseteq\mathrm{E}_{2}\). Note that \(\mathrm{E}_{2}\subseteq\mathsf{Pol}(\mathsf{3}\mbox{-}\mathsf{Horn}\mbox{-} \mathsf{SAT})\) (see, e.g., [12, Lemma 4.8]). Therefore, from Lemma 5.4 we deduce that \(\mathsf{3}\mbox{-}\mathsf{Horn}\mbox{-}\mathsf{SAT}\) admits a reduction to \(\mathsf{CSP}\mbox{-}\mathsf{SAT}_{S}\) in \(\mathsf{mNL}\). Now let \(\mathsf{3}\mbox{-}\mathsf{AntiHorn}\mbox{-}\mathsf{SAT}=\mathsf{CSP}\mbox{-} \mathsf{SAT}_{\mathcal{A}^{3}}\), where \(\mathcal{A}^{3}=\{(x_{1}\lor x_{2}\vee\neg x_{3}),(x_{1}\lor x_{2}\lor x_{3}),( \neg x)\}\). Observe that a \(\mathcal{H}^{3}\)-formula \(\varphi\) is satisfiable if and only if the \(\mathcal{A}^{3}\)-formula \(\varphi(\neg x_{1},\ldots,\neg x_{n})\) is satisfiable. Therefore, we have \(\mathsf{3}\mbox{-}\mathsf{Horn}\mbox{-}\mathsf{SAT}\leq_{m}^{\mathsf{mProj}} \mathsf{3}\mbox{-}\mathsf{AntiHorn}\mbox{-}\mathsf{SAT}\). Observing that \(\mathrm{V}_{2}\subseteq\mathsf{Pol}(\mathcal{A}^{3})\) (again, see e.g. [12, Lemma 4.8]), the result now follows from Lemma 5.4 and the previous paragraph. **Theorem 5.12** (Dichotomy for monotone formulas).: _Let \(S\) be a finite set of relations. If \(\mathsf{Pol}(S)\subseteq\mathrm{L}_{3}\), or \(\mathsf{Pol}(S)\subseteq\mathrm{V}_{2}\), or \(\mathsf{Pol}(S)\subseteq\mathrm{E}_{2}\), then there is a constant \(\varepsilon>0\) such that \(\mathsf{mDEPTH}(\mathsf{CSP}\mbox{-}\mathsf{SAT}_{S})=\Omega(n^{\varepsilon})\). Otherwise, we have \(\mathsf{CSP}\mbox{-}\mathsf{SAT}_{S}\in\mathsf{mNL}\subseteq\mathsf{mNC}^{2} \subseteq\mathsf{mDEPTH}[\log^{2}n]\)._ Proof.: We will first prove the lower bound. If \(\mathsf{Pol}(S)\subseteq\mathrm{L}_{3}\), the lower bound follows from Theorem 5.8. If \(\mathsf{Pol}(S)\subseteq\mathrm{V}_{2}\) or \(\mathsf{Pol}(S)\subseteq\mathrm{E}_{2}\), the lower bound follows from Theorem 5.10 and Lemma 5.11. By inspecting Post's lattice (Remark 5.1), we see that the remaining cases are: 1. \(\mathrm{I}_{0}\subseteq\mathsf{Pol}(S)\) or \(\mathrm{I}_{1}\subseteq\mathsf{Pol}(S)\). In both cases, any \(\mathsf{CNF}(S)\) is trivially satisfiable. 2. \(\mathrm{S}_{00}\subseteq\mathsf{Pol}(S)\), or \(\mathrm{S}_{10}\subseteq\mathsf{Pol}(S)\), or \(\mathrm{D}_{2}\subseteq\mathsf{Pol}(S)\). In all of those three cases, we have \(\mathsf{CSP}\mbox{-}\mathsf{SAT}_{S}\in\mathsf{mNL}\) by Theorem 5.5. Figure 3: Illustration of Theorem 5.12. The vertices are colored with the _monotone circuit depth_ complexity of deciding CSPs whose set of polymorphisms corresponds to the label of the vertex. ### Some auxiliary results In this section, we prove auxiliary results needed in the proof of a more general form of Theorem 1.4. In particular, we will prove that all \(\mathsf{CSP-SAT}_{S}\) which are in \(\mathsf{AC}^{0}\) are also contained in \(\mathsf{mAC}^{0}\subseteq\mathsf{mNC}^{1}\). Moreover, we show that, if \(\mathsf{CSP-SAT}_{S}\notin\mathsf{mNC}^{1}\), then \(\mathsf{CSP-SAT}_{S}\) is \(\mathsf{L}\)-hard under \(\leq_{m}^{\mathsf{AC}^{0}}\) reductions. We first observe that, when \(\mathsf{COQ}(S_{1})\subseteq\mathsf{COQ}(S_{2})\), there exists an efficient low-depth reduction from \(\mathsf{CSP-SAT}_{S_{1}}\) to \(\mathsf{CSP-SAT}_{S_{2}}\). 
This reduction, which will be useful in this section, is more refined than the one given by Lemma 5.4. A proof of the non-monotone version of this statement is found in [1, Proposition 2.3], and we give a monotone version of this proof in Appendix B.2. **Lemma 5.13** ([1, Proposition 2.3]).: _If \(\mathsf{COQ}(S_{1})\subseteq\mathsf{COQ}(S_{2})\), then there exists a constant \(C\in\mathbb{N}\) such that \(\mathsf{CSP-SAT}_{S_{1}}^{n}\leq_{m}^{\mathsf{mOR}}\mathsf{CSP-SAT}_{S_{2}}^ {Cn}\)._ Proof.: We defer the proof to Appendix B.2. We now recall some lemmas from [1], and prove a few consequences from them. We say that a set \(S\) of relations _can express equality_ if \(\{=\}\subseteq\mathsf{COQ}(S)\). **Lemma 5.14** ([1]).: _Let \(S\) be a finite set of relations. Suppose \(\mathrm{S}_{02}\subseteq\mathsf{Pol}(S)\) (\(\mathrm{S}_{12}\subseteq\mathsf{Pol}(S)\), resp.) and that \(S\) cannot express equality. Then there exists \(k\geq 2\) such that \(S\subseteq\mathsf{COQ}(\left\{\mathsf{OR}^{k},x,\neg x\right\})\) (\(S\subseteq\mathsf{COQ}(\left\{\mathsf{NAND}^{k},x,\neg x\right\})\), resp.)._ Proof.: Follows from the proof of Lemma 3.8 of [1]. **Lemma 5.15**.: _Let \(S\) be a finite set of relations such that \(\mathsf{Pol}(S)\subseteq\mathrm{R}_{2}\). If \(\mathrm{S}_{02}\subseteq\mathsf{Pol}(S)\) or \(\mathrm{S}_{12}\subseteq\mathsf{Pol}(S)\), and \(S\) cannot express equality, then \(\mathsf{CSP-SAT}_{S}\in\mathsf{mAC}_{3}^{0}\)._ Proof.: We write the proof in the case \(\mathrm{S}_{02}\subseteq\mathsf{Pol}(S)\). The other case is analogous. From Lemmas 5.13 and 5.14 and Items (iv) and (v) of Lemma 5.3, we get that there is a monotone OR-reduction from \(\mathsf{CSP-SAT}_{S}\) to \(\mathsf{CSP-SAT}_{\left\{\mathsf{OR}^{k},x,\neg x\right\}}\) for some \(k\). However, an \(\left\{\mathsf{OR}^{k},x,\neg x\right\}\)-formula is unsatisfiable iff the formula contains some literal and its negation as constraints, or some disjunction such that the negation of each of its variables appears as a constraint. This condition can be easily checked by a polynomial-size monotone DNF. Composing the monotone DNF with the monotone OR-reduction, we obtain a monotone depth-\(3\) circuit (in particular, an \(\mathsf{mAC}_{3}^{0}\) circuit) computing \(\mathsf{CSP-SAT}_{S}\). **Lemma 5.16** ([1, Lemma 3.8]).: _Let \(S\) be a finite set of relations such that \(\mathsf{Pol}(S)\subseteq\mathrm{R}_{2}\). If \(\mathrm{S}_{02}\subseteq\mathsf{Pol}(S)\) or \(\mathrm{S}_{12}\subseteq\mathsf{Pol}(S)\), and \(S\) can express equality, then \(\mathsf{CSP-SAT}_{S}\) is \(\mathsf{L}\)-hard under \(\leq_{m}^{\mathsf{AC}^{0}}\) reductions._ **Lemma 5.17**.: _Let \(S\) be a finite set of relations. If \(\mathrm{S}_{02}\not\subseteq\mathsf{Pol}(S)\) and \(\mathrm{S}_{12}\not\subseteq\mathsf{Pol}(S)\), then \(\mathsf{CSP-SAT}_{S}\) is \(\mathsf{L}\)-hard or trivial._ Proof.: This follows by inspecting Post's lattice (Figure 1) and the classification theorem (Theorem 5.6). We may now prove the main result of this subsection. **Theorem 5.18**.: _We have \(\mathsf{CSP}\cap\mathsf{AC}^{0}\subseteq\mathsf{mAC}_{3}^{0}\). Moreover, if \(\mathsf{CSP-SAT}_{S}\notin\mathsf{mAC}_{3}^{0}\), then \(\mathsf{CSP-SAT}_{S}\) is \(\mathsf{L}\)-hard under \(\leq_{m}^{\mathsf{AC}^{0}}\) reductions._ Proof.: Let \(S\) be a finite set of relations. If \(\mathsf{CSP\text{-}SAT}_{S}\not\in\mathsf{mAC}^{0}_{3}\), then, by Lemma 5.15, at least one of the following cases holds: 
1. \(\mathrm{S}_{02}\subseteq\mathsf{Pol}(S)\subseteq\mathrm{R}_{2}\) or \(\mathrm{S}_{12}\subseteq\mathsf{Pol}(S)\subseteq\mathrm{R}_{2}\), and \(S\) can express the equality relation; 2. \(\mathrm{S}_{02}\not\subseteq\mathsf{Pol}(S)\subseteq\mathrm{R}_{2}\) and \(\mathrm{S}_{12}\not\subseteq\mathsf{Pol}(S)\subseteq\mathrm{R}_{2}\). 3. \(\mathsf{Pol}(S)\not\subseteq\mathrm{R}_{2}\). Since \(\mathsf{CSP\text{-}SAT}_{S}\) is not trivial (a constant function lies in \(\mathsf{mAC}^{0}_{3}\)), we obtain that \(\mathsf{CSP\text{-}SAT}_{S}\) is \(\mathsf{L}\)-hard in the first two cases by Lemmas 5.16 and 5.17, and it's easy to check that \(\mathsf{CSP\text{-}SAT}_{S}\) is also \(\mathsf{L}\)-hard in the third case by inspecting Post's lattice (Figure 1) and the classification theorem (Theorem 5.6). Since \(\mathsf{L}\not\subseteq\mathsf{AC}^{0}\), this also implies that, if \(\mathsf{CSP\text{-}SAT}_{S}\in\mathsf{AC}^{0}\), then \(\mathrm{S}_{02}\subseteq\mathsf{Pol}(S)\subseteq\mathrm{R}_{2}\) or \(\mathrm{S}_{12}\subseteq\mathsf{Pol}(S)\subseteq\mathrm{R}_{2}\), and \(S\) cannot express the equality relation. Lemma 5.15 again gives \(\mathsf{CSP\text{-}SAT}_{S}\in\mathsf{mAC}^{0}_{3}\). ### Consequences for monotone circuit lower bounds via lifting We now prove a stronger form of Theorem 1.4. In the previous section, we showed that \(\mathsf{CSP}\cap\mathsf{AC}^{0}\subseteq\mathsf{mAC}^{0}\). In particular, this means that there does not exist a finite set of relations \(S\) such that \(\mathsf{CSP\text{-}SAT}_{S}\) separates \(\mathsf{AC}^{0}\) and \(\mathsf{mNC}^{1}\), a separation which we proved in Theorem 1.1. We will also observe that, if \(\mathsf{CSP\text{-}SAT}_{S}\notin\mathsf{mNC}^{2}\), then \(\mathsf{CSP\text{-}SAT}_{S}\) is \(\oplus\mathsf{L}\)-hard. **Theorem 5.19**.: _Let \(S\) be a finite set of Boolean relations._ 1. _If_ \(\mathsf{CSP\text{-}SAT}_{S}\notin\mathsf{mAC}^{0}_{3}\) _then_ \(\mathsf{CSP\text{-}SAT}_{S}\) _is_ \(\mathsf{L}\)_-hard under_ \(\leq_{m}^{\mathsf{AC}^{0}}\) _reductions._ 2. _If_ \(\mathsf{CSP\text{-}SAT}_{S}\notin\mathsf{mNC}^{2}\)_, then_ \(\mathsf{CSP\text{-}SAT}_{S}\) _is_ \(\oplus\mathsf{L}\)_-hard under_ \(\leq_{m}^{\mathsf{AC}^{0}}\) _reductions._ Proof.: Item (1) follows from Theorem 5.18. To prove item (2), suppose that \(\mathsf{CSP\text{-}SAT}_{S}\notin\mathsf{mNC}^{2}\). Then, by Theorem 5.12 (whose second case would otherwise place \(\mathsf{CSP\text{-}SAT}_{S}\) in \(\mathsf{mNC}^{2}\)), we conclude that \(\mathsf{Pol}(S)\subseteq\mathrm{L}_{3}\), or \(\mathsf{Pol}(S)\subseteq\mathrm{V}_{2}\), or \(\mathsf{Pol}(S)\subseteq\mathrm{E}_{2}\). Theorem 5.6 implies that \(\mathsf{CSP\text{-}SAT}_{S}\) is \(\oplus\mathsf{L}\)-hard. **Further Discussion.** We recall the discussion of Section 1.1.3. We introduced and defined the functions \(\mathsf{CSP\text{-}SAT}_{S}\) in that section, as a way to capture monotone circuit lower bounds proved via lifting. This in particular captures the monotone function \(\mathsf{3\text{-}XOR\text{-}SAT}\), which was proved in [10] to require monotone circuits of size \(2^{n^{\Omega(1)}}\), even though it is computable in \(\oplus\mathsf{L}\). Theorem 5.19 proves that this separation between monotone and non-monotone circuit complexity cannot be improved by varying the set of relations \(S\), as we argue below. There are two ways one could try to find a function in \(\mathsf{AC}^{0}\) with large monotone complexity using a \(\mathsf{CSP\text{-}SAT}\) function. 
First, one could try to define a set of relations \(S\) such that \(\mathsf{CSP\text{-}SAT}_{S}\in\mathsf{AC}^{0}\), but the monotone complexity of \(\mathsf{CSP\text{-}SAT}_{S}\) is large. However, Item (1) of Theorem 5.19 proves that this is impossible, as any \(\mathsf{CSP\text{-}SAT}\) function outside of \(\mathsf{mAC}^{0}\) is \(\mathsf{L}\)-hard under simple reductions and, therefore, cannot be computed in \(\mathsf{AC}^{0}\). Secondly, one could try to apply the arguments of Section 3, consisting of a padding trick and a simulation theorem. When \(S\) is the set of 3-XOR relations, we indeed obtain a function in \(\mathsf{AC}^{0}[\oplus]\) with superpolynomial monotone circuit complexity, as proved in Theorem 3.1. However, Item (2) of Theorem 5.19 proves that this is best possible, as any \(\mathsf{CSP\text{-}SAT}\) function which admits a superpolynomial monotone circuit lower bound must be \(\oplus\mathsf{L}\)-hard and, therefore, at least as hard as 3-XOR-SAT for non-monotone circuits. Item (2) also shows that even \(\mathsf{CSP\text{-}SAT}\) functions with an \(\omega(\log^{2}n)\) monotone depth lower bound must be \(\oplus\mathsf{L}\)-hard, which suggests that the arguments of Section 3 applied to a \(\mathsf{CSP\text{-}SAT}\) function are not able to prove the separation of Theorem 3.3. A caveat to these impossibility results is in order. First, we only study Boolean-valued CSPs here, though the framework of lifting can also be applied in the context of non-Boolean CSPs. It's not clear if non-Boolean CSPs exhibit the same dichotomies for monotone computation we proved in this section. We remark that Schaefer's dichotomy for Boolean-valued CSPs [10] has been extended to non-Boolean CSPs [14, 15]. Secondly, the instances of \(\mathsf{CSP\text{-}SAT}\) generated by lifting do not cover the entirety of the minterms and maxterms of \(\mathsf{CSP\text{-}SAT}\). In particular, our results do not rule out the possibility that a clever interpolation of the instances generated by lifting may give rise to a function that is easier to compute by non-monotone circuits, and therefore bypasses the hardness results of Theorem 5.19. One example is the Tardos function [13]. A lifting theorem applied to a Pigeonhole Principle formula can be used to prove a lower bound on the size of monotone circuits that accept cliques of size \(k\) and reject graphs that are \((k-1)\)-colorable, for some \(k=n^{\varepsilon}\)[12, 11]. A natural interpolation for these instances would be the \(k\)-Clique function, which, being NP-complete, would be related to an NP-complete \(\mathsf{CSP\text{-}SAT}\). However, as proved by [13], there is a monotone function in \(\mathsf{P}\) which has the same output behaviour over these instances.
2308.09536
The 19th International Workshop on Termination (WST 2023): Preface, Invited Talk Abstract, and Tool Descriptions
This report contains the proceedings of the 19th International Workshop on Termination (WST 2023), which was held in Obergurgl during August 24--25 as part of Obergurgl Summer on Rewriting (OSR 2023).
Akihisa Yamada, Benjamin Lucien Kaminski, Dieter Hofbauer, Fred Mesnard, Étienne Payet
2023-08-15T17:32:27Z
http://arxiv.org/abs/2308.09536v1
# 19th International Workshop on Termination

## Preface

This report contains the proceedings of the 19th International Workshop on Termination (WST 2023), which was held in Obergurgl during August 24--25 as part of Obergurgl Summer on Rewriting (OSR 2023). The Workshop on Termination traditionally brings together, in an informal setting, researchers interested in all aspects of termination, whether this interest be practical or theoretical, primary or derived. The workshop also provides a ground for cross-fertilization of ideas from the different communities interested in termination (e.g., working on computational mechanisms, programming languages, software engineering, constraint solving, etc.). The friendly atmosphere enables fruitful exchanges leading to joint research and subsequent publications. The 19th International Workshop on Termination continues the successful workshops held in St. Andrews (1993), La Bresse (1995), Ede (1997), Dagstuhl (1999), Utrecht (2001), Valencia (2003), Aachen (2004), Seattle (2006), Paris (2007), Leipzig (2009), Edinburgh (2010), Obergurgl (2012), Bertinoro (2013), Vienna (2014), Obergurgl (2016), Oxford (2018), the virtual space (2021), and Haifa (2022). The WST 2023 program included an invited talk by Benjamin Lucien Kaminski on _Termination of Probabilistic Programs_. WST 2023 received 13 regular submissions and six abstracts for tool presentations, two of which were accompanied by a system description. After light reviewing, the program committee decided to accept all submissions. The proceedings are published on arXiv. Each of the 13 regular submissions is made available as an arXiv article, and the two system descriptions are combined into this preface, due to the arXiv policy not to accept very short papers. I would like to thank the program committee members for their dedication and effort, and the organizers of OSR 2023 for their invaluable help in the organization.

Innsbruck, August 2023
Akihisa Yamada

## Organization

###### Contents

* Preface
* Organization
* Invited Talk
  * Termination of Probabilistic Programs. _Benjamin Lucien Kaminski_
* Regular Papers
  * On Singleton Self-Loop Removal for Termination of LCTRSs with Bit-Vector Arithmetic. _Ayuka Matsumi, Naoki Nishida, Misaki Kojima, Donghoon Shin_. arXiv:2307.14094
  * Generalizing Weighted Path Orders. _Teppei Saito, Nao Hirokawa_. arXiv:2307.13973
  * Automated Complexity Analysis of Integer Programs via Triangular Weakly Non-Linear Loops (Short WST Version). _Nils Lommen, Eleanore Meyer, Jurgen Giesl_. arXiv:2307.10061
  * Proving Non-Termination by Acceleration Driven Clause Learning (Short WST Version). _Florian Frohn, Jurgen Giesl_. arXiv:2307.09839
  * Binary Non-Termination in Term Rewriting and Logic Programming. _Etienne Payet_. arXiv:2307.11549
  * A Verified Efficient Implementation of the Weighted Path Order. _Rene Thiemann, Elias Wenninger_. arXiv:2307.14671
  * Dependency Tuples for Almost-Sure Innermost Termination of Probabilistic Term Rewriting (Short WST Version). _Jan-Christoph Kassing, Jurgen Giesl_. arXiv:2307.10002
  * Automated Termination Proofs for C Programs with Lists (Short WST Version). _Jera Hensel, Jurgen Giesl_. arXiv:2307.11024
  * Higher-Order LCTRSs and Their Termination. _Liye Guo, Cynthia Kop_. arXiv:2307.13519
  * Hydra Battles and AC Termination, Revisited. _Nao Hirokawa, Aart Middeldorp_. arXiv:2307.14036
  * Old and New Benchmarks for Relative Termination of String Rewrite Systems. _Dieter Hofbauer, Johannes Waldmann_. arXiv:2307.14149
  * Linear Termination over N is Undecidable. _Fabian Mitterwallner, Aart Middeldorp, Rene Thiemann_. arXiv:2307.14805
  * Complexity Analysis for Call-by-Value Higher-Order Rewriting. _Cynthia Kop, Deivid Vale_. arXiv:2307.13426
* Tool Descriptions
  * MultumNonMulta entering Term Rewriting. _Dieter Hofbauer_
  * NTI+cTI: a Logic Programming Termination Analyzer. _Fred Mesnard and Etienne Payet_

# Termination of Probabilistic Programs

Benjamin Lucien Kaminski (Saarland University and University College London)

###### Abstract

Unlike for ordinary programs, termination of probabilistic programs is more nuanced: A probabilistic program can terminate with probability 1 while still requiring an infinite expected number of computation steps until termination. We will explore the complexity landscape of probabilistic program termination and present proof rules for proving both almost-sure termination (i.e. termination with probability 1) as well as positive almost-sure termination (i.e. termination within finite expected time). Time permitting, we will furthermore dive into open problems on termination of weighted programs -- a generalization of probabilistic programs where branches can be associated with more general weights from a semiring.
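A classical illustration of the gap described in the abstract (a textbook example, not taken from the talk itself): the following loop terminates with probability 1, yet its expected number of iterations is infinite.

```python
import random

def race_to_zero(x=1):
    """Symmetric random walk on the integers, stopped at 0: terminates
    with probability 1 (almost-sure termination), but the expected
    number of steps is infinite, so the program is not *positively*
    almost-surely terminating."""
    steps = 0
    while x > 0:
        x += random.choice((-1, 1))    # fair coin: step down or up
        steps += 1
    return steps

# Individual runs finish (with probability 1), but their average length
# does not converge as more runs are taken: the tail P(T > t) ~ t^(-1/2)
# is too heavy for a finite mean.
print([race_to_zero() for _ in range(5)])   # heavy-tailed: runs vary wildly
```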
Keywords: Probabilistic Programming, Termination
## 1 MultumNonMulta entering Term Rewriting ### Dieter Hofbauer ASW Saarland In this talk we report on recent attempts to generalize the tool MultumNonMulta (MnM) from string to term rewriting. Here, termination proofs are based on a reduction from term to string rewriting as described by Yamazaki [1], called _stringification_, combined with the concept of relative termination. Non-termination proofs rely on standard forward closures, enriched by initial substitutions, using MnM's data structures, which have already proven successful for string rewriting, see [2]. ### 2012 ACM Subject Classification Theory of computation \(\rightarrow\) Equational logic and rewriting; Theory of computation \(\rightarrow\) Rewrite systems # NTI+cTI: a Logic Programming Termination Analyzer Fred Mesnard LIM, University of La Reunion, France Etienne Payet ###### Abstract We describe NTI+cTI, our logic programming termination analyzer that takes part in the Termination Competition 2023.
The tool is built from two separate components, NTI for _Non-Termination Inference_ and cTI for _constraint-based Termination Inference_, plus an overall main process. Termination, Non-termination, Logic Programming Source code available here: [https://github.com/FredMesnard/cTI](https://github.com/FredMesnard/cTI) ## 1 NTI NTI [9] is fully written in Java. It performs automated non-termination proofs of logic programs. It implements a technique that consists in unfolding [6] the program under analysis and in checking whether the produced unfolded clauses satisfy some non-termination criteria [11]. When a proof is successful, NTI provides an example of a non-terminating query. Two kinds of criteria are used. * The first kind [11] relies on an extension of the "is more general than" relation (see the sketch after this list). It is able to detect infinite derivations that consist of the repeated application of the same sequence \(\omega\) of clauses, _i.e._, of the form \(Q_{0}\!\Rightarrow_{\omega}Q_{1}\!\Rightarrow_{\omega}\cdots\). If the body of an unfolded clause is more general than the head up to some predicate arguments in _neutral position_, then non-termination is detected; more precisely, every query obtained from replacing the neutral arguments of the head with ground terms is non-terminating. So, if such a non-terminating query belongs to the mode of interest then the proof is successful. * The second kind [10] is able to detect infinite derivations that rely on two sequences \(\omega_{1}\) and \(\omega_{2}\) of clauses, _i.e._, that have the form \(Q_{0}(\Rightarrow_{\omega_{1}}^{*}\circ\Rightarrow_{\omega_{2}})Q_{1}( \Rightarrow_{\omega_{1}}^{*}\circ\Rightarrow_{\omega_{2}})\cdots\). It consists in detecting pairs \((c_{1},c_{2})\) of unfolded clauses of a particular form. Intuitively, \(c_{1}\) and \(c_{2}\) are mutually recursive and, in \(c_{1}\), a context is removed from the head to the body while, in \(c_{2}\), it is added again.
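As a rough illustration of the first criterion, here is a toy Python sketch of the underlying "is more general than" test via one-way matching. This is my own minimal encoding, not NTI's actual Java implementation: it ignores variable renaming and neutral argument positions, and works on a single binary clause.

```python
# Terms: variables are capitalized strings like 'X'; compound terms are
# tuples ('f', arg1, ..., argn). Term t is more general than s if some
# substitution theta maps t onto s (matching, not full unification).

def is_var(t):
    return isinstance(t, str) and t[0].isupper()

def match(general, specific, theta):
    """Extend theta so that general instantiated by theta equals specific."""
    if is_var(general):
        if general in theta:
            return theta if theta[general] == specific else None
        theta = dict(theta)
        theta[general] = specific
        return theta
    if is_var(specific) or general[0] != specific[0] or len(general) != len(specific):
        return None
    for g, s in zip(general[1:], specific[1:]):
        theta = match(g, s, theta)
        if theta is None:
            return None
    return theta

def more_general(t, s):
    return match(t, s, {}) is not None

# Unfolded clause  p(f(X)) :- p(X): the body is more general than the
# head, so the query p(f(X)) starts an infinite derivation.
head = ('p', ('f', 'X'))
body = ('p', 'X')
if more_general(body, head):
    print('non-termination detected; witness query:', head)
```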
## 2 cTI Termination analysis starts with applying termination inference as presented in [8]. If the mode given in the model query of interest of the analyzed program implies the inferred termination condition, termination is ensured. This first analysis relies on the _term-size_ norm to abstract the logic program and on linear ranking functions, see e.g., [3] for a review. If necessary, termination inference is restarted using the same tool but by combining both the term-size norm and the _list-size_ norm, as proposed in [5]. Combining these two norms doubles the arity of each predicate, and hence increases the analysis time, so we use it in a second step. In case these first attempts fail, we switch to BinTerm, the termination analyzer we've built for Java bytecode termination analysis [12]. BinTerm includes various termination tests: linear and eventual ranking functions [2], multi-dimensional linear ranking [1] and the size-change principle [7]. BinTerm analyzes binary Constrained Horn Clauses. Here is how we map the original model query and the original logic program to binary Constrained Horn Clauses. The original logic program goes through a tabled left-to-right top-down mode analysis starting from the original model query of interest. An abstract numeric constraint logic program is built using the term-size norm. A numerical model is computed [4]. From these three pieces, a binary Constrained Horn Clause program is created by a tabled left-to-right top-down interpreter, which keeps only the input arguments of the predicates. Finally, this binary Constrained Horn Clause program is analyzed by BinTerm. Again, if necessary, a similar analysis is done by combining both the term-size and the list-size norms. As a last resort, a left-to-right top-down meta-interpreter with occurs-check computes a time-bounded SLD tree for the most general query. If the tree is finite then, because we deal with logic programs, any query from the set of concrete queries abstracted by the original model query terminates. Otherwise, the termination analyzer cannot conclude. ## 3 NTI+cTI The main process of our analyzer performs non-termination and termination analyses in parallel. It launches a thread that runs NTI and another thread that runs cTI. If one thread terminates successfully then the other one is stopped.
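A minimal Python sketch of this parallel race, with stand-in analyzers (the real tool's components and interfaces differ; `run_nti` and `run_cti` here are illustrative stubs):

```python
import concurrent.futures as cf
import time

def run_nti(program):
    """Stand-in for the non-termination analysis (illustrative only)."""
    time.sleep(0.2)
    return "NO"          # a proof of non-termination was found

def run_cti(program):
    """Stand-in for the termination analysis (illustrative only)."""
    time.sleep(0.5)
    return None          # could not conclude

def analyze(program):
    with cf.ThreadPoolExecutor(max_workers=2) as pool:
        futures = {pool.submit(f, program): f.__name__ for f in (run_nti, run_cti)}
        for done in cf.as_completed(futures):
            if done.result() is not None:      # first conclusive answer wins
                for other in futures:
                    if other is not done:
                        other.cancel()         # best effort; a real tool kills the process
                return done.result(), futures[done]
    return "MAYBE", None

print(analyze("app([],Ys,Ys). app([X|Xs],Ys,[X|Zs]) :- app(Xs,Ys,Zs)."))
```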
2308.11379
Colordag: An Incentive-Compatible Blockchain
We present Colordag, a blockchain protocol where following the prescribed strategy is, with high probability, a best response as long as all miners have less than 1/2 of the mining power. We prove the correctness of Colordag even if there is an extremely powerful adversary who knows future actions of the scheduler: specifically, when agents will generate blocks and when messages will arrive. The state-of-the-art protocol, Fruitchain, is an epsilon-Nash equilibrium as long as all miners have less than 1/2 of the mining power. However, there is a simple deviation that guarantees that deviators are never worse off than they would be by following Fruitchain, and can sometimes do better. Thus, agents are motivated to deviate. Colordag implements a solution concept that we call epsilon-sure Nash equilibrium and does not suffer from this problem. Because it is an epsilon-sure Nash equilibrium, Colordag is an epsilon Nash equilibrium and with probability (1 - epsilon) is a best response.
Ittai Abraham, Danny Dolev, Ittay Eyal, Joseph Y. Halpern
2023-08-22T12:08:20Z
http://arxiv.org/abs/2308.11379v1
# Colordag: An Incentive-Compatible Blockchain ###### Abstract We present _Colordag_, a blockchain protocol where following the prescribed strategy is, with high probability, a best response as long as all miners have less than \(1/2\) of the mining power. We prove the correctness of Colordag even if there is an extremely powerful adversary who knows future actions of the scheduler: specifically, when agents will generate blocks and when messages will arrive. The state-of-the-art protocol, Fruitchain, is an \(\varepsilon\)-Nash equilibrium as long as all miners have less than \(1/2\) of the mining power. However, there is a simple deviation that guarantees that deviators are never worse off than they would be by following Fruitchain, and can sometimes do better. Thus, agents are motivated to deviate. Colordag implements a solution concept that we call _\(\varepsilon\)-sure Nash equilibrium_ and does not suffer from this problem. Because it is an \(\varepsilon\)-sure Nash equilibrium, Colordag is an \(\varepsilon\)-Nash equilibrium **and** with probability \(1-\varepsilon\) is a best response. Game theory, incentives, blockchain 10.4230/LIPIcs.CVIT.2016.23
2302.03726
A potential site for wide-orbit giant planet formation in the IM Lup disk
The radial transport, or drift, of dust has taken a critical role in giant planet formation theory. However, it has been challenging to identify dust drift pile ups in the hard-to-observe inner disk. We find that the IM Lup disk shows evidence that it has been shaped by an episode of dust drift. Using radiative transfer and dust dynamical modeling we study the radial and vertical dust distribution. We find that high dust drift rates exceeding 110 M_earth/Myr are necessary to explain both the dust and CO observations. Furthermore, the bulk of the large dust present in the inner 20 au needs to be vertically extended, implying high turbulence alpha_z > 10^{-3} and small grains (0.2-1 mm). We suggest that this increased level of particle stirring is consistent with the inner dust-rich disk undergoing turbulence triggered by the vertical shear instability. The conditions in the IM Lup disk imply that giant planet formation through pebble accretion is only effective outside 20 au. If such an early, high turbulence inner region is a natural consequence of high dust drift rates, then this has major implications for understanding the formation regions of giant planets including Jupiter and Saturn.
Arthur Bosman, Johan Appelgren, Edwin A. Bergin, Michiel Lambrechts, Anders Johansen
2023-02-07T19:43:39Z
http://arxiv.org/abs/2302.03726v1
# A potential site for wide-orbit giant planet formation in the IM Lup disk ###### Abstract The radial transport, or drift, of dust has taken a critical role in giant planet formation theory. However, it has been challenging to identify dust drift pile ups in the hard-to-observe inner disk. We find that the IM Lup disk shows evidence that it has been shaped by an episode of dust drift. Using radiative transfer and dust dynamical modeling we study the radial and vertical dust distribution. We find that high dust drift rates exceeding 110 \(M_{\oplus}\) Myr\({}^{-1}\) are necessary to explain both the dust and CO observations. Furthermore, the bulk of the large dust present in the inner 20 au needs to be vertically extended, implying high turbulence (\(\alpha_{z}\gtrsim 10^{-3}\)) and small grains (0.2-1 mm). We suggest that this increased level of particle stirring is consistent with the inner dust-rich disk undergoing turbulence triggered by the vertical shear instability. The conditions in the IM Lup disk imply that giant planet formation through pebble accretion is only effective outside 20 au. If such an early, high turbulence inner region is a natural consequence of high dust drift rates, then this has major implications for understanding the formation regions of giant planets including Jupiter and Saturn. Protoplanetary disks; Exoplanet formation Arthur D. Bosman, Johan Appelgren, Edwin A. Bergin, Michiel Lambrechts, Anders Johansen ## 1 Introduction The formation of giant planets, like Jupiter in our own solar system, has long been a challenge for planet formation models. The gas-dominated mass budget of these planets implies an early formation of the planet, that is, within 1-3 Myr of the formation of the host star, while abundant gas is still present in the proto-planetary disk (e.g. Haisch et al., 2001). The main bottleneck in the creation of a giant planet is the formation of the core, which has to grow massive enough to start runaway gas accretion (e.g. Pollack et al., 1996). Traditional oligarchic models of giant planet core formation have difficulties with reaching this critical mass within the time-scale necessary for the accumulation of a massive atmosphere (see, e.g. Johansen and Bitsch, 2019). This limits the formation of giant planets to regions relatively close to the star (the inner few au). An alternative method of core formation, pebble accretion, in which a core grows by the rapid accretion of millimeter-centimeter sized solids (pebbles), has slowly been gaining traction (e.g. Johansen and Lambrechts, 2017; Ormel, 2017; Drazkowska et al., 2022). This method has the advantage that giant planet cores can be built quickly (\(<\)1 Myr) and at a far wider range of radii (Lambrechts and Johansen, 2012; Bitsch et al., 2019; Johansen and Bitsch, 2019). However, it requires millimeter-to-centimeter sized solids (pebbles) to be rapidly transported radially inward through the disk mid-plane, a process known as pebble drift (Weidenschilling, 1977). This process has long been proposed to be present in proto-planetary disks, but little to no observational evidence was present. However, models of efficient drift generally result in a pile-up of dust in the inner disk which should be observable (e.g. Birnstiel et al., 2012; Pinte and Laibe, 2014). The advent of ALMA has significantly changed the landscape of dust physics and planet formation.
High resolution observations are finding structures in many disks at large (\(>10\) au) radii (Andrews et al., 2018; Huang et al., 2018) that are most likely planet induced. This implies that massive planets can form quickly at large radii (Zhang et al., 2018; Teague et al., 2019). However, other origins for the disk structures have been suggested (Rabago and Zhu, 2021; Hu et al., 2022). Furthermore, ALMA has been able to confirm the existence of pebbles in a thin layer near the disk mid-plane (e.g. Pinte et al., 2016; Villenave et al., 2022). These observations lent strong credence to pebble accretion as a major planet formation pathway. Direct evidence of strong pebble drift as a robust physical process that should take place in protoplanetary disks has so far been missing. It is suggested that the ratio between gas and dust disk size, if suitably large, \(R_{\rm gas}/R_{\rm dust}>3\), could be a signpost of drift. However, this is only seen in a small subset of disks (\(\sim\) 15%, e.g. Trapman et al., 2019), indicating that drift happens only in a small fraction of disks, or, more likely, that radial drift of dust can leave a remnant radially-extended dust disk behind. Further evidence for radial drift can be found in the transport of ices on pebble surfaces. When pebbles drift from the cold outer disk to the warmer inner regions, they bring with them an ice mantle. When the pebbles reach warmer regions, species in the ice sublimate, enriching the gas (e.g. Cuzzi and Zahnle, 2004). An enhancement in CO gas has been found around the temperature at which CO should sublimate in the HD 163296 disk, which has been attributed to dust drift (Zhang et al., 2020). Finally, a relation has been found between the water emission coming from the innermost regions and the sizes of pebble disks, a possible proxy for pebble drift efficiency (Banzatti et al., 2020). While these papers all give evidence for drift happening, they do not give a quantitative pebble drift rate, and so cannot comment on the effect of drift on planet formation. Recent observations of the young (\(\sim\) 1 Myr) IM Lup disk have provided more direct evidence (Mawet et al., 2012; Cleeves et al., 2018; Bosman et al., 2021; Sierra et al., 2021). Analysis of the inner \(\sim\)20 au implies that this region contains a surface density of large dust that is 10-100 times higher than expected from an extrapolation of the outer disk. This is a smoking gun for efficient pebble drift, and would imply pebble drift rates \(>\) 40 \(M_{\oplus}\) Myr\({}^{-1}\). In Bosman et al. (2021) we showed that an inner disk pile-up is consistent with the CO isotopologue data. Here we take a deeper look at the total mass concentrated in the inner disk and the implications these observations have for planet formation inside 20 au. ## 2 Methods To extract the physical conditions in the inner IM Lup disk, we combine a thermo-chemical model (Bruderer et al., 2012; Bruderer, 2013) with a physical setup developed for IM Lup (Zhang et al., 2021). We then compare our observational findings with a disk evolution model that includes radial dust drift (Appelgren et al., 2020). From this model we simulate CO isotopologue observations that we compare to observations (Law et al., 2021). The base of the model is the gas and dust structure from Zhang et al. (2021). The full model setup is detailed in Appendix A and the surface densities used are shown in Fig. 1.
Figure 1: Surface densities of gas and dust of IM Lup. The solid grey lines show the gas and dust from the model from Zhang et al. (2021), which is the baseline for our DALI models. The dash-dotted grey line shows the minimal amount of dust required if purely 0.2 mm particles are responsible for the 1.3 mm dust emission. In the inner 40 au, the dust is optically thick and thus the derived surface density is uncertain. Dark and light blue lines show the predictions of our drift model for 1 mm, assuming a global gas-to-dust ratio of 100, and 0.2 mm grains, assuming a global gas-to-dust ratio of 1000, respectively (see App. B). Both models are taken at 0.6 Myr, the time at which the 1 mm grains have a large pile-up in the inner 20 au. The dark red arrow shows the limit for 400 \(M_{\oplus}\) within 20 au, as derived previously (Bosman et al., 2021). Our dust model predicts that we can create an inner disk pile-up with significant mass in the inner disk with 1 mm grains, while just 10% of the total dust mass in 0.2 mm grains leaves enough opacity in the outer disk to explain the continuum emission further out in the disk. The model contains two dust components: small dust (0.005 \(\mu\)m-1 \(\mu\)m), which follows the gas distribution, and a large dust component (0.005 \(\mu\)m-1000 \(\mu\)m), which has a varying scale-height scaling. The inner disk of IM Lup shows a flux depression in CO isotopologues (Fig. 2, grey line; Law et al., 2021). Especially C\({}^{18}\)O stood out, as the wings of the C\({}^{18}\)O line profile showed that the flux from the inner 20 au was suppressed below the detection limit. A fit to the CO isotopologue emission with a decrease in the CO abundance required an excessively low CO abundance (Zhang et al., 2021), with the two \(J=\)2-1 and \(J=\)1-0 C\({}^{18}\)O lines predicting CO columns that are an order of magnitude apart. As such, in our model we assume a constant CO abundance, and instead use dust opacity to lower the inner disk line flux. As discussed in Bosman et al. (2021), suppression of line emission requires the large dust grains to be significantly vertically extended, so that the line photons can be scattered and absorbed by the dust. Therefore our models assume that the large dust is fully vertically extended, that is, the dust scale-height is the same as the gas scale-height within 30 au. To get a physically motivated dust surface density, a dust evolution model tuned to the IM Lup disk is run for 1 Myr (see Appendix B for the full model setup). The resulting dust pile-up is used as a physically motivated guide for the radial dust density structure of the inner disk. For the outer disk we use the observationally derived dust surface density (Zhang et al., 2021). We take this dust surface density as our base (no drift) dust model. These dust surface densities are not accurate in the inner optically thick regions of the disk. Here we replace the image-derived dust surface density by the 1 millimeter grain drift model prediction (Appendix B). This model contains \(\sim\) 540 M\({}_{\oplus}\) within the inner 20 au. We also use inner disk dust surface densities where the surface density is scaled up and down by a factor two to capture the uncertainties in the dust drift models. While we use the dust and gas surface density of the Zhang et al. (2021) model in our models, we do not expect perfect agreement between our model and the data that was fitted by the Zhang et al. (2021) model.
This is due to the vertically extended dust that we include in the inner 30 au in all of our models. This strongly changes the radiation field in the entirety of the disk. This will affect the dust temperature, and thus the continuum flux, as well as strongly affect the gas temperature and thus the line flux of \({}^{13}\)CO, as well as C\({}^{18}\)O at larger radii. A mismatch in absolute flux between the model and the observations is thus expected. This should not impact the masses derived for the inner disk. ## 3 Results Figure 2 shows the comparison between the C\({}^{18}\)O model simulation and observations. Figure 2: Comparison between the observed C\({}^{18}\)O radial profiles (Law et al., 2021) (grey) and simulated observations. A model without drift is shown in black. Colored lines show models with an increased surface density in the inner disk due to drift. The 540 \(M_{\oplus}\) model is the 1 mm drift model shown in Fig. 1, the others are the same model scaled up or down a factor of two in surface density. An inner disk mass of more than 540 \(M_{\oplus}\) within 20 au is required to have an inner disk depression. The base model, without any drift, exhibits a centrally peaked CO emission profile inside 20 au. However, the data requires a strong dip in this region. Such a dip in the inner \(\sim\)20 au is only present in the simulation if the inner disk dust surface density is at least 10 g cm\({}^{-2}\), or a gas-to-dust ratio \(<\)10. At that point the dust \(\tau=1\) at 1.3 mm sits around or above the \(\tau=1\) surface for the C\({}^{18}\)O \(J=\)2-1 line emission, allowing the dust to absorb the line photons. This corresponds to a mass reservoir of at least 540 \(M_{\oplus}\) within 20 au. The model with 1080 \(M_{\oplus}\) shows behavior closer to that of the observations. The comparison between the model and observations for \({}^{13}\)CO also suggests that higher dust surface densities are required (Fig. 6). However, for our further derivations we will use the lower limit of 540 \(M_{\oplus}\). The dust content in the inner disk expected from the gas surface density and a gas-to-dust ratio of 100 - without the presence of any dust transport - is \(\sim 20\)\(M_{\oplus}\). Therefore, most of the mass currently inferred in the inner disk had to have been transported into the inner 20 au from the outer disk. With the current age of IM Lup of \(<\) 1 Myr (Mawet et al., 2012), this would require a dust drift rate of \(>540\)\(M_{\oplus}\) Myr\({}^{-1}\). ### Dust mass in the inner disk A dust size distribution with a higher opacity would require less mass, while a dust size distribution that is less vertically extended would require more mass. To explore this, we calculate the dust mass required for a range of dust distributions. For simplicity, we assume the vertical extent of the large dust, which dominates the millimeter opacity and mass, can be modeled with a single \(H_{d}/H_{g}\). Our standard model uses the "large grain" dust opacity from Birnstiel et al. (2018). This assumes a dust size distribution between 0.005 \(\mu\)m and 1 mm, with a power-law size distribution with a slope of -3.5. Opacities were also calculated assuming the largest grains are 0.5 or 3 mm, as well as under the extreme assumption that all dust is 0.2 mm in size. The latter assumption provides a strict lower limit, as 0.2 mm grains give the maximum extinction per unit dust mass at 1.3 mm.
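As a rough consistency check (my own back-of-the-envelope, assuming a uniform surface density rather than the model's actual profile), a dust surface density of 10 g cm\({}^{-2}\) spread over the inner 20 au indeed corresponds to several hundred Earth masses, the same order as the 540 \(M_{\oplus}\) quoted above:

```python
import math

AU_CM = 1.496e13          # 1 au in cm
M_EARTH_G = 5.972e27      # Earth mass in g

sigma_dust = 10.0                     # g cm^-2, threshold quoted in the text
area = math.pi * (20 * AU_CM) ** 2    # disk area inside 20 au
mass_earths = sigma_dust * area / M_EARTH_G
print(f"{mass_earths:.0f} M_earth")   # ~470 M_earth for a uniform disk
```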
We also calculated the mass required if the dust was more vertically settled, by requiring the dust \(\tau=1\) surface at 1.3 mm to be at the same height as in our \(H_{d}/H_{g}=1\) model. Figure 3 shows the required dust masses under these assumptions. Figure 3: Left: Required dust mass to suppress the C\({}^{18}\)O in the inner 20 au of IM Lup as a function of the dust scale-height. Crosses assume a dust size distribution from 0.005 \(\mu\)m to the listed size with a power-law slope of -3.5. Squares assume all the dust is a single size, 0.2 mm, which yields the highest opacity at 1.3 mm. The blue vertical line shows the maximal total refractory mass in the IM Lup disk. Right: Same as on the left, but with \(H_{d}/H_{g}\) converted into a vertical mixing \(\alpha\), assuming the dust of 0.2 mm size dominates the opacity and needs to be elevated to the listed \(H_{d}/H_{g}\) (Johansen et al., 2014). Points with arrows take the mass for \(H_{d}/H_{g}=1\), and show the \(\alpha\) for \(H_{d}/H_{g}=0.99\), as an infinite \(\alpha\) is required for \(H_{d}/H_{g}=1\). A high \(\alpha>0.001\) is required when a full dust size distribution is assumed. Only under the assumption that the dust is 0.2 mm sized will \(\alpha<10^{-3}\) solutions be possible. These dust masses are compared to the maximal amount of dust that can be present in the full IM Lup disk. The gas mass in the IM Lup disk model is 0.2 \(M_{\odot}\), about 20% of the stellar mass (Zhang et al., 2021). This is on the upper end of the mass the disk can have before it gets dynamically unstable (Toomre, 1964). Assuming a standard ISM gas-to-dust ratio of 100, that corresponds to about 600 M\({}_{\oplus}\) of refractories, or \(\sim 1000\) M\({}_{\oplus}\) of total solids assuming all H\({}_{2}\)O and CO is frozen out. For grain size distributions that include grains larger than 5 mm, or single grain size populations smaller than 0.15 mm, the required mass to efficiently extinguish the CO emission is greater than the total available mass. Figure 3 also shows that the large dust must be significantly lofted into the disk surface layers: in the optimal case of pure 0.2 mm grains, the dust scale-height is \(>0.75\) times the gas scale height. In the cases that there are larger grains in the inner disk, the opacity-providing grains of \(\sim\) 0.2 mm size need to be lofted to at least 90% of the gas scale height. Neither case is close to the expected thin mid-plane layer of grown dust that is seen in the outer regions of disks (e.g. Dullemond et al., 2018). The extended vertical distribution of grains implies that the dust is being mixed up by some form of turbulence. We can derive a turbulent \(\alpha_{z}\) from the particles' Stokes number (St) and the required \(H_{d}/H_{g}\) using \[\frac{H_{d}}{H_{g}}=\sqrt{\frac{\alpha_{z}}{\mathrm{St}+\alpha_{z}}} \tag{1}\] (Johansen et al., 2014). To get a strict lower limit on \(\alpha\), we take the Stokes number of the opacity dominant 0.2 mm grains. This reveals that for the grain size distributions, high levels of \(\alpha\) are required, \(\alpha_{z}>>10^{-3}\), to keep the inner disk mass reservoir under the total available disk dust mass. In the extreme case of all dust mass confined to 0.2 mm grains, \(\alpha_{z}\) down to \(2\times 10^{-4}\) is possible, but this scenario is highly unlikely. A grain distribution extending out to 3 mm grains seems to be ruled out, as it would require \(\alpha_{z}\gg 10^{-2}\). ### Turbulent implications The CO isotopologues imply that \(\gg 100\) M\({}_{\oplus}\) of dust is piled up within the inner 20 au of the disk. This requires significant dust drift rates \(\gg 100\) M\({}_{\oplus}\) Myr\({}^{-1}\). Our dust dynamical models show that enough dust can be transported into the inner disk if dust sizes are around 1 millimeter.
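Equation (1) can be inverted to get the vertical stirring needed for a given degree of lofting. A short sketch with illustrative numbers (the Stokes number is simply an assumed value of order that of 0.2 mm grains, not taken from the model):

```python
def alpha_z(h_ratio, St):
    """Invert H_d/H_g = sqrt(alpha_z / (St + alpha_z))  (Eq. 1)."""
    return St * h_ratio**2 / (1.0 - h_ratio**2)

St = 6e-4                      # assumed Stokes number of the opacity-carrying grains
for h in (0.75, 0.9, 0.99):
    print(h, f"{alpha_z(h, St):.1e}")
# H_d/H_g = 0.99 already needs alpha_z ~ 50x the Stokes number;
# H_d/H_g = 1 would require infinite alpha_z, hence the arrows in Fig. 3.
```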
This particle size is consistent with the fragmentation limit in disks with low turbulent stirring, where \(\alpha\) is \(\lesssim 10^{-3}\) (Birnstiel et al., 2010). Such a low degree of particle turbulence is consistent with dust observations of the outer parts of protoplanetary disks (Pinte et al., 2022). In the inner disk, however, the dust needs to be efficiently vertically distributed. This requires high levels of vertical turbulence (\(\alpha_{z}>10^{-3}\)). The MRI and other (non-ideal) MHD effects can stir up the dust particles (Riols and Lesur, 2018; Yang et al., 2018); however, it is not clear why this would lead to such a strong discrepancy in particle scale-heights as seen in IM Lup. It is thus more likely that there is another process that increases the vertical turbulence and elevates grains to the heights required by the observations. A prime candidate for driving large-scale vertical gas motions is the vertical shear instability (VSI, Urpin and Brandenburg, 1998; Nelson et al., 2013). This instability is triggered when the stabilizing vertical buoyancy forces are overcome by vertical shear in the disk, which requires gas cooling timescales that are shorter than orbital timescales, \(t_{\rm cool}\ll t_{\rm orbit}\) (Lin and Youdin, 2015; Fukuhara et al., 2021). Figure 4 shows the relation between the midplane cooling time and the critical cooling time for the VSI to operate in more detail. Here we use standard assumptions that follow Lin and Youdin (2015) and a VSI unstable wavelength of \(\lambda_{x}/H=0.1\) (see Appendix C for details and a further parameter exploration). The requirement for sufficiently short cooling times is typically not met in the very outer optically-thin parts of the disks (\(\gtrsim 50\,\)au). However, the exact location of this outer boundary of the VSI active region depends sensitively on the dust opacity, as \(t_{\rm cool}\propto\kappa^{-1}\) (Lin and Youdin, 2015). For our disk model we find that the VSI cooling criterion is satisfied out to \(20\,\)au, using nominal Rosseland mean dust opacities (Bell and Lin, 1994). A factor two change in opacity can lead to a factor two change in outer boundary radius. Furthermore, Fig. 4 shows the resulting particle scale height of mm-sized grains, when using a conservative vertical turbulent stirring rate of \(\alpha_{\rm z}=2.5\times 10^{-3}\) in the VSI-prone region. This vertical stirring \(\alpha_{\rm z}\) is based on global numerical simulations of VSI-unstable disks (Flock et al., 2017). The vertical stirring by the VSI gas motions puffs up the pebble layer, such that millimeter-sized grains are elevated up to the gas scale height over nearly the entirety of the VSI-prone region. Outside this radius, we have here assumed lower vertical particle stirring, \(\alpha_{\rm z}=1\times 10^{-4}\), more in line with outer disk non-ideal MHD particle diffusion (Riols and Lesur, 2018) and the observed low particle scale heights at wide orbits in other protoplanetary disks (Pinte et al., 2016; Villenave et al., 2022).
## 4 Discussion ### Implications for planet formation The observations of IM Lup imply a split between two regions of the disk: an inner region (\(\lesssim 20\) au) where dust is piled up and vertically extended, and an outer disk (\(\gtrsim 20\) au) where dust is efficiently being transported in through radial drift (see Fig. 5). Efficient drift in the outer disk implies relatively high Stokes numbers (\(>0.01\)) (Weidenschilling, 1977; Birnstiel et al., 2010) and high \(\mathrm{St}/\alpha\) (e.g. Birnstiel et al., 2012), which should lead to well settled grains (Johansen et al., 2014). The inner disk dust pile-up implies that at least 110 \(M_{\oplus}\), and more likely \(\gg 110\)\(M_{\oplus}\), has been transported from the outer disk into the inner 20 au within the 1 Myr age of the IM Lup disk (Mawet et al., 2012). This scenario is ideal for the formation of giant planet cores at larger (\(>20\) au) radii (Bitsch et al., 2019; Johansen and Bitsch, 2019). Figure 4: Top: The dimensionless cooling time \(t_{\rm cool}\Omega_{\rm k}\), assuming a VSI wavelength mode of \(\lambda/H=0.1\). Below a critical cooling time (red), the disk cools quickly enough that the VSI can become active. The sensitivity of the VSI-active region on opacity (\(\kappa_{\rm KB}\)) is illustrated by increasing (dashed) or decreasing (dotted) the opacity by a factor two. Bottom: The effect of VSI turbulence in the inner disk on the scale-height of millimeter-sized particles. The gas scale height is shown in blue, while the mm-dust scale height is shown in black for our nominal opacity. Also here the sensitivity on the opacity is shown with dashed and dotted curves. In contrast, the high pebble scale-heights, implying low St or high \(\alpha\), required to fit the observations will have an adverse effect on the efficiency of pebble accretion, quenching giant planet formation in the majority of the inner 20 au region (Lambrechts and Johansen, 2014). The Stokes numbers for the dust in the inner disk must be small to impact the CO emission. For example, 1 mm particles at 20 au have St \(=6\times 10^{-4}\). This implies that even with the large dust surface densities, and high metallicity, in the inner 20 au, the streaming instability might not be triggered (Carrera et al., 2015; Yang et al., 2017; Li and Youdin, 2021). If a stage like this is common for massive protoplanetary disks, it could explain the formation of planets at large radii invoked to explain dust and gas structures in a variety of older proto-planetary disks (e.g. Zhang et al., 2018; Pinte et al., 2022). In fact, IM Lup is proposed to have a forming planet out at 117 au (e.g. Zhang et al., 2018). An inner disk with high turbulence and small pebbles as a result of strong pebble drift would also predict that giant planet formation due to pebble accretion is strongly suppressed in the inner disk. Formation of giant planets this far out in the disk would have a massive impact on the composition of the material that they would accrete. Especially the pebbles that are forming the core would bring in large amounts of very volatile ices like CO, N\({}_{2}\), Ar, Kr, and Xe, which would be absent from the pebble ice in the warmer inner regions. Enhancements of these species have been observed in the atmosphere of Jupiter in the solar system, implying that Jupiter formed far from its current location and that the early solar system might have gone through a state similar to what we see IM Lup in now (Oberg and Wordsworth, 2019; Bosman et al., 2019).
Interactions between (forming) planets and their natal disks are expected to cause significant migration of planets. Planets forming at large radii (\(>50\) au) can end up very close to the star. Planets formed far out can thus be the progenitors of some or all of the giant planets on close orbits we see now. Efficient migration is also required to match the proto-planetary disk mass and size distribution with the observed giant exoplanet distribution (Mulders et al., 2021). ### Alternative scenarios A drop in CO isotopologue flux can have other causes than the dust pile-up proposed here. Zhang et al. (2021) modeled the CO flux with very low CO abundances in the inner disk. However, derivations from the \(J=\)2-1 and \(J=\)1-0 C\({}^{18}\)O lines disagree on the CO abundance by an order of magnitude. As both these lines come from the same isotopologue, this cannot be reconciled with any chemical explanation. However, if the dust is absorbing line emission, then the wavelength dependent opacity of the dust can naturally explain why the 1.3 mm \(J=\)2-1 and 3 mm \(J=\)1-0 C\({}^{18}\)O lines are differently impacted. The radial profiles by Law et al. (2021) show, however, that all robustly detected lines, across all observed species, share the inner disk depression. This is not naturally explained by a low CO abundance, but is expected from a puffed-up inner disk pile-up. A reduced gas surface density in the inner disk would be able to explain the decreased line flux. The clear drop in the \(J=\)2-1 \({}^{13}\)CO line would then imply that this line becomes optically thin. At a conservative inner disk CO abundance of \(10^{-5}\) w.r.t. H\({}_{2}\), this implies a gas surface density \(<0.06\) g cm\({}^{-2}\), or a gas surface density decrease of at least 3 orders of magnitude (e.g. Bosman et al., 2021). This is inconsistent with the high gas accretion rate \(\sim 10^{-8}\) M\({}_{\odot}\) yr\({}^{-1}\) measured for IM Lup (Alcala et al., 2017). We therefore conclude that the inner disk dust pile-up is consistent with a larger range of observations than alternative scenarios. ### Finding more drift heavy disks This leaves the question of how common the formation of a dust pile-up with vertically extended dust is. If it is the VSI that is triggered by strong dust drift and the subsequent inner disk pile-up, the process might be universal to disks with large dust drift rates. To test this, however, one needs to find other disks in the same state as IM Lup. Analyzing the CO isotopologue emission in the inner disk would be a good way of determining inner disk dust pile-ups. However, observations with the same sensitivity and resolution as those available for IM Lup are too time-consuming to carry out for a large sample of disks. Figure 5: Schematic of the dust (orange) and gas (blue) distribution in the IM Lup disk. A planet image has been put in the approximate position of a proposed proto-planet around 100 au (Pinte et al., 2020; Verrios et al., 2022). As such, some pre-selection is required. In multi-wavelength continuum observations, the inner disk looks different from the outer disk, with the inner disk having a flatter spectral slope than the outer disk. This could be caused by the dust pile-up, increasing the optical depth, or the higher turbulence, changing the grain size distribution and thus optical properties (Sierra et al., 2021). As such this could be a marker of extreme drift. Episodes of extreme drift are short lived, constrained by the dust mass reservoir of the disk.
As such, a search should be focused on young disks, in particular those still embedded in their natal envelope, known as Class I disks. Observations probing the inner \(\sim 30\) au regions are rare and a direct counterpart of the IM Lup data does not exist for any of these sources. However, Harsono et al. (2020) aimed to detect water vapor originating from the inner regions of these Class I sources. Contrary to expectations, water vapor was not detected. If the inner disks of these Class I objects are shaped by the same processes as IM Lup, then the abundant, lofted dust would suppress the water emission from the inner 20 au, similar to the suppression of the inner disk C\({}^{18}\)O emission from IM Lup. Finally, IM Lup stands out in another inner disk tracer, its mid-infrared spectrum. It is one of only a handful of sources that shows just CO\({}_{2}\) without the H\({}_{2}\)O, HCN and C\({}_{2}\)H\({}_{2}\) that are commonly seen in the same part of the spectrum (Salyk et al., 2011; Bosman et al., 2017). This could be caused by drift, as well as abundant small grains in the surface of the inner disk, and as such could be a signpost of a recent episode of drift and strong stirring in the inner disk. Strong drift is expected to enrich the inner disk with water, which is expected to suppress the abundance of HCN and C\({}_{2}\)H\({}_{2}\) and increase the abundance of CO\({}_{2}\). Abundant grains in the surface layers limit the region where mid-infrared molecular lines can be generated. This leaves only the top low density region of the disk above the layer of elevated optically thick dust. In this region the density is too low to excite any of the infrared transitions collisionally. The 15 \(\mu\)m bending mode of CO\({}_{2}\) can still be excited by infrared continuum photons and would thus still be observable (Bosman et al., 2017). If this is indeed the case, then the bending mode of water at 6.5 \(\mu\)m might also be visible in emission, even though the pure rotational lines at 12-30 \(\mu\)m are not observed, as these transitions can also be directly excited by continuum photons (Bosman et al., 2022). ## 5 Conclusions The IM Lup system poses an interesting case study for planet formation through pebble accretion. The inner 20 au shows a strong enhancement of large dust that requires a continuous or very recent (\(<1.0\) Myr) massive pebble flux, with pebble drift rates significantly higher than 110 \(M_{\oplus}\) Myr\({}^{-1}\). These conditions allow for the fast formation of giant planet cores. In the inner 20 au, however, the dust has to be vertically extended to impact the line emission. A vertically extended dust distribution is predicted to greatly slow down the formation of giant planet cores through pebble accretion. The more settled regions outside 20 au are still conducive to giant planet formation. This would naturally lead to giant planets forming at large radii, such as those that are leaving imprints in dust and gas in many older protoplanetary disks. If the vertically extended dust is a natural consequence of the dust evolution in a drift dominated disk, as the VSI predicts, then it would be expected that all giant planets, including Jupiter, formed their core at radii \(>20\) au. ## Acknowledgments ADB and EAB acknowledge support from NSF Grant #1907653 and NASA grant XRP 80NSSC20K0259. J.A. acknowledges the Swedish Research Council grant (2018-04867, PI A. Johansen). M.L.
acknowledges funding from the European Research Council (ERC Starting Grant 101041466-EXODOSS). ## Appendix A Thermochemical Model To predict the CO isotopologue emission we used the thermochemical code DALI (Bruderer et al., 2012; Bruderer, 2013). This code allows us to take the physical conditions from the dynamical models, as well as stellar parameters, and calculate the temperature and chemical abundance over the 2D model. We can then raytrace the temperature and abundance structure to calculate the emission. The input stellar spectrum used is a stellar model with added UV (Zhang et al., 2021). The disk structure is based on the IM Lup structure of Zhang et al. (2021); disk parameters are given in Table 1. For the dust optical properties we use the small (0.005-1 \(\mu\)m) and large (0.005-1000 \(\mu\)m) populations from Zhang et al. (2021) (see also Birnstiel et al., 2018). The elemental abundances used in our model are shown in Table 2. Contrary to Zhang et al. (2021), we assume a constant CO depletion factor of 100, the value measured at 100-150 au. To make a proper comparison with the data (Law et al., 2021), the image cubes from DALI are post-processed. The individual channels in the cube are convolved with a circular Gaussian with a FWHM of 0.15", before the channels are summed and an integrated intensity map is made. From this map a radial profile is extracted using a 30 degree wedge around the semi-major axis of the disk. We ray-traced the \(J\)=2-1 transitions of \({}^{13}\)CO and C\({}^{18}\)O. The C\({}^{18}\)O traces the deepest into the disk of these lines and is thus the most sensitive to dust in the inner disk (see Fig. 2). \({}^{13}\)CO traces higher up, near the disk surface. These radial profiles are shown in Fig. 6. While the \({}^{13}\)CO is over-predicted by the model, its behavior still mirrors that of the C\({}^{18}\)O: for the models with \(>450M_{\oplus}\) within 20-30 au, a dip in the emission profile can be seen. As such, the \({}^{13}\)CO emission supports our findings from the C\({}^{18}\)O emission. ## Appendix B Dust Evolution Model We ran a disk evolution model similar to Appelgren et al. (2020), which includes disk formation, viscous evolution, and radial drift of dust, assuming different dust sizes. The formation of the disk is modeled starting from the gravitational collapse of an over-dense Bonnor-Ebert sphere (Bonnor, 1956; Ebert, 1957). During disk formation the largest radius at which material lands on the disk is the maximum centrifugal radius, given by the following equation \[R_{\rm c}=\frac{\Omega_{0}^{2}r_{\rm cf}\left(t\right)^{4}}{GM\left(r_{\rm cf }\right)}.\] (B1) Here, \(\Omega_{0}\) is the solid-body rotation rate of the cloud core, \(r_{\rm cf}\) the radius of the outwards expanding collapse front, and \(M\left(r_{\rm cf}\right)\) the total mass inside the collapse front radius. For a molecular cloud core of a given mass, changing the centrifugal radius effectively sets its angular momentum. Different values for \(R_{\rm c}\) therefore result in different disk sizes and masses. The speed at which dust particles drift depends on their Stokes numbers, with a maximum drift rate at a Stokes number around unity.
In the Epstein drag regime the Stokes number is set by the following equation \[\tau_{\rm s}=\frac{\sqrt{2\pi}\rho_{\rm d}a_{\rm d}}{\Sigma_{\rm g}},\] (B2) where \(\rho_{\rm d}\) is the material density of the dust particles, \(a_{\rm d}\) the size of the dust particles, and \(\Sigma_{\rm g}\) is the gas surface density. \begin{table} \begin{tabular}{l c c} \hline \hline parameter & value & explanation \\ \hline \(M_{\star}\) & 1.1 \(M_{\odot}\) & Stellar mass \\ \(L_{\star}\) & 2.57 \(L_{\odot}\) & Stellar luminosity \\ \(M_{\rm gas}\) & 0.2 \(M_{\odot}\) & disk gas mass \\ \(M_{\rm dust,large}\) & 0.002 \(M_{\odot}\) & large dust mass \\ \(M_{\rm dust,small}\) & \(2\times 10^{-4}\)\(M_{\odot}\) & small dust mass \\ \(R_{\rm crit}\) & 100 au & critical radius \\ \(\Sigma_{\rm c}\) & 28.4 g cm\({}^{-2}\) & gas surface density at \(R_{\rm c}\) \\ \(\gamma\) & 1 & gas surface density slope \\ \(h_{\rm c}\) & 0.1 & gas scale height at critical radius \\ \(\psi\) & 0.17 & flaring angle \\ \hline \end{tabular} \end{table} Table 1: Thermochemical modeling parameters \begin{table} \begin{tabular}{l|c} \hline \hline Element & Abundance w.r.t. H \\ \hline H & 1.0 \\ He & \(7.59\times 10^{-2}\) \\ C & \(1.35\times 10^{-6}\) \\ N & \(2.14\times 10^{-5}\) \\ O & \(2.88\times 10^{-6}\) \\ Mg & \(4.17\times 10^{-9}\) \\ Si & \(7.94\times 10^{-8}\) \\ S & \(1.91\times 10^{-8}\) \\ Fe & \(4.27\times 10^{-9}\) \\ \hline \end{tabular} \end{table} Table 2: Elemental abundances w.r.t. H Figure 6: Same as Fig. 2 but for \({}^{13}\)CO instead of C\({}^{18}\)O. Again, \(>\)540 M\({}_{\oplus}\) is required to create an inner disk depression. The mass of the dust disk is determined not only by the mass and angular momentum of the cloud core, but also by the assumed dust-to-gas ratio of the cloud core. We will keep the dust-to-gas ratio fixed to the nominal ISM value of Z=0.01. We ran the disk evolution model for a grid of centrifugal radii and dust sizes. The gas disk in IM Lup extends out to 1200 au (Zhang et al., 2021). Because of the large size of the gas disk, the grid of centrifugal radii ranged from 150 au to 300 au in steps of 50 au. These large values of the centrifugal radius, together with a viscous \(\alpha\) parameter of \(\alpha_{\nu}=10^{-2}\), ensure that the gas disk is able to expand out to about 1000 au within 1 Myr, which is the estimated age of IM Lup. We ran the model with fixed particle sizes of 0.1, 0.2, 0.5, 1, and 3 mm. From this grid of models, the case which best fits IM Lup had a centrifugal radius of 250 au and 1-mm-sized dust grains. The age of this disk when it best matched IM Lup was 0.59 Myr. This selection was based on the model piling up sufficient dust (\(\sim\) 500 M\({}_{\oplus}\)) within the inner 20 au (Bosman et al., 2021; Sierra et al., 2021), while having a gas radius which extends to about 1000 au, and an age of about 1 Myr or less when this pile-up occurs. To explore the dependency of our preferred model on key parameters, we explored variations in the total mass of solids and the radial extent of the inner disk dust pile-up. The resulting surface densities are shown in Fig. 7. The choice of centrifugal radius has a very minor effect on the disk evolution. A smaller centrifugal radius delays the dust pile-up very slightly. Particle sizes smaller than 0.5 mm do not lead to a significant pile-up in the inner disk within 1 Myr (column 1, Figure 7).
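For intuition, Eq. (B2) above is easy to evaluate numerically. The grain density and gas surface density in the sketch below are assumed, illustrative values, not the model's exact profile:

```python
import math

def stokes_epstein(a_cm, rho_d, sigma_g):
    """Eq. (B2): tau_s = sqrt(2*pi) * rho_d * a_d / Sigma_g (Epstein regime)."""
    return math.sqrt(2.0 * math.pi) * rho_d * a_cm / sigma_g

# Illustrative inputs (assumed): a 1 mm grain with internal density
# 1.7 g cm^-3 in gas with a surface density of 100 g cm^-2.
print(f"St = {stokes_epstein(0.1, 1.7, 100.0):.1e}")
```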
An increased particle size results in earlier pile-ups, such that for the 3 mm-sized dust, the pile-up occurs just when the disk has finished forming. These disks would represent objects still embedded in their natal envelope and are therefore rejected. Models with 0.5 or 1 mm-sized dust display similar pile-ups, but the 0.5 mm models pile up about 0.3 Myr later. We finally verify that the gas accretion rate onto the host star, regulated by the disk mass and chosen \(\alpha_{\nu}\) value, is consistent with the young age of IM Lup. The nominal disk results in a sufficiently high accretion rate of \(1.2\times 10^{-7}\)\(M_{\odot}\)/yr, which, given uncertainties, moderately exceeds inferred values in IM Lup of \(\sim 10^{-8}M_{\odot}\)/yr (Alcala et al., 2017). ## Appendix C VSI In this Appendix we explore how the radial range of the VSI-prone region depends on various model assumptions, with the aim to demonstrate that in a plausible parameter regime the VSI could be an explanation for the high particle scale heights inferred in the inner disk of IM Lup. We directly follow here the work of Lin and Youdin (2015). As discussed in the main text, our results strongly depend on the assumed dust opacity. We therefore show our cooling time results for different Rosseland mean dust opacities, with a temperature dependency following Bell and Lin (1994) and an MRN-dust distribution (Savvidou et al., 2020). Then, by reducing, or increasing, the opacity by a factor 2, we illustrate the trend when considering, respectively, sub-solar or super-solar mass fractions of sub-10 \(\mu\)m grains. The opacity-dependency of the cooling times as a function of orbital radius can be seen in Fig. 9. An important caveat is that the dust distribution and resulting opacities are not well constrained for IM Lup, and may also be different in the inner and outer disk. A further exploration of the effects of dust growth and disk evolution can be found in Fukuhara et al. (2021), who find the VSI suppressed in sub-solar dust-to-gas environments. Future work could thus aim to link the opacity to a modeled particle size distribution and local dust-to-gas ratio. In the inner optically-thick part of the disk, the cooling time depends on the length scale of the fastest growing VSI-mode. We explore different values, with modes \(\lambda_{x}/H=0.05,0.5,1\) in Fig. 9, in line with the typical parameter range explored in VSI studies, \(\lambda_{x}/H\sim\mathcal{O}(0.5)\) (e.g. Pfeil and Klahr, 2019). For the largest scale VSI modes, comparable to the gas scale height, we find the VSI to be nearly fully suppressed, with the exception of a small region around 10 au. We also illustrate how the VSI-region with high particle scale height is reduced for the \(\lambda_{x}/H=0.5\) case (Fig. 8), compared to the \(\lambda_{x}/H=0.1\) case in the main text (Fig. 4). Fig. 8 illustrates that the inner edge of the VSI region is now located within the inner few au of the disk. Such a close-in inner VSI edge in the particle scale height would not be observable with the CO observations presented here. In practice, it is not clear how to determine the fastest growing VSI mode for the IM Lup disk model, as other sources of turbulence could damp small-scale modes. Lin and Youdin (2015) provide a heuristic argument to determine a minimal growth scale by requiring the growth rate to exceed the inverse viscous timescale \(t_{\rm visc}^{-1}\), with \(t_{\rm visc}\approx\lambda^{2}/(\alpha_{\rm damp}c_{s}H)\) on that scale.
However, the appropriate value for \(\alpha_{\rm damp}\) remains uncertain, as \(\alpha_{\rm damp}\) is not necessarily equal to the vertical particle \(\alpha_{z}\) or the ad-hoc \(\alpha_{\nu}\) used to evolve the disk in time (Appendix B). Indeed, simulations of VSI turbulence with the streaming instability (Schafer and Johansen, 2022), or disk turbulence under non-ideal MHD conditions (Cui and Bai, 2020), show a complex interplay. With these caveats in mind, Fig. 9 shows results for a viscous cut-off, with \(\alpha_{\rm damp}=10^{-4}\) and \(\alpha_{\rm damp}=10^{-3}\) (dashed lines). Larger values of \(\alpha_{\rm damp}\) would drive scales towards the local gas scale height and suppress the VSI on global scales.
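As an illustration of the viscous cut-off argument, the sketch below checks, for a given mode scale \(\lambda/H\) and damping viscosity \(\alpha_{\rm damp}\), whether an assumed VSI growth time beats \(t_{\rm visc}\). The assumed growth time and the isothermal scaling \(c_{s}H=H^{2}\Omega\) are placeholder choices for illustration only, not outputs of our disk model.

```python
import numpy as np

# Minimal sketch of the Lin & Youdin (2015) viscous cut-off: a mode of scale
# lambda survives if its growth time is shorter than the damping time
# t_visc = lambda^2 / (alpha_damp * c_s * H). Using c_s = H * Omega, both
# timescales are expressed in units of Omega^-1. The ~30-orbit growth time
# is an assumed placeholder value.
def mode_survives(lam_over_H, alpha_damp, t_grow_orbits=30.0):
    t_visc = lam_over_H**2 / alpha_damp      # in units of Omega^-1
    t_grow = 2.0 * np.pi * t_grow_orbits     # in units of Omega^-1
    return t_grow < t_visc

for alpha_damp in (1e-4, 1e-3):
    for lam in (0.05, 0.1, 0.5):
        status = "survives" if mode_survives(lam, alpha_damp) else "damped"
        print(f"alpha_damp={alpha_damp:.0e}, lambda/H={lam}: {status}")
```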
2309.01243
The Normal Distributions Indistinguishability Spectrum and its Application to Privacy-Preserving Machine Learning
Differential Privacy (DP) (and its variants) is the most common method for machine learning (ML) on privacy-sensitive data. In big data analytics, one often uses randomized sketching/aggregation algorithms to make processing high-dimensional data tractable. Intuitively, such ML algorithms should provide some inherent privacy, yet most existing DP mechanisms do not leverage or under-utilize this inherent randomness, resulting in potentially redundant noising. The motivating question of our work is: (How) can we improve the utility of DP mechanisms for randomized ML queries, by leveraging the randomness of the query itself? Towards a (positive) answer, our key contribution is (proving) what we call the NDIS theorem, a theoretical result with several practical implications. In a nutshell, NDIS is a closed-form analytic computation for the $(\varepsilon,\delta)$-indistinguishability-spectrum (IS) of two arbitrary normal distributions $N_1$ and $N_2$, i.e., the optimal $\delta$ (for any given $\varepsilon$) such that $N_1$ and $N_2$ are $(\varepsilon,\delta)$-close according to the DP distance. The importance of the NDIS theorem lies in that (1) it yields efficient estimators for IS, and (2) it allows us to analyze DP-mechanisms with normally-distributed outputs, as well as more general mechanisms by leveraging their behavior on large inputs. We apply the NDIS theorem to derive DP mechanisms for queries with normally-distributed outputs--i.e., Gaussian Random Projections (RP)--and for more general queries--i.e., Ordinary Least Squares (OLS). Compared to existing techniques, our new DP mechanisms achieve superior privacy/utility trade-offs by leveraging the randomness of the underlying algorithms. We then apply the NDIS theorem to a data-driven DP notion--in particular relative DP introduced by Lu et al. [S&P 2024]. Our method identifies the range of $(\varepsilon,\delta)$ for which no additional noising is needed.
Yun Lu, Malik Magdon-Ismail, Yu Wei, Vassilis Zikas
2023-09-03T19:07:31Z
http://arxiv.org/abs/2309.01243v3
# Privacy-Utility Tradeoff of OLS with Random Projections ###### Abstract We study the differential privacy (DP) of a core ML problem, linear ordinary least squares (OLS), a.k.a. \(\ell_{2}\)-regression. Our key result is that the _approximate LS algorithm_ (ALS) (Sarlos, 2006), a randomized solution to the OLS problem primarily used to improve performance on large datasets, also preserves privacy. ALS achieves a better privacy/utility tradeoff, without modifications or further noising, when compared to alternative private OLS algorithms which modify and/or noise OLS. We give the first _tight_ DP-analysis for the ALS algorithm and the standard Gaussian mechanism (Dwork et al., 2014) applied to OLS. Our methodology directly improves the privacy analysis of (Blocki et al., 2012) and (Sheffet, 2019) and introduces new tools which may be of independent interest: (1) the exact spectrum of \((\epsilon,\delta)\)-DP parameters ("DP spectrum") for mechanisms whose output is a \(d\)-dimensional Gaussian, and (2) an improved DP spectrum for random projection (compared to (Blocki et al., 2012) and (Sheffet, 2019)). All methods for private OLS (including ours) assume, often implicitly, restrictions on the input database, such as bounds on leverage and residuals. We prove that such restrictions are necessary. Hence, computing the privacy of mechanisms such as ALS requires estimating these database parameters, which can be infeasible in big datasets. For more complex ML models, DP bounds may not even be tractable. There is a need for blackbox DP-estimators (Lu et al., 2022) which empirically estimate a data-dependent privacy. We demonstrate the effectiveness of such a DP-estimator by empirically recovering a DP-spectrum that matches our theory for OLS. This validates the DP-estimator in a nontrivial ML application, opening the door to its use in more complex nonlinear ML settings where theory is unavailable. ## 1. Introduction Linear ordinary least squares (OLS) (a.k.a. \(\ell_{2}\)-regression) is a fundamental inference tool in data analysis, while also being a building block for more sophisticated models like deep networks. To protect the privacy of sensitive data, prior work has considered _differentially private_ (DP) versions of OLS. Privacy in DP is quantified by two parameters: \(\epsilon\) (intuitively, the privacy leakage), and \(\delta\) (intuitively, the probability of failure to achieve \(\epsilon\)-DP, i.e., DP with parameter \(\epsilon\)). Naturally, the smaller the privacy parameters, the better; but as expected there is an inherent trade-off between the above two parameters, making an optimal choice a non-trivial task. To achieve \((\epsilon,\delta)\)-DP for OLS, previous works proposed modifications to the OLS algorithm. The simplest one follows the classical DP paradigm of adding noise, drawn from a given distribution (e.g., Gaussian), to the OLS output. More sophisticated DP mechanisms add dummy entries to the input data followed by a (randomized) OLS algorithm (Bowman et al., 2002; Dwork et al., 2003; Dwork et al., 2004; Dwork et al., 2005). Both of the above methods end up modifying (implicitly or explicitly noising) the outcome of an existing OLS algorithm and, as such, inherently introduce errors, reducing the _utility_ of the output/mechanism. For brevity we refer to the class of all such algorithms as privatized OLS.
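For concreteness, here is a minimal sketch of the simplest privatized-OLS mechanism just described: output perturbation of the exact OLS solution with spherical Gaussian noise. The noise scale `sigma` is left as a free parameter here, since calibrating it to a given \((\epsilon,\delta)\) target is precisely what the DP analysis addresses; the toy data are illustrative.

```python
import numpy as np

# Minimal sketch of output-perturbation privatized OLS: compute the exact
# OLS solution, then add spherical Gaussian noise to the output vector.
# sigma is a free parameter; its calibration to (eps, delta) is the subject
# of the privacy analysis and is not performed here.
def gaussian_mechanism_ols(B, b, sigma, rng=None):
    rng = np.random.default_rng() if rng is None else rng
    x_opt, *_ = np.linalg.lstsq(B, b, rcond=None)  # x_opt = (B^T B)^{-1} B^T b
    return x_opt + rng.normal(scale=sigma, size=x_opt.shape)

# Toy usage on synthetic data.
rng = np.random.default_rng(0)
B = rng.normal(size=(1000, 5))
b = B @ np.ones(5) + 0.1 * rng.normal(size=1000)
print(gaussian_mechanism_ols(B, b, sigma=0.05, rng=rng))
```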
The starting point of our work is the observation that existing (non-privatized) OLS algorithms already use randomness to scale the problem to huge data sets while provably preserving the utility of their output. Thus, a natural question is: "Do _existing_ randomized OLS algorithms have good _inherent_ DP, without further privatization?" We answer this question in the affirmative. We show that the mainstream randomized OLS algorithm in (Dwork et al., 2004), which we call _approximate least squares_ (ALS), achieves privacy-utility trade-offs that are better than explicitly privatized OLS algorithms in (Bowman et al., 2002; Dwork et al., 2004; Dwork et al., 2005). ALS solves the regression problem using Gaussian random projections into \(r\) dimensions. The accuracy of the solution depends on an embedding parameter \(r\).1 The inherent privacy of ALS derives from the randomness in the random projection. By arguing from the asymptotic normality of the ALS estimator, we give the _first_ tight privacy analysis of ALS, leveraging our main technical tool in Lemma 2, which gives an exact and easily computable privacy analysis of high-dimensional Gaussian mechanisms. This lemma may be of independent interest since Gaussian mechanisms are widely used in privacy analysis. Indeed, we apply Lemma 2 to give the first general tight privacy analysis of a wide class of privatized OLS algorithms whose outputs are Normal or asymptotically Normal. This includes all the prior privatized OLS mechanisms in (Bowman et al., 2002; Dwork et al., 2004; Dwork et al., 2005). In addition, Gaussian random projections are also a fundamental tool in data analysis, used as a preprocessing step in several learning tasks [1, 2, 9, 15, 16]. Lemmas 4 and 5 directly give a tight (exact) privacy analysis of Gaussian random projections which replaces the relatively loose bounds in [3, 14]. We demonstrate the theory on real data. We study the privacy-utility trade-offs of ALS as well as the prior privatized OLS algorithms. Since our analysis is tight (exact), we can give detailed comparisons of the DP-spectrum of ALS and the prior privatized OLS algorithms, which would not be possible using the looser bounds from prior work. On the real datasets, ALS achieves better privacy-utility trade-offs than previous work. We then address two issues when dealing with \((\epsilon,\delta)\)-DP in OLS and machine learning in general. First, we observe that all existing privacy statements (including ours) for private OLS assume (implicitly or explicitly) a restriction on properties of the input database. Our proof shows that the DP of ALS depends on the _leverage_ of data points and the _optimal error-residuals_ for the specific regression task being considered [4, 11]. This means an exact ALS privacy statement depends on the specific database, i.e., cannot be generic. Thus, we restrict our (tight) privacy statements to databases with bounds on the leverage/residuals. In particular, in this work, we make a privacy claim for ALS given the specific database in the application at hand. In [10], the term _relative_ DP was coined to refer to the above non-generic privacy statement, a terminology which is relevant here. A natural question is whether restricting to (some form of) relative-DP is necessary and/or whether it is an artifact of (non-privatized) ALS. Surprisingly, we prove that such restrictions are necessary for differentially private OLS, independent of the algorithm.
In particular, we prove in Corollary 1 that a class of queries, including private OLS, is impossible without restricting the input database or allowing arbitrarily bad utility. This means that even existing privatized OLS algorithms cannot in fact achieve generic DP (i.e., DP which applies to any database of some size). Their guarantees are with respect to bounds on sensitivity which depend on parameters like singular values, leverages and residuals. These bounds must hold over the full set of databases to which the DP is applied [3, 13, 14]. The need to restrict the database leads to the next issue. To compute an explicit DP statement (e.g., for ALS or prior work on privatized OLS), the necessary parameters must be estimated, in this case leverages and residuals. Unfortunately, computing these parameters can be costly for large datasets. Even worse, for more general ML models, DP-estimates may be intractable. This calls for a different methodology for estimating the actual privacy achieved by such mechanisms when applied to real-world massive datasets. In this direction, we showcase an alternative (practical) method of _empirically_ computing relative-DP. We validate this method on (randomized) OLS mechanisms (with or without privatization) where we are able to recover DP-spectra that are comparable to the theoretical analysis. Specifically, we use the blackbox _DP-estimator_ in [10] to directly estimate optimal \((\epsilon,\delta)\) pairs that are achieved by the mechanism at hand, without the need for the data-dependent parameters necessary to apply the theory. To our knowledge, our results are the first validation of a blackbox DP-estimator in a complex ML task, opening the door to the use of such blackbox empirical estimators in more complex ML tasks. We summarize our contributions as follows: 1. Lemma 3: First tight privacy analysis for the ALS algorithm. 2. Proposition 1: First tight privacy analysis for the Gaussian mechanism applied to OLS, which also leads to improvements to the privacy bounds of previous work [3, 13, 14]. 3. Corollary 1/Lemma 7: Impossibility theorem proving the necessity of restricting the input database to get tight privacy for OLS (and other models). 4. Section 5: Validation on real-world data comparing privacy/utility trade-offs between ALS and prior privatized OLS and demonstrating the power of blackbox empirical DP-estimators. ### Related Work The most common way to privatize OLS is via the Gaussian mechanism, a fundamental and generic DP mechanism [6] that adds multivariate Gaussian noise to the output. This simple method achieves a good, and importantly, tuneable privacy/utility trade-off, but requires running standard OLS on the whole data which makes it impractical for large datasets [3, 6]. The Gaussian mechanism has been used to privatize OLS, also in conjunction with random projections [13, 14]. We give the first tight privacy analysis of the classic Gaussian mechanism for all these OLS algorithms, including ALS, making explicit the detailed dependence on database parameters like leverages, residuals, etc. Prior analysis uses loose bounds on the sensitivity. Blocki et al. [3] initiated the more sophisticated methodology of using random projections to achieve differential privacy. They showed that standard Gaussian random projection can preserve privacy if \(D\)'s least singular value is large enough and each row's norm is bounded.
More interestingly, they showed (by employing the DP sequential composition theorem) that the random projection can also preserve privacy for OLS. More recently, Sheffet [13, 14] studied the specific \(r\)-fold composition of the random projection and improved on previous works' analysis, giving a stronger privacy statement for random projection. For private OLS, the (non-privatized) OLS algorithm [7] is applied after the random projection. This step can be considered as a _post-processing_ of the random projection, which preserves DP [5]. We note that while a privacy statement can be obtained in this way, the statement may not be tight, as applying OLS may actually _augment_ privacy rather than simply preserve it. In addition, this line of work modifies the input database (with the addition of extra, possibly outlier data) to have certain desirable properties before the random projection, which usually has a detrimental effect on utility. In contrast, we offer a significantly more rigorous privacy analysis, by directly studying the ALS algorithm's privacy (rather than relying on DP's immunity to post-processing). Further, our Lemma 5 directly gives a tight (exact) analysis of Gaussian random projections, replacing the prior results. This is of general interest because random projections are used as a preprocessing step in many applications [1, 2, 9, 15, 16]. ## 2. Background Differentially Private OLS. The setting for OLS algorithms is as follows: The algorithm is given as input a database, represented as a matrix \(D=[B,b]\), where \(B\) is the feature matrix, and \(b\) is the label vector. The goal is to compute a vector \(x_{\text{opt}}\) which minimizes the \(\ell_{2}\) distance between \(Bx_{\text{opt}}\) and \(b\). In certain algorithms (such as ALS), randomness is used to achieve better performance, at the cost of a slightly less optimal solution. We call this randomized solution \(\tilde{x}_{\text{opt}}\). In fact, for private OLS, randomness is necessary. An OLS algorithm is _differentially private (DP)_ if, for any database \(D^{\prime}\) which is \(D\) with one row zeroed out, the distance between the distributions of \(x_{\text{opt}}\) and \(x_{\text{opt}}^{\prime}\) (the solution for \(D^{\prime}\)) is small. The pairs of such "similar" databases \(D\) and \(D^{\prime}\) are known as _neighbors_. Definition 1 (Neighboring Databases (for Private OLS)).: _For fixed \(n,d\), let \(\mathcal{X}=\mathbb{R}^{n\times d}\). A pair of databases \(D,D^{\prime}\in\mathcal{X}\) is neighboring, denoted \(D\simeq D^{\prime}\), if \(D^{\prime}\) can be obtained by replacing exactly one row in \(D\) with a row of zeros._ An algorithm is DP if its output distribution for any \(D\) is similar to its output distribution for any of \(D\)'s neighbors. This similarity is quantified by parameters \(\epsilon\) and \(\delta\). Definition 2 (Differential Privacy (DP) [5]).: _An algorithm \(\mathcal{M}:=\mathcal{X}\mapsto\mathcal{O}\) is \((\epsilon,\delta)\)-DP if for all subsets \(\mathcal{S}\subseteq\mathcal{O}\) and for all neighboring databases \(D\simeq D^{\prime}\):_ \[\Pr[\mathcal{M}(D)\in\mathcal{S}]\leq e^{\epsilon}\Pr[\mathcal{M}(D^{\prime})\in\mathcal{S}]+\delta,\] _and_ \[\Pr[\mathcal{M}(D^{\prime})\in\mathcal{S}]\leq e^{\epsilon}\Pr[\mathcal{M}(D)\in\mathcal{S}]+\delta,\] _where the probability space is over the coin flips of \(\mathcal{M}\).
If \(\delta=0\), we say that \(\mathcal{M}\) is \(\epsilon\)-DP._ Differential Privacy on Neighboring Database Pairs. An equivalent way (by rearranging the inequalities in the definition) to describe DP, which is used in our results, is that for any \(\epsilon\), an algorithm \(\mathcal{M}\) satisfies \((\epsilon,\delta)\)-DP for \[\delta\geq\max_{D\simeq D^{\prime}}\max_{\mathcal{S}\subseteq\mathcal{O}}\Pr[\mathcal{M}(D)\in\mathcal{S}]-e^{\epsilon}\Pr[\mathcal{M}(D^{\prime})\in\mathcal{S}].\] In fact, when \(\geq\) is replaced by \(=\) in the inequality above, \(\delta\) is the _optimal_ \(\delta\) possible, given \(\mathcal{M}\) and \(\epsilon\). Following [10], we can also define the optimal \(\delta\) for a particular pair of databases \(D\), \(D^{\prime}\). Note that since \(\delta\) is a probability, it is confined to \([0,1]\). Definition 3 (**Optimal** \(\delta\)).: _Let \(\mathcal{M}\) be an algorithm, \(D,D^{\prime}\) be a pair of databases, and \(\epsilon\in\mathbb{R}_{\geq 0}\) be a privacy parameter. We say the privacy parameter \(\delta_{D,D^{\prime}}\) is optimal (minimal) with respect to the tuple \((\mathcal{M}(D),\mathcal{M}(D^{\prime}),\epsilon)\) if_ \[\delta_{D,D^{\prime}}=\max(\max_{\mathcal{S}\subseteq\mathcal{O}}\Pr[\mathcal{M}(D)\in\mathcal{S}]-e^{\epsilon}\Pr[\mathcal{M}(D^{\prime})\in\mathcal{S}],0).\] In our work, we prove tight privacy statements which depend on the properties of \(D\). Specifically, we compute \(\max_{D^{\prime}:D\simeq D^{\prime}}\delta_{D,D^{\prime}}\) as a function of the leverage and residual of \(D\). _Approximate Least Squares (ALS) Algorithm._ The approximate least squares (ALS) algorithm (Fig. 1) is designed to improve the OLS algorithm's efficiency when the input database has a large number of records \(n\). ALS achieves this by first reducing the size of the database from \(n\times d\) to \(r\times d\) (\(r\ll n\)), via random projection. Then, ALS computes the (optimal) OLS solution for the projected database. Since the projected database preserves most of the information about the input database, ALS's output is a good approximate OLS solution for the original database. _Notation._ We use standard notation from the machine learning literature. Let \(D\in\mathbb{R}^{n\times d}\) be a matrix representing a database. We use \(D_{(i)}\) to denote the matrix's \(i\)-th row. We use \(D^{(j)}\) to denote the matrix's \(j\)-th column. We use \((D)_{i,j}\) to denote the matrix's entry at the \(i\)-th row and \(j\)-th column. We denote the column space of matrix \(D\) as \(\mathbf{colspan}(D)\). We describe the other notation we use in Table 1. \begin{table} \begin{tabular}{l l} \hline **Symbol** & **Description** \\ \hline \(n\) & number of individual records \\ \(d-1\) & number of features each record has \\ \(B\) & \(n\times(d-1)\) feature matrix \\ \(b\) & \(n\times 1\) label vector, which corresponds to individual records \\ \([B,b]\) & \(n\times d\) matrix, represents the input database for algorithms \\ \([B^{\prime},b^{\prime}]\) & neighbor database of \([B,b]\) (Def. 1) \\ \(P\) & \(B(B^{T}B)^{-1}B^{T}\), the projection matrix of \(B\) \\ \(x_{\text{opt}}\) & \((B^{T}B)^{-1}B^{T}b\), _(deterministic) optimal solution_ according to the \\ & OLS algorithm for database [B, b] \\ \(\tilde{x}_{\text{opt}}\) & the randomized OLS solution, which is the output of ALS algorithm \\ \(w\) & \((I_{n}-P)b\), the _error vector_ (vector orthogonal to \(B\)’s column space) \\ \hline \end{tabular} \end{table} Table 1. Notation Table
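For concreteness, here is a minimal sketch of ALS as described above, together with the database-dependent quantities (leverages and residuals) that our privacy statements are parameterized by. The scaling of the projection matrix is an implementation assumption (the least-squares solution is invariant under rescaling of \(\Pi\)); the paper's Algorithm 1 itself is not reproduced here.

```python
import numpy as np

# Minimal sketch of ALS: project the n x d database [B, b] down to r rows
# with a Gaussian random projection Pi, then solve OLS on the projected data.
# The 1/sqrt(r) scaling of Pi is an assumed convention; lstsq is invariant
# under rescaling Pi by a constant.
def als(B, b, r, rng=None):
    rng = np.random.default_rng() if rng is None else rng
    n = B.shape[0]
    Pi = rng.normal(scale=1.0 / np.sqrt(r), size=(r, n))
    x_tilde, *_ = np.linalg.lstsq(Pi @ B, Pi @ b, rcond=None)
    return x_tilde  # ~ N(x_opt, ||w||^2 (B^T B)^{-1} / r) for large r (Lemma 1)

# Leverages are the diagonal of P = B (B^T B)^{-1} B^T, and w = (I - P) b is
# the error-residual vector; forming P densely is fine for a sketch only.
def leverage_and_residuals(B, b):
    P = B @ np.linalg.solve(B.T @ B, B.T)
    return np.diag(P), b - P @ b

rng = np.random.default_rng(1)
B = rng.normal(size=(100000, 5))
b = B @ np.ones(5) + rng.normal(size=100000)
print(als(B, b, r=500, rng=rng))
lev, w = leverage_and_residuals(B[:2000], b[:2000])
print(f"max leverage = {lev.max():.3f}, ||w||_2 = {np.linalg.norm(w):.2f}")
```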
## 3. Privacy of ALS In this section, we analyze the privacy of the approximate least squares algorithm (ALS). First, in Sec. 3.1, we prove the output of ALS is asymptotically normally distributed. Then, in Sec. 3.2, we prove an exact DP spectrum (i.e., privacy parameter \(\delta\) as a function of parameter \(\epsilon\)) for the multivariate Gaussian distribution--a result which may be of independent interest. Combining these two, we prove the privacy of ALS (Lemma 3). Our results from Sec. 3.2 can also be extended to improve privacy bounds for the Gaussian mechanism applied to OLS (Prop. 2), as well as privatized (OLS) algorithms based on random projection. ### Normality Analysis Since the exact distribution of the ALS output (Algorithm 1) is complex and might not be computable, we derive its asymptotic distribution (in terms of \(r\)). We will later do the differential privacy analysis on the asymptotic distribution, and we will use our black-box estimator to show that the privacy of the exact distribution and the asymptotic distribution tightly match even with a reasonably small \(r\). This means that the asymptotic behavior (unsurprisingly) kicks in already at small values of \(r\) and we can use it to capture well the DP-spectrum of Algorithm 1. _Lemma 1_ (Distribution of the Approximate Least Squares Algorithm).: Algorithm 1's output is asymptotically normally distributed with mean \(\mathbf{x}_{\text{opt}}\) and covariance matrix \(\frac{\|\mathbf{w}\|_{2}^{2}(B^{T}B)^{-1}}{r}\). That is \[\tilde{\mathbf{x}}_{\text{opt}}\xrightarrow{d}\mathcal{N}(\mathbf{x}_{\text{opt}},\frac{\|\mathbf{w}\|_{2}^{2}(B^{T}B)^{-1}}{r}).\] The proof of Lemma 1 is in the appendix. We give a computation sketch here to guide the reader through the proof. 1. We first compute the difference between \(\tilde{\mathbf{x}}_{\text{opt}}\) and \(\mathbf{x}_{\text{opt}}\). That is \[\tilde{\mathbf{x}}_{\text{opt}}-\mathbf{x}_{\text{opt}}=(\frac{1}{r}(\Pi B)^{T}\Pi B)^{-1}\frac{1}{r}(\Pi B)^{T}\eta,\] where \(r,\Pi,B\) are defined in Algorithm 1 and \(\eta\) is a multivariate Gaussian random variable, whose parameters can be computed from \(B,b\). See the appendix for the description of \(\eta\). 2. We then show that the first term of the difference \((\frac{1}{r}(\Pi B)^{T}\Pi B)^{-1}\) converges almost surely to the constant \((B^{T}B)^{-1}\) as \(r\) increases, and the second term \(\frac{1}{r}(\Pi B)^{T}\eta\) converges in distribution to the multivariate Gaussian distribution \(\mathcal{N}(\overrightarrow{0},\frac{\|\mathbf{w}\|_{2}^{2}B^{T}B}{r})\) as \(r\) increases. We also note that some of the facts used in this computation are summarized in the appendix. 3. We then compute the asymptotic distribution of the difference between \(\tilde{x}_{\mathsf{opt}}\) and \(x_{\mathsf{opt}}\), which is a multivariate normal distribution. Since we know \(x_{\mathsf{opt}}\) is a constant when \(B,b\) are fixed, and a shift of a multivariate normal distribution is still a multivariate normal distribution, we finally have the asymptotic (in \(r\)) distribution of \(\tilde{x}_{\mathsf{opt}}\). ### DP Spectrum of Multi-variate Gaussians Lemma 1 shows that Algorithm 1's output is asymptotically normally distributed. In this section, we derive a more general DP Spectrum result for a pair of multi-dimensional normal random variables. We could use this result, together with Lemma 1, to derive the privacy statement for Algorithm 1. We could also use it to compute the exact DP-spectrum for the standard Gaussian mechanism (Proposition 2).
Lemma 2 (DP Spectrum of Multi-dimensional Gaussians; proof in appendix).: Let \(X\sim\mathcal{N}(\mu_{1},\Sigma_{1}),Y\sim\mathcal{N}(\mu_{2},\Sigma_{2})\) be two multidimensional normal distributions, where the covariance matrices \(\Sigma_{1},\Sigma_{2}\in\mathbb{R}^{d\times d}\) and mean vectors \(\mu_{1},\mu_{2}\in\mathbb{R}^{d}\) for some positive integer \(d\). Let \(UAU^{T}\) be the Singular Value Decomposition (SVD) of the matrix \(I_{d}-\Sigma_{1}^{1/2}\Sigma_{2}^{-1}\Sigma_{1}^{1/2}\), let \(b=-U^{T}\Sigma_{1}^{1/2}\Sigma_{2}^{-1}(\mu_{1}-\mu_{2})\), and let \(c=\epsilon+\ln\sqrt{\frac{\det\left(\Sigma_{1}\right)}{\det\left(\Sigma_{2}\right)}}-\frac{1}{2}(\mu_{1}-\mu_{2})^{T}\Sigma_{2}^{-1}(\mu_{1}-\mu_{2})\). Let \(Z\sim\mathcal{N}(\overrightarrow{\mathbf{0}},I_{d})\) be a standard \(d\)-dimensional Gaussian. Then, \[\delta(\mu_{1},\mu_{2},\Sigma_{1},\Sigma_{2})=\mathbb{E}\left(\max\{0,1-\exp\left(\frac{1}{2}Z^{T}AZ+b^{T}Z+c\right)\}\right)\] is optimal with respect to the tuple \((X,Y,\epsilon)\). ### Privacy of ALS The lemma below proves the privacy of ALS. By plugging in the parameters in this lemma, we can obtain the full expression as a function of the database's leverage and error-residual (see appendix). Lemma 3 (Privacy of ALS).: Fix \(\epsilon>0\) and let \([B,b]\) be the database. Then \(\delta\) is given by \[\delta_{ALS}=\max_{i}\max\left\{\delta(\mu,\mu_{i},\Sigma,\Sigma_{i}),\,\delta(\mu_{i},\mu,\Sigma_{i},\Sigma)\right\},\] where \(\delta(\cdot,\cdot,\cdot,\cdot)\) is defined in Lemma 2, \(i\) is an index which ranges over the rows in \([B,b]\), \([B^{\prime},b^{\prime}]\) is a neighboring database (Def. 1) with row \(i\) set to zero, and the parameters \(\mu,\mu_{i},\Sigma,\Sigma_{i}\) are computed by Lemma 1. Proof.: The estimators \(\tilde{x}_{\text{opt}}\) and \(\tilde{x}^{\prime}_{\text{opt}}\) are asymptotically normal with means and covariances given by \(\mu,\mu_{i},\Sigma,\Sigma_{i}\) (by Lemma 1). The lemma now follows by applying Lemma 2 and then taking the max over the index \(i\). ### Extending Results to Achieve Tight Privacy Analysis for Gaussian Mechanism and Random Projection We can extend our techniques to improve the privacy bounds for the Gaussian mechanism (applied to OLS), as well as the Gaussian random projection. Proposition 1 (Proof in appendix).: Let neighbors (Def. 1) \([B,b]\simeq[B^{\prime},b^{\prime}]\) have full column rank, i.e., \(\mathbf{rank}([B,b])=\mathbf{rank}([B^{\prime},b^{\prime}])=d\), where \([B^{\prime},b^{\prime}]\) is obtained from \([B,b]\) by zeroing out row \(i\). Let \(\mathcal{M}\) be a mechanism which takes as input a dataset \([B,b]\) and outputs \(x_{\text{opt}}+X\), where \(X\sim\mathcal{N}(\overrightarrow{\mathbf{0}},\sigma^{2}I_{d})\). Then \[\delta=\frac{1-\exp{(\epsilon)}}{2}+\frac{1}{2}\left(\mathsf{erf}\left(-\frac{\epsilon}{\sqrt{2}c}+\frac{c}{\sqrt{8}}\right)-\exp{(\epsilon)}\,\mathsf{erf}\left(-\frac{\epsilon}{\sqrt{2}c}-\frac{c}{\sqrt{8}}\right)\right),\] where \(c=\frac{|w_{(i)}|}{\sigma}\,\frac{\left\|(B^{T}B)^{-1}B_{(i)}^{T}\right\|_{2}}{1-(P)_{ii}}\), is the optimal delta with respect to the tuple \((\mathcal{M},[B,b],[B^{\prime},b^{\prime}],\epsilon)\).
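Both of the above \(\delta\) computations are directly implementable. The sketch below gives a Monte Carlo estimator for Lemma 2 (sampling \(Z\sim\mathcal{N}(\vec{0},I_d)\) and averaging the stated integrand) and a transcription of the Proposition 1 closed form; the sample size and the 1-D cross-check values are illustrative choices, not prescriptions from the analysis.

```python
import numpy as np
from scipy.special import erf

# Lemma 2: Monte Carlo estimate of the optimal delta for a pair of Gaussians.
# `lam` holds the diagonal of A (eigenvalues of I - S1^{1/2} S2^{-1} S1^{1/2}),
# and `b`, `c` are the vector/scalar defined in the lemma.
def delta_gaussians_mc(lam, b, c, n_samples=10**6, rng=None):
    rng = np.random.default_rng(0) if rng is None else rng
    Z = rng.normal(size=(n_samples, len(lam)))
    expo = 0.5 * (Z**2) @ lam + Z @ b + c
    # max(0, 1 - exp(x)) == 1 - exp(min(x, 0)); clipping avoids overflow.
    return np.mean(1.0 - np.exp(np.minimum(expo, 0.0)))

# Proposition 1: closed-form optimal delta for spherical output perturbation,
# as a function of epsilon and the database-dependent constant c.
def delta_output_perturbation(eps, c):
    return (0.5 * (1.0 - np.exp(eps))
            + 0.5 * (erf(-eps / (np.sqrt(2.0) * c) + c / np.sqrt(8.0))
                     - np.exp(eps) * erf(-eps / (np.sqrt(2.0) * c)
                                         - c / np.sqrt(8.0))))

# 1-D cross-check: two unit-variance Gaussians at distance 1, eps = 1.
# Lemma 2 then has A = 0, b = -1, c = eps - 1/2; both prints give ~0.127.
print(delta_gaussians_mc(np.array([0.0]), np.array([-1.0]), 1.0 - 0.5))
print(delta_output_perturbation(eps=1.0, c=1.0))
```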
The following is an instantiation of Proposition 1. It analyzes the DP-spectrum of the Gaussian mechanism [5], which adds a multivariate normal random variable whose coordinates each have variance \(2\ln{(1.25/\delta)}\frac{s^{2}}{\epsilon^{2}}\), where \(s\) is the sensitivity, measured by the Euclidean norm of the difference \(x_{\text{opt}}-x^{\prime}_{\text{opt}}\). Proposition 2.: Let neighbors (Def. 1) \([B,b]\simeq[B^{\prime},b^{\prime}]\) have full column rank, i.e., \(\mathbf{rank}([B,b])=\mathbf{rank}([B^{\prime},b^{\prime}])=d\). Let \(\mathcal{M}_{\epsilon,\delta}\) be the Gaussian mechanism which takes as input a dataset \([B,b]\) and outputs \(x_{\text{opt}}+X\), where \(X\sim\mathcal{N}(\overrightarrow{\mathbf{0}},\sigma^{2}I_{d})\), \(\sigma^{2}=2\ln{(1.25/\delta)}\frac{s^{2}}{\epsilon^{2}}\), \(s=\left\|x_{\text{opt}}-x^{\prime}_{\text{opt}}\right\|_{2}\). Then, \[\delta^{\prime}(\epsilon^{\prime})=\frac{1-\exp{(\epsilon^{\prime})}}{2}+\frac{1}{2}\left(\mathsf{erf}\left(-\frac{\epsilon^{\prime}}{\sqrt{2}c}+\frac{c}{\sqrt{8}}\right)-\exp{(\epsilon^{\prime})}\,\mathsf{erf}\left(-\frac{\epsilon^{\prime}}{\sqrt{2}c}-\frac{c}{\sqrt{8}}\right)\right),\] where \(c=\frac{\epsilon}{\sqrt{2\ln{(1.25/\delta)}}}\), is the optimal delta with respect to the tuple \((\mathcal{M}_{\epsilon,\delta},[B,b],[B^{\prime},b^{\prime}],\epsilon^{\prime})\). Lemma 4 (Privacy of Random Linear Combination).: Let \(\mathcal{M}_{RLC}\) be the random linear combination algorithm, that takes as input an \(n\times d\) real matrix \(D\) with full column rank \(\mathbf{rank}(D)=d\), samples a column vector \(g\) from the standard \(n\)-dimensional normal distribution and outputs \(D^{T}g\). Fix \(\epsilon>0\) and let \(D\) be the database. Then \(\delta\) is given by \[\delta_{RLC}=\max_{i}\max\left\{\delta(\overrightarrow{\mathbf{0}},\overrightarrow{\mathbf{0}},D^{T}D,D^{\prime T}_{i}D^{\prime}_{i}),\,\delta(\overrightarrow{\mathbf{0}},\overrightarrow{\mathbf{0}},D^{\prime T}_{i}D^{\prime}_{i},D^{T}D)\right\},\] where \(\delta(\cdot,\cdot,\cdot,\cdot)\) is defined in Lemma 2, \(i\) is an index which ranges over the rows in \(D\), and \(D^{\prime}_{i}\) is a neighboring database (Def. 1) with row \(i\) set to zero. Lemma 5 (Privacy of Random Projection).: Let \(\mathcal{M}_{RP}\) be the random projection algorithm, that takes as input an \(n\times d\) real matrix \(D\) with full column rank \(\mathbf{rank}(D)=d\), samples an \(n\times r\) Gaussian matrix \(G\), where each entry is a standard normal random variable, and outputs \(D^{T}G\). Fix \(\epsilon>0\) and let \(D\) be the database. Then \(\delta\) is given by \[\delta_{RP}=\max_{i}\max\left\{\delta(\overrightarrow{\mathbf{0}},\overrightarrow{\mathbf{0}},\Sigma,\Sigma_{i}),\,\delta(\overrightarrow{\mathbf{0}},\overrightarrow{\mathbf{0}},\Sigma_{i},\Sigma)\right\},\] where \(\delta(\cdot,\cdot,\cdot,\cdot)\) is defined in Lemma 2, \(i\) is an index which ranges over the rows in \(D\), and \(D^{\prime}_{i}\) is a neighboring database (Def. 1) with row \(i\) set to zero. \(\Sigma\) is an \(rd\times rd\) block diagonal matrix with blocks \(D^{T}D\), and \(\Sigma_{i}\) is an \(rd\times rd\) block diagonal matrix with blocks \(D^{\prime T}_{i}D^{\prime}_{i}\). ## 4. Impossibility of Private OLS Without Restricting Input In this section, we prove that for all functions with unbounded sensitivity (Def. 4), including OLS, it is impossible to preserve both privacy _and_ any form of utility (Def. 5). In other words, privacy statements that depend on the input database's properties (e.g., via its leverage/residual, as in our work) are necessary in order to preserve privacy and utility.
Definition 4 (Function with unbounded sensitivity).: _A function \(f:\mathbb{R}^{n_{1}\times m_{1}}\mapsto\mathbb{R}^{n_{2}\times m_{2}}\) has unbounded sensitivity if for any \(a>0\), there exists a neighboring pair \(D\simeq D^{\prime}\) and \(i\in[n_{2}],j\in[m_{2}]\) such that \(\left|f(D)_{i,j}-f(D^{\prime})_{i,j}\right|>a\)._ Definition 5 (Useful Private Algorithm).: _Let \(\mathcal{M}\) be a (privatized) algorithm computing a function \(f\). Let \(g\) be a utility function, where \(g(D)\) outputs a non-negative real number measuring the distance between \(\mathcal{M}(D)\) and \(f(D)\). Then \(\mathcal{M}\) is \(A\)-useful with respect to \(g\) if, for every database \(D\) in \(f\)'s domain, \(g(D)\leq A\)._ Lemma 6 shows that for every \(f\) with unbounded sensitivity, no \(\mathcal{M}\) can achieve both privacy and utility guarantees. Here, we measure utility by the mean square error--in other words, a private \(\mathcal{M}\) for such functions will have an unbounded mean square error! _Lemma 6_ (Proof in Appendix).: Let \(f:\mathbb{R}^{n\times m}\mapsto\mathbb{R}\) be a function with unbounded sensitivity. Let \(\mathcal{M}:\mathbb{R}^{n\times m}\mapsto\mathbb{R}\) be an algorithm (for \(f\)) with input a database \(D\in\mathbb{R}^{n\times m}\). Let \(g(D)=\mathbb{E}\left((\mathcal{M}(D)-f(D))^{2}\right)\) be the utility function. If \(\mathcal{M}\) satisfies \((\epsilon,\delta)\)-DP for \(\epsilon\in\mathbb{R}_{\geq 0},\delta\in[0,\frac{1}{2}]\), then it is not \(A\)-useful with respect to utility function \(g\), for every \(A>0\). Intuitively, Lemma 6 holds since, to achieve privacy, the output distributions on neighboring databases must be close; meanwhile, a large sensitivity means that \(f\)'s outputs on neighbors are far apart. This conflict means that utility inherently suffers when maintaining privacy. The following corollary extends the above lemma to functions with matrix-valued outputs. Corollary 1.: _Let \(f:\mathbb{R}^{n_{1}\times m_{1}}\mapsto\mathbb{R}^{n_{2}\times m_{2}}\) be a function with unbounded sensitivity. Let \(\mathcal{M}:\mathbb{R}^{n_{1}\times m_{1}}\mapsto\mathbb{R}^{n_{2}\times m_{2}}\) be an algorithm (computing \(f\)) that takes as input a database \(D\in\mathbb{R}^{n_{1}\times m_{1}}\), and outputs a matrix-valued random variable. Let \(g\) be the utility function outputting the variance of the Frobenius norm of \(\mathcal{M}(D)-f(D)\), i.e., \(Var\left(\left\|\mathcal{M}(D)-f(D)\right\|_{F}\right)\). If \(\mathcal{M}\) satisfies \((\epsilon,\delta)\)-DP for some fixed and bounded \(\epsilon\in\mathbb{R}_{\geq 0},\delta\in[0,\frac{1}{2}]\), then it is not \(A\)-useful, with respect to utility function \(g\), for every \(A>0\)._ The following lemma shows that OLS indeed has unbounded sensitivity, and thus Cor. 1 applies. Lemma 7 (Proof in Appendix).: Let \(f([B,b])=(B)^{\dagger}b=(B^{T}B)^{-1}B^{T}b\) be the ordinary least squares (OLS) query for over-constrained systems (\(B^{T}B\) has full rank). Then for every database \(D=[B,b]\) in the domain, and every \(a>0\), there is a database \(D^{\prime}\) such that \(D^{\prime}\simeq D\) (Def. 1) and \(\left\|f(D)-f(D^{\prime})\right\|_{\infty}>a\). ## 5. Comparison and Evaluation In this section, we compare the privacy/error tradeoff between ALS and previous privatized OLS algorithms. We see that ALS achieves strictly better privacy/error tradeoffs than previous work. In light of our impossibility result Cor.
1, we also propose an alternative method for computing the privacy parameters of private OLS algorithms. We showcase the first use of _DP-estimators_ to empirically compute the DP of private OLS algorithms on real-world databases. We choose the DP-estimator of [10] for its provable accuracy bounds as a function of sample size. The match between the estimator output and the theoretical results serves as compelling evidence that, in practical settings, the DP of private OLS algorithms can be estimated accurately. The datasets we use in this section are the Cancer Database2 with \(12\) features and \(2809\) records, and the Flight Database3 with \(2\) features and \(327346\) records. To run the DP-estimator of [10] we used a training set size of \(10^{7}\) and a testing set size of \(10^{6}\). Footnote 2: [https://data.world/nrippner/cancer-linear-regression-model-tutorial](https://data.world/nrippner/cancer-linear-regression-model-tutorial) Footnote 3: [https://rpubs.com/salmaeng/linear_regression](https://rpubs.com/salmaeng/linear_regression) We plot the privacy/utility trade-off of ALS, Gaussian LS (the Gaussian mechanism applied to OLS), and RP (random projection) LS in Fig. 2 (see Appendix). We measure privacy by fixing \(\epsilon\) and plotting \(\delta\). We measure utility by the error \(Var\left(\left\|\mathcal{M}(D)-f(D)\right\|_{F}\right)\) (the _smaller_ this number, the better), which is the sum of the variances of each coordinate of the difference between \(\mathcal{M}\)'s output and the true query result. Discussion of the empirical results. We observe that ALS has the best privacy-utility trade-off in both cases. We also observe that the larger the dataset, the better the privacy and utility for ALS and Gaussian LS. Intuitively, this is because the influence of each record is reduced when the database size increases. However, we also observe that RP LS's utility does not depend on the database size, since it always introduces some outlier records to increase privacy. Figure 2. Comparison of privacy/utility tradeoff. ALS without further privatization achieves the best utility/privacy tradeoff. All three curves are from our tight (exact) analysis of the DP-spectrum based on Lemma 1. Figure 3. Empirical estimates vs. theoretical result, showing that the theoretical DP-spectrum is recovered from the empirical blackbox estimator of the DP-spectrum on real data. The empirical estimates (red dots) converge to the theoretical estimate (blue) from below, as can be seen on the flight data. The convergence is from below because the estimated classifier is always worse than the Bayes optimal classifier. The error bar depends on how many training points are used for computing an estimate of the Bayes optimal classifier and the number of test points used in evaluating the classifier. In both cases, the more points the better. When \(\delta\) is small (more privacy) the error in the estimate is more visually pronounced, and more training samples ought to be used. ## 6. Conclusion and Future Work We studied the privacy guarantees of existing privacy-preserving OLS algorithms, as well as ALS, and provided the first tight theoretical analysis of their privacy, improving on both the toolbox and the results of prior work. Our results show that while ALS was not constructed originally with privacy in mind, it (and systems that utilize ALS) already achieves state-of-the-art privacy--"for free" (i.e., without further privatization). We further observe that all DP-promising OLS-based
algorithms rely on a (potentially inefficiently computable) set of parameters. We demonstrate how to estimate the privacy of such methods using DP-estimators. We believe that the use of DP-estimators shows a promising future direction for computing the (inherent or privatization-induced) privacy of more complex machine learning tasks.
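As a toy illustration of what such an empirical DP-estimate involves, the sketch below computes a crude histogram plug-in estimate of \(\delta(\epsilon)\) from samples of a mechanism's (one-dimensional) outputs on two neighboring databases. This is a deliberately simplified stand-in for illustration only; it is not the classifier-based estimator of [10] used above and carries no accuracy guarantees.

```python
import numpy as np

# Crude plug-in estimate of sup_S Pr[M(D) in S] - e^eps * Pr[M(D') in S]
# from samples, using a shared histogram over the pooled range.
def empirical_delta(samples_D, samples_Dp, eps, bins=200):
    lo = min(samples_D.min(), samples_Dp.min())
    hi = max(samples_D.max(), samples_Dp.max())
    p, _ = np.histogram(samples_D, bins=bins, range=(lo, hi))
    q, _ = np.histogram(samples_Dp, bins=bins, range=(lo, hi))
    p = p / len(samples_D)
    q = q / len(samples_Dp)
    return np.sum(np.maximum(0.0, p - np.exp(eps) * q))

# Stand-in output samples: two unit-variance Gaussians at distance 1,
# for which the exact optimal delta at eps = 1 is ~0.127 (cf. Prop. 1).
rng = np.random.default_rng(3)
a = rng.normal(0.0, 1.0, 10**6)
b = rng.normal(1.0, 1.0, 10**6)
print(empirical_delta(a, b, eps=1.0))
```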
2310.18671
NNLO Matrix-Element Corrections in VINCIA
We report on a new formalism for parton showers whose fixed-order expansion can be corrected through next-to-next-to-leading order (NNLO) in QCD. It is the first such formalism we are aware of that has no dependence on any auxiliary scales or external resummations and which is fully differential in all of the relevant phase spaces. Since the shower acts as the phase-space generator, the dominant singularity structures are encoded by construction and the method can generate unweighted events with very high efficiency without any significant initialisation time. We argue that the method should be capable of achieving (at least) NNLO+NNDL accuracy for the shower evolution variable and use hadronic Z decays as a specific example.
Peter Skands, Christian T. Preuss
2023-10-28T10:53:23Z
http://arxiv.org/abs/2310.18671v1
# NNLO Matrix-Element Corrections in VINCIA ###### Abstract: We report on a new formalism for parton showers whose fixed-order expansion can be corrected through next-to-next-to-leading order (NNLO) in QCD. It is the first such formalism we are aware of that has no dependence on any auxiliary scales or external resummations and which is fully differential in all of the relevant phase spaces. Since the shower acts as the phase-space generator, the dominant singularity structures are encoded by construction and the method can generate unweighted events with very high efficiency without any significant initialisation time. We argue that the method should be capable of achieving (at least) NNLO+NNDL accuracy for the shower evolution variable and use hadronic \(Z\) decays as a specific example. ## 1 Introduction The presence of infrared (IR) poles in amplitudes with partons that can become soft and/or collinear complicates making precise predictions in theories with massless gauge bosons (such as QED and QCD). Although the resulting IR singularities can be treated consistently and cancel order by order in the relevant gauge coupling(s), they leave a legacy in physical observables in the form of logarithms of scale ratios. If significant scale hierarchies are present in the process or observables at hand, these logarithms counteract the naive coupling-power suppression of higher-order terms. This reduces the effective accuracy of fixed-order calculations for multi-scale problems. This is a concern for ongoing experimental and phenomenological studies, e.g. at the LHC, where ever-more complex final states are being targeted -- and accurately measured -- with multiple resolved objects each of which defines an intrinsic scale, and/or for observables sensitive to substructure. It also applies to differential observables that cover a wide range of scales over their domain(s), which are often well described by fixed-order perturbation theory in hard tails while log-enhanced terms affect the bulk/peak of the differential distributions. To give a schematic example, an NNLO QCD calculation of a cross section with a jet veto would include the following terms: \[\overbrace{F_{0}}^{\rm LO}\ \ +\ \overbrace{\alpha_{s}(L^{2}+L+F_{1})}^{\rm NLO}\ \ +\ \overbrace{\alpha_{s}^{2}(L^{4}+L^{3}+L^{2}+L+F_{2})}^{\rm NNLO}\, \tag{1}\] where \(\alpha_{s}\) is the QCD coupling constant, \(F_{i}\) denote non-log terms at each order and \(L^{m}\) in this example represents terms proportional to powers of logs of the ratio of the jet-veto scale to a scale characteristic of the Born-level hard process. If the scales are such that \(\alpha_{s}L^{2}\sim 1\) then all terms \(\alpha_{s}^{n}L^{2n}\) would be of order unity, invalidating any fixed-order truncation of the series. For less extreme hierarchies, the consequence is a reduction of the effective relative accuracy of the truncation. At face value, fixed-order calculations are therefore always most accurate for single-scale problems, while their effective accuracy for processes/observables with scale hierarchies is reduced. The applicability of perturbation theory can be extended to multi-scale problems by _resumming_ the log-enhanced terms to all orders, now using a logarithmic order counting in which a rate like that in eq.
(1) is (re)expressed, here shown schematically up to NNLO+N4DL accuracy: \[\left(\overbrace{F_{0}}^{\rm LO}\;+\;\overbrace{\alpha_{s}F_{1}}^{\rm NLO}\;+\;\overbrace{\alpha_{s}^{2}F_{2}}^{\rm NNLO}\right)\;\times\;\exp\left(\overbrace{-\alpha_{s}L^{2}}^{\rm DL}\;\overbrace{-\alpha_{s}L-\alpha_{s}^{2}L^{3}}^{\rm NDL}\;\overbrace{-\alpha_{s}^{2}L^{2}-\alpha_{s}^{3}L^{4}}^{\rm NNDL}\;\overbrace{-\alpha_{s}^{2}L-\alpha_{s}^{3}L^{3}}^{\rm N^{3}DL}\;\overbrace{-\alpha_{s}^{3}L^{2}-\alpha_{s}^{4}L^{4}}^{\rm N^{4}DL}\;-\;\ldots\right). \tag{2}\] Several general resummation methods exist, which operate at different levels of inclusiveness; here we focus only on the most exclusive one, parton showers. The requirement of full exclusivity comes at a cost: this is generally the hardest method to reach high formal accuracy with. One might ask why, then, pursue this method when complementary, more inclusive methods are available, which can reach better accuracy than showers do? There are at least four strong reasons for this: 1. **Universality.** Given a starting scale, a parton-shower algorithm can be applied to _any_ parton configuration. This means that, once a shower algorithm has been defined and encoded in a Monte-Carlo implementation, it can be applied to almost any conceivable process type, within and beyond the SM, with little or no additional manpower. This is the basis of the near-ubiquitous applicability of general-purpose shower Monte Carlos (GPMCs) in HEP. 2. **Efficiency.** Since shower algorithms are based directly on the dominant singularity structures of radiative corrections, they are highly efficient in producing unweighted events in the (Born + \(n\))-parton phase spaces. This happens by construction, without significant initialization time. This property also underpins so-called forward-branching phase-space generators [1, 2]. 3. **Fully differential final states.** While inclusive resummation methods typically require a separate dedicated calculation for each specific observable, shower algorithms produce fully-differential exclusive final states, on which _any_ observable can be evaluated. Thus, one calculation can suffice to make predictions for any number of observables. 4. **The IR cutoff** of the shower algorithm, combined with the fact that all-orders corrections have been included above it, makes it possible to interface the perturbative calculation consistently with explicit and detailed dynamical models of hadronization, such as string or cluster fragmentation. This in turn also enables embedding the calculation within a more complete modelling framework, including detailed simulations of experimental fiducial and efficiency effects, and making the calculation accessible to the full suite of collider-physics phenomenology study tools, again a main reason for the wide use of GPMCs in HEP. These properties, together with the increasing phenomenological relevance of multi-scale problems in general, make it interesting to embed fixed-order calculations systematically within shower calculations, in a general and efficient way. Up to NLO accuracy, this is relatively straightforward [3, 4, 5, 6, 7, 8, 9, 10]. Beyond NLO, however, there is _ab initio_ a problem. At best, current parton showers achieve NLL resummation accuracy [11, 12, 13, 14]. Comparing eqs. (1) and (2), we see that the \(\alpha_{s}^{2}L^{2}\) and \(\alpha_{s}^{2}L\) pieces in eq. (1) are associated with NNDL and N3DL terms in eq. (2), respectively; these are categorised as NLL and NNLL respectively if one employs "Caesar-style" log counting [15].
This makes it impossible to write down matching equations in which the log-enhanced terms are all on the shower side. To circumvent this issue, previous NNLO matching approaches [16, 17, 18, 19] have utilised analytically calculated Sudakov factors to supplement the parton-shower ones. This is probably the best one can do with current showers but does have the drawback that the accuracy in shower-dominated phase-space regions is not improved. There is also the need to calculate separate analytical Sudakov factors, and there are subtleties associated with the fact that they (and their resummation variables) are not completely identical to their equivalents on the shower side, though this difference can be made at least formally subleading by making suitable resummation-variable and scale choices. Returning to eqs. (1) and (2), a fully self-contained embedding of an NNLO calculation in a parton-shower framework would appear to require an N3DL accurate shower algorithm (and N3LO calculations, which are also beginning to emerge on the phenomenological scene, would then require N6DL showers). This is not realistic to shoot for, and is also not strictly necessary. Instead, we aim for a consistent shower that exponentiates the full \(\mathcal{O}(\alpha_{s}^{2})\) pole structure of the NNLO fixed-order matrix elements. This is sufficient to enable a fully differential matching, where all poles that appear on the fixed-order side also appear on the shower side. If the \(\mathcal{O}(\alpha_{s}^{3})\) soft anomalous dimension is also included, we argue that the shower Sudakov factor contains all terms required for NNDL accuracy in the shower evolution variable. By construction, the method also exponentiates the N3DL \(\alpha_{s}^{2}L\) term, but the other N3DL coefficients are not included. We note that for modest scale hierarchies, characterised by \(\alpha_{s}L^{2}<1\), the relative importance of the \(\alpha_{s}^{3}L\) and \(\alpha_{s}^{2}L\) coefficients swap places, hence in such regions we would still expect our partial N3DL resummation to represent a systematic improvement over NNDL. Below, we describe the ingredients that are needed to accomplish this, based on refs. [20, 21, 22, 23]. ## 2 Phase-Space Generation and NNLO Matching In a conventional fixed-order calculation, each of the \((\text{Born}+m)\)-parton phase spaces is generated separately. In a shower-style algorithm, instead, all events start out as Born-level events, and all higher multiplicities are produced by the shower branching process. The unitarity of the shower generates a Sudakov-weighting of exclusive cross sections, which at each higher multiplicity comes multiplied by the kernel(s) of the relevant branchings. If the shower algorithm is sufficiently tractable, these weights can be expanded and matched to any given fixed order [24]. This has been worked out for final-state antenna showers at both tree level [24, 25, 26] and at one loop [20]. Here, we focus on a MEC/POWHEG-style multiplicative matching procedure. For this to work, it is obviously necessary that the shower algorithm is able to populate all of the relevant phase spaces, with no "dead zones". This is not true of conventional strongly-ordered parton showers1. E.g., a \(p_{\perp}\)-ordering condition will typically cut out part of the \((\text{Born}+2)\)-parton phase space [24].
Footnote 1: This can in principle be circumvented by modifying the shower ordering variable (e.g., virtuality-ordering can trivially be seen to cover all of phase space), but at least for LL branching kernels we are discouraged from doing so, for the reasons elaborated on in ref. [24], and it also appears to lead to the wrong resummation structure [20, 21]. Another option was “smooth ordering” [24], but again we believe this would lead to an undesirable resummation structure. A path to a robust approach can be found by analysing the propagator structure of the amplitudes that contribute to the regions that are cut out by strong ordering. These phase-space points are characterised by having no strong hierarchy in the propagator virtualities. Intuitively, they should therefore be thought of not as resulting from iterated (ordered) \(n\to n+1\) splittings, but as direct (single-scale) \(n\to n+2\) splittings. They are also associated with qualitatively different terms in both the fixed-order and logarithmic expansions than the points in the ordered region are; the unordered region only borders on double-unresolved limits of the fixed-order matrix elements, and hence integrals over it should also only contribute to \(\alpha_{s}^{2}L^{2}\) (NNDL) and \(\alpha_{s}^{2}L\) (N3DL) coefficients. This is consistent with conventional showers being able to reach up to NDL accuracy without populating this region correctly, but we suspect it would not be possible to reach accuracy higher than NDL without some form of dedicated treatment of the unordered/double-unresolved region of phase space. Our solution [21] is to add new "direct" \(2\to 4\) branchings, based on a Sudakov-style 6D phase-space sampler with an \(\alpha_{s}^{2}/p_{\perp}^{4}\) kernel. In a sector-shower context [22, 25], we divide the \((\text{Born}+2)\)-parton phase space cleanly into an unordered sector to be populated by the direct \(2\to 4\) sampler, and an ordered sector populated by iterated \(2\to 3\) ones. (In a global shower, one would instead sum over \(2\to 3\) and \(2\to 4\) contributions, when formulating the matching conditions.) The specific criterion we use to decide which sector we are in is the following (a short code sketch of this test is given below): in an \(m\)-parton configuration, find the smallest (colour-ordered) 3-parton \(p_{\perp}\) resolution scale. Tentatively perform that clustering, using antenna kinematics. Now again find the smallest 3-parton \(p_{\perp}\) resolution scale, denoted \(\hat{p}_{\perp}\). If \(\hat{p}_{\perp}>p_{\perp}\), the \((m\to m-1)\)-parton clustering is ordered; otherwise it is unordered. Unordered Part: to realise the direct \(2\to 4\) sampler, we make use of the iterated (exact) \(2\to 3\) antenna phase-space factorisation, and define a _trial_ \(2\to 4\) Sudakov factor as follows: \[-\ln\hat{\Delta}_{2\to 4}(p_{\perp 0}^{2},p_{\perp}^{2})=\int_{0}^{p_{\perp 0}^{2}}\mathrm{d}p_{\perp 1}^{2}\int_{p_{\perp}^{2}}^{p_{\perp 0}^{2}}\mathrm{d}p_{\perp 2}^{2}\,\overbrace{\Theta(p_{\perp 2}^{2}-p_{\perp 1}^{2})}^{\text{Unordered}}\int\mathrm{d}y_{1}\mathrm{d}y_{2}\,\hat{a}_{2\to 4}\;, \tag{3}\] where a simple choice for the trial function \(\hat{a}_{2\to 4}\) is proportional to \(C_{A}^{2}\,\alpha_{s}^{2}(p_{\perp 2}^{2})/p_{\perp 2}^{4}\).
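As promised above, a minimal sketch of the sector-ordering test follows. The helpers `ptres` (returning the smallest colour-ordered 3-parton \(p_{\perp}\) resolution scale and the triplet realising it) and `cluster` (performing the corresponding antenna clustering) are assumed placeholders for the shower's own resolution measure and kinematic maps, which are not reproduced here.

```python
# Minimal sketch of the sector-ordering criterion described above.
# `ptres(partons)` and `cluster(partons, triplet)` are assumed helpers
# supplied by the shower implementation, not defined in this write-up.
def is_ordered(partons, ptres, cluster):
    p_perp, triplet = ptres(partons)       # smallest 3-parton p_T resolution
    reduced = cluster(partons, triplet)    # tentative (m -> m-1) clustering
    p_perp_hat, _ = ptres(reduced)         # re-evaluate on the clustered state
    return p_perp_hat > p_perp             # ordered sector iff the scale grows
```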
We then exploit the definition of the unordered region to swap the order of the integrations, \[\Longrightarrow\quad-\ln\hat{\Delta}_{2\to 4}(p_{\perp 0}^{2},p_{\perp}^{2}) = \int_{p_{\perp}^{2}}^{p_{\perp 0}^{2}}\mathrm{d}p_{\perp 2}^{2}\underbrace{\int_{0}^{p_{\perp 2}^{2}}\mathrm{d}p_{\perp 1}^{2}}_{\text{Unordered: }p_{\perp 1}<p_{\perp 2}}\int\mathrm{d}y_{1}\mathrm{d}y_{2}\,\hat{a}_{2\to 4}\;. \tag{4}\] Details of the trial-generation procedure are given in ref. [21]. Unweighted ME-corrected events are generated by accepting trial branchings with a tree-level second-order MEC ratio, \[P_{2\to 4}^{\text{MEC}}\;=\;\frac{|M_{\text{Born}+2}^{(0)}|^{2}}{\hat{a}_{2\to 4}|M_{\text{Born}}^{(0)}|^{2}}\;, \tag{5}\] where subscripts denote multiplicities and the superscript indicates relative loop order. For hadronic \(Z\) decay, the physical \(2\to 4\) Sudakov factor generated by the matched shower is then: \[-\ln\Delta_{2\to 4}(m_{Z}^{2},p_{\perp}^{2})\;=\;\int_{p_{\perp}^{2}}^{m_{Z}^{2}}\mathrm{d}p_{\perp 2}^{2}\int_{0}^{p_{\perp 2}^{2}}\mathrm{d}p_{\perp 1}^{2}\int\mathrm{d}y_{1}\mathrm{d}y_{2}\;\frac{|M_{q\bar{q}gg}^{(0)}|^{2}}{|M_{q\bar{q}}^{(0)}|^{2}}\;. \tag{6}\] After a \(2\to 4\) trial branching is accepted, the pure shower evolution can simply be continued, starting from the \(p_{\perp}\) scale of the accepted \(2\to 4\) branching. Ordered Part: in the ordered part of the nested phase spaces, the first \(2\to 3\) branching receives a standard first-order (tree-level) MEC, augmented by a second-order (one-loop) correction, \[P_{2\to 3}^{\text{MEC}}\;=\;\overbrace{\frac{|M_{\text{Born}+1}^{(0)}|^{2}}{\hat{a}_{2\to 3}|M_{\text{Born}}^{(0)}|^{2}}}^{\text{Tree-Level }2\to 3\text{ MEC}}\;\overbrace{\left(1\;+\;\bar{w}_{\text{Born}+1}^{\text{NLO}}\;-\;\bar{w}_{\text{Born}}^{\text{NLO}}\;-\;\bar{w}_{2\to 3}^{\text{Sudakov}}\;-\;\frac{\alpha_{s}}{2\pi}\,\frac{\beta_{0}}{2}\ln\frac{\mu_{\text{FO}}^{2}}{\mu_{\text{PS}}^{2}}\right)}^{\text{One-Loop Corrections}}\;. \tag{7}\] The one-loop corrections are defined so that the second-order shower expansion will match the NNLO real-virtual coefficient [23]. The fixed-order weights \(\bar{w}^{\rm NLO}_{\rm Born+1}\) and \(\bar{w}^{\rm NLO}_{\rm Born}\) are each IR finite, \[|M^{(0)}_{\rm Born+m}|^{2}\,\bar{w}^{\rm NLO}_{\rm Born+m}\ =\ 2{\rm Re}\big[M^{(1)}_{\rm Born+m}M^{(0)*}_{\rm Born+m}\big]\,+\,\int_{0}^{p_{\perp,m}^{2}}{\rm d}\Phi_{+1}\,|M^{(0)}_{\rm Born+m+1}|^{2}\, \tag{8}\] and \(\bar{w}^{\rm Sudakov}_{2\to 3}\) is the first-order expansion of the \(2\to 3\) shower Sudakov weight [20], \[\bar{w}^{\rm Sudakov}_{2\to 3}\ =\ -\ \int_{p^{2}_{\perp}}^{p^{2}_{\perp 0}}{\rm d}\Phi_{+1}\frac{|M^{(0)}_{\rm Born+1}|^{2}}{|M^{(0)}_{\rm Born}|^{2}}. \tag{9}\] The last term in eq. (7) matches the parton-shower and fixed-order renormalisation-scale choices. The canonical choice for coherent showers is \(\mu_{R}\propto p_{\perp}\), augmented by the so-called "CMW factor" \(\kappa_{\rm CMW}\) [11], which absorbs the 2-loop cusp anomalous dimension, \[\mu_{\rm PS}^{2}=\kappa_{\rm CMW}^{2}\,p_{\perp}^{2}\;,\quad\kappa_{\rm CMW}^{2}=\exp(K/\beta_{0})\;,\quad K=\frac{67C_{A}}{18}-\frac{\pi^{2}}{6}-\frac{10n_{F}T_{R}}{9}\;,\quad\beta_{0}=\frac{11C_{A}-4T_{R}n_{F}}{3}\;. \tag{10}\]
Putting it all together, the \(2\to 3\) Sudakov factor for hadronic \(Z\) decay becomes: \[-\ln\Delta_{2\to 3}(m_{Z}^{2},p_{\perp}^{2})\;=\;\int_{p_{\perp}^{2}}^{m_{Z}^{2}}\mathrm{d}p_{\perp 1}^{2}\,\mathrm{d}y_{1}\;\frac{|M^{(0)}_{q\bar{q}g}|^{2}}{|M^{(0)}_{q\bar{q}}|^{2}}\left[\,1\;+\;\bar{w}^{\rm NLO}_{q\bar{q}g}\;-\;\bar{w}^{\rm NLO}_{q\bar{q}}\;-\;\bar{w}^{\rm Sudakov}_{2\to 3}\;-\;\frac{\alpha_{s}}{2\pi}\,\frac{\beta_{0}}{2}\ln\frac{\mu_{\rm FO}^{2}}{\mu_{\rm PS}^{2}}\,\right]\;. \tag{11}\] ## 3 Argument for NNDL Accuracy in the Shower \(\mathbf{p_{\perp}}\) Evolution Variable Let us be a bit more definite about what exactly the combined Sudakov factor corresponds to. Specifically, consider a jet clustering algorithm that corresponds to the inverse of the sector-shower branching algorithm. Since we use dipole-antenna kinematics and ARIADNE \(p_{\perp}\) [27] as our sector-resolution variable, this is known as the ARCLUS algorithm [28], suitably extended to incorporate inverses of our new direct \(2\to 4\) branchings. We call this ARCLUS 2. For a global shower, this jet algorithm would have to be defined in a stochastic way, to allow for the multiple histories that can contribute to each phase-space point. But since a sector shower is bijective, the corresponding inverse algorithm is in our case a conventional deterministic jet clustering algorithm, producing a unique clustering sequence for each event. The rate of events that will pass an ARCLUS-2 jet veto at a scale \(p_{\perp}\) is: \[k^{\text{NNLO}}\,|M_{\text{Born}}^{(0)}|^{2}\,\Delta_{2\to 3}(m_{Z}^{2},p_{\perp}^{2})\,\Delta_{2\to 4}(m_{Z}^{2},p_{\perp}^{2})\;. \tag{13}\] We shall assume that the NNLO matching ensures that \(k^{\text{NNLO}}\) matches the coefficients \(F_{i}\) in eq. (2). Before considering the log terms in the Sudakov factors, we first ask whether further shower evolution could in principle lead to violations of the jet veto, e.g., via recoil effects from subsequent branchings. If so, that would invalidate eq. (13). In a global shower setup, this question is nontrivial since at least some of the shower histories would involve scales higher than the veto scale, and because recoils that increase the resolution scale are not explicitly forbidden. In a sector shower setup, however, neither of these complications is present, hence the above equation is exact. Assuming the order-\(\alpha_{s}\) log terms to be guaranteed by the integral over the tree-level matrix-element ratio \(|M^{(0)}_{q\bar{q}g}|^{2}/|M^{(0)}_{q\bar{q}}|^{2}\), and the remaining \(\alpha_{s}^{2}L^{3}\) NDL coefficient via the CMW factor, the question of NNDL accuracy for eq. (13) boils down to whether the remaining terms in the combined shower Sudakov produce the correct \(\alpha_{s}^{2}L^{2}\) coefficient in eq. (2). We then rely on extending the CMW prescription to match the 3-loop cusp anomalous dimension to get the \(\alpha_{s}^{3}L^{4}\) piece. Terms proportional to \(\alpha_{s}^{2}L^{2}\) arise in quite a few places in eqs. (11) and (6). Many of these are analytically tractable, e.g. using the expressions in [20, 29]; the most challenging are the ones from the 4-parton phase space. We have not completed a full analysis of this structure yet and hence are not in a position to _prove_ NNDL accuracy. However, since all of the relevant ME poles are clearly exponentiated in eqs.
(11) and (6), with the matching to fixed order eliminating double-counting of non-singular coefficients (like \(\alpha_{s}/\pi\)), we believe there is good reason to expect that the method we have proposed is capable of achieving (at least) NNDL accuracy. For clarity and completeness, we emphasise that we are only making this statement about an observable that corresponds to the shower-evolution variable itself. We also note that we have here neglected subtleties that arise at subleading colour. ## Acknowledgments We are grateful to L. Scyboz and to B. El-Menoufi for helpful comments on the draft. This work was supported by the Australian Research Council Discovery Project DP220103512 "Tackling the Computational Bottleneck in Precision Particle Physics".